When you create a variable with a number value, it is assigned the Number type.

JavaScript supports positive and negative numbers:

const one = 1
const minusOne = -1
const bigNumber = 5354576767321

A number can be defined in hexadecimal syntax using the 0x prefix:

const thisIs204 = 0xCC //204
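
Going the other way, you can render a number in hexadecimal by passing a radix to toString():

const thisIs204 = 0xCC
thisIs204.toString(16) //'cc'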

There is an octal syntax too, but the legacy form (a leading 0) is disallowed in strict mode, so I won’t talk about it.

We can define decimals too:

const someNumber = 0.2
const pi = 3.14

Internally, JavaScript has just one type for numbers: every number is a 64-bit IEEE 754 float (a “double”). You might be familiar with other languages that distinguish integers from other number types. JS only has one.
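
You can verify this with typeof: whatever syntax you use to define a number, the type is the same:

typeof 1 //'number'
typeof 3.14 //'number'
typeof 0xCC //'number'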

This has one big consequence: some numbers cannot be represented exactly.

What does this mean in practice?

No problem for integers, numbers defined without a decimal part: 1, 2, 100000, 2328348438… up to 15 digits. Starting from 16 digits you’ll have approximation issues, because integers are only guaranteed to be exact up to Number.MAX_SAFE_INTEGER (9007199254740991).
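
You can see the threshold, and the collapse past it, directly:

Number.MAX_SAFE_INTEGER //9007199254740991
9007199254740992 === 9007199254740993 //true (!)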

Decimal numbers are the ones that give the most problems.

JavaScript stores numbers as binary floats, and many decimal fractions simply cannot be represented with full precision in binary.

Here’s a simple example of what this means:

2.2*2 //4.4
2.2*20 //44
2.2*200 //440.00000000000006 (???)
2.2*2000 //4400
2.2*20000 //44000

Another example:

0.1 * 0.1

You might expect 0.01 from this operation, but no: the result is 0.010000000000000002.

You might go a long time without running into problems, but you might hit them when you least expect it, so you need to keep this in mind.
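
When you do need to compare decimals, one common workaround is to test whether two numbers are close enough rather than strictly equal, using Number.EPSILON as the tolerance. Here’s a minimal sketch (almostEqual is just an illustrative helper name):

const almostEqual = (a, b) => Math.abs(a - b) < Number.EPSILON

0.1 * 0.1 === 0.01 //false
almostEqual(0.1 * 0.1, 0.01) //true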

The problem is generally worked around by avoiding processing numbers as decimals:

(0.1 * 10) * (0.1 * 10) / 100

But the problem is best avoided by not storing decimals at all: store integers instead, and only render them as decimals when showing them to the user.
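
For example, for money amounts you could store cents as an integer and only format when displaying (priceInCents is just an illustrative name):

const priceInCents = 1099 //exact, no decimals stored
const display = (priceInCents / 100).toFixed(2) //'10.99'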
