OK, this all gets really confusing really fast if you try to discuss symbols and numbers and values. Suffice it to say that computers don't think the way we do. We humans use the digits 0-9 and arrange them in columns to represent numbers. When we run out of digits, we add one to the next column to the left and start over.
If you add decimal one to decimal nine, you get ten:
9 + 1 = 10
If you add decimal one to decimal ninety-nine, you get one hundred:
99 + 1 = 100
In other words, in a decimal system, we only have ten unique symbols (0-9), and when we run out of them, we create a new column on the left and keep right on counting. Each column is worth ten times the column to the right of it.
Written another way, you could view decimal numbers like this:
10^3      | 10^2     | 10^1 | 10^0 |
thousands | hundreds | tens | ones |
1000      | 100      | 10   | 1    |
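If you prefer to see that idea in code, here is a minimal Python sketch (Python and the example number 1234 are just choices for illustration) that splits a decimal number into those columns:

    # Break the decimal number 1234 into its columns.
    # Each column is worth ten times the column to its right.
    number = 1234
    columns = [("thousands", 1000), ("hundreds", 100), ("tens", 10), ("ones", 1)]
    for name, place_value in columns:
        digit = (number // place_value) % 10  # the digit sitting in this column
        print(f"{name}: {digit} x {place_value} = {digit * place_value}")

Running it prints 1 x 1000, 2 x 100, 3 x 10, and 4 x 1, which add back up to 1234.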
Why am I bothering to tell you something you already know? Because I am going to compare it to how the computer counts in binary in the next section. Unlike us, computers are limited to just two symbols: 0 (zero) and 1 (one).
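As a tiny preview (using Python again purely for illustration), you can ask the computer to show you the same counts written with only those two symbols:

    # Print the decimal numbers 0 through 10 next to their binary form.
    # Python's built-in bin() returns a string like '0b1010'; the '0b' prefix is stripped off.
    for n in range(11):
        print(f"decimal {n:2d} = binary {bin(n)[2:]}")

Decimal 10, for example, comes out as 1010 in binary; the next section explains why.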