# Binary to Decimal Conversion

Remember that we wanted to read the binary
number 1101, and we were going to apply a process similar to what we did with the base-10
number 1308. If you think you understand where this is going, and have abstracted the process,
go ahead and pause the lesson and try it on your own first. So let’s write 1101, and show it as the sum
of powers of 2. We write 1 times 2^3, 1 times 2^2, 0 times 2^1, and 1 times 2^0. You will
want to get very comfortable with powers of 2 as a computer scientist, and they’re pretty
easy because getting from one to the next just involves doubling. Going from right to
left, we have 1, 2, 4, and 8. Binary conversion is easy when you consider that zero times
any number is zero. We don’t need to worry about adding any place with a zero. Looking
at the three ones, we have 8, 4, and 1, which sum to 13. The binary number 1101 represents
the decimal number 13. Now let’s look at another binary number, this
time we’ll use some more lingo to relate it to computers. Each binary digit is known as
a bit. A set of eight binary digits is known as a byte. If you have a 32-bit instruction
set, this means each instruction consists of 4 bytes. The binary number we’re going
to translate now is one byte of data, 00100111. Once again, challenge yourself by pausing
the lesson and trying this on your own. To convert this number, we need to add the
powers of 2 associated with each of the one bits in the byte. Those are the positions
0, 1, 2, and 5, counting from right to left. These represent 2^0=1, 2^1=2, 2^2=4, and
2^5=32. Add those four numbers together and you have 39. What is the largest number you can represent
with 8 bits? Hopefully you noticed from the chart that every power of 2, written in binary,
is a single one followed by zeros. So an 8-bit number of all ones, 11111111, is one less
than the 9-bit number 100000000, which is 2^8. To find the largest number that fits in
8 bits, we take 2^8 and subtract 1. This is 256 – 1=255. In the next lesson, we’ll look at converting
decimal numbers to binary.
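As a recap, the hand process from this lesson can be sketched in a few lines of Python. The function name `binary_to_decimal` is my own choice for illustration; the logic is just the lesson's method of summing the power of 2 for each one bit:

```python
# Convert a binary string to decimal by summing the powers of 2
# for each one bit, exactly as done by hand in the lesson.
def binary_to_decimal(bits: str) -> int:
    total = 0
    # Walk the bits from right to left so position 0 is the rightmost bit.
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position  # only the one bits contribute
    return total

print(binary_to_decimal("1101"))      # 8 + 4 + 1 = 13
print(binary_to_decimal("00100111"))  # 32 + 4 + 2 + 1 = 39
print(2 ** 8 - 1)                     # largest value in one byte: 255
```

Python can also do this conversion directly with `int("1101", 2)`, but writing out the loop mirrors the steps we took by hand.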