In my debut CS101 post I discussed Boolean algebra, and how any Boolean function can be modeled by a real-life electronic circuit. In this post we’ll explore how this fact provides computers with the ability to do arithmetic.
To do arithmetic, you first need to be able to represent numbers. The most basic numbers are the natural numbers, the whole numbers we use for counting things. To write down a counting number we can simply represent it as a list of ‘things’ of that size. Say we pick the hash symbol # to represent a ‘thing’, then we can write down the counting numbers as #, ##, ###, ####, ##### and so on. Of course we’d also need some symbol, like 0, to represent “no things”.
The problem with this representation is that it’s very unwieldy. It becomes impractical to represent more than a small number of things this way. The solution is to assign shorter symbols, or ‘digits’, to the various counting numbers. So, for example, we use ‘3’ to mean ### and ‘7’ to mean #######. These digits are much shorter and more convenient. The problem now is that there are infinitely many counting numbers but only so many symbols we can assign, write and remember, so it’s impractical to have a unique digit symbol for every number. The solution is simple: have only a few digits, and combine them to form larger numbers. Because we have ten fingers (digitus in Latin), and counting on fingers is a ubiquitous habit, most human societies settled on ten digit symbols, which we write as 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
So what happens when we want to represent more than 9 things? We count off groups of ten and use one digit to represent how many such groups we had, and then another digit to represent the remainder. This allows us to represent up to 9 tens and a remainder of 9, or 99. And if we want to represent more than 99 things? We add a third digit to count groups of ‘ten groups of ten’, which we call ‘a hundred’. And so on. This is what we refer to as ‘base ten’ representation.
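The “count off groups, keep the remainder” procedure can be sketched in a few lines of Python (the function name `digits_base10` is mine, not from the post):

```python
def digits_base10(n):
    """Return the decimal digits of n, most significant first."""
    digits = []
    while n > 0:
        digits.append(n % 10)  # the remainder after counting off groups of ten
        n //= 10               # how many groups of ten remain to be counted
    return list(reversed(digits)) or [0]

print(digits_base10(350))  # [3, 5, 0]: 3 hundreds, 5 tens, 0 ones
```

Each pass through the loop peels off one place value, which is exactly what the written notation records.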
Computers Don’t Have Fingers
This is all presumably intuitive and well-understood by all readers. But the crucial point to recognize is that the choice of ten digits, 0-9, is purely cultural. There is nothing fundamental or ‘mathematical’ about it. This observation is important because electronic circuits don’t have fingers. They have little wires that can either contain current or not. So electrical circuits can fairly straightforwardly represent only two states. Can we do useful math with only two states? Why yes we can!
Just as we were able to represent any natural number using the digits 0-9, so can we represent them using just the two digits 0-1, in a similar way. So 0 means ‘no items’, 1 means #, and now we have to carry to another digit, so 10 means ##, 11 means ###, 100 means #### and so on. This base two, or binary, representation of numbers is the cornerstone of digital (there’s that word digit again) computation. Note that these are the same counting numbers as before. We’re just writing them differently. In one notation ############# is 13, and in the other it’s 1101, but these are just two different ways of representing “the number of hash symbols in #############”. We’re really used to seeing that written ‘13’, but it’s still an arbitrary notation. ‘1101’ is just as valid. To avoid confusion between the two representations, we’ll suffix binary numbers with a ‘b’ in the remainder of this post. So, e.g., 13 = 1101b.
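The same counting-off procedure works with groups of two instead of groups of ten. Here is a minimal sketch (the helper name `to_binary` is my own):

```python
def to_binary(n):
    """Represent the counting number n in base two, as a string of 0s and 1s."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # remainder after counting off pairs
        n //= 2                   # how many pairs remain
    return bits

print(to_binary(13))  # "1101" — the same quantity we usually write as 13
```

Python can also go the other way: `int("1101", 2)` evaluates to 13, confirming the two notations name the same number.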
In binary notation the two digits 0 and 1 are referred to as ‘Binary digITs’, or ‘bits’. By convention, bits are commonly grouped together in eights, called ‘bytes’. Each byte can represent a number in the range 0b-11111111b, which is 0-255 in decimal notation. You’re probably used to thinking of the powers of ten: 10, 100, 1000 as round numbers. But in binary, the powers of two: 2, 4, 8, 16 etc. are the round numbers. To a computer scientist 1024 (two to the power of ten) is a round number and 1000 is not. After all, in binary 1024=10000000000b, while 1000=1111101000b, which is not particularly round-looking.
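You can see the “roundness” flip for yourself using Python’s built-in `bin` function (which prefixes its output with `0b`; I strip that prefix here):

```python
# Powers of two look round in binary, just as powers of ten do in decimal.
for n in (8, 255, 256, 1000, 1024):
    print(f"{n:>5} = {bin(n)[2:]}b")
```

This prints, among other lines, `1024 = 10000000000b` and `1000 = 1111101000b` — the power of two is the tidy one in this notation.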
As we saw in the previous post, Boolean algebra has two states we called ‘false’ and ‘true’. But in a mathematical context it’s more useful to call them 0 and 1. And just as we were able to build physical circuits to compute Boolean functions (functions with inputs that are either ‘false’ or ‘true’) so can we build circuits to compute functions with inputs that are either 0 or 1. Which means that basic mathematical operations on natural numbers, such as adding them together, can be implemented by physical electronic circuits! For example, a circuit to add two eight-bit numbers together would have eight input wires to represent the first number, eight input wires to represent the second number, and eight output wires to represent the result. We set the input bits to be 1 or 0 by passing current only through the appropriate input wires, and we measure the 1s and 0s of the result by checking which output wires have current and which don’t.
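To make the link between Boolean functions and arithmetic concrete, here is a sketch of an eight-bit adder built only from the Boolean operations AND, OR and XOR — the same functions physical gates compute. The names `full_adder` and `add8` are my own, and the bit lists stand in for the input and output wires:

```python
def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out). Pure Boolean logic."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add8(x_bits, y_bits):
    """Add two 8-bit numbers given as lists of bits, least significant first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry  # the final carry spills past the eighth wire

# 5 + 3: 5 = 00000101b, 3 = 00000011b (listed least significant bit first)
five  = [1, 0, 1, 0, 0, 0, 0, 0]
three = [1, 1, 0, 0, 0, 0, 0, 0]
bits, overflow = add8(five, three)
print(bits, overflow)  # [0, 0, 0, 1, 0, 0, 0, 0] 0 — that is 00001000b = 8
```

Each `full_adder` call corresponds to one small sub-circuit, chained so that carries ripple from one bit position to the next — a design commonly called a ripple-carry adder.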
It’s a Little Trickier Than That
We’ve seen how, thanks to binary representation, it’s possible to build physical electronic circuits that can do simple math on whole numbers. However, there remain a great many complications. Some of these are:
- What should happen when a computation overflows the number of output wires? In our example above, 8 wires is enough to represent the binary numbers 0b-11111111b, which are 0-255 in decimal. But suppose we add 10010110b + 11001000b, which in decimal is 150 + 200. That would be 350, which is 101011110b, requiring nine digits, while our circuit only has eight output wires.
- How to represent non-counting numbers: negative numbers, fractions, and irrational numbers such as pi.
- How to perform more complicated mathematical operations, such as division and taking roots.
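The overflow problem from the first bullet is easy to demonstrate. With only eight output wires, the ninth bit of 150 + 200 simply has nowhere to go, so the circuit reports the result modulo 256 (this is a sketch of one common behavior; real hardware often also exposes the lost bit as a ‘carry flag’):

```python
a, b = 0b10010110, 0b11001000   # 150 and 200 in Python's binary literal syntax
full = a + b                    # 350 = 101011110b: nine bits
truncated = full & 0b11111111   # keep only the eight bits our wires can carry
print(full, truncated)          # 350 94
```

The eight surviving wires read 01011110b, i.e. 94 — a very wrong answer to 150 + 200, which is why overflow handling matters.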
Explaining these is beyond the scope of this post, but I may address them in a future post, if there’s enough interest.
For now, at least, we’ve established that it’s possible to build physical electronic circuits that can not only compute logical statements, as we saw last week, but also count and do basic arithmetic, all through the magic of binary numbers.
There’s one more element that we need to put in place before we can really call our circuit a ‘computer’, and we’ll talk about that in the next post.