8 posts tagged CS101
In my last CS101 post I described how programming languages are an intermediary between human language and machine code: the logic operations implemented by a computer’s circuits. In this, and my next few posts, we’ll look at programming languages in more detail, and discuss different language designs and capabilities.
The earliest and simplest programming languages were assembly languages. An assembly language is just a slightly more human-readable version of a system’s machine code. It uses mnemonics to represent individual logic instructions, and allows the programmer to use labels to refer to memory locations. A program called an assembler turns assembly language code into proper machine code.
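To make the idea concrete, here is a toy assembler sketch in Python. The mnemonics and numeric opcodes below are made up for illustration — they don’t belong to any real instruction set — but the core mechanism is the same as in a real assembler: a table lookup from mnemonic to opcode.

```python
# Made-up opcode table: mnemonic -> numeric machine instruction.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Turn 'MNEMONIC [operand]' lines into a flat list of numbers."""
    program = []
    for line in lines:
        parts = line.split()
        program.append(OPCODES[parts[0]])   # look up the opcode
        if len(parts) > 1:
            program.append(int(parts[1]))   # append the operand, if any
    return program

print(assemble(["LOAD 10", "ADD 32", "STORE 99", "HALT"]))
# [1, 10, 2, 32, 3, 99, 255]
```

A real assembler also resolves labels into memory addresses and emits actual bytes, but the mnemonic-to-opcode translation above is the heart of it.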
As we’ve discussed in previous installments, computer programs are sequences of instructions that tell computers what to do. Any software that runs on a computer - be it Mac OS X, Google Chrome, the Apache web server or Candy Crush Saga - is a program, and someone has to write that program.
The problem is that people speak English(*) while computers understand the 0s and 1s that trigger their circuits. This is where programming languages come in.
Meeting The Computer Half Way
A programming language is an intermediary between English and the low-level instructions computers understand. It’s a compromise between the looseness of natural languages and the structured formality required for machine interpretation.
For example, a human might say:
I want to print all the whole numbers from 0 to 9.
And another human might express the same idea with a different sentence:
I want to print all the non-negative integers less than ten.
This loose informality of natural language makes it unsuitable for communicating with computers. A computer natively understands only the very low-level machine code instructions baked into its circuits. But humans can’t easily compose machine code directly.
Instead, the human writes instructions as a program, in a programming language.
For our example, we’ll use the programming language Python:
for number in range(0, 10): print(number)
This program is structured enough for a computer to interpret, but also “English-y” enough for a human to write. In this case, even with no programming experience at all, you can probably figure out what it means.
If your computer has Python installed you can see for yourself that computers have no problem understanding this program. On a Mac, go to Finder -> Applications -> Utilities -> Terminal, type
python -c "for number in range(0, 10): print(number)"
into the terminal window and hit enter. You should see the numbers 0 through 9 printed out, one per line.
Compilation and Execution
What’s going on here?
"python" is a command (**) that runs programs written in the Python programming language. When you run it as above it does two things:
1. Compilation: the act of turning the program from Python into machine code.
2. Execution: the act of applying the machine code to the computer’s circuits.

If the program is written correctly then the execution will yield the result the programmer intended.
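You can even watch the compilation step happen yourself. The snippet below is a sketch using Python’s built-in dis module (assuming Python 3; the exact instructions printed vary between Python versions): it compiles the one-line program and disassembles the result.

```python
import dis

# Compile the one-line program into a code object (compilation),
# then disassemble it to inspect the resulting bytecode instructions.
code = compile("for number in range(0, 10): print(number)", "<example>", "exec")
dis.dis(code)
```

The output is a listing of low-level instructions much like the one shown below.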
The program above compiles into machine code that looks something like this:
 0 SETUP_LOOP 28
 3 LOAD_GLOBAL 0
 6 LOAD_CONST 1
 9 LOAD_CONST 2
12 CALL_FUNCTION 2
15 GET_ITER
16 FOR_ITER 11
19 STORE_FAST 0
22 LOAD_FAST 0
25 PRINT_ITEM
26 PRINT_NEWLINE
27 JUMP_ABSOLUTE 16
30 POP_BLOCK
31 LOAD_CONST 0
34 RETURN_VALUE
Note that even this machine code is displayed using English-y words, for our benefit. What the computer really sees, of course, is just a lot of zeros and ones.
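If you’re curious what those zeros and ones actually look like, Python will show you. This sketch (assuming Python 3) reads the compiled program’s raw bytecode — the co_code attribute of the code object — and prints the first few bytes in binary:

```python
# Compile the one-line program and pull out its raw bytecode,
# which is just a sequence of bytes (numbers from 0 to 255).
code = compile("for number in range(0, 10): print(number)", "<example>", "exec")
raw = code.co_code

# Print the first four bytes in binary: all 0s and 1s, as promised.
print([format(byte, "08b") for byte in raw[:4]])
```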
Programming is the art and science of turning informal ideas into a description just formal enough for a computer to understand. The computer then takes it the rest of the way.
In my next post I’ll discuss some of the programming languages in common use today (and also explain that I slightly cheated in describing the output of Python compilation as machine code).
(*) No anglocentrism intended. But in practice programming languages are always English-based.
(**) This so-called command is itself a program: a special kind of program whose job is to run other programs! You may be wondering what language the "python" program itself is written in. It can’t be written exclusively in Python, because then what would run it? This and more will be discussed in my next post.
My previous CS101 post explained what operating systems are, and what services they provide. This post offers a quick tour through some basic operating systems concepts, and explains in more detail how the OS provides certain services.
I’ll describe in turn how the OS manages each of the four basic resources: CPU, disk, memory and network. You’ve probably heard of some of these concepts before without knowing exactly what they refer to. So now you’ll know!
It’s been a while, but CS101 now resumes…
In a previous post I mentioned that modern computer systems consist of layer upon layer of increasingly complex building blocks. In this post I’ll talk briefly about the most basic of these building blocks: the operating system.
An operating system (OS) is a piece of software that provides a set of common services to all the other software running on a computer. These services primarily involve managing shared resources, notably CPU, disk, memory and network.
Two classes of OS dominate the desktop world: Microsoft Windows, and UNIX-like OSes. UNIX was an OS originally developed over 40 years ago at Bell Labs. It inspired a host of descendants, and its design lives on to this day, including in popular OSes such as Linux, FreeBSD and Mac OS X.
Why are OS services important? For two main reasons: interface and coordination.
You suffered admirably through my necessary but dense preliminary discussions of boolean logic, binary arithmetic and memory hierarchy. Now comes the payoff - a series of posts about things you’ve actually heard of. First up: software.
I’m sure you have at least a rough idea of what hardware and software are. In fact, if you’re reading this, you probably know a lot of people who write software for a living. But you may be wondering what it means to “write software” or “run a program”. Or you may still marvel at how it is that we can make a pile of electronic circuits into some magical device that can show us pictures of kittens on skateboards. Read on to find out more!
In my first two CS101 posts (here and here) we discussed the basic electronic circuits used to compute logic conditions and basic arithmetic. At the end of my last post I alluded to a third element we need before we can construct something worth calling a ‘computer’. That element is memory.
Memory gives us the ability to have the current computation be influenced by the result of past computation. It’s what allows us to compose sequences of basic operations to produce ad-hoc, complex computations. These sequences of instructions are called programs.
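A tiny sketch of that idea in Python: the variable total below plays the role of memory, carrying the result of past computation into the current step.

```python
# Each pass through the loop reads the result of the previous
# passes (stored in total) and builds on it -- computation
# influenced by the result of past computation.
total = 0
for n in range(1, 5):
    total = total + n  # current step depends on past results

print(total)  # 10
```

Without somewhere to store total between steps, each addition would stand alone and no running result could accumulate.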
In my debut CS101 post I discussed Boolean algebra, and how any Boolean function can be modeled by a real-life electronic circuit. In this post we’ll explore how this fact provides computers with the ability to do arithmetic.
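As a small illustration — written in Python here rather than in circuitry — consider a half-adder: a pair of Boolean functions, XOR and AND, that together add two single bits. This is exactly the Boolean-functions-as-circuits idea put to arithmetic use.

```python
# A half-adder: the sum bit is a XOR b, the carry bit is a AND b.
# These are the same Boolean functions a hardware half-adder computes.
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

# Enumerate the full truth table for the two input bits.
for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```

Chaining such adders together (with the carry feeding the next stage) yields circuits that add numbers of any size.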
To do arithmetic, you first need to be able to represent numbers. The most basic numbers are the natural numbers, the whole numbers we use for counting things. To write down a counting number we can simply represent it as a list of ‘things’ of that size. Say we pick the hash symbol # to represent a ‘thing’, then we can write down the counting numbers as #, ##, ###, ####, ##### and so on. Of course we’d also need some symbol, like 0, to represent “no things”.
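Here is that tally representation as a quick Python sketch (the function name is mine, chosen for illustration):

```python
def tally(n):
    """Write the counting number n as n copies of '#', or '0' for no things."""
    return "#" * n if n > 0 else "0"

print(tally(3))  # ###
print(tally(5))  # #####
print(tally(0))  # 0
```

The obvious drawback is that the representation grows as fast as the number itself — which is why positional notations like decimal and binary win out.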
Several people have suggested that I write a series of “CS101” posts explaining computer science and software engineering fundamentals, unrelated to anything in the news cycle. This idea appealed to my inner didact, so I’ll try a few posts along those lines and see what responses I get. Feel free to comment and/or suggest topics.
Hopefully, reading these posts will provide non-engineers in the tech industry with a better sense of what goes on under the hood, of what their engineering colleagues do, and of why software engineering is such a fascinating, intricate discipline. To any engineers reading this: you’ll have to forgive my simplification of many concepts, in the interest of clarity and brevity.
I’ll kick off, appropriately, by discussing the lowest level concept in Computer Science…