5 = 101
6 = 110
7 = 111
8 = 1000
9 = 1001
10 = 1010
However, the binary system has at least one major advantage over the decimal system: the arithmetic tables are extremely small and simple. For addition, there are only four entries: 0 + 0 = 0.
0 + 1 = 1.
1 + 0 = 1.
1 + 1 = 10.
(where "10" is binary for the number 2, not decimal for ten).
For multiplication they are even simpler, since there is no need for a carry; the table looks the same as it does in decimal: 0 × 0 = 0.
0 × 1 = 0.
1 × 0 = 0.
1 × 1 = 1.
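These two tables are small enough to generate by brute force. As a quick sketch in Python (a modern language, used here only for illustration), enumerating every combination of one-bit inputs reproduces both tables:

```python
# Enumerate the one-bit addition and multiplication tables.
# Python's bin() renders the sum in binary, matching "1 + 1 = 10" above.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {bin(a + b)[2:]:>2}   {a} x {b} = {a * b}")
```

Note that the only entry requiring a second binary digit is 1 + 1, which is exactly the carry discussed above.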
The statistician John Tukey coined the shortening of "binary digit" to "bit," and the information theorist Claude Shannon popularized it; the name was a play on words because it also was the smallest bit of information that could be represented: yes or no, on or off, one or zero, true or false.
Making automatic devices that have two states is much simpler than making devices that have ten states. The two states could be a wire in an electric circuit being at one of two voltages, or a mechanical lever being in one of two positions, for example. The design problem for building an automatic multiplier changes from having to somehow mimic the entire ten-by-ten table one learned in grade school, to this: If both inputs are 1, then the answer is 1. Otherwise, the answer is 0.
If the automatic computing device is mechanical, like the first Zuse computers, then this rule means something like letting a pin slip through two holes only if both are lined up on the right. If the device is electronic, like the Atanasoff-Berry Computer, then this rule means building a circuit that allows current to flow only if both switches in series are closed.
The early decimal machines, like the ENIAC and the Harvard Mark I, still used on-off states in their circuits but bundled them into subcircuits that represented the decimal digits 0 to 9. They were essentially electrical versions of the earlier mechanical calculators that used wheels to count in decimal. Both ENIAC and the Mark I were the size of a room, largely because of the inherent inefficiency of decimal representation, whereas the Zuse and Atanasoff designs were the size of a desk. It was not because the larger machines held more data; the ENIAC needed 18,000 vacuum tubes to compute with a maximum of twenty ten-decimal-digit numbers. The Atanasoff-Berry Computer needed only 600 vacuum tubes, yet it computed with a maximum of thirty fifteen-decimal-digit numbers: 50 percent more numbers, each with 50 percent greater precision.
Because very few people can look at a representation like 0001110001000111 and grasp its numerical value, all computers that use binary arithmetic also provide a way of accepting human-readable decimal representations as input and converting their answers to decimal output. That conversion is a small price to pay for an overwhelming simplification of the design of the computing device, so computers that use decimal arithmetic internally have disappeared from the computing landscape in favor of the binary arithmetic approach used by Atanasoff and Zuse.
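The conversion at the input and output boundaries is routine in any modern language. A small Python sketch, using the same sixteen-digit binary example from above:

```python
# Convert between human-readable decimal and the machine's binary form,
# as every binary computer must do at its input and output boundaries.
n = int("0001110001000111", 2)   # binary string -> decimal value
print(n)                          # 7239
print(format(n, "016b"))          # back to a 16-bit binary string
```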
Appendix C | Electronic Switches.
In a biological organism, a neuron might cause a muscle to contract or convey a sensation, but a neuron can also trigger other neurons. A brain is a collection of neurons, interconnected so the firing of one can cause or suppress the firing of many others. That capability is what permits organisms to exhibit such complex and intelligent behavior. For any artificial device to rival the computing abilities found in nature, it must similarly have control elements that trigger other control elements.
The control elements of early electronic computing progressed from relays to vacuum tubes (or valves, the British term) to transistors. Such devices, sometimes called "switching elements," mimic the behavior of neurons in that they are switches that can control many other switches.
When something closes a switch to complete a circuit, the current that flows can operate another switch, either to open it or close it. The small amount of current needed to operate a switching element can result in a lot more current either flowing or not flowing, so a switching element can operate several other switching elements, not just one. That is why switching elements are often referred to as "amplifiers": the power they control is larger than the power it takes to operate them. Switching elements let electronic devices perform binary arithmetic (see appendix B) because their on-off state can represent the binary digits 0 and 1.
Imagine a waterfall with a gate at the top that can dam up the flow of water. When the gate is open, water spills down with much more energy than that needed to operate the gate. That flow of water could operate mechanisms that open or close other gates of other waterfalls, creating quite a complicated series of events. If, say, water not flowing represents the binary digit 0 and water flowing represents a 1, then a set of waterfalls could represent numbers in binary. Changing the gates performs operations on those numbers. What electrical devices do is similar, but with a flow of electrons instead of a flow of water.
An everyday example of switches in combination is what electricians call a double throw switch. Most homes have at least one room with two entry points and separate wall switches by each entry that can operate the overhead light. Perhaps you have had the experience of entering a dark room at one door when someone else enters another door, and you both hit the wall switches at the same time, which keeps the room dark. The pair of wall switches performs the logical operation that computer scientists call an "exclusive OR," because the light turns on if one or the other switch flips, but not when both flip.
An example of the logical "AND" operation is an appliance plugged into a power strip that has a switch. You have to flip on the power strip switch AND the appliance switch for the appliance to work. Switches, properly coupled, can represent logical operations, and logical operations in turn can mimic arithmetic operations on binary digits.
Suppose we want to build a circuit that takes two binary inputs a and b, which can be 0 or 1, and produces their sum c in binary. If we allow two digits in the result, the addition table looks like this: a + b = c: 0 + 0 = 00.
0 + 1 = 01.
1 + 0 = 01.
1 + 1 = 10.
The right-hand digit of the sum c is the "exclusive OR" of the values of a and b. It has value 1 if a or b is 1, but not both. The left-hand digit is the "AND" of a and b. It is 1 only if a and b are both 1. So that says we could build a circuit where the inputs a and b flip switches to indicate their 0 or 1 value, and the two digits of c would immediately "light up" at the speed of electricity. Instead of operating lights, the flow of electricity in the result then operates yet other switches, so that the arithmetic can cascade to perform arithmetic on numbers with many (binary) digits.
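This one-digit adder (engineers call it a "half adder") is easy to express in code. The sketch below, in Python, uses the language's built-in AND (`&`) and exclusive-OR (`^`) operations on single bits; the function name is my own, not taken from any particular machine:

```python
# A half adder built only from logical operations, mirroring the text:
# the right-hand digit of the sum is the exclusive OR of the inputs,
# and the left-hand digit (the carry) is the AND of the inputs.
def half_adder(a: int, b: int) -> tuple[int, int]:
    carry = a & b        # left-hand digit: AND
    sum_bit = a ^ b      # right-hand digit: exclusive OR
    return carry, sum_bit

for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"{a} + {b} = {carry}{s}")
```

Feeding each carry into the next column's adder is the cascading described above, and it extends the scheme to numbers of any length.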
One type of switch that can be operated by electricity is called a "relay." A relay is a simple device, one made from ordinary iron and wire. If you wind the wire around a piece of iron and run electricity through the wire, the iron becomes an electromagnet and remains that way until the electricity is off. That electromagnet can pull on something else made of iron such that it mechanically closes or opens a switch. Compared to mechanical switching elements, relays are quite fast, but they are electromechanical and not electronic. In the early days of telephone technology, telephone companies used relays to route calls, so relays were a mass-manufactured part even by the 1930s. Howard Aiken used them in his Mark I computer. Konrad Zuse used inexpensive mechanical linkages in his first designs to represent binary numbers but later used electromechanical relays.
Relays are inexpensive but not very uniform in their response; a group of relays attached to a single source of electricity doesn't switch at the same time, which means a computer designer can only operate the system as fast as its slowest relay. What is worse is that relays are not very reliable, because on rare occasions the switch sticks in position after the electricity turns off. In a computer system that has many thousands of relays, the odds are good that at least one relay will fail.
An electronic switch moves only electrons, not masses of metal. The first device discovered that could accomplish this was the vacuum tube. The glowing tube in a neon sign has a low-pressure gas that conducts electricity between two electrodes, one at either end. If you create a nearly perfect vacuum, it is still possible to get electricity to flow, but the electrodes have to be closer together, and it helps to heat one of the electrodes to "boil" electrons out of it to jump across the vacuum. It is very much like the waterfall analogy, with electrons responding to the pull of electrical forces instead of water responding to the pull of gravity.
As in the waterfall analogy, it is possible to insert a "gate" that controls the flow. If the vacuum tube has a third electrode in the form of a screen placed between the other two, then its voltage can control how much current flows. Like the waterfall gate, closing a gate stops all flow; electrons and water will not flow "uphill" even when a huge downhill waits on the other side. Thus, a vacuum tube serves as a switching element. Because only electrons and not mechanical parts are moving, vacuum tubes can switch on and off in microseconds, and a vacuum tube is more reliable than a relay. They can still burn out, however, much like an incandescent lightbulb burns out.
In the early decades of electronic computing, vacuum tubes were by far the most costly components in a computer system. The invention of the transistor made it possible to replace vacuum tubes with small, solid-state devices. Today, the switching elements of computers are transistors that are "printed" (lithographed) by the billions onto a piece of silicon, so each transistor in an integrated circuit costs less than a millionth of a penny.
Appendix D | Differential Equations.
Whereas everyone learns arithmetic, geometry, and some algebra in a general education, the next conceptual climb is a steep one that only those pursuing technical degrees usually undertake: calculus. Since much of the motivation for the early computers was to solve problems arising in calculus, this appendix gives an overview of the applications that give rise to calculus problems and explains why they are so difficult to solve using pencil-and-paper methods alone.
In elementary school, children learn the counting numbers 1, 2, 3, ..., then fractions and decimals, then negative numbers and sets of numbers (like three-dimensional coordinates or statistical results). The concept that marks the transition to higher math is that calculus shows us how to manipulate not just numbers, but functions. A function is the operation performed on a set of numbers, like taking the cube root of x for all values of x between 3 and 7, or taking the cosine of x and adding 17 and then taking the square root of that whole expression. The actual value of x is not the focus, and neither is the numerical value that results from applying the function to any particular x. This can be disconcerting after a decade of math teachers demanding the answer to how operations change numbers into numbers. In calculus, the operations change functions into functions.
In the mid-1600s, two brilliant men independently invented the mathematics we now call calculus. Just as the question of who contributed most to the invention of the modern computer is the subject of argument, so is the question of who deserves the most credit for developing calculus. Isaac Newton in England and Gottfried Leibniz in Germany both made groundbreaking contributions but were not aware of each other's work.
Newton developed calculus as a way to describe physics in mathematical terms. For example, if a mathematical expression describes the position of an object as a function of time, what is the mathematical expression that describes the speed of the object? Determining speed from position means taking the "derivative" of the position function, also called "differentiating" the function. Differentiation is a calculus operation that has its own collection of memorized rules and methods just as elementary arithmetic has rules for multiplying many-digit numbers. The inverse question, that of determining the position if you know the speed, means taking the "integral" of the speed function, or "integrating" the function. In general terms, differential calculus is used to find the rate at which things change, and integral calculus is used to find how things accumulate, like the area or volume of objects described by functions.
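A small numerical sketch in Python makes the two operations concrete. It assumes a made-up position function x(t) = t², whose speed should be 2t; differencing adjacent samples approximates the derivative, and a running sum approximates the integral:

```python
# Sample a hypothetical position function of time; differencing the samples
# approximates the derivative (speed), and accumulating speed * dt
# approximates the integral (position regained).
dt = 0.001
ts = [i * dt for i in range(1001)]           # times 0.0 .. 1.0
pos = [t * t for t in ts]                    # position x(t) = t^2

# derivative: speed ~ (x(t+dt) - x(t)) / dt; the exact answer is 2t
speed = [(pos[i + 1] - pos[i]) / dt for i in range(len(pos) - 1)]
print(round(speed[500], 2))                  # ~1.0 at t = 0.5

# integral: position recovered by accumulating speed * dt
recovered = 0.0
for v in speed:
    recovered += v * dt
print(round(recovered, 2))                   # ~1.0, i.e. x(1) - x(0)
```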
Ordinary Differential Equations (ODEs).
As in the example mentioned above, differentiating the position function gives the speed function. Differentiating the speed function gives the acceleration function. A situation that often arises in physics is that an equation relates the position, the speed, and the acceleration. Since that equation involves differentials of a function (with respect to just one variable) as well as the function itself, it is an "ordinary" differential equation, or ODE.
Here are some examples of physical problems that are expressible as ODEs: A rocket projectile accelerates as it burns fuel, but it also becomes lighter with time so that it takes less fuel to make it go faster. It slows with air resistance, some of which is proportional to the speed and some of which is proportional to the speed squared. The physical laws lead to ODEs that mathematicians can express with just a few symbols but that are very difficult to solve. Such calculations were of great interest to the computer developers of the World War II era, when hitting a target with a missile was a challenge involving a lot of trial and error. The intimidating and blackboard-filling math for this problem may be the source of the expression "it's not rocket science," since rocket science of this sort really is difficult.
As an asteroid moves through the solar system, it accelerates under the gravitational forces of the sun, planets, and other masses. Thus, the second derivative of the position (its acceleration) relates to the position by an expression that sums all those forces, giving rise to an ODE. Solving that equation is of great interest if the question is whether the asteroid might strike the earth in the near future.
A pendulum, or a mass hanging on a spring, moves according to an ODE. The more the mass moves away from equilibrium, the greater the acceleration in the opposite direction. This situation arises so often in physics that the ODE for it has a name: the harmonic oscillator equation.
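The harmonic oscillator equation can be solved approximately by stepping through time, in the same discrete spirit as the early machines. This Python sketch uses invented constants (stiffness and mass both 1, so one full swing takes about 2π seconds):

```python
import math

# Time-stepped sketch of the harmonic oscillator ODE  x'' = -(k/m) x:
# the farther the mass is from equilibrium, the harder it is pulled back.
k, m = 1.0, 1.0          # spring stiffness and mass (assumed values)
x, v = 1.0, 0.0          # start displaced one unit, at rest
dt = 0.001
steps = int(math.pi / dt)            # roughly half a period when k/m = 1
for _ in range(steps):
    v += (-(k / m) * x) * dt         # acceleration points toward equilibrium
    x += v * dt
print(round(x, 2))                   # ~ -1.0: the mass has swung across
```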
Partial Differential Equations (PDEs).
Problems in physics are rarely so simple that they involve only a single equation involving one function that depends on one variable (like time). Consider the complexity of the airflow around an airplane wing, for example. Pressure, air speed, air density, and temperature all are functions of the position (in three dimensions) and the time, and those are just a few of the things that enter the equations that determine the lift and drag forces on the wing. Fundamental laws of physics say that the total energy, momentum, and matter cannot change with time. Each of those quantities is expressed with derivatives, and the fact that they are conserved creates a system of three PDEs that must be solved simultaneously. PDEs involve differentiation with respect to more than one variable, like both the time and the x direction.
One type of PDE problem is to find the steady state of a system. Suppose a room has a heater in one corner and an open window on the other side; what is the temperature at every point in the room? Mathematically, the problem is a PDE that involves differentiating the temperature in each of the three spatial dimensions. The temperature in the room at any point is the average of the temperatures in the immediate neighborhood of the point, except where the heater or window forces it to be a particular temperature. Another steady-state PDE problem is that of finding the shape of a trampoline when standing still somewhere on the mat. The depression of the trampoline at any point is the average of the depressions immediately around the point, except for the frame and under the feet of the person standing on it. For this kind of PDE, there is no need to consider time as a variable.
The other type of PDE involves time. Time-dependent PDEs can formulate how a physical situation evolves. In striking a note on a piano, for instance, a hammer hits the string, which causes complex sideways motion of the string as a function of both the time and the position along the length of the string. That problem is a close cousin to the harmonic oscillator problem described above for ODEs, except that both time and position are variables.
PDEs arise in many technical areas, not just physics. They can describe how populations of species grow and decline in an ecosystem; what the climate will be a century from now; how atoms bond to form molecules; how to design a suspension bridge to be strong with minimum materials; and how galaxies evolve over millions of years. They even find use in determining the best price for financial instruments, like put and call options. Economists use PDEs in macroeconomic theory. A famous remark by physicist Max Planck was that in his youth he considered going into economics but had to change to physics because the mathematics was too difficult.
Computers for Differential Equations.
Differential equations are easy to express but usually fiendishly difficult to solve. At the beginning of the twentieth century, a handful of simplified examples were all that mathematicians could point to as amenable to pencil-and-paper analysis. The analytical solutions might work if the problem geometry was a sphere or a square plate or other idealized shape, but there was little hope of finding a solution, say, to the PDE that expresses the mechanical stresses on something in the shape of a wrench.
The approach that works with broad generality is to pick so many sample points in the function that the problem becomes one of working with lists of ordinary numbers, not functions. Using sample points gives the "discrete form" of differential equations, which are sometimes called difference equations because simple subtraction suffices to estimate the rate of change with respect to increments of time and space. For the piano string example, imagine that instead of a string, there are point masses evenly distributed along the length of the string that follow the simple rules of how masses behave when tugged on. This eliminates the calculus and gets us back to elementary arithmetic, but with a catch: to use enough points that the sampling is accurate requires a very large number of additions, multiplications, subtractions, and divisions. The sampling approach, or "numerical analysis" method, seems to offer the possibility of solving just about any problem that can be expressed as an ODE or PDE, but it begs for a way to do all that arithmetic at speeds far beyond what a human can do.
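The piano-string version of this idea can be sketched in a few lines of Python. Every constant below is invented for illustration; the essential rule is that each point mass accelerates toward the average of its neighbors:

```python
# Discrete form of the piano-string example: replace the string with evenly
# spaced point masses and let each one accelerate toward the average of its
# neighbors.  All constants are illustrative, not from the text.
n = 50                        # number of sample points along the string
y = [0.0] * n                 # sideways displacement of each point
v = [0.0] * n                 # sideways speed of each point
y[n // 2] = 1.0               # the "hammer strike" near the middle
dt, c = 0.1, 1.0              # time step and a stiffness constant
for _ in range(200):
    for i in range(1, n - 1):             # the two ends stay clamped
        v[i] += c * (y[i - 1] - 2 * y[i] + y[i + 1]) * dt
    for i in range(1, n - 1):
        y[i] += v[i] * dt
print(round(max(y), 3))       # the initial spike has spread along the string
```

Even this toy version performs tens of thousands of arithmetic operations, which is precisely the catch described above.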
In the trampoline example, suppose the trampoline is square and sampled with a five-by-five grid. The amount the trampoline depresses at each grid point is approximately the average of the depression of the points around it. That leads to a set of twenty-five linear equations in twenty-five unknowns. Solving that type of system is what Atanasoff had in mind in designing his computer, since the total work to solve twenty-five equations is more than ten thousand calculations, intractable even with Marchant or Monroe desktop calculators. Atanasoff's design could solve such a system in about fifteen hours.
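For a flavor of the arithmetic involved, the sketch below solves the five-by-five trampoline grid by repeated averaging (Jacobi iteration) rather than by Atanasoff's elimination method; the position of the standing person and the depth of the depression are made-up values:

```python
# The five-by-five trampoline grid as twenty-five unknowns: each interior
# point's depression is the average of its four neighbors, the frame is
# fixed at zero, and one point is forced down where a person stands.
n = 5
d = [[0.0] * n for _ in range(n)]     # depression at each grid point
for _ in range(500):                  # repeat the averaging until it settles
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if (i, j) == (2, 2):
                new[i][j] = 1.0       # person standing at the center
            elif i in (0, n - 1) or j in (0, n - 1):
                new[i][j] = 0.0       # rigid frame at the edges
            else:
                new[i][j] = (d[i-1][j] + d[i+1][j]
                             + d[i][j-1] + d[i][j+1]) / 4
    d = new
print(round(d[1][2], 3))              # depression beside the standing spot
```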
The ENIAC design suggests that ODEs were its main target, and missile trajectory calculation is the most commonly cited application for that computer. The ENIAC could store only twenty variables, but it could apply changes to them very quickly to simulate how a physical system might evolve through time, with the time sampled in discrete steps. Thus, the ODEs that describe a missile become a repeated procedure; at time 0, the missile is at a given position on the ground and experiences a given thrust. Arithmetic says how it accelerates as a function of time, if we ignore the fact that its mass is decreasing as it burns fuel and that gravity is pulling it into a curved path back toward the ground. At time 0.01 second later, the velocity and thrust and position and mass are sampled again and used to compute what it will do at time 0.02 second, and so on. Since the ENIAC could do thousands of calculations per second on its small set of variables, the calculation of the complete flight path of the missile could finish in reasonable time.
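The repeated procedure can be sketched in Python. All of the constants (thrust, mass, burn rate, drag coefficients) are invented for illustration; only the shape of the calculation follows the description above:

```python
# Time-stepped vertical climb in the spirit of the ENIAC procedure:
# sample speed, thrust, and mass every 0.01 second and update.
dt = 0.01
v, h = 0.0, 0.0                       # vertical speed and height
mass, burn_rate, thrust = 100.0, 0.5, 2000.0
for step in range(1000):              # 10 seconds of flight in 0.01 s steps
    drag = 0.1 * v + 0.001 * v * v    # the two resistance terms from the text
    accel = (thrust - drag) / mass - 9.8
    v += accel * dt
    h += v * dt
    mass -= burn_rate * dt            # the rocket gets lighter as fuel burns
print(round(h, 1), round(v, 1))       # height and speed after ten seconds
```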
The progress in computer technology in the last seventy years has increased speed and storage by a factor of more than a trillion. This allows us to obtain close, high-resolution approximations of the solutions to a vast range of differential equations, not just the handful that can be solved with pencil-and-paper a.n.a.lytical methods.
* Units of area have been translated into English units, for readability.
Notes.
Introduction.
1 MIT Inventor Archive: "Inventor of the Week Archive."
2 "I had reached the Mississippi River": Mollenhoff, p. 157.
3 "a practical limitation on the size of systems": Alt, p. 283.
Chapter One.
1 "to teach such branches of learning": Title 7, U.S. Code, Sec. 304. 2004 ed. Legal Information Institute, Cornell University, usc_sec_07_00000304----000-.html.
2 "Hurrying toward his destination": Burton, p. 61. Tammara Burton is Atanasoff"s granddaughter and her book is my main source of information about his childhood and personal life.
3 "for their fundamental theoretical investigations": Nobel Prize Citation, 1977.
4 "If you had been here in the first half of the semester": Mollenhoff, p. 26.
5 "There were perhaps twenty-five graduate students in the class": Ibid.
6 "He appeared to lack the patience necessary": Hodges, p. 38.
7 "It was all the same thing to him": Ibid., p. 9.