PART 2: PROTO-GEEKS AND DIGITAL PIONEERS
"Consider a future device for individual use, which is a sort of mechanized private file and library. . .It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. . . "
- Vannevar Bush, 1945
Nothing of any real significance happened in the development of computers for the next thirty years or so. Inventors were too busy thinking up telephones, automobiles, airplanes, and radios to pay much attention to anything as unexciting to the general public as calculating machines. We did get a few accurate and relatively efficient mechanical adding machines to ease the tedium of basic arithmetic, but no one had yet tried to do the job electronically. Of course, the science of electronics was still in its infancy in the twenties and thirties.
In 1927, the brilliant, visionary scientist and engineer, Vannevar Bush, designed a mechanical analog calculator that could solve simple differential equations. The Product Integraph, as he called it, was built by his students at MIT. In 1932, he designed and built a larger version of his calculating machine, called a Differential Analyzer <<PIC>>, which performed calculus operations. While it was driven by electrical motors, its actual operation was still mechanical. A few years later, he built an even larger machine with electro-mechanically shifted gears. This was definitely not a desk-top calculator. The later machine weighed in at around 100 tons and contained 2,000 vacuum tubes, 150 motors, and several hundred miles of wire.
In 1945, Vannevar Bush wrote an incredibly insightful and prophetic essay on the future of technology called "As We May Think," which was published in the July 1945 issue of The Atlantic Monthly. You'll find it in the appendix of this book. Go read it and be amazed. [SIDEBAR 1]
British mathematician/philosopher Alan Turing was another important voice in the development of the concepts of computer operation. While the machine that made him famous existed only on paper, his ideas had a profound influence on the proto-geeks who built real ones. In 1936, shortly before beginning graduate work at Princeton, he wrote a paper entitled On Computable Numbers. In it, he described a hypothetical machine called, appropriately, a Turing Machine, that performed logic operations and could read and write symbols on an infinite paper tape. With the exception of the problem of finding that infinite paper tape, his basic concept was pretty simple - that symbols representing instructions are no different from symbols representing numbers. Logical operations could be expressed with symbols, just as in mathematics. A series of these symbols or instructions would be given to the machine, and it would perform the mechanical task of interpreting the instructions and carrying them out. Any number of these machines could perform the same task, given the same set of symbols. It seems obvious today, but in 1936 it was an entirely new concept, and it started the proto-geeks in the right direction.
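If you want to see just how mechanical Turing's scheme really is, here's a bare-bones sketch of my own in C - nothing Turing ever wrote, and with a small fixed array standing in for that troublesome infinite tape:

    /* A bare-bones Turing machine in C -- my own sketch, not Turing's.
       A finite array stands in for the infinite paper tape.  The whole
       machine is a set of rules: given (current state, symbol under the
       head), write a symbol, move the head, and change state.  This
       particular rule set just flips 0s and 1s until it hits a blank,
       then halts. */
    #include <stdio.h>

    enum state { SCAN, HALT };

    int main(void) {
        char tape[] = "1011__________";   /* "infinite" tape, more or less */
        int head = 0;                     /* position of the read/write head */
        enum state st = SCAN;

        while (st != HALT) {
            char sym = tape[head];             /* 1. read a symbol      */
            if (st == SCAN && sym == '0') {
                tape[head] = '1'; head++;      /* 2. write, 3. move     */
            } else if (st == SCAN && sym == '1') {
                tape[head] = '0'; head++;
            } else {
                st = HALT;                     /* 4. change state       */
            }
        }
        printf("tape after halting: %s\n", tape);  /* prints 0100__________ */
        return 0;
    }

Swap in a bigger table of rules and, as Turing argued, that same read-write-move machinery can carry out any computation at all.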
If you hang around geeks, sooner or later you're going to hear someone mention the Turing Test. No, this isn't something you have to take in order to prove you're a seasoned traveler - it's a way of determining whether or not a machine can be said to think. Here's how it worked: The person or persons performing the test would carry on a dialogue with a human and a computer, both of which would be out of sight in another room. If the tester was unable to tell which responses were from the human and which were from the machine, the machine would be assumed to be an intelligent, thinking entity, no less than the human. Here's the way Turing described it in his famous 1950 article entitled, "Computing Machinery and Intelligence":
"It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B . . . We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original "Can machines think?"
Alan Turing was one of the few tragic figures in computer history. He returned to England in 1938 and during the war worked with the government as a code breaker, leading the team that successfully cracked Germany's famous Enigma Code, an important factor in the defeat of the Nazis. After the war he became involved with the design of a computer at the National Physical Laboratory, and continued his research at the University of Manchester. But his brilliant career ended with his suicide in 1954. In 1952, he was arrested for the crime of being openly gay. Found guilty of "gross indecency" and forced to undergo hormone "treatments" to cure him of his "sickness", Alan Turing ultimately found his cure in a cyanide-laced apple.
Another important event of the thirties was the rediscovery of Boolean Algebra and its application to the new science of computing. Boolean Algebra was a great concept, but it languished in an academic obscurity known only to scholars of philosophy and symbolic logic until Claude Shannon published a paper in 1938 based on his MIT master's thesis. His paper described how Boole's concepts of TRUE and FALSE could be used to represent the actions of switches in electronic circuits. This is one of those things about which you don't need to know the exact details other than the fact that it gave early computer researchers the concept they needed to proceed with the development of digital electronic circuits. Of course, if you want to be totally convincing as a FauxGeek, you can learn the details of how Boolean Algebra is used in computers by studying the explanation in the box on the facing page. I've tried to make it as simple as possible, but you'll still have to think a little. Don't worry, it won't hurt much. [SIDEBAR 2: how Boolean Algebra is used in computer theory - a complicated Boolean expression in English, translated into Boolean logic, then carried out by a computer using NAND gates.] Claude Shannon was also one of the pioneers of geek eccentricity and was known for riding a unicycle through the halls at Bell Laboratories while juggling. He's also noted in the annals of science for his later work as the inventor of the rocket-powered Frisbee.
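Until you get to that box, here's a small taste of the idea - a sketch of my own in C, with the alarm rule and all the gate names invented for illustration. It takes an everyday rule stated in English, rewrites it as a Boolean expression, and then computes it using nothing but NAND gates:

    /* Boolean logic at work -- my own sketch in C, not the book's sidebar.
       English:  "Sound the alarm if the door is open AND the system is
                  armed, OR if the panic button is pressed."
       Boolean:  alarm = (door AND armed) OR panic
       Every gate below is built from NAND alone; NAND is "universal,"
       meaning any Boolean expression can be wired up from nothing else. */
    #include <stdio.h>

    static int nand_gate(int a, int b) { return !(a && b); }

    /* the familiar gates, each made only of NANDs */
    static int not_gate(int a)        { return nand_gate(a, a); }
    static int and_gate(int a, int b) { return not_gate(nand_gate(a, b)); }
    static int or_gate(int a, int b)  { return nand_gate(not_gate(a), not_gate(b)); }

    int main(void) {
        /* run every combination of inputs, like reading predicates off
           a stack of punched cards, and print the machine's verdict */
        for (int door = 0; door <= 1; door++)
            for (int armed = 0; armed <= 1; armed++)
                for (int panic = 0; panic <= 1; panic++)
                    printf("door=%d armed=%d panic=%d -> alarm=%d\n",
                           door, armed, panic,
                           or_gate(and_gate(door, armed), panic));
        return 0;
    }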
Among the first experimental computers of the thirties was the digital calculating machine built in 1937 by George Robert Stibitz, another of the smarties at Bell Laboratories. It was constructed with relays, flashlight bulbs and bits of metal from old tin cans - Stibitz called it the Model K, because he built it on his kitchen table. He was also responsible for the first remote operation of a computer. In 1940 he gave an astounding demonstration at a conference at Dartmouth College in New Hampshire, when he connected a teletype printer via telephone to his Complex Number Calculator back at Bell Labs in New York. He entered problems on the teleprinter and a few moments later the answers were sent back by his computer and printed out, to the amazement of his associates.
Before we get to the first true digital electronic computer, let's take a quick look at the two guys who began the tradition of high-tech companies being started by grad student geeks in family garages. Bill Hewlett and Dave Packard were friends and classmates in the graduate Electrical Engineering program at Stanford University. They both had dreams of starting a company to develop new inventions, and when Bill developed an audio oscillator as part of his degree requirements, they set up shop in Dave's garage after graduation in 1939. Their first contract for the oscillators came from Walt Disney for use in the production of Fantasia, and Hewlett-Packard was on the road to becoming one of the largest computer companies in the world.
The man now credited with the invention of the first digital electronic computer went unrecognized until a lawsuit between the Sperry Rand Corporation and Honeywell brought John Atanasoff's name to light in 1967. Until then, the first electronic computer was generally considered to be the Colossus, built in England in 1943 by engineer Tommy Flowers for the Bletchley Park codebreaking group headed by mathematician M.H.A. Newman. It was successfully used during W.W.II to break the Germans' Lorenz teleprinter cipher (Turing's electromechanical Bombe machines, not Colossus, did the Enigma cracking) - work which decisively altered the course of the war in favor of the good guys. The second true electronic computer was thought to be the Electronic Numerical Integrator and Computer, or ENIAC, built by John W. Mauchly and J. Presper Eckert at the University of Pennsylvania in 1945. This machine was also responsible for exposing geeks to the military fetish for acronyms, which they adopted with enthusiasm.
But it was an obscure physics professor, John V. Atanasoff, who, together with a graduate student, Clifford E. Berry, actually built the first electronic computer, the Atanasoff-Berry Computer or ABC, between 1937 and 1942. How his contributions were revealed is an interesting story in itself. Sperry Rand had bought the patent for the ENIAC computer and was charging royalties to other manufacturers. Honeywell decided to stop paying royalties and was sued by Sperry Rand. In keeping with the traditions of high-tech litigation, Honeywell then sued Sperry Rand for violating anti-trust regulations. (Sound familiar? Can you say "Microsoft"?) Anyway, in the course of preparing the case, Honeywell's lawyer weasels came across mention of Atanasoff's ABC computer. They tracked him down, and when Atanasoff compared his earlier work with the ENIAC, he realized that the ENIAC patent was derived from his ABC and (surprise!) from information he had shared with John Mauchly during a visit in 1941. When they presented their evidence, the ENIAC patent was declared invalid in 1973 and Atanasoff's name went down in history as the inventor of the electronic computer. Mauchly, of course, denied being influenced in any way by Atanasoff's work and continued to do so until his death, at which time the ongoing denial was taken up by his heirs.
It would be entirely too geeky, not to mention boring, to go into a lengthy description of Atanasoff's work, but you should know a little more about his specific contributions in case anyone asks. Besides, the story of Atanasoff's computer is also a good lesson in how real geniuses come up with their ideas.
John Atanasoff began thinking about finding a simpler way to perform long, complex calculations when he was a student at the University of Wisconsin. His doctoral thesis on the dielectric constant of helium required him to spend weeks of tedious number-crunching with a mechanical desk calculator and, like any good student, he dreamed of an easier way. After he graduated in 1930 and began teaching at Iowa State College, he continued to ponder the problem. By 1937 he had worked out some of the general principles - like separating the computing functions from the memory or storage functions, and using a mathematical base other than our conventional base 10 or decimal system. But that's as far as he got. It wasn't yet enough to build a computer on, and at that point he was bogged down, unable to figure out where to go next.
One winter night when he was particularly frustrated, he threw down his work, got in his car and went for a drive to get his mind off his problems. He put the pedal to the metal, spaced out behind the wheel and a few hours later found himself 200 miles away in Illinois, where he stopped at a roadhouse to warm up with a stiff drink. One drink led to another, and before long he was warm, happy and considerably less frustrated. It was then that his mind turned to the design of his computer and those little light bulbs started to appear over his head. His first insight was to base his design on electronic rather than mechanical devices - something that had never been done before. His second great insight was to use a binary or base 2 system of numbers (if you really want to know about the binary system, you can find an almost understandable explanation in the chapter called Bits, Bytes and Binary Blather) and to have his machine operate on rules of logic rather than direct counting. Before he left the bar, he had also worked out a way of using the positive and negative charges in capacitors to represent the 0's and 1's that make up the binary system. This line of thinking led to what was to be his most significant achievement: the development of an electronic switch known as a logic circuit.
Now don't you think that achievement should put an end to the weenie arguments against drinking?
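And if you'd like to see those barroom insights in action, here's a quick sketch of my own in C (the numbers are arbitrary): two values stored as strings of base 2 digits - each digit a stand-in for a charged or uncharged capacitor - and added together not by counting, but by rules of logic:

    /* My own sketch, not Atanasoff's circuitry: numbers held as base-2
       digits -- like charged/uncharged capacitors -- and arithmetic done
       by rules of logic rather than by counting.  Each digit of the sum
       comes from a "full adder":
           sum = a XOR b XOR carry,  new carry = majority(a, b, carry). */
    #include <stdio.h>

    #define BITS 8

    int main(void) {
        int a = 22, b = 19;            /* the numbers to add           */
        int carry = 0, result = 0;

        for (int i = 0; i < BITS; i++) {
            int abit = (a >> i) & 1;   /* i-th binary digit of a       */
            int bbit = (b >> i) & 1;   /* i-th binary digit of b       */
            int sum  = abit ^ bbit ^ carry;        /* logic, not counting */
            carry    = (abit & bbit) | (carry & (abit ^ bbit));
            result  |= sum << i;       /* store the digit, like charging
                                          one capacitor on the drum    */
        }
        printf("%d + %d = %d\n", a, b, result);   /* prints 22 + 19 = 41 */
        return 0;
    }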
Atanasoff got together with Clifford Berry, a graduate student who was also obsessed with the idea of an electronic computer, and by the fall of 1939, they had their first prototype. It wasn't much to look at, and wasn't particularly practical either, since it was considerably slower than working things out the old-fashioned way with pencil and paper, but it proved that the principles were correct. Immediately they began to work on what was to become the Atanasoff-Berry Computer, and by 1941 they had a functioning machine. It bore little resemblance to what we think of as a computer, because it still relied on rotating mechanical parts, had no keyboard or monitor, and was designed to perform a single computing task - solving systems of simultaneous linear equations, up to twenty-nine of them at a time. In spite of its limited capability, Atanasoff "knew we could build a machine that could do almost anything in the way of computing".
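For the terminally curious, here's one more sketch of my own in C, showing the kind of problem the ABC was built to chew on, solved by garden-variety Gaussian elimination. The ABC itself ground through the same sort of variable-elimination in base 2, a pair of equations at a time; this little 3 x 3 system is just an invented example:

    /* The ABC's one trick: simultaneous linear equations.  This sketch
       (mine, not Atanasoff's method in detail) solves a 3x3 system by
       plain Gaussian elimination:
           2x +  y -  z =  8
          -3x -  y + 2z = -11
          -2x +  y + 2z = -3        (solution: x=2, y=3, z=-1)   */
    #include <stdio.h>

    #define N 3

    int main(void) {
        double m[N][N + 1] = {        /* augmented matrix [A | b] */
            { 2,  1, -1,   8},
            {-3, -1,  2, -11},
            {-2,  1,  2,  -3},
        };

        /* forward elimination: zero out everything below the diagonal */
        for (int p = 0; p < N; p++)
            for (int r = p + 1; r < N; r++) {
                double f = m[r][p] / m[p][p];
                for (int c = p; c <= N; c++)
                    m[r][c] -= f * m[p][c];
            }

        /* back substitution: solve from the last equation upward */
        double x[N];
        for (int r = N - 1; r >= 0; r--) {
            x[r] = m[r][N];
            for (int c = r + 1; c < N; c++)
                x[r] -= m[r][c] * x[c];
            x[r] /= m[r][r];
        }

        printf("x=%g y=%g z=%g\n", x[0], x[1], x[2]);  /* x=2 y=3 z=-1 */
        return 0;
    }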
Unfortunately, their work was brought to a halt by the outbreak of W.W.II and the original machine was ultimately dismantled and cannibalized for parts. Atanasoff and Berry went on to other physics projects and the ABC computer, which they never patented, was forgotten until the Honeywell lawsuit finally garnered them the recognition they deserved. It's interesting to note in closing that the U.S. government dumped over $500,000 into the development of the ENIAC, but Atanasoff and Berry managed to develop the theories upon which it was based for about $6,000. There's a lesson in there somewhere. We'll take a look at the ENIAC and see what the government got for its half-million bucks, but first we'll consider some of the other computer developments that were instigated by the war that brought Atanasoff's work to a halt.