9.1 Computer Usability Problems

No one saw [them] . . . coming. No one, that is, in my field, writing science fictions. Oh, a few novels were written about these Big Brains, a few New Yorker cartoons were drawn showing those immense electric craniums that needed whole warehouses to THINK in. But no one in all of future writing foresaw those big brutes dieted down to fingernail earplug size so you could shove Moby Dick in one ear and pull Job and Ecclesiastes out the other.

Ray Bradbury, “Three Bright Mice; and More, on the Run!”

Small Is Beautiful.

 

The history of computers would have been quite different if Atanasoff had publicized his electronic calculating machine. Not only would he have been recognized as one of the inventors of the computer, but we might have had small and inexpensive computers from the very beginning. That’s not what happened, of course. It was ENIAC, not Atanasoff’s ABC, that set the pattern, and most of the computers of the 1950s and 1960s were built in ENIAC’s giant mold. Only the largest institutions – universities, corporations, research institutes, and government agencies – could afford them. If the medium is the message, then these machines reflected the darker side of our institutions. Big and costly, they were the very symbols of entrenched and centralized power – arrogant, haughty, impersonal, inefficient, and inaccessible.

One of the most annoying problems, from the user’s point of view, was the computer’s inaccessibility. “For the first two decades of the existence of the high-speed computer,” wrote John Kemeny, the mathematician and co-author of the BASIC programming language,

machines were so scarce and so expensive that man approached the computer the way an ancient Greek approached an oracle. . . . A man submitted his request . . . and then waited patiently until it was convenient for the machine to work out the problem. Only specially selected acolytes were allowed to have direct communications with the computer. In the original mode of using computers, known as batch processing, hundreds of computer requests were collected by the staff of a computation center and then fed to the machine in a batch.

Actually, batch processing didn’t come into being until the mid-1950s, roughly a decade after the invention of ENIAC. In the first computing centers – those housing ENIAC, EDSAC, the Manchester University Mark I, the IAS machine, UNIVAC, and so on – users generally ran their own programs. The fraternity of users was small enough to give everyone (everyone, that is, with a bona fide reason to employ the machines) direct access to the oracles.

But when multimillion-dollar computing centers opened up in universities, corporations, institutes, and government agencies all over the world, another approach was necessary. You couldn’t let just anyone push the buttons and flip the switches; the flow of material into and out of the machines had to be combined and processed in efficient batches. Consequently, computers were sequestered in air-conditioned, glassed-in rooms, and a corps of professional computer operators arose to service them.

Therefore, if you, the user, wanted to appeal to the oracle, the first thing you had to do was record your program on punch cards, using any of the punch card machines at the computing center. Then you handed the deck of cards to an operator – usually found sitting at a desk behind a window – and picked up your printout later in the day or, if the computer was busy or had broken down, later in the week. Because most programs contained several errors, ranging from a misplaced comma to a chain of ineffective commands, your first printout was usually an abbreviated one, containing such cryptic messages as “syntax error in line 3.” Perhaps you had used an inappropriate instruction. Or perhaps you had misplaced a parenthesis. (If you couldn’t spot the mistake, the operators, busy with their own work, weren’t likely to help you.) So you would punch a new card for line 3, resubmit the deck, and return for the second printout. This time, there was an error in line 20 . . .

The situation got worse as more people started using computers. Scientists and engineers began experimenting with computers that could serve many people simultaneously. However, the technical problems were formidable. With one central processor and control unit, a von Neumann computer can execute only one task at a time. (In an effort to break this computational bottleneck, researchers are experimenting with computers equipped with multiple processors and control units. But “non-von Neumann” computers are difficult to program, and progress has been slow.) How, then, could you get a computer to run a scientific program for one user, analyze a financial plan for another, and play a game of chess with a third?

The solution, developed at MIT in the late 1950s and early 1960s, was a brilliant idea called time-sharing. It took advantage of a computer’s forte, speed. A time-sharing system consists of a computer linked to any number of terminals (a teletype or a monitor with a keyboard), each of which services a user. As each user fiddles with his or her own terminal, playing chess, running a scientific program, analyzing a financial plan – whatever – the computer switches from one terminal to another at a very high clip, executing a small portion of each user’s program at every step. Since the machine can perform a single operation in a few millionths of a second, the users – slow-moving humans that they are – are none the wiser, and the computer seems to be giving everyone its undivided attention. Actually, it is being as single-minded as ever.
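
The trick is easy to demonstrate in miniature. What follows is a minimal sketch of the round-robin idea in Python – a toy model, not the scheduler of any historical system, and the names user_program and round_robin are illustrative assumptions. Each user’s job is written as a generator that pauses after every small step, and the scheduler gives each job one brief turn in strict rotation:

    # Toy round-robin scheduler: each "program" is a Python generator
    # that yields after every small step, handing control back to the
    # scheduler. One yield stands in for one time slice.
    from collections import deque

    def user_program(name, steps):
        """Stand-in for one user's job, runnable in small interruptible steps."""
        for i in range(1, steps + 1):
            print(f"{name}: step {i}")
            yield  # give the machine back to the scheduler

    def round_robin(programs):
        """Run one step of each program per turn until all are finished."""
        queue = deque(programs)
        while queue:
            program = queue.popleft()
            try:
                next(program)          # execute one time slice
                queue.append(program)  # unfinished: back to the end of the line
            except StopIteration:
                pass                   # this user's job is done

    round_robin([
        user_program("chess game", 3),
        user_program("science program", 2),
        user_program("financial plan", 3),
    ])

Run it and the three jobs’ output comes out interleaved, one step apiece per rotation – the same illusion of undivided attention described above, minus the microsecond speeds that make the illusion convincing.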

A time-sharing computer isn’t the same thing as a multiprocessing computer like Whirlwind, and the distinction is important. Although Whirlwind could support many terminals and perform many tasks at the same time, it couldn’t run different programs simultaneously. It could carry out only certain predetermined tasks – such as tracking aircraft and plotting interception courses – that had been accounted for by the machine’s internal programs. However, time-sharing computers are, like Whirlwind, real-time systems, capable of responding instantaneously to the actions of a human or a machine.

The advent of time-sharing computers led to the establishment of commercial time-sharing services. A customer – say, a high school or an engineering firm – would hook up its terminals to a time-sharing computer via the phone lines and buy “time” on the machine, paying for access by the minute. By the late 1960s, time-sharing was the fastest-growing segment of the computer industry. It also seemed to be the wave of the future. There was a good deal of talk, by people who should have known better, of the inevitability of “information utilities” – vast centralized data banks that would be a cross between the library and the phone company. However, mainframes, or big computers, are expensive to buy and maintain, and although time-sharing services are still an important part of the computer business, they lost ground to another technological innovation, the minicomputer.


[Photo: Apple II computers roll off a production line in Carrollton, Texas.]