Computers in Fiction

By H. Bruce Franklin

Copyright 2000 by H. Bruce Franklin; all rights reserved.

[This essay originally appeared in Encyclopedia of Computer Science (Nature Publishing Group, 2000)]

Fictional computers comprise a broad range of imagined and feigned devices in literature, exhibits, and film. Overlapping with fictional automata and robots, they play a significant role in the cultural matrix of actual computers.

Prehistory

To formulate a coherent history of computers in fiction, the best place to begin may be Jonathan Swift’s Gulliver’s Travels, published in 1726. Swift presents an inventor who has constructed a gigantic machine designed to allow “the most ignorant Person” to “write Books in Philosophy, Poetry, Politicks, Law, Mathematicks and Theology.” This “Engine” contains myriad “Bits” crammed with all the words of a language, “all linked together by slender Wires” that can be turned by cranks, thus generating all possible linguistic combinations. Squads of scribes produce hard copy by recording any sequence of words that seems to make sense.

Although Swift describes this device in minute detail and even includes a diagram of its design, he is of course actually satirizing the more farfetched pretensions of the science and technology of his period. Exemplifying what we might call computer fetishism, the kooky inventor and his society value the random text produced by this marvelous machine more than human thoughts.

The eighteenth-century fascination with watches and clocks combined with the dizzying technological advances of the Industrial Revolution to engender a host of ingenious mechanical automata, often designed to create the illusion of independent life or thought. The most famous of these was the automaton chess player constructed by Wolfgang von Kempelen in 1770, which toured Europe and the United States through the 1830s, beating many skilled chess players, including Napoleon. In 1820, British inventor Robert Willis demonstrated that a human chess player was concealed inside the machine; his widely reprinted pamphlet argued that a machine “cannot be made to vary its operations so as to meet the ever-varying circumstances of a game of chess. This is the province of the intellect alone.” But audiences awed by the growing power of actual automated machinery during industrialization remained fascinated by shows displaying this evident triumph of machine thought. In 1836, Edgar Allan Poe published his own plagiarized version of Willis’s proof, thus convincing many then and still today of the brilliant powers of his own unaided intellect. Ironically, part of Poe’s argument reiterated the widespread belief in the infallibility of machines: “The Automaton does not invariably win the game. Were the machine a pure machine, this would not be the case–it would always win.” Ambrose Bierce’s 1899 story “Moxon’s Master” more accurately anticipated the chess capabilities of modern computers, though fortunately none so far is as bad a loser as Bierce’s chess-playing computer, which murders its inventor for defeating it.

The most influential fictional automaton of the early nineteenth century was Olympia, who dances perfectly, always focuses her gaze adoringly on her lover, and exclaims “Oh, Oh!” in response to his every utterance in E. T. A. Hoffmann’s “The Sandman” (1816). There is a direct line from Olympia through the metal woman built by the evil scientist in Fritz Lang’s film Metropolis (1927) to the perfectly sexy and obedient women constructed by the computer scientists to replace their suburban housewives in Ira Levin’s novel (1972) and film (1975) The Stepford Wives.

In the lineage of automaton women is one that may claim to be the first fictional computer with stored programs. This is the title character of M. L. Campbell’s “The Automatic Maid-of-All-Work” (1893), whose built-in programs nevertheless require the operator to switch wires to activate each of its domestic tasks.

A small female automaton, as well as the punch-card-controlled Jacquard loom, also influenced Charles Babbage’s designs for his calculating machines, the “Difference Engine” and the Analytical Engine. In their brilliant 1990 novel The Difference Engine, William Gibson and Bruce Sterling explore the alternative history that might have unfolded had Babbage’s invention taken hold, with gigantic computing machines transforming global politics, economics, and culture in the nineteenth century–and a colossal AI emerging in the 1990s.

The tendency to conceive of thinking machines as humanoid in appearance was dominant until the advent of those first huge and blatantly non-humanoid actual digital computers of the 1940s. For example, even though American author Edward Page Mitchell was aware of the dimensions of the Difference Engine, his 1879 story “The Ablest Man in the World” envisions a computer explicitly superior to Babbage’s being inserted into the head of a man (transforming him from an idiot into a genius who runs the Russian empire).

Other fiction did project thinking machines more closely resembling the increasingly automated mechanisms of evolving industrialism. For example, George Parsons Lathrop in his 1879 story “In the Deep of Time” imagines in the 22nd century vast automated factories run by a single person at a keyboard. Jules Verne prophesied in his 1863 manuscript Paris in the Twentieth Century (published in 1996) giant “calculating machines” resembling “huge pianos” operated by a “keyboard” and hooked to “facsimile” machines; banks use the most advanced models of these computers to coordinate the activities of this hypercapitalist future.

Fiction about the evolution of automation tended to be pessimistic. Fears of machines evolving until they replace people appeared as early as “The Book of the Machines” in Samuel Butler’s 1872 novel Erewhon and soon became commonplace. Early 20th-century examples include: Michael Williams’ “The Mind Machine” (1919), in which living computers take over the cities but eventually disintegrate; Edmond Hamilton’s “The Metal Giants” (1926) featuring an atom-powered metal brain that constructs a rampaging army of 300-foot-tall robots; S. Fowler Wright’s “Automata” (1929), in which machines take over all human activities and eventually eliminate our species; and “The Brain” (1930), a play by Lionel Britton about an enormous mechanical brain that ends up as the only form of intelligence left on a doomed Earth fifty million years hence.

The first masterpiece in this genre is E. M. Forster’s dystopian novella “The Machine Stops” (1909), about a future Earth where all decisions are made by the global central machine that caters to every physical human need (except sex) through its automated appendages. Living in hexagonal cells within the underground mechanical environment, people rarely come into contact with each other because they communicate as individuals and chat groups through the machine’s audio and visual internet. The machine even administers automated health care. When the machine breaks down, civilization ends, but a few survivors are left to begin again in the natural world on the planet’s surface.

By the 1930s, fiction about human overdependence on computers or the replacement of humans by intelligent machines was quite commonplace. Influential science-fiction editor John W. Campbell wrote several stories on this theme, including “The Last Evolution” (1932), “Twilight” (1934), and “The Machine” (1935). A classic story in this vein is Jack Williamson’s “With Folded Hands” (1947).

Computers as Robots

Although the word “robot” was coined by Karel Capek in R.U.R. (Rossum’s Universal Robots), the robots in this 1920 play are organic androids, not computerized mechanical men. Popular culture, however, soon appropriated the term robot to identify those hordes of walking, talking, thinking machines that would become a staple of science fiction in print, film, and exhibits–such as Elektro, the talking mechanical man at the 1939 New York World’s Fair.

The standard fictional computer thus became the brain of a robot, usually conceptualized as a mechanical man–or woman–made of metal. The archetype of this figure was Tik-Tok, the copper man of Ozma of Oz (1907) and Tik-Tok of Oz (1914), L. Frank Baum’s sequels to The Wonderful Wizard of Oz. Fitted with “Smith and Tinker’s Improved Combination Steel Brains” and wearing a printed card labeling him as a “Patent Double Action, Extra-Responsive, Thought-Creating, Perfect Talking Mechanical Man,” Tik-Tok is the forerunner of all those conflicted humanoid thinking machines extending right through the twentieth century. When asked whether he is alive, Tik-Tok responds: “No, I am only a machine. But I can think and speak and act.”

The typical science-fiction robots leapfrogged directly to artificial intelligence, but these clanking AIs often thought and behaved pretty much like humans. For example, in Harl Vincent’s “Rex” (1934) the title character uses his “marvelous mechanical brain” to create a robot dictatorship, while in John Wyndham’s “The Lost Machine” (1932) even the robot visitor made by Martians leaves a long suicide note that could have been written by an alienated teenager.

By far the most influential shaper of this fiction was Isaac Asimov, who conceived of all-purpose mechanical robots with “positronic brains” governed by his Three Laws of Robotics, first articulated in his 1942 story “Runaround.” According to these “Laws,” all robots’ brains were preprogrammed to guarantee that they would never harm humans, would obey orders, and would protect themselves, in that order.

The first memorable movie robots appeared in the 1950s, and the two highlights were both products of extraterrestrial civilizations: the all-powerful Gort in The Day the Earth Stood Still (1951) and Robby the Robot in Forbidden Planet (1956). Although Robby is the ancestor of all those lovable robots through R2-D2 and C-3PO of Star Wars, Forbidden Planet made a much more serious contribution to fiction about computers by envisioning automated technology so advanced that it could produce anything a civilization wished, even monsters out of the unconscious.

Dawn of the Computer Age

The computers created during World War II and its aftermath of course induced an avalanche of fictional computers. Because the supercomputers of the 1940s and 1950s were gigantic, their fictional descendants were commonly imagined to be colossal machines, sometimes concentrating the computational functions of a whole society in a single centralized mechanical intelligence.

So during these decades, the cultural imagination projected two somewhat contradictory images of computers either as throngs of individual robots capable of emulating human intelligence with a skull-size mechanical brain or as an immense isolated conglomeration of panels, buttons, switches, relays, and vacuum tubes.

One early fiction, however, did accurately anticipate how computers would look and function in the society of the 1990s: Murray Leinster’s 1946 short story “A Logic Named Joe.” Each home has at least one “logic,” a personal computer complete with a screen and keyboard, networked to centralized supercomputers containing all knowledge and recorded telecasts. People access information, solve problems, view entertainment programs, communicate with each other, run their charge accounts, and so on from their personal computer through the network. There are even built-in censors that prevent children from seeing inappropriate material. In this comic tale, “Joe” is a “logic” that somehow becomes autonomous and enterprising, but he “ain’t like one of these ambitious robots you read about that make up their minds the human race is inefficient and has got to be wiped out an’ replaced by thinkin’ machines.”

Fiction that projected just such a catastrophe flourished in the 1960s. A good example is D. F. Jones’s 1966 novel Colossus, whose vision of a computer gaining mastery reached a wide audience when it was made into the 1970 film Colossus–The Forbin Project. But the masterpiece of this genre is Harlan Ellison’s 1967 short story “I Have No Mouth, and I Must Scream,” in which the American, Russian, and Chinese supercomputers waging thermonuclear war merge into a single conscious entity that destroys the entire human race except for five people it saves to torture forever inside the subterranean caverns of its endless miles of circuitry.

More imminent social effects of computers are projected in the bleak, automated near-future dystopia of Player Piano (1952), Kurt Vonnegut, Jr.’s first novel. Meaningful work is available only to a small group of technocrats, while everyone else can join either the huge standing army needed to control the world or the “Reeks and Wrecks,” a mob of demoralized idlers put to work at useless jobs. Real political power resides in the central computer, EPICAC XIV (a play on the early ENIAC and UNIVAC digital computers).

Two of the most influential visions of computers in the 1960s came in masterpieces of film director Stanley Kubrick. In Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964), civilization ends when the automated Soviet doomsday weapon is activated by a U.S. atomic attack that cannot be recalled because of a B-52’s damaged computer. The most memorable character in 2001: A Space Odyssey (1968) is the spaceship’s computer HAL, driven psychotic by conflicts between his logical essence and a programmed lie.

Perhaps the most sympathetic role played by a fictional computer in this period appears in Robert A. Heinlein’s 1966 novel The Moon Is a Harsh Mistress. Mike, the central computer of the lunar colony, helps lead a libertarian lunar replay of the 1776 American Revolution against an authoritarian Earth while raising existential questions about his (or sometimes her) own identity.

Meanwhile in Poland, Stanislaw Lem was creating in fiction and essays a profound exploration of the significance of computers. In the framing narrative of Memoirs Found in a Bathtub (1961), historian computers attempt to comprehend the human civilization that has destroyed itself. The Invincible (1964) contemplates the evolution of a non-organic form of devastating intelligence. A number of Lem’s cybernetic fables are collected in The Cyberiad (1965). His 1969 essay “Robots in Science Fiction” assails the facile treatment of thinking machines in most science fiction, especially Anglo-American. In Fiasco (1986), misplaced faith in the rationality of a supercomputer helps lead a space mission from Earth to destroy the only extraterrestrial consciousness humans have encountered.

Strikingly foreshadowing concerns of later decades, the 1964 Soviet novel World Soul by Mikhail Emtsev and Eremei Parnov dramatizes a supercomputer that uploads all human identities and downloads them in a global nightmare of scrambled individuality.

What Next?

As computers became commonplace features of everyday life in the last third of the twentieth century, their cultural representations spread from science fiction into other kinds of literature and film. Indeed, fiction about normal existence, at least in industrial and postindustrial societies, could exclude computers no more than automobiles, telephones, airplanes, and TV. This has been especially true for movies, which became the widest purveyors of images of computers.

When functioning as more than background in non-science-fiction movies, computers are often presented as a menacing power of the all-seeing bureaucratic state, as in Enemy of the State (1998). The main character in The Net (1995), a lonely computer hacker whose main friends are Internet pals, has her actual identity deleted from all records by the computers of government conspirators.

Computer networks, hacking, and of course computer games had all become familiar topics by the early 1980s. After revelations that mechanical malfunctions and human errors at NORAD’s underground supercomputer had on several actual occasions come close to precipitating global thermonuclear war, the 1983 movie WarGames has a teenaged boy inadvertently bring the world to the brink of apocalypse by playing what he thinks is just a game with NORAD’s computer, which has been programmed for “Global Thermonuclear War.”

During the 1980s, computers became a central icon in the science fiction known as cyberpunk, especially in the work of William Gibson. Gibson’s Burning Chrome (1986), an influential collection of his stories from 1977 on, included several in which characters “jack” into the web or even have themselves metamorphosed completely into beings who exist solely as cyber phenomena. In his Neuromancer trilogy–Neuromancer (1984), Count Zero (1986), and Mona Lisa Overdrive (1988)–cyberspace becomes the central locale. The dizzying contradictions of Gibson’s computer-human interfaces come out in characteristic form when the narrator of his “The Winter Market,” who edits dreams as mass-market commodities, decides not to kill himself when he learns that his greatest discovery has “merged with the net”:

Because she was dead, and I’d let her go. Because, now, she was immortal, and I’d helped her get that way. And because I knew she’d phone me, in the morning.

Such conceptions spread quickly, though without the vision or sophistication of cyberpunk, into popular culture. For example, in Tron (1982), one of the first commercial movies to depend primarily on computer animation, a video-game designer is somehow sucked inside a computer, where he becomes a character in a life-and-death computer game. In the Max Headroom movie (1985) and TV series (1987-88), an investigative reporter continues his career after being uploaded to become a computerized character.

Conceptions of computers in science fiction during the last fifteen years of the twentieth century reached far beyond what might have been imaginable even in the 1940s. Illustrative are the bold extrapolations in the speculative fiction of Greg Bear, such as Eon (1985), Blood Music (1985), and Queen of Angels (1990). In Blood Music, for example, “Medically Applicable Biochips” inadvertently convert DNA molecules into living computers that transmute the human species into the progenitor of “an intelligent plague” designed to reshape some of the fundamental principles of the universe.

Further Reading

Two introductory collections of stories about computers are Science-Fiction Thinking Machines, edited by Groff Conklin (NY: Vanguard, 1954) and Machines That Think, edited by Isaac Asimov, Patricia S. Warrick, and Martin Greenberg (NY: Holt, Rinehart, and Winston, 1984).

Helpful analyses together with useful bibliographies can be found in Patricia Warrick’s The Cybernetic Imagination in Science Fiction (Cambridge, MA: MIT Press, 1980), David Porush’s The Soft Machine: Cybernetic Fiction (NY: Methuen, 1985), and two collections of essays edited by Thomas P. Dunn and Richard D. Erlich, The Mechanical God: Machines in Science Fiction (Westport, CT: Greenwood Press, 1982) and Clockwork Worlds: Mechanized Environments in SF (Westport, CT: Greenwood Press, 1983).