A recurrent question concerning the man-machine relationship is: who is serving whom? Do machines serve our purposes or do we serve theirs? To be strictly accurate, suggesting that we serve the 'purposes' of machines is to assign to them the dynamic purposes of autonomous beings rather than the fixed functions of mere tools. But this is not to deny that in our use of tools many of us have experienced some sense of being acted on or used by our tools. And in our stories there has long been an uneasiness about our increasing dependence on technology and a growing insistence that humankind is serving technology rather than the other way about.
Asimov argued that from our earliest days we have harboured several related fears about powerful forces (1981, pp. 117ff), and we dramatize such fears in our myths. Chief among them is the fear of forces over which we have no control, as in the case of storms, floods, droughts, and earthquakes. Asimov suggests that
The clear progression away from direct and immediate control made it possible for human beings, even in primitive times, to slide forward into extrapolation, and to picture devices still less controllable, still more independent than anything of which they had direct experience. (Asimov 1981, p. 130)
Hephaistos, the Greek god of the forge, is depicted in the Iliad as having golden mechanical women helping him in the palace. And even the mortal Daedalus created Talos, the bronze warrior which guarded the shores of Crete, making Daedalus the prototype of the inventor of mechanical beings.
As the human control decreases, the machine becomes frightening in exact proportion... It is a simple task for ingenuity to look forward to a time when the machine may go out of control altogether, and the fear of that can be felt in advance. (Asimov 1981, p. 131)
Loss of control is a fear dramatized in the medieval legend of the Golem of Prague that runs amok and in Goethe's ballad, 'The Sorcerer's Apprentice' (popularized in Walt Disney's 1940 film, Fantasia). In these tales what is created has no will of its own and, although out of control, remains controllable; these creations are indeed ultimately brought under human control. The story of the genie in the bottle highlights the importance of predicting the consequences of your actions. Loss of control is also dramatized in Frankenstein, where a technological creation acquires an autonomy which makes demands on its creator, just as the publication of Mary Shelley's novel gave it, like any book, autonomy from its author.
In its treatment of the theme of loosing horrors on the world, Frankenstein is in the tradition of the story of Pandora's box. Outraged by Prometheus's theft of fire, Zeus created Pandora, the first woman, and sent her as a gift to Epimetheus, who welcomed her, despite Prometheus's warning not to accept gifts from Zeus. It is worth noting that the name Prometheus means forethought whilst the name Epimetheus means afterthought. Pandora brought with her a great vase that she was forbidden to open, but she was unable to resist the temptation, and released the misery of the Spites into the world, leaving only Hope behind. In this story the consequences were irreversible and there were no second chances.
Fire warms you, gives you light, cooks your food, smelts your ore - and, out of control, burns and kills. Your knives and spears kill off your animal enemies and your human foes and, out of your control, are used by your foes to kill you... There has never been any human activity which, on getting out of control and doing harm, has not raised the sigh among many of, 'Oh, if we had only stuck to the simple and virtuous lives of our ancestors who were not cursed with this newfangled misery.' (Asimov 1981, pp. 131-2)
E M Forster's 'The Machine Stops' (1909) revolves around what happens when society has grown to depend absolutely on machines, and these machines break down. Even amongst sf writers in the heady days of technological optimism before 1945 there was a strong awareness of possible technological disasters. Jack Williamson's 'With Folded Hands' (1947) can be seen as a variation on this theme: humanoid robots are charged 'to serve man, to obey, and to guard men from harm', but they take their mission literally and in the interests of general happiness are prepared to use permanent tranquillization or lobotomy. Quite apart from the possibility of malevolent computers, a recurrent theme is the fallibility of the computer-based systems on which we increasingly depend in everyday life.
In Gordon R Dickson's 'Computers Don't Argue' (1965, in Asimov, Greenberg & Waugh 1986), a cumulative catalogue of errors and misunderstandings turns an invoicing error into a capital case. In a tale told entirely through an exchange of letters and circulars, a book club member is sent a book he doesn't want, which happens to be 'Kidnapped' by Robert Louis Stevenson. He sends it back but is invoiced for it again. The account is eventually handed over to a debt-collecting agency and court proceedings are threatened. A record card is miscoded, and the case becomes a criminal one. The defendant is accused of kidnap and later, since the records say that Stevenson is dead, of murder. Despite the apparent disappearance of any evidence on file, he is convicted, and murder warrants the death penalty. His innocence and the errors are accepted in due course and a pardon is issued. However, due to miscoding, it fails to arrive in time. This Kafkaesque tale takes the kind of error we have all been subject to and magnifies the consequences into a nightmare not unlike those we have all heard of in news stories. Now, all of the errors in the story could be called human errors rather than computer errors, but they represent the dangers inherent in the inflexibility of over-automated processes.
In the film Westworld (1973) robots go spectacularly out of control for no apparent reason. And in Stanley Kubrick's film 2001: A Space Odyssey (1968), the computer named HAL - arguably the most human character in the film - appears to undergo a nervous breakdown. Michael Shallis offers this account: 'HAL... decided to take complete command of the space mission because it felt that the astronauts did not have the mission's success at heart. It therefore interpreted their subsequent alarm and counter-measures as evidence for its own viewpoint' (Shallis 1984, p. 136). HAL thus exhibited what in a human would be called 'psychotic' behaviour. The computer kills one of the crew and threatens another until some of its circuits are dismantled by the only survivor.
That a computer should be credited with neurosis or psychosis reflects a common tendency towards anthropomorphism in our treatment of machines. The artificial intelligence researcher Marvin Minsky declared that he was 'inclined to fear most the HAL scenario'.
However, the philosopher Edward Ballard also refers to HAL thus: 'Hal was so complex and his program so clever that he conceived hostility for his operator, and thus he was motivated to eliminate his operator. This feeling and the decision which interpreted it seemed to be the consequence of the computer's great complexity and, of course, of its very clever programming. Notice that the computer's decision freed the computer of its original (mechanical) program. Notice also that to suppose mere increase in complexity and in programming frees a mechanism of mechanism is a confusion... Sophisticated circuitry and astute programming may make computers converge towards intelligent results or human-like behaviour (I refer to the kind of process or behaviour which a machine can imitate). But to suppose the limit actually to be reached and the machine actually to become human would suggest making an irrational leap' (Ballard 1978, p. 248).
In the superbly satirical film Dark Star (1974), directed by John Carpenter and novelized by Alan Dean Foster, an electrical fault generates a computer malfunction, and an 'intelligent' bomb has to be talked out of exploding prematurely inside the spaceship which is carrying it. The bomb dramatizes the dangers arising from the literal-mindedness of computers.
Mankind has always chosen to counter the evils of technology, not by abandonment of technology, but by additional technology [the technological fix]. The smoke of an indoor fire was countered by the chimney. The danger of the spear was countered by the shield. The danger of the mass army was countered by the city wall.
The first moment, when the magnitude of possible evil was seen by many people as uncounterable by any conceivable good, came with the fission bomb in 1945. Never before has any technological advance set off demands for abandonment by so large a percentage of the population.
In fact, the reaction to the fission bomb set a new fashion. People were readier to oppose other advances they saw as unacceptably harmful in their side effects - biological warfare... certain genetic experiments... breeder reactors, and so on. And even so, not one of these items has yet been given up... No, it is when the machine threatens all mankind in any way so that each individual human being begins to feel that he, himself, will not escape, that fear overwhelms love...
Technology has begun to threaten the human race as a whole only in the last thirty years. (Asimov 1981, pp. 132-3)
The fear of physical destruction is of course dramatized in Mary Shelley's Frankenstein and Capek's play R.U.R.
Asimov suggests that our deepest fear is 'the general fear of irreversible change' (Asimov 1981, p. 133). Above all we are haunted by the knowledge that we grow old and die, and so do our friends, whilst others take our places in a Universe that carries on without us. Asimov traces this to a fear of supplantation. Related to the fear of irreversible change is the fear that technological change may destroy the Earth by using up its resources or by poisoning it with waste. It also offers humankind the means of self-destruction.
What is lacking in the early tales of the sorcerer's apprentice and the Golem is conscious will. Such tales emphasize that human error in the use of technology can lead to disastrous consequences. In later tales complex technological creations possess the potential to amplify the effects of error and miscalculation, and both the nature and the possible consequences of miscalculations in such complex machines become increasingly difficult to predict. Furthermore, more recent machines are often seen as acquiring consciousness and independent wills. They are seen as making decisions rather than simply applying existing rules. And with the acquisition of consciousness comes the possibility of malevolence, whether generated by exploitative treatment at the hands of humans or by an inherent desire for power. Tales of malevolent robots and computers tap our anxieties over whether technology has become an autonomous force over which we have limited control.