We need robots to have morals. Could Shakespeare and Austen help? | John Mullan | Opinion
When he wrote the stories in I, Robot in the 1940s, Isaac Asimov imagined a world in which robots do all of humanity's tedious or unpleasant jobs, but where their powers have to be restrained. They are programmed to obey three laws. A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey human beings, except where this would conflict with the first law; a robot must protect its own existence, unless doing so would conflict with either of the first two laws. Unfortunately, scientists soon create a robot (Herbie) that understands the concept of "mental injury". Like a character in a Thomas Hardy novel or an Ibsen play, the robot soon finds itself in a situation where truthfully answering a question put to it by the humans it serves will cause hurt, but so will not answering the question. A logical impasse. The robot screams piercingly and collapses into "a huddled heap of motionless metal".
As we enter what many are predicting will be a new age of robotics, artificial intelligence researchers have started thinking about how to make a better version of Herbie. How might robots receive an education in ethical complexity, and how might they acquire what we might call consciences? Experts are trying to teach artificial intelligences to think and act morally. What examples can be fed to robots to teach them the right kind of behaviour?
A number of innovators in the field of AI have come to believe that these examples are to be found in stories. Scientists at the School of Interactive Computing at the Georgia Institute of Technology are developing a system for teaching robots to learn from fictional characters. With what is presumably a mordant sense of irony, they call their system Quixote. Don Quixote, of course, was the honourable but deluded Spanish gentleman who came to believe that the world was exactly as depicted in the chivalric romances he loved reading, with disastrous, if comical, consequences.
If an artificial intelligence is to draw lessons from many of the stories with which we like to divert ourselves, there are some tough practical problems for the programmers to circumvent. Much fiction and drama will dizzyingly mislead poor robots about the world in which they have to make their decisions. Our favourite stories abound in ghosts, demons, wizards, monsters and every kind of talking animal. Human beings travel through time and fly through the air and get into or out of trouble by the use of magic. Most cultures' myths and legends do indeed encode some of the most elemental human conflicts and predicaments that an electronic intelligence may need to understand, but they are populated with supernatural beings and would tend to teach the surely dangerous principle that there is always life after death.
Perhaps we can exclude such narrative material from robot reading lists, and be sure to ban Gulliver's Travels (talking horses are better than humans) and Alice's Adventures in Wonderland and any kind of magical realism. Yet even our less fantastic tales are potentially misleading. Quixote apparently encourages robots to behave like the admirable characters in the stories they are fed. But of course a literary work may be morally instructive without having a single character you would ever want to imitate. Where is the person you would want a robot to use as a role model in Middlemarch or Othello or The Iliad? Where there is a clear protagonist, Quixote apparently learns that it will be rewarded when it acts like him or her. Steer clear, then, of many 20th-century classics: The Talented Mr Ripley (the protagonist is a resourceful and amoral killer), John Updike's Rabbit novels (the protagonist is a lascivious and greedy philistine) and Lolita (no comment needed).
According to the AI scientist Mark Riedl: "The thought processes of the robot are those that are repeated the most often across many stories and many authors." For him, published stories can provide robots with the lessons that human beings learn slowly over decades. Literature gives a computerised intelligence "surrogate memories" on which to base future decisions.
The scientists' faith that a culture's narratives provide a repository of human values would be cheering if those values were not so often thwarted or doomed. Even the most idealistic robot tutors may want to keep their charges away from King Lear or Jude the Obscure. Theatre directors were so convinced of the former's lack of moral direction that, until the mid-19th century, the play was often performed with a rewritten "happy" ending in which Cordelia survives and gets to marry Edgar. Victorian critics were so antagonistic to the moral nihilism of the latter that Thomas Hardy, seeing their response, abandoned novel writing altogether.
Those narratives that do come with a strong sense of right and wrong may be even more confusing. The novels of Charles Dickens, and of many of his Victorian peers, teach that every possible effort should be made to dissuade a young woman from sex before marriage, her fate if "fallen" being death, prostitution or emigration to Australia. Then what about books that end well? Tricky too. A robot steeped in the greatest comedies of the last five centuries of European literature will certainly believe that marriage is the happiest end of all human endeavours. It will also get the idea that men and women can readily disguise themselves as someone else, and that those who follow their hearts usually get a large cash reward to boot.
Some great literary works at least teach practical lessons, if not moral ones. The most common is: do not trust what people tell you. From the very glibness with which Goneril and Regan produce their testimonies of love, it is clear to any perceptive reader that they care for Lear not one jot. How does a robot reader get this? Or learn that, as is evident in Jane Austen's novels, certain kinds of smoothness or plausibility (particularly in young men) should always be distrusted? And can it ever be made clear to an artificial intelligence that everything Mr Collins says reveals him to be a pompous twerp?
So maybe the robots should be given simpler set texts. What about Aesop's Fables? Or the parables of the New Testament? Or the stories of Enid Blyton? The first may work if computer brains can grasp the conceit of animal characters. The second will be fine if the robots believe in God. The third, one fears, may introduce some dubious moral judgments. Among sub-literary genres, perhaps only the traditional detective story has a reliable moral arc, even if it will give our robot an utterly misanthropic view of human behaviour.
Do the best books make us better? I have my own slightly gloomy testimony to offer. As an English literature academic, I can report that those of us paid to spend our careers reading and then rereading the greatest literary narratives in the language are not obviously morally better, socially more skilled or psychologically more adept than our fellow citizens. If we were robots, we would be blundering robots. Perhaps it is wisest just to stick with Isaac Asimov's simple but elegant rules.