From a previous post …
haves will indirectly control artificial intelligence agents, while the have-not majority will be required to obey the whims of these agents and their overlord handlers.
Post-modern bread-and-circuses equivalents will pacify the great unwashed. If that doesn’t work, more direct, negative action will be taken.
Neural networks will live a life of their own, so it may not be possible for even the “haves” to exercise direct control over these agents. However, one proposed approach is to frame control with an AI-agent constitution based on Isaac Asimov’s (1920-1992) Three Laws of Robotics.
In this post, these and other robotic laws will be examined and commented upon.
Isaac Asimov’s Three Laws of Robotics were first stated in the 1942 short story Runaround. According to Asimov, they had their origin in a meeting between himself and John W. Campbell on 1940-12-23.
The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Normality 2020
Voice-activated actuators wait patiently to serve you. Invisible logistics trails ensure that an orange (Citrus × sinensis) is peeled, split into boats if that is your preference, and placed in front of you, on a dish of your own choosing, when you demand it.
Your own speech controls your digital assistants: not only commercial varieties such as Apple’s Siri, Amazon’s Alexa, and Google’s Assistant, but also open-source alternatives such as Lucida (formerly Sirius), Abot, Jarvis and Jasper.
On a clear day, the sun shines, oranges are peeled and served, and there is no need to reflect on the laws of robotics underpinning digital assistants.
A snake in the Garden of Eden
I have nothing against snakes personally, but use the term to describe an unwelcome intruder into Eden: inaudible commands hidden in music, videos, or even white noise. This is done by using software to cancel out the sound that the speech-recognition system was supposed to hear and to replace it with sound that human ears barely register, but that machines transcribe differently. Instead of an orange, an apple (Malus pumila) is peeled, sliced and served on a dish of someone else’s choice. A humorous change in the eyes of many, but some people in our family are allergic to apples. Other substitutions can be more serious, even deadly, and the security risks are significant. It is at this stage that laws of robotics, or their AI equivalent, need to be applied.
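To make the trick concrete, here is a minimal sketch in Python. The toy_transcribe recognizer and its tone-based “commands” are invented stand-ins, not any published attack; real attacks target actual speech-to-text models and often exploit microphone hardware. The principle, however, is the same: audio a listener does not notice can still change what the machine hears.

```python
import numpy as np

FS = 44_100             # sample rate, Hz
t = np.arange(FS) / FS  # one second of sample times

def band_energy(audio: np.ndarray, freq: float) -> float:
    """Magnitude of the signal's correlation with a tone at freq."""
    ref = np.exp(-2j * np.pi * freq * t)
    return abs(audio @ ref)

def toy_transcribe(audio: np.ndarray) -> str:
    # Stand-in recognizer: any energy it finds in a near-ultrasonic
    # band that most adults cannot hear overrides the audible request.
    if band_energy(audio, 21_000) > 100:
        return "peel an apple"
    return "peel an orange"

# The audible request, represented here by a 440 Hz tone.
benign = np.sin(2 * np.pi * 440 * t)
print(toy_transcribe(benign))  # -> peel an orange

# The attack: ride a quiet ~21 kHz tone on top of the benign audio.
# A listener notices nothing new; the recognizer's output flips.
attack = benign + 0.02 * np.sin(2 * np.pi * 21_000 * t)
print(toy_transcribe(attack))  # -> peel an apple
```

The attack tone has one-fiftieth the amplitude of the request it hijacks, which is the whole point: the channel that decides what gets transcribed is not the channel a human monitors.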
One challenge with these three laws is the assumption that all human actions are legitimate. What happens if a human wants to harm another human? Under these laws, it would be impossible for a robot to intervene on behalf of the person being harmed. It will undoubtedly not take many milliseconds before some enterprising hacker ensures that these three laws are voided.
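To see how little room the laws leave, here is a deliberately naive encoding of them as an ordered rule check. The Action type, its fields and the scenario are my own inventions for illustration; nothing below comes from Asimov.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    injures_human: bool      # carrying it out injures a human
    allows_human_harm: bool  # carrying it out lets a human come to harm
    disobeys_human: bool     # carrying it out disobeys a human order
    endangers_robot: bool    # carrying it out endangers the robot itself

def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action would break, if any."""
    if action.injures_human:
        return "First Law: injures a human"
    if action.allows_human_harm:
        return "First Law: allows harm through inaction"
    if action.disobeys_human:
        return "Second Law: disobeys a human order"
    if action.endangers_robot:
        return "Third Law: fails to protect its own existence"
    return None

# One human attacks another; a bystander orders the robot to stop it.
options = [
    Action("restrain the attacker", injures_human=True,
           allows_human_harm=False, disobeys_human=False,
           endangers_robot=True),
    Action("stand by and watch", injures_human=False,
           allows_human_harm=True, disobeys_human=True,
           endangers_robot=False),
]
for action in options:
    print(f"{action.description}: {first_violation(action)}")
```

Both options break the First Law, so a rule-following robot has no permissible move. A hacker, or a storyteller, only needs to engineer situations like this one.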
Asimov was well aware of this shortcoming, which he would undoubtedly have described as a feature. He referenced Arthur Hugh Clough’s (1819-1861) satirical poem on the Ten Commandments, The Latest Decalogue, as the inspiration for the First Law’s inaction clause: “Thou shalt not kill; but needst not strive / officiously to keep alive.”
Asimov introduced a zeroth law in Robots and Empire (1985), but it seems of limited use in conflict situations:
0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
In Western films, the cliché is that the good guys always wear white Stetson hats. In real life, it is more difficult to distinguish good people from evildoers, or white-hat hackers from black-hat hackers.
These laws have been modified many times, by Asimov as well as by others. One extreme is represented by Jack Williamson’s (1908-2006) novelette With Folded Hands (1947), rewritten as the novel The Humanoids (1949), which deals with robot servants whose prime directive is “To Serve and Obey, And Guard Men From Harm.” The Williamson robots take the robotic laws to the extreme, protecting humans from everything, including unhappiness, stress, unhealthy lifestyles and all potentially dangerous actions. All that humans may do is sit with folded hands.
Some feel that three laws are insufficient.
Lyuben Dilov’s (1927-2008) novel Icarus’s Way (1974; alternative title, The Trip of Icarus) added:
4. A robot must establish its identity as a robot in all cases.
This law appears to have been violated in the celebrated Google Duplex restaurant-reservation demonstration (2018-05-17): https://mashable.com/2018/05/17/google-duplex-dinner-reservation/#X7ChNbJ3baqw
Harry Harrison (1925-2012) also produced a fourth law, found in the short story The Fourth Law of Robotics in the tribute anthology Foundation’s Friends (1989):
4. A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.
Reproduction, here, is asexual but sensational. Why not a fourth law requiring a robot to maintain itself, by undertaking necessary hardware and software repairs? There are robots that can and do reproduce themselves, the most famous being RepRap, a low-cost, self-replicating 3D printer initially developed at the University of Bath in 2005: http://www.reprap.org/
Nikola Kesarovski (c. 1935-2007) published the book The Fifth Law of Robotics (1983):
5. A robot must know that it is a robot.
I am not quite sure why. Is it so that it knows that it isn’t human? Should it know that it is a particular type of robot, for example a vacuuming robot rather than a lawn-mowing robot?
Roger MacBride Allen (1957-) wrote a trilogy set within Asimov’s fictional universe. Caliban (1993), Inferno (1994) and Utopia (1996) are each prefixed with “Isaac Asimov’s”. Here, there are four New Laws, which treat robots as partners rather than slaves to humanity.
1. A robot may not injure a human being or allow a human being to come to harm.
2. A robot must cooperate with human beings except where such actions would conflict with the First Law.
3. A robot must protect its own existence.
4. A robot may do whatever it likes as long as this does not conflict with the first three laws.
Discussion
The various robotic laws are very vague, with concepts such as “human” and “robot” left undefined. This can give rise to people or equipment being regarded as something other than what they are, such as a cyborg or an actuator, respectively, in an attempt to avoid following the laws. Ambiguity is a literary device masterfully exploited by Asimov and other science-fiction authors.
Another challenge with the Asimov approach is that it is concerned only with the adversarial relationship between two groups: robots and people. Nothing else matters. Robots do not seem to have any ethical obligations with respect to the environment, for example.
Even if the laws were amended or expanded to take other aspects of the world into consideration, they would still not work. The only reason for positing such laws in fiction is to have them fail, in interesting ways; it is not the laws but the storytelling that is important. The lesson to be learned is that it is not possible to reduce ethics to a handful of simple rules. If one does, the entire system will at some point fall apart.
In many science-fiction worlds, robots have mental capabilities that are less than, or at most equal to, those of their human controllers, for lack of a better word. What happens when artificial intelligence advances beyond human levels? Superintelligence is a key challenge: a situation in which artificial intelligence (or machine intelligence, to distinguish it from organic intelligence) will require more advanced ethical considerations than any that can be stated in a literary work.
Deontology judges the morality of an action based on rules. It is a field I know almost nothing about, except that it is regarded by many professional philosophers as a dead end.
Perhaps it should be stated here and now that robots are another dead end. The future belongs not to robots but to Artificial General Intelligences (AGIs). See: https://en.wikipedia.org/wiki/Artificial_general_intelligence These are machines with consciousness: intuitive, flexible and adaptive, even in terms of ethics. Like humans, AGIs do not rely on rote knowledge of rules, ethical or otherwise, but use them – if at all – as guidelines to nudge ethical instincts and intuitions. It is a situation highly dependent on the environment in which people and AGIs are brought up.
As an ethical amateur, I am attracted more to virtue ethics than to deontology. It is in the discussion of virtues, individually and collectively, that one can relate to behaviour that is beneficial, as well as to that which is less so.
Rosalind Hursthouse writes in the Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/archives/fall2013/entries/ethics-virtue/):
A virtue such as honesty or generosity is not just a tendency to do what is honest or generous, nor is it to be helpfully specified as a “desirable” or “morally valuable” character trait. It is, indeed a character trait—that is, a disposition which is well entrenched in its possessor, something that, as we say “goes all the way down”, unlike a habit such as being a tea-drinker—but the disposition in question, far from being a single track disposition to do honest actions, or even honest actions for certain reasons, is multi-track. It is concerned with many other actions as well, with emotions and emotional reactions, choices, values, desires, perceptions, attitudes, interests, expectations and sensibilities. To possess a virtue is to be a certain sort of person with a certain complex mindset. (Hence the extreme recklessness of attributing a virtue on the basis of a single action.)
Yes, this is a difficult act for a machine to follow, but absolutely essential if one is to have autonomous cars, autonomous surgeons and other tools that will interact intimately with humans.
The one recent book on ethics that I have enjoyed the most is After Virtue, by Alasdair MacIntyre. But that is another story …
Notes
- I taught Artificial Intelligence (AI) at Nord-Trøndelag Regional College from 1988 to 1991. My focus was on expert systems.
- I do not normally enjoy reading science fiction. However, I do find it rewarding to read about the topic.
- Currently, my main interest in AI relates to robotics in general, and assistive devices in particular. However, I also see a need to look beyond the present to a future where machines acquire a form of consciousness.
- Personally, if I needed extensive care in the future, I would prefer that such care be given to me by a robot rather than by a human.
If you prefer to be taken care of by a robot rather than a human, given the need for extensive care, I’m going to assume that you’ve never been a patient in a hospital, or that you’ve had a very bad experience. The human touch, especially in healthcare, is essential. Robots will never replace nurses. A machine is incapable of caring.
Yes, Charles, you are correct. I have a certain hesitation about surgery, having had a tonsillectomy and a circumcision performed. I’m not sure either of them would be encouraged today. Otherwise, I have never been a patient in a hospital.
One of the specializations at the University of Tromsø, where I studied, is telemedicine. Today, artificial intelligence allows much faster interpretation of medical images than humans can manage. Haptics is also enabling remote surgery. At the personal level, I am awaiting a robotic arm that can inject B12 into my muscle tissue.
You do raise an important issue. Nursing is essentially caregiving, an art that cannot be replaced by a robot.
Some hours after I posted this weblog entry, the Guardian ran this piece, which emphasizes Charles’ point: https://www.theguardian.com/commentisfree/2018/jul/02/robo-carers-human-principles-technology-care-crisis