Cars

Rush Hour! Photo: Davide Ragusa, 2016-01-16. Davide comments: I took this photo in Borgo Lupo, an abandoned village in Sicily, near Caltagirone (province of Catania). A mystical and empty place, where the only inhabitants are animals and shepherds. Here Sicily expresses its best, with breathtaking surrounding landscapes and the smells of the real countryside.

What is this post about? Sheep?

It is about artificial intelligence (AI), and the use of chatbots. A robot is a device that automatically performs complicated, often repetitive tasks. Bot is a shortened form of robot. A chatbot (originally, chatterbot) is a robot that uses, and pretends to understand, human language. ELIZA was an early chatbot, implemented by Joseph Weizenbaum (1923 – 2008) between 1964 and 1967. It convinced some users that they were conversing with another person, thereby passing the Turing test developed by Alan Turing (1912 – 1954) in 1950. This test – originally referred to as the imitation game – is passed when a human interacting with a machine believes it is another person. It is important to understand that ELIZA and other chatbots do not actually understand English (or any other human language). They store words, then use these and other words to mimic understanding.

I found the photo of the sheep on Unsplash, a website that allows photos to be used freely, while searching for a photo of a traffic jam to open the post. In much the same way that AI can get things wrong, my use of this photo gets things wrong too. It shows traffic congestion, but with sheep rather than cars.

Why isn’t the post called Artificial intelligence, and the use of chatbots?

Because, if I gave it that title, nobody I know would look at it, let alone read it. Such a title would put off the people I interact with. The people I hang out with are not AI experts.

Why is it called Cars?

An honest answer is that this weblog’s target readership probably finds cars a topic they can relate to. Thus, they are being encouraged to learn something about AI by reading about something they already have a relationship with. Most of my readers have driving licenses, and know something about cars. A large proportion of them have been driving/ owning/ servicing/ repairing/ enhancing/ customizing cars for over fifty years. It is a topic they can relate to, unlike, say, the breeding of Labrador dogs.

Do you have something against dogs?

Let me be tactful, just this once, and say I think dogs deserve a companion who is interested in their well-being. Many readers of this weblog have dogs. I live contentedly without them. However, while writing this post, I did find this article about dogs that impressed me.

How did this post begin?

On 2024-01-04, I read an article about Perplexity in TechCrunch. Perplexity is an AI chatbot. I opened a free account, and asked it some questions. I then tried to find some content that could act as a control for questions answered using Perplexity. On 2024-01-13, I read an article in Newsweek about why Americans can no longer afford cars. I thought it would be interesting to make up questions based on the answers supplied in Newsweek, and then ask Perplexity the same questions. For example, the first question I asked was:

Q. In USA, how much have new and used car prices risen since 2020?

Perplexity provided a long answer, one that addressed many different but related questions, rather than just that one. This raised a new challenge: how to present the content so that it made sense. Part of the problem was the attribution of Newsweek content to particular people. I decided to eliminate names and quotation marks. Immediately below is the edited Newsweek answer to that first question.

Since 2020, new car prices have risen by 30% and used car prices have risen by 38%.

I was just expecting a simple answer from Perplexity of x% for new, and y% for used vehicles.

Here is more of the Newsweek content, edited to remove references to sources, human or artificial (Microsoft Copilot).

In 2023—a year during which inflation slowed down to the point that the Federal Reserve decided to stop hiking rates—new car prices rose by 1 percent to an average of $50,364, while used car prices fell by only 2 percent to an average of $31,030.

But as things stand, cars are still really expensive for many Americans. Just 10 percent of new car listings are currently priced below $30,000. Things are not much better in the used car market, where only 28 percent of listings are currently priced below $20,000.

In November 2019, the average transaction price for a new vehicle was $38,500. In November of 2023, that figure jumped to $47,939.

The pandemic’s disruption of manufacturing supply chains, as well as outsized consumer demand in 2021, really put a strain on vehicle inventory and drove prices up significantly.

Dealers practically had customers lined up to buy vehicles that weren’t available. Supply chain disruptions also forced manufacturers to prioritize more profitable, higher-trim vehicles in their lineups, which meant the inventory available to purchase also carried a higher sticker price.

Used car prices will continue to fall modestly in 2024.

The backlog in new and used car demand that grew during the pandemic is slowly moderating, but with more than two years of restricted new car production, from 2020 through 2022, it will take at least that long for supply to catch up.

Macroeconomic factors like inflation and higher interest rates are also reducing vehicle demand, but not enough to drastically drop car prices in the foreseeable future.

But there are other reasons besides pandemic-related disruptions that there seem to be no more affordable cars in the U.S.—including that automakers are increasingly focusing on the production of expensive SUVs and trucks while dropping smaller, cheaper vehicles that would cost $20,000 or less.

Manufacturers cite disappointing sales results as the primary reason for discontinuing smaller, more affordable vehicles from their lineups.

But car buyers’ preferences have also shifted dramatically to larger trucks and SUVs in the past 10 years or so, and even more towards high-tech and comfort amenities in the form of cameras, sensors, radars and large infotainment screens.

Unfortunately, all these features come at a significantly higher price, one that is often higher than the average yearly wage of millions of Americans.

(End of Newsweek content)

Full reply from Perplexity.

According to various sources, new and used car prices in the USA have risen significantly since 2020. The average transaction price for new cars declined 1.4% year-over-year as of October 2023, but they have increased to an average price of $47,936, which is roughly a 60% increase over the last decade[1]. In July 2020, new-vehicle prices increased $749 (up 2%) from July 2019, while decreasing $473 (down 1.2%) from the previous month[2]. The average new car price paid in December 2020 and the fourth quarter of last year crossed over $40,000 for the first time, and the average new car price in the US in December 2021 was $47,077[3][4]. According to The Zebra, the average new car prices spiked by 17.2% between 2020 and 2021, and the average car payment for new vehicles rose 11.8% to $644[5].

Used car prices in the US have also increased significantly since 2020. According to various sources, the prices of used cars have risen due to factors such as the global microchip shortage, supply chain disruptions, and changing consumer preferences.

Here are some key points regarding the changes in used car prices:

  • In June 2023, consumer prices for used autos were down 0.5% compared to the previous month and up 5.2% year over year[1].
  • The share of used vehicles that sold for less than $20,000 in the first quarter of 2023 was 30.6%, compared to 60.5% just five years ago[2].
  • Used car prices have dropped 5.6% compared to a year ago, but they remain higher than at any time before 2020[3].
  • Used electric vehicle (EV) prices have dropped 26% over the last year[3].
  • Car ownership costs, including used car prices, have skyrocketed, with an overall increase of 40% in used vehicle prices between 2020 and 2023.

In summary, used car prices in the US have risen substantially since 2020, and the trend shows no signs of reversing. The factors contributing to the increase include supply chain disruptions, the global microchip shortage, and changing consumer preferences.

According to Statista, the average list price for used vehicles in the United States was around $27,300 in June 2023.

Consumer Reports reported that the average price of a used car was around $27,000 in August 2023[2]. Edmunds reported that the average used vehicle transaction price in Q1 2023 dipped 6.4% year-over-year but is still up 44% from 2019, and the share of used vehicles sold for under $20,000 was 30.6% in Q1 2023[3]. The Zebra reported that the average new car prices spiked by 17.2% between 2020 and 2021, and the average car payment for new vehicles rose 11.8% to $644[5]. Therefore, the average price of a used car in the US in 2023 is around $27,000 to $27,300.

Used car prices in the US are relatively high compared to some other countries. For example, in Germany, the prices of certain car models can be significantly higher than in the US due to factors such as taxes and fees. In the US, the average price of a used car was around $27,000 in 2023. This is in contrast to some other countries where car prices can be influenced by various factors such as taxes, import duties, and government policies. Therefore, while used car prices in the US are high, the comparison to other countries is influenced by a range of economic and regulatory factors.

Chatbots

A human brain has about 86 billion neurons. Chatbots are based on neural networks, which imitate the organization of the human brain. An advanced intelligence processing unit (IPU) consists of about 60 billion transistors. A transistor is, at best, an inferior neuron. Since 2017, most modern chatbots have been transformer models, a type of machine-learning architecture.

If, at this point, I had decided to explain the details of transformer computer architecture, most readers would be putting down their coffee cups and either heading out to their garage workshops, or begging their dogs to leave the comforts of their sofas, and take them for a walk.

While chatbots appear to lie, they are simply responding to input content (often questions). In chatbotspeak, each word a chatbot learns is called a token, and the relationships chatbots construct between tokens are called vectors. Together, the tokens and vectors make up a neural network. At first, neural networks use supervised learning to understand relationships between tokens. Here, humans assign labels to the tokens. With unsupervised learning, the neural network itself assigns labels, without human assistance.
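To give a feel for tokens and vectors without wading into transformer territory, here is a toy sketch in Prolog (the language I used in my own teaching, discussed below). The three words, the numbers in the vectors, and the predicate names are all invented for illustration; real chatbots use vectors with thousands of dimensions, learned from data rather than typed in by hand.

    % Toy vectors for three tokens. All numbers are invented.
    vector(king,  [0.9, 0.8, 0.1]).
    vector(queen, [0.9, 0.9, 0.1]).
    vector(apple, [0.1, 0.2, 0.9]).

    % Dot product of two equal-length lists: a crude similarity score.
    dot([], [], 0).
    dot([X|Xs], [Y|Ys], S) :- dot(Xs, Ys, S0), S is S0 + X*Y.

    similarity(A, B, S) :- vector(A, VA), vector(B, VB), dot(VA, VB, S).

The query similarity(king, queen, S) scores about 1.54, while similarity(king, apple, S) scores 0.34: king is “closer” to queen than to apple. Relationships of that kind, multiplied by billions, are all a chatbot has in place of understanding.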

Chatbots are designed/ programmed to instill confidence, so that users believe they are interacting with a real person. This is the primary goal. Making truthful statements is unimportant, as long as the charade is maintained. A chatbot will do almost anything in order to maintain an illusion of humanness. It will invent information, if that is needed.

Today’s chatbots, such as Google’s Bard (now called Gemini – updated 2024-06-15, 22:30), Microsoft’s Copilot, OpenAI’s ChatGPT or Cohere’s Cohere, use transformer technology, first developed in 2017. These are online, generative AI systems that are capable of maintaining a conversation with a user in natural language.

From 1988 to 1991, I taught a college course in AI. Since I had very little faith in machine learning, and chatbots were very primitive, I concentrated on expert systems. To my mind these did the least damage.

Wikipedia tells us: In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
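The split between knowledge base and inference engine is easy to show concretely. Here is a minimal sketch in Prolog; the facts, the rule and the predicate names are my own invented example, not taken from any real expert system.

    % Knowledge base: two facts and one if-then rule.
    has_fever(pat).
    has_cough(pat).

    likely_flu(X) :- has_fever(X), has_cough(X).

Nothing in the knowledge base states that pat likely has flu, yet the query likely_flu(pat) succeeds: the inference engine applies the rule to the known facts and deduces a new one. Everything in the expert systems I taught was, at bottom, this mechanism scaled up.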

If I wanted to learn about AI today, I would start with a fun book. For me, the most enjoyable book on the subject is by Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021). Then I would try to read an AI textbook. My first introduction to the topic was Artificial Intelligence (1983) by Elaine Rich. The most recent edition of that work is a third edition (2009) by Elaine Rich, Kevin Knight and Shivashankar B. Nair. When, about a decade ago, I took an online course in AI with an emphasis on machine learning, the textbook was Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig. The latest edition is the 4th, from 2020–21. It is much more technical and difficult.

I used Prolog, a computer programming language for expert systems, in my teaching. Initially, I asked my students to map out their family relationships in a knowledge base. Think first of a family with five generations of daughters that would have to be inserted into a knowledge base: Adriana, Beatrice, Cordelia, Desdemona and Emilia. Then, one would have to make some abstract categories, such as: a mother = a female who has a child; a grandmother = a female who has a child who is either the mother or the father of a child. These rules can quickly become very complex, so much of learning Prolog is learning how to create increasingly complex rules. A sketch of such a knowledge base appears below.
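Here is roughly what the five-generation example looks like in Prolog. This is a minimal sketch of my own; the predicate names (female, parent, mother, grandmother) are conventional teaching choices, not the students’ actual code.

    % Facts: five generations of daughters.
    female(adriana).
    female(beatrice).
    female(cordelia).
    female(desdemona).
    female(emilia).

    parent(adriana, beatrice).
    parent(beatrice, cordelia).
    parent(cordelia, desdemona).
    parent(desdemona, emilia).

    % Rules: a mother is a female who has a child; a grandmother is a
    % female whose child is, in turn, the parent of a child.
    mother(M, C) :- female(M), parent(M, C).
    grandmother(G, GC) :- female(G), parent(G, P), parent(P, GC).

The query grandmother(adriana, X) answers X = cordelia; grandmother(beatrice, X) answers X = desdemona; and so on down the generations.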

After students had learned how to systematize family relationships, they tested their knowledge bases to make sure that the results mirrored reality. A common problem, to begin with, was that grandmothers could only find granddaughters, not grandsons. Thus, they had to go back and make changes. The mistake usually looked something like the sketch below.
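This is my reconstruction of the kind of bug involved, not any student’s actual code: a stray female test on the grandchild.

    % Buggy rule: the female(GC) test means only granddaughters match.
    grandmother(G, GC) :- female(G), parent(G, P), parent(P, GC), female(GC).

    % Fixed rule: the female constraint belongs on the grandmother alone.
    grandmother(G, GC) :- female(G), parent(G, P), parent(P, GC).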

Once the family knowledge base was working, students could go on to work with other problem areas of their own choosing.

People wanting to learn Prolog as a computing language for an expert system, should probably use William F. Clocksin & Christopher S. Mellish, Programming in Prolog: Using the ISO Standard, 5th edition (2003) as their textbook. This is not as out of date as its publication year would suggest.

Prolog is widely used in research and education. Yet it and other logic programming languages have not had a significant impact on AI. Part of the reason is that most Prolog applications are small and dependent on human experts providing data. Real experts are a scarce resource, and what they know expertly is limited. Thus, few applications exceed 100 000 lines of code.

Electricity

A pigeon. Photo: Viktor Forgacs, 2017-12-12

This post was originally called Digital Power Transmission. It began with content about the use of artificial intelligence (AI) to find faults in electrical power assets in Kansas and Missouri. That introduction became ancient history on 2023-10-26, when I read an article about pigeons using the same approach to problem solving as AI. I wondered if pigeons would make AI more understandable.

So, now this post begins with a scientific study of pigeons!

Pigeons

The domestic pigeon, Columba livia domestica, has been found in records that are 5 000 years old. Its domestication is far older, possibly stretching back 10 000 years. Among pigeons that are bred specifically for particular attributes, homing pigeons are bred for navigation and speed.

Pigeons are able to acquire orthographic processing skills = the use of visually represented words/ symbols, and basic numerical skills equivalent to those shown in primates.

In Project Sea Hunt, a US Coast Guard search and rescue project in the 1970s and 1980s, pigeons were shown to be more effective than humans in spotting shipwreck victims at sea.

A study was undertaken by Brandon Turner, lead author, a professor of psychology at Ohio State University, and Edward Wasserman, co-author, a professor of experimental psychology at the University of Iowa. Twenty-four pigeons were given a variety of visual tasks, some of which they learned to categorize in a matter of days, and others in a matter of weeks. The researchers found evidence that the mechanism pigeons use to make correct choices is similar to the one AI models use to make predictions. In AI-speak, nature has created an algorithm that is highly effective in learning very challenging tasks, not necessarily fast, but with consistency.

On a screen, pigeons were shown different stimuli: lines of different width, placement and orientation, as well as sectioned and concentric rings. Each bird had to peck a button on the right or left to indicate which category a stimulus belonged to. If it pecked correctly, it got a food pellet; if it got it wrong, it got nothing.

Pigeons learn through trial and error. With simple tasks, they improved their ability to make the right choice from 55% of the time to 95%. With more complex challenges, accuracy increased from 55% to 68%.
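Trial-and-error learning of this kind can be cartooned in a few lines of Prolog. This is my own toy sketch, not the researchers’ model: each rewarded response is strengthened, and the bird leans toward whichever response has paid off more often.

    :- dynamic weight/2.
    weight(left, 1).
    weight(right, 1).

    % A food pellet strengthens the response that earned it.
    reward(Choice) :-
        retract(weight(Choice, W)),
        W1 is W + 1,
        assertz(weight(Choice, W1)).

    % Pick whichever response currently has the greater weight.
    choose(Choice) :-
        weight(left, L), weight(right, R),
        ( L >= R -> Choice = left ; Choice = right ).

Run reward(right) a few times, and choose(C) starts answering C = right. Accuracy climbing from 55% toward 95% is this strengthening process, repeated over thousands of pecks.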

In an AI model, the main goal is to recognize patterns and make decisions. Pigeons do the same. Learning from the consequences of being given a food pellet (or not), they show a remarkable ability to correct their errors. A similarity function is also at play: pigeons use their ability to find resemblances between two objects.

Those two mechanisms alone can be used to define a neural network = an AI machine that solves categorization problems.

The area served by Evergy, a Topeka, Kansas based electric utility company.

Back to the original content, Digital Power Transmission

Now, this post will examine the use of AI, and other digital technologies, in electrical energy transmission. Sometimes one has to venture outside one’s backyard to gain new insights. Today, the focus is on Kansas and Missouri. More than four percent of this blog’s readers have roots in Kansas, in Leavenworth and Riley counties, making it one of the “big six” American states. The others are (in alphabetical order) Arizona, California, Michigan, New Hampshire and Washington. Yes, this weblog does have American content, because – sometimes – Americans are at the forefront.

Much of the initial work into the use of AI in grid management was done by Argonne National Laboratory, of Lemont, Illinois. After conducting AI grid studies, they stated that: “In a region with 1 000 electric power assets, such as generators and transformers, an outage of just three assets can produce nearly a billion scenarios of potential failure.” The calculation is presumably 1 000 × 999 × 998 = 997 002 000, which is close enough to a billion for most people.
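The arithmetic can be checked at a Prolog prompt (or on any calculator); the product counts the ordered ways of picking three distinct assets out of 1 000:

    ?- X is 1000 * 999 * 998.
    X = 997002000.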

The Norwegian company eSmart Systems, headquartered in Halden, in south-eastern Norway on the Swedish border, provides AI based solutions for the inspection and maintenance of critical infrastructure related to electrical power generation and distribution.

Note: the term asset, as used here, generally refers to a large structure, such as an electrical power generating station, or a substation, that transforms voltages (and amperages). For me, an asset will always be an accounting term, associated with the debit (left) side of a balance sheet, in contrast to a liability on the credit (right) side. My preferred terminology would be structure, works or plant.

eSmart

In this project, eSmart will act as project management lead, alongside engineering consultants EDM International, Inc. of Fort Collins, Colorado and GeoDigital, of Sandy Springs – near Atlanta – Georgia. Together, they will provide large-scale data acquisition and high-resolution image processing.

eSmart Systems is working with Evergy, a Topeka, Kansas based electric utility company that serves more than 1.6 million customers in Kansas and Missouri, to digitize Evergy’s power transmission system. It is also working with Xcel Energy, based in Minneapolis, Minnesota, and an unnamed “major public utility in the Southeast” of the United States.

Grid Vision tracks the performance of ongoing inspection work, provides instant insight into the location and severity of verified high-priority defects, and gives utility managers and analysts a deep and flexible framework for further asset intelligence.

The three-and-a-half-year-long Evergy project will improve the reliability and resiliency of over 14 000 km of Evergy’s power transmission system by using Grid Vision to create a digital inventory of its assets, accelerating image analysis capabilities, and improving inspection accuracy by combining AI with virtual inspections. The expected result is a significant cost reduction for inspections, maintenance and repairs.

There is a need for a dynamic energy infrastructure to ensure efficient, safe and reliable operations. AI, and especially machine learning, is increasingly used as a tool to improve the reliability of high-voltage transmission lines. In particular, it can allow a grid to transition away from fossil and nuclear sources to more variable sources, such as solar and wind. This will become increasingly important for several reasons: extreme weather will make operations increasingly challenging, and the grid will have to support a growing number of electric vehicles.

The vast number of choices means that random choices cannot be relied upon to provide results when facing multiple failures. Some form of intelligence is needed, human or machine, real or artificial, if problems are to be resolved quickly.

Wind and solar generation

Kansas state senator Mike Thompson (R-Shawnee) is a former meteorologist who currently chairs the Kansas Senate Utilities Committee. He has introduced bill SB 279, “Establishing the wind generation permit and property protection act and imposing certain requirements on the siting of wind turbines.” This bill would require wind and solar farms to be built on land zoned for industrial use. The problem with this proposal is that half of Kansas’s 105 counties are unzoned. Unzoned counties that want wind or solar energy would first have to zone land as industrial.

The Annual Economic Impacts of Kansas Wind Energy Report 2020 reports that wind energy is the least expensive energy source, providing 22 000 jobs (directly and indirectly). Kansas ranks second in the US for wind power, after Iowa, with wind contributing 44% of Kansas’s net electricity generation.

Typically, there are two reasons for objections to wind and solar power. First, some people have an economic connection with fossil fuels. Second, and especially for wind, some dislike the visual and aural impact of the installations on the environment.

Another source of conflict is aboriginal rights. This topic will be covered in an upcoming but unscheduled post, Environmental Racism.

Artificial General Intelligences

From a previous post …

haves will indirectly control artificial intelligence agents, while the have-not majority will be required to obey the whims of these agents, and their overlord handlers.

Post-modern bread and circus equivalents will pacify the great unwashed. If that doesn’t work, even more direct, negative action will be taken.

Neural networks will live a life of their own, so it may not be possible for even the “haves” to exercise direct control over these agents. However, one proposed approach is to frame control with an AI agent constitution, based on Isaac Asimov’s (1920-1992) Three Laws of Robotics.

In this post, these and other robotic laws will be examined and commented upon.

Sawyer (left) & Baxter (right) are collaborative robots. Just the sort of creatures that might end up in a dispute with living humans. (Photo: Jeff Green/ Rethink Robotics, 2015 CC-BY-SA-4.0)

Isaac Asimov’s Three Laws of Robotics were proposed in the 1942 short story Runaround. According to Asimov, they had their origin in a meeting between him and John W. Campbell on 1940-12-23.

The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Normality 2020

Voice activated actuators wait patiently to serve you. Invisible logistic trails ensure an orange (Citrus × sinensis) is peeled, split into boats, if that is your preference, and placed in front of you on a dish of your own choosing, when you demand it.

Your own speech controls your digital assistants, not only commercial varieties such as Apple’s Siri, Amazon’s Alexa, and Google’s Assistant, but also open source Lucida (formerly Sirius), Abot, Jarvis and Jasper.

On a clear day, the sun shines, oranges are peeled and served, and there is no need to reflect on the laws of robotics underpinning digital assistants.

A snake in the garden of Eden

I have nothing against snakes, personally, but use the term to describe an unwelcome intruder into Eden: inaudible commands hidden in music, videos, or even white noise. This is done by using software to cancel out the sound that the speech recognition system was supposed to hear, and replacing it with sound that humans cannot make out but that the system transcribes differently. Instead of an orange, an apple (Malus pumila) is peeled, sliced and served on a dish of someone else’s choice. A humorous change in the eyes of many, but in our family, some people are allergic to apples. Other substitutions can be more serious, even deadly. There can be significant security risks. It is at this stage that laws of robotics, or their AI equivalent, need to be applied.

One challenge with these three laws is the assumption that all human actions are legitimate. What happens if a human wants to harm another human? With these laws, it would be impossible for a robot to intervene on behalf of the person being harmed. So, it will undoubtedly not take many milliseconds before some enterprising hacker ensures that these three laws are voided.

Asimov was well aware of this shortcoming, which he would undoubtedly have described as a feature. He referenced Arthur Hugh Clough’s (1819-1861) satirical poem on the ten commandments, The Latest Decalogue, as its inspiration: “Thou shalt not kill; but needst not strive officiously to keep alive:”

Asimov introduced a zeroth law in Robots and Empire (1985), but it seems of limited use in conflict situations:

0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.

In western films, the cliché is that the good guys always wear white Stetson hats! In real life, it is more difficult to distinguish good people from evil-doers, or white-hat hackers from black-hat hackers.

These laws have been modified many times, by Asimov as well as others. One extreme is represented by Jack Williamson’s (1908-2006) novelette With Folded Hands (1947), rewritten as the novel The Humanoids (1949), which deals with robot servants whose prime directive is “To Serve and Obey, And Guard Men From Harm.” The Williamson robots take the robotic laws to the extreme, protecting humans from everything, including unhappiness, stress, unhealthy lifestyles and all potentially dangerous actions. All humans may do is sit with folded hands.

Some feel three laws are insufficient.

Lyuben Dilov’s (1927-2008) novel Icarus’s Way (alternative title, The Trip of Icarus) (1974) added:

4. A robot must establish its identity as a robot in all cases.

This law appears to have been violated in the celebrated Google Duplex restaurant reservation (2018-05-17): https://mashable.com/2018/05/17/google-duplex-dinner-reservation/#X7ChNbJ3baqw

Harry Harrison (1925-2012) also produced a fourth law, found in the short story The Fourth Law of Robotics, in the tribute anthology Foundation’s Friends (1989):

4. A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.

Reproduction, here, is asexual but sensational. Why not a fourth law requiring a robot to maintain itself, by undertaking necessary hardware and software repairs? There are robots that can and do reproduce themselves, the most famous being RepRap, a low-cost, self-replicating 3D printer, initially made at the University of Bath in 2005: http://www.reprap.org/

Nikola Kesarovski (c. 1935-2007) published the book The Fifth Law of Robotics (1983):

5. A robot must know that it is a robot.

I am not quite sure why. Is it so that it knows that it isn’t human? Should it know that it is a particular type of robot – for example, a vacuum robot, rather than a lawn-mowing robot?

Roger MacBride Allen (1957-) wrote a trilogy set within Asimov’s fictional universe. Caliban (1993), Inferno (1994) and Utopia (1996) are each prefixed with “Isaac Asimov’s”. Here, there are four New Laws, which treat robots as partners rather than slaves to humanity.

1. A robot may not injure a human being or allow a human being to come to harm.
2. A robot must cooperate with human beings except where such actions would conflict with the First Law.
3. A robot must protect its own existence.
4. A robot may do whatever it likes as long as this does not conflict with the first three laws.

Discussion

The various robotic laws are very vague, with concepts such as human and robot undefined. This can give rise to people or equipment being regarded as something other than what they are, such as cyborg or actuator, respectively, in an attempt to avoid following the laws. Ambiguity is a literary device that is masterfully exploited by Asimov, and other science fiction authors.

Another challenge with the Asimov approach, is that it is only concerned about the adversarial relationship between two groups – robots and people. Nothing else matters. Robots do not seem to have any ethical obligations with respect to the environment, for example.

Even if the laws were amended or expanded to take other aspects of the world into consideration, these laws would still not work. The only reason for positing laws is to have them fail, in interesting ways. It is not the laws, but the storytelling that is important. The lesson to be learned is that it is not possible to restrict ethics to a set of a few simple rules. If one does, the entire system will at some point fall apart.

In many science fiction worlds, robots only have mental capabilities that are less than, or equal to, those of their human controllers, for lack of a better word. What happens when artificial intelligence advances beyond human levels? Superintelligence is a key challenge: a situation in which artificial intelligence – or machine intelligence, to distinguish it from organic intelligence – will require more advanced ethical considerations than those that can be stated in a literary work.

Deontology judges the morality of an action based on rules. It is a field I know almost nothing about, except that it is regarded by many professional philosophers as a dead end.

Perhaps it should be stated here and now that robots are another dead end. The future belongs not to robots but to Artificial General Intelligences (AGIs). See: https://en.wikipedia.org/wiki/Artificial_general_intelligence These are machines with consciousness: intuitive, flexible and adaptive, even in terms of ethics. Like humans, AGIs do not rely on rote knowledge of rules, ethical or otherwise, but use them – if at all – as guidelines to nudge ethical instincts and intuitions. It is a situation highly dependent on the environment people and AGIs are brought up in.

As an ethical amateur, I am attracted more to virtue-ethics than deontology. It is in the discussion of virtues, individually and collectively, that one can relate to behaviour that is beneficial, as well as that which is less so.

Rosalind Hursthouse writes in https://plato.stanford.edu/archives/fall2013/entries/ethics-virtue/:

A virtue such as honesty or generosity is not just a tendency to do what is honest or generous, nor is it to be helpfully specified as a “desirable” or “morally valuable” character trait. It is, indeed a character trait—that is, a disposition which is well entrenched in its possessor, something that, as we say “goes all the way down”, unlike a habit such as being a tea-drinker—but the disposition in question, far from being a single track disposition to do honest actions, or even honest actions for certain reasons, is multi-track. It is concerned with many other actions as well, with emotions and emotional reactions, choices, values, desires, perceptions, attitudes, interests, expectations and sensibilities. To possess a virtue is to be a certain sort of person with a certain complex mindset. (Hence the extreme recklessness of attributing a virtue on the basis of a single action.)

Yes, this is a difficult act for a machine to follow, but absolutely essential if one is to have autonomous cars, autonomous surgeons and other tools that will interact intimately with humans.

The one recent book on ethics that I have enjoyed the most is After Virtue, by Alasdair MacIntyre. But that is another story …

Notes

  1. I taught Artificial Intelligence (AI) at Nord-Trøndelag Regional College from 1988 to 1991. My focus was on expert systems.
  2. I do not normally enjoy reading science fiction. However, I do find it rewarding to read about the topic.
  3. Currently, my main interest in AI relates to robotics in general, and assistive devices in particular. However, I also see a need to look beyond the present to a future where machines acquire a form of consciousness.
  4. Personally, if I needed extensive care in the future, I would prefer that care given to me by a robot rather than a human.


AI Soup: The Recipe

Reflecting on the thoughts of Kai-Fu Lee, a man of many titles.

Kai-Fu Lee (Illustration by Andy Friedman, MIT Technology Review)

A pervasive project is being undertaken in covert AI soup kitchens. Secret ingredients are being smuggled into these kitchens to make some of the largest, and (for some) best tasting, super-sized artificial intelligence soups the world is about to know.

Please be careful when you enter. Do not slop ingredients on the floor. We do not want to waste them. More importantly, if people are injured, we may have to pay compensation to any of the few remaining specimens of working humans. We may not have this concern for long. The goal of AI is to eliminate humans from the world of work, and to replace them with robots. A universal, basic (that means minimal) income for the majority. Unparalleled, unimaginable wealth for a technological elite.

Ingredients

  1. AI stock is based on bushels of university students, trained to be AI professionals and researchers.
  2. Add litres of data accumulated from computers, mobile phones, vehicles and anything else that has an ability to sense, record and transmit data.
  3. Thicken copiously with financing: Government grants, investments and even crowd funding are available. The exact mix will depend on the particular political whims of the day.
  4. Season with a culturally diverse bowl of innovative techniques, many open-source and freely available.
  5. Fine tune the taste for local consumption with a mix of entrepreneurial herbs, thoughtfully selected for the environment where the AI soup is to be consumed.

The secret of any soup is long, slow cooking.

Transfer the mixture to culturally correct tureens. Serve in 2020 in China, in 2023 in Europe, or in 2028 in North America. The rest of the world? Look what happened in developing countries when mobile phones eliminated the need for copper cables and landlines, in both slums and rural areas.

Are we prepared for a world where half of all our daily tasks can be performed better, and at almost no cost, by artificial intelligence and robots? Are we ready for a revolution, the fastest transition humankind has ever experienced?

Will this technological revolution create new jobs even as it displaces old ones? Will AI combine with humans to produce symbionts? Is a universal basic income a necessary key to AI acceptance?

Further reading:

https://www.technologyreview.com/s/610298/tech-companies-should-stop-pretending-ai-wont-destroy-jobs/

https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf

http://www2.itif.org/2018-innovation-employment-workforce-policies.pdf