A pigeon. Photo: Viktor Forgacs, 2017-12-12

This post was originally called Digital Power Transmission. It began with content about the use of artificial intelligence (AI) to find faults in electrical power assets in Kansas and Missouri. That introduction became ancient history on 2023-10-26, when I read an article about pigeons using the same approach to problem solving as AI. I wondered if pigeons could make AI more understandable.

So, now this post begins with a scientific study of pigeons!


The domestic pigeon, Columba livia domestica, has been found in records that are 5 000 years old. Its domestication is far older, possibly stretching back 10 000 years. Among pigeons bred for particular attributes, homing pigeons are bred for navigation and speed.

Pigeons are able to acquire orthographic processing skills (the use of visually represented words and symbols) and basic numerical skills equivalent to those shown in primates.

In Project Sea Hunt, a US Coast Guard search and rescue project in the 1970s and 1980s, pigeons were shown to be more effective than humans at spotting shipwreck victims at sea.

A study was undertaken by Brandon Turner, lead author and a professor of psychology at Ohio State University, and Edward Wasserman, co-author and a professor of experimental psychology at the University of Iowa. 24 pigeons were given a variety of visual tasks, some of which they learned to categorize in a matter of days, and others in a matter of weeks. The researchers found evidence that the mechanism pigeons use to make correct choices is similar to the one AI models use to make predictions. In AI-speak, nature has created an algorithm that is highly effective at learning very challenging tasks: not necessarily fast, but remarkably consistent.

On a screen, pigeons were shown different stimuli: lines of different width, placement and orientation, as well as sectioned and concentric rings. Each bird had to peck a button on the right or left to indicate which category the stimulus belonged to. A correct answer earned a food pellet; a wrong one earned nothing.

Pigeons learn through trial and error. With simple tasks, pigeons improved their ability to make right choices from 55% to 95% of the time. With more complex challenges, accuracy increased from 55% to 68%.

In an AI model, the main goal is to recognize patterns and make decisions. Pigeons do the same. Learning from the consequence of being given a food pellet (or not), they show a remarkable ability to correct their errors. A similarity function is also at play: pigeons use their ability to find resemblance between two objects.

Those two mechanisms alone can be used to define a neural network, an AI machine that solves categorization problems.
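As a thought experiment – and purely my own illustration, not the researchers' code – those two mechanisms can be sketched as a tiny prototype-based learner: a similarity function picks the closest category, and a reward (the food pellet) nudges the winning prototype toward the stimulus. All names and numbers here are invented for illustration.

```python
import random

def similarity(a, b):
    # Negative squared distance: higher means more alike.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def train(stimuli, labels, epochs=200, rate=0.1, seed=0):
    rng = random.Random(seed)
    # One prototype per category, randomly placed (sorted for determinism).
    prototypes = {lab: [rng.random(), rng.random()] for lab in sorted(set(labels))}
    for _ in range(epochs):
        for stim, lab in zip(stimuli, labels):
            guess = max(prototypes, key=lambda c: similarity(stim, prototypes[c]))
            if guess == lab:  # "food pellet": pull the winning prototype closer
                proto = prototypes[guess]
                for i in range(len(proto)):
                    proto[i] += rate * (stim[i] - proto[i])
            # A wrong guess earns nothing, so nothing is updated.
    return prototypes

def classify(prototypes, stim):
    return max(prototypes, key=lambda c: similarity(stim, prototypes[c]))

# Toy stimuli: two features each (say, line width and orientation).
stimuli = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
labels = ["thin", "thin", "thick", "thick"]
protos = train(stimuli, labels)
print(classify(protos, (0.15, 0.15)))  # prints "thin"
```

Like the pigeons, the learner improves only through trial, error and reward, yet it ends up with usable category boundaries.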

The area served by Evergy, a Topeka, Kansas based electric utility company.

Back to the original content, Digital Power Transmission

Now, this post will examine the use of AI, and other digital technologies, in electrical energy transmission. Sometimes one has to venture outside one’s backyard to gain new insights. Today, the focus is on Kansas and Missouri. More than four percent of this blog’s readers have roots in Kansas, in Leavenworth and Riley counties, making it one of the “big six” American states, the others being (in alphabetical order) Arizona, California, Michigan, New Hampshire and Washington. Yes, this weblog does have American content, because – sometimes – Americans are at the forefront.

Much of the initial work into the use of AI in grid management was done by Argonne National Laboratory, of Lemont, Illinois. After conducting AI grid studies, they stated that: “In a region with 1 000 electric power assets, such as generators and transformers, an outage of just three assets can produce nearly a billion scenarios of potential failure.” The calculation actually being: 1 000 x 999 x 998 = 997 002 000, which is close enough to a billion, for most people.
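The arithmetic can be checked in a couple of lines. Note that 1 000 × 999 × 998 counts ordered failures; if the order of the three outages does not matter, the count is six times smaller:

```python
import math

# Quick check of the Argonne figure quoted above.
ordered = 1000 * 999 * 998      # ordered choices of 3 distinct assets
unordered = math.comb(1000, 3)  # outage combinations, order ignored

print(ordered)    # 997002000 – the "nearly a billion" in the text
print(unordered)  # 166167000 – still far too many to check by hand
```

Either way, the number of scenarios is far beyond what a human operator could enumerate, which is the laboratory’s point.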

The Norwegian company, eSmart Systems, with its headquarters in Halden, bordering Sweden, in south-eastern Norway, provides AI based solutions for the inspection and maintenance of critical infrastructure related to electrical power generation and distribution.

Note: the term asset, as used here, generally refers to a large structure, such as an electrical power generating station, or a substation that transforms voltages (and amperages). For me, an asset will always be an accounting term, associated with the debit (left) side of a balance sheet, in contrast to a liability on the credit (right) side. My preferred terminology would be structure, works or plant.


In this project, eSmart will act as project management lead alongside engineering consultants EDM International, Inc. of Fort Collins, Colorado and GeoDigital of Sandy Springs, Georgia, near Atlanta. Together, these will provide large-scale data acquisition and high-resolution image processing.

eSmart Systems is working with Evergy, a Topeka, Kansas based electric utility company that serves more than 1.6 million customers in Kansas and Missouri, to digitize Evergy’s power transmission system. It is also working with Xcel Energy, based in Minneapolis, Minnesota, and an unnamed “major public utility in the Southeast” of the United States.

Grid Vision tracks the performance of ongoing inspection work, provides instant insight into the location and severity of verified high-priority defects, and gives utility managers and analysts a deep and flexible framework for further asset intelligence.

The three-and-a-half-year-long Evergy project will improve reliability and resiliency of over 14 000 km of Evergy’s power transmission system by using Grid Vision to create a digital inventory of its assets, accelerating image analysis capabilities, and improving inspection accuracy by using AI combined with virtual inspections. The expected result is a significant cost reduction for inspections, maintenance and repairs.

There is a need for a dynamic energy infrastructure to ensure efficient, safe and reliable operations. AI, and especially machine learning, is increasingly used as a tool to improve the reliability of high-voltage transmission lines. In particular, it can allow a grid to transition away from fossil and nuclear sources to more variable sources, such as solar and wind. This will become increasingly important for several reasons: extreme weather will make operations increasingly challenging, and the grid will have to support a growing number of electric vehicles.

The vast number of choices means that random choices cannot be relied upon to provide results when facing multiple failures. Some form of intelligence is needed, human or machine, real or artificial, if problems are to be resolved quickly.

Wind and solar generation

Kansas state senator Mike Thompson (R-Shawnee) is a former meteorologist who is currently chair of the Kansas Senate Utilities Committee. He has introduced bill SB 279, “Establishing the wind generation permit and property protection act and imposing certain requirements on the siting of wind turbines.” This bill would require wind and solar farms to be built on land zoned for industrial use. The problem with this proposal is that half of Kansas’ 105 counties are unzoned. Unzoned counties that want wind or solar energy would have to zone that land as industrial.

The Annual Economic Impacts of Kansas Wind Energy Report 2020 states that wind energy is the least expensive energy source, providing 22 000 jobs (directly and indirectly). After Iowa, Kansas ranks second in the US for wind power, which contributes 44% of Kansas’ electricity net generation.

Typically, there are two reasons for objections to wind and solar power. First, some people have an economic connection with fossil fuels. Second, and especially for wind, they don’t like their visual and aural impact on the environment.

Another source of conflict is aboriginal rights. This topic will be covered in an upcoming but unscheduled post, Environmental Racism.

Artificial General Intelligences

From a previous post …

The haves will indirectly control artificial intelligence agents, while the majority of have-nots will be required to obey the whims of these agents, and their overlord handlers.

Post-modern bread-and-circuses equivalents will pacify the great unwashed. If that doesn’t work, even more direct, negative action will be taken.

Neural networks will live a life of their own, so it may not be possible for even the “haves” to exercise direct control over these agents. However, one proposed approach is to frame control, with an AI agent constitution, based on Isaac Asimov’s (1920-1992), Three Laws of Robotics.

In this post, these and other robotic laws will be examined and commented upon.

Sawyer (left) & Baxter (right) are collaborative robots. Just the sort of creatures that might end up in a dispute with living humans. (Photo: Jeff Green/ Rethink Robotics, 2015 CC-BY-SA-4.0)

Isaac Asimov’s Three Laws of Robotics were proposed in the 1942 short story Runaround. According to Asimov, they had their origin in a meeting between himself and John W. Campbell on 1940-12-23.

The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
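Read as a strict priority ordering, the Three Laws are easy to caricature in code. The sketch below is entirely hypothetical – the attribute names are invented – and mainly shows how mechanical the rules become once their ambiguity is stripped away:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False         # First Law: direct harm
    ignores_human_harm: bool = False  # First Law: harm through inaction
    disobeys_order: bool = False      # Second Law: disobedience
    endangers_self: bool = False      # Third Law: self-preservation

def permitted(action: Action) -> bool:
    # First Law dominates everything.
    if action.harms_human or action.ignores_human_harm:
        return False
    # Second Law: obey, unless obedience would break the First Law
    # (already ruled out above).
    if action.disobeys_order:
        return False
    # Third Law: self-preservation yields to the first two laws, so an
    # action that merely endangers the robot is still permitted.
    return True

print(permitted(Action("fetch coffee")))                      # prints True
print(permitted(Action("push bystander", harms_human=True)))  # prints False
```

The brittleness is immediate: nothing in the code (or the laws) says who decides whether an action “harms a human”, which is precisely the loophole the stories exploit.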

Normality 2020

Voice activated actuators wait patiently to serve you. Invisible logistic trails ensure an orange (Citrus × sinensis) is peeled, split into boats, if that is your preference, and placed in front of you on a dish of your own choosing, when you demand it.

Your own speech controls your digital assistants, not only commercial varieties such as Apple’s Siri, Amazon’s Alexa, and Google’s Assistant, but also open source Lucida (formerly Sirius), Abot, Jarvis and Jasper.

On a clear day, the sun shines, oranges are peeled and served, and there is no need to reflect on the laws of robotics underpinning digital assistants.

A snake in the garden of Eden

I have nothing against snakes, personally, but use the term to describe an unwelcome intruder into Eden: commands hidden in music, videos, or even white noise. This is done by using software to cancel out the sound that the speech recognition system was supposed to hear and replacing it with sound at ultrasonic frequencies, inaudible to humans, that would be transcribed differently. Instead of an orange, an apple (Malus pumila) is peeled, sliced and served on a dish of someone else’s choice. A humorous change in the eyes of many, but in our family some people are allergic to apples. Other substitutions can be more serious, even deadly. There can be significant security risks. It is at this stage that laws of robotics, or their AI equivalent, need to be applied.

One challenge with these three laws is the assumption that all human actions are legitimate. What happens if a human wants to harm another human? With these laws, it would be impossible for a robot to intervene on behalf of the person being harmed. So, it will undoubtedly not take many milliseconds before some enterprising hacker ensures that these three laws are voided.

Asimov was well aware of this shortcoming, which he would undoubtedly have described as a feature. He referenced Arthur Hugh Clough’s (1819-1861) satirical poem on the Ten Commandments, The Latest Decalogue, as its inspiration: “Thou shalt not kill; but needst not strive officiously to keep alive:”

Asimov introduced a zeroth law in Robots and Empire (1985), but it seems of limited use in conflict situations:

0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.

In western films, the cliché is that the good guys always wear white Stetson hats! In real life, it is more difficult to distinguish good people from evil-doers, or white-hat hackers from black-hat hackers.

These laws have been modified many times, by Asimov as well as others. One extreme is represented by Jack Williamson’s (1908-2006) novelette With Folded Hands (1947), rewritten as the novel The Humanoids (1949), which deals with robot servants whose prime directive is “To Serve and Obey, And Guard Men From Harm.” The Williamson robots take the robotic laws to the extreme, protecting humans from everything, including unhappiness, stress, unhealthy lifestyles and all potentially dangerous actions. All humans may do is sit with folded hands.

Some feel three laws are insufficient.

The Lyuben Dilov (1927-2008) novel, Icarus’s Way (alternative title, The Trip of Icarus) (1974) added:

4. A robot must establish its identity as a robot in all cases.

This law appears to have been violated in the celebrated Google Duplex restaurant reservation (2018-05-17): https://mashable.com/2018/05/17/google-duplex-dinner-reservation/#X7ChNbJ3baqw

Harry Harrison (1925-2012) also produced a fourth law, found in the short story, The Fourth Law of Robotics, in the tribute anthology Foundation’s Friends (1989):

4. A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.

Reproduction, here, is asexual but sensational. Why not a fourth law requiring a robot to maintain itself, by undertaking necessary hardware and software repairs? There are robots that can and do reproduce themselves, the most famous being RepRap, a low-cost, self-replicating 3D printer, initially made at the University of Bath in 2005: http://www.reprap.org/

Nikola Kesarovski (c. 1935-2007) published the book The Fifth Law of Robotics (1983):

5. A robot must know that it is a robot.

I am not quite sure why. Is it so that it knows it isn’t human? Should it know that it is a particular type of robot, for example a vacuum robot rather than a lawn-mowing robot?

Roger MacBride Allen (1957-) wrote a trilogy set within Asimov’s fictional universe. Caliban (1993), Inferno (1994) and Utopia (1996) are each prefixed with “Isaac Asimov’s”. Here, there are four New Laws, which treat robots as partners rather than slaves to humanity.

1. A robot may not injure a human being or allow a human being to come to harm.
2. A robot must cooperate with human beings except where such actions would conflict with the First Law.
3. A robot must protect its own existence.
4. A robot may do whatever it likes as long as this does not conflict with the first three laws.


The various robotic laws are very vague, with concepts such as human and robot left undefined. This can give rise to people or equipment being regarded as something other than what they are, such as a cyborg or an actuator, respectively, in an attempt to avoid following the laws. Ambiguity is a literary device that is masterfully exploited by Asimov and other science fiction authors.

Another challenge with the Asimov approach, is that it is only concerned about the adversarial relationship between two groups – robots and people. Nothing else matters. Robots do not seem to have any ethical obligations with respect to the environment, for example.

Even if the laws were amended or expanded to take other aspects of the world into consideration, these laws would still not work. The only reason for positing laws is to have them fail, in interesting ways. It is not the laws, but the storytelling that is important. The lesson to be learned is that it is not possible to restrict ethics to a set of a few simple rules. If one does, the entire system will at some point fall apart.

In many science fiction worlds, robots only have mental capabilities that are less than, or equal to, their human controllers, for lack of a better word. What happens when artificial intelligence advances beyond human levels? Superintelligence is a key challenge, a situation in which artificial intelligence, or machine intelligence to distinguish it from organic intelligence, will require more advanced ethical considerations, than those that can be stated in a literary work.

Deontology judges the morality of an action based on rules. It is a field I know almost nothing about, except that it is regarded by many professional philosophers as a dead end.

Perhaps it should be stated here and now that robots are another dead end. The future belongs not to robots but to Artificial General Intelligences (AGI). See: https://en.wikipedia.org/wiki/Artificial_general_intelligence These are machines with consciousness, intuitive, flexible and adaptive, even in terms of ethics. Like humans, AGIs do not rely on rote knowledge of rules, ethical or otherwise, but use them – if at all –  as guidelines to nudge ethical instincts and intuitions. It is a situation highly dependent on the environment people and AGIs are brought up in.

As an ethical amateur, I am attracted more to virtue-ethics than deontology. It is in the discussion of virtues, individually and collectively, that one can relate to behaviour that is beneficial, as well as that which is less so.

Rosalind Hursthouse writes in https://plato.stanford.edu/archives/fall2013/entries/ethics-virtue/ :

A virtue such as honesty or generosity is not just a tendency to do what is honest or generous, nor is it to be helpfully specified as a “desirable” or “morally valuable” character trait. It is, indeed a character trait—that is, a disposition which is well entrenched in its possessor, something that, as we say “goes all the way down”, unlike a habit such as being a tea-drinker—but the disposition in question, far from being a single track disposition to do honest actions, or even honest actions for certain reasons, is multi-track. It is concerned with many other actions as well, with emotions and emotional reactions, choices, values, desires, perceptions, attitudes, interests, expectations and sensibilities. To possess a virtue is to be a certain sort of person with a certain complex mindset. (Hence the extreme recklessness of attributing a virtue on the basis of a single action.)

Yes, this is a difficult act for a machine to follow, but absolutely essential if one is to have autonomous cars, autonomous surgeons and other tools that will interact intimately with humans.

The one recent book on ethics that I have enjoyed the most is After Virtue, by Alasdair MacIntyre. But that is another story …


  1. I taught Artificial Intelligence (AI) at Nord-Trøndelag Regional College from 1988 to 1991. My focus was on expert systems.
  2. I do not normally enjoy reading science fiction. However, I do find it rewarding to read about the topic.
  3. Currently, my main interest in AI relates to robotics in general, and assistive devices in particular. However, I also see a need to look beyond the present to a future where machines acquire a form of consciousness.
  4. Personally, if I needed extensive care in the future, I would prefer that care given to me by a robot rather than a human.


AI Soup: The Recipe

Reflecting on the thoughts of Kai-Fu Lee, a man of many titles.

Kai-Fu Lee (Illustration by Andy Friedman, MIT Technology Review)

A pervasive project is being undertaken in covert AI soup kitchens. Secret ingredients are being smuggled into these kitchens to make some of the largest, and (for some) best tasting, super-sized artificial intelligence soups the world is about to know.

Please be careful when you enter. Do not slop ingredients on the floor. We do not want to waste them. More importantly, if people are injured, we may have to pay compensation to any of the few remaining specimens of working humans. We may not have this concern for long. The goal of AI is to eliminate humans from the world of work, and to replace them with robots. A universal, basic (that means minimal) income for the majority. Unparalleled, unimaginable wealth for a technological elite.


  1. AI stock is based on bushels of university students, trained to be AI professionals and researchers.
  2. Add litres of data accumulated from computers, mobile phones, vehicles and anything else that has an ability to sense, record and transmit data.
  3. Thicken copiously with financing: Government grants, investments and even crowd funding are available. The exact mix will depend on the particular political whims of the day.
  4. Season with a culturally diverse bowl of innovative techniques, many open-source and freely available.
  5. Fine tune the taste for local consumption with a mix of entrepreneurial herbs, thoughtfully selected for the environment where the AI soup is to be consumed.

The secret of any soup is long, slow cooking.

Transfer the mixture to culturally correct tureens. Serve in 2020 in China, in 2023 in Europe, and in 2028 in North America. The rest of the world? Look what happened in developing countries when mobile phones eliminated the need for copper cables and landlines in both slums and rural areas.

Are we prepared for a world where half of all our daily tasks can be performed better, and at almost no cost, by artificial intelligence and robots? Are we ready for a revolution, the fastest transition humankind has ever experienced?

Will this technological revolution create new jobs at the same time as it displaces old ones? Will AI combine with humans to produce symbionts? Is a universal basic income a necessary key to AI acceptance?

Further reading: