If I had been born in the 21st century, I am certain that I would have avoided purchasing most of the 4 000 physical/ paper books that are found in our library. Most, but not all, because I appreciate many books precisely for their images. While there are technical problems using an e-book reader to view high-definition images, these devices are ideal tools for reading novels and more general works.

Our physical books are organized using a decimal classification system, first developed by Francis Bacon (1561 – 1626), but expanded upon by Melvil Dewey (1851 – 1931). Some aspects of this topic were discussed in an earlier weblog post. The issues raised there will not be repeated here, only augmented.

Our starting point for a classification system is the Dewey Decimal Classification (DDC), first published in 1876. The latest printed version, 23, was published in 2011. The online version, WebDewey, is continuously updated. Unfortunately, the DDC system is problematic, much like the personality of Melvil Dewey. The system was originally positively received, and initially almost universally used, especially if the universe is restricted to American public libraries. Its focus is on a masculine, Christian, European-American, homophobic world. We have introduced modifications.

First, we use a revised schedule for DDC 200 (Religion), developed by Paul Gerard, which gives a more equal weight to all religions, and provides adequate space for a full treatment of the Bahá’í Faith. This is referred to as the Phoenix schedule, and has been implemented in many places, including Cliff Cottage.

Second, our geographical world view is non-standard, with a focus on at least three different geographical areas: Greater Vancouver in Canada, Trøndelag in Norway, and the Bay Area in California. Thus I have developed my own classification system, Geoscheme. The current version, E2, dates from 2016-05-07, and can be accessed below.

Geoscheme E2

While Dewey’s promotion of the metric system can be applauded, other of his causes were less positively received and less successful, such as his promotion of spelling reform, which resulted in a permanent first-name change from Melville, and a temporary last-name change to Dui.

At the 2019 American Library Association annual conference, council document #50 presented a resolution on renaming the Melvil Dewey medal to remove Melvil Dewey’s association with the award. It was passed unanimously. Among the reasons cited are: “ … that he did not permit Jewish people, African Americans, or other minorities admittance to the resort owned by Dewey and his wife; … he was censured by the New York State Board of Regents for his refusal to admit Jews to his resort, whereupon he resigned as New York State Librarian; … Dewey made numerous inappropriate physical advances toward women he worked with and wielded professional power over; … during the 1906 ALA [= American Library Association] conference there was a movement to censure Dewey after four women came forward to accuse him of sexual impropriety, and he was ostracized from the organization for decades”.

Other Document Classification Systems

Perhaps the main challenge with library classification systems is their arrangement as hierarchical tree structures. As time progresses, the world of Melvil Dewey becomes less relevant. New categories become increasingly needed as old ones fade into the background. Increasingly, there is co-operation across fields, so that books (and other objects) need to display multiple classifications.

In Europe, the Universal Decimal Classification system dominates public libraries. It was developed by Paul Otlet (1868 – 1944) and Henri La Fontaine (1854 – 1943). They initially created the Répertoire Bibliographique Universel (RBU), starting in 1895. They then wrote to Dewey and received permission to translate his DDC into French. However, instead of translating, they made some radical innovations, such as adapting its enumerative classification approach, in which all subjects are listed and coded, into one that allows synthesis: essentially, the use of compound numbers to represent interrelated subjects. In addition, potential relations between subjects were identified, and symbols assigned to represent them. The result of this work, Manuel du Répertoire bibliographique universel, appeared in 1905. An outline of the UDC is available here.

So far, the important work of Charles Ammi Cutter (1837 – 1903) has been ignored in these weblog posts. Yet his Cutter Expansive Classification system is important. It uses seven separate schedules, each designed for libraries of a different size. The first schedule is the most basic; each subsequent schedule expands on the previous one. Cutter provided instructions on how a library might move from one schedule to the next as it grows.

The Library of Congress Classification (LCC) was developed by Norwegian-born librarian J. C. M. Hanson (1864 – 1943) from Cutter’s system, starting in 1897. It replaced the fixed location system developed by Thomas Jefferson (1743 – 1826). The major flaw with LCC is its absence of a sound theoretical basis. Classification decisions were driven by practical needs, rather than epistemology: it is focused on books found in one library’s collection, and does not attempt to classify human knowledge of the world.

Digital Documents

Our digital documents, including text, image and audio files, are stored on a server, where several copies are kept in case of disk failure, along with further copies on external hard drives. It takes only a few seconds to transfer these documents to other devices, such as laptops, stationary machines or e-book readers. These documents do not have the same need of a classification system, because they can be searched for in different ways.

The Five Laws of Library Science

This weblog post is being published on the fiftieth anniversary of the death of Shiyali Ramamrita Ranganathan (1892-08-12 – 1972-09-27). He was an Indian librarian and mathematician who developed the Five Laws of Library Science (1931), and the Colon Classification System (1933).

Shiyali Ramamrita Ranganathan

The five laws are:

  1. Books are for use: This focuses attention on access-related issues, such as library location, loan policies, hours and days of operation, the quality of staffing and even more mundane matters, such as library furniture and temperature control.
  2. Every reader his or her book: Since libraries serve a wide variety of patrons, they have to acquire literature to fit a wide variety of needs. Everyone is different, and each person has different tastes in the books they choose.
  3. Every book its reader: All books have a place in a library, even if only a small demographic chooses to read them.
  4. Save the time of the reader: All patrons should be able to locate the materials they want quickly and efficiently.
  5. A library is a growing organism: a library is a dynamic institution. Books, methods and the physical space need to be updated over time.

There have been numerous updates and modifications of these laws over time. In 2004, librarian Alireza Noruzi emphasized the web. In 2008, librarian Carol Simpson referred to media, more generally. In 2015, B. Shadrach referred to knowledge. In 2016, Achala Munigal focused on social media.

Colon Classification System

A faceted classification uses semantic categories, either general or subject-specific, that are combined to create the full classification entry.

In the Colon Classification system, originally presented in 1933, a book is assigned a set of values from each independent facet, using punctuation marks (most notably colons) and symbols between the facets to connect them.

The system is organized into 42 classes. In the 6th edition (2006), some examples are: Class D = Engineering, J = Agriculture, N = Fine Arts, U = Geography and X = Economics. Each class has its own specific characteristics, or facets. There are five fundamental categories that can be used to express the facets of a subject:

  • Personality is the most specific or focal subject.
  • Matter is the substance, properties or materials of the subject.
  • Energy includes the processes, operations and activities.
  • Space relates to the geographic location of the subject.
  • Time refers to the dates or seasons of the subject.
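As a sketch of how these five categories combine, the toy function below builds a compound class number from a main class letter and PMEST facet values. The separator glyphs and the facet values in the example are illustrative only, not the official Colon Classification notation.

```python
FACET_ORDER = ["personality", "matter", "energy", "space", "time"]

# Separator glyphs loosely modelled on Colon Classification practice;
# treat them, and the example values below, as illustrative only.
SEPARATORS = {
    "personality": ",",
    "matter": ";",
    "energy": ":",
    "space": ".",
    "time": "'",
}

def colon_number(main_class, **facets):
    """Combine a main class with PMEST facet values into one string."""
    parts = [main_class]
    for facet in FACET_ORDER:
        if facet in facets:
            parts.append(SEPARATORS[facet] + str(facets[facet]))
    return "".join(parts)

# A hypothetical work in Agriculture (J), with made-up facet values
# for subject (personality), place (space) and period (time).
print(colon_number("J", personality="381", space="44", time="N6"))
# J,381.44'N6
```

The point of the sketch is the synthesis itself: the full class number is assembled from independent facets rather than looked up in a single enumerated list.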

As e-reading increases, and works rely more on digital than physical storage, it becomes easier to use tags, rather than numbers, to classify books. With tags, one is no longer confined to a single classification system. Tags can be a mishmash of Dewey, Cutter, LCC, Colon or other features. There is no need to physically locate a book in order to read it. At Cliff Cottage, relatively fewer books are located on physical shelves in our library, while an increasing number are found on virtual shelves on our server. There is no limit on the number of household residents who can access the same book simultaneously!

A book can be found, and loaded onto an e-reader, almost instantaneously. The current difficulty with such a system is agreeing on which tags are to be used.
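As a minimal illustration of the idea, here is a sketch of tag-based retrieval, where each book carries a free mix of tags drawn from any scheme (Dewey numbers, free keywords, author names), and a search returns every book matching all requested tags. The titles' tag sets are invented for the example.

```python
# Invented tags, mixing Dewey-style numbers with free keywords.
library = {
    "Ain't I a Woman": {"305.42", "feminism", "bell-hooks"},
    "The Color Purple": {"813", "fiction", "alice-walker"},
    "belonging": {"305.42", "bell-hooks", "kentucky"},
}

def find(tags):
    """Return every title whose tag set contains all requested tags."""
    wanted = set(tags)
    return sorted(title for title, t in library.items() if wanted <= t)

print(find(["bell-hooks"]))          # ["Ain't I a Woman", 'belonging']
print(find(["305.42", "kentucky"]))  # ['belonging']
```

Unlike a shelf number, nothing here forces a book into exactly one place; a title can satisfy many different queries at once.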

Note: This weblog post has been in development for over two years. One text (A), mostly about Ranganathan, was originally written 2020-07-08; a second (B), about library classification systems more generally, on 2020-11-19; a third (C), mainly about classification as it applies to physical inventories, like screws, buttons, flour and yarn, was started on 2021-12-23. These were amalgamated on 2022-02-25 and further modified on 2022-08-31. This text didn’t work properly. On 2022-09-20, these were separated into two documents, essentially (A & B) and (C). The text as it appears here consists of the first two texts.

bell hooks (1952 – 2021)

bell hooks, 2009-11-01 Photo: Cmongirl

bell hooks (no capitals, please) was born Gloria Jean Watkins in Hopkinsville, Kentucky, on 1952-09-25, or 70 years before the publication of this weblog post. Her pen name is taken from her maternal great-grandmother who, according to Heather Williams “was known for her snappy and bold tongue, which [bell hooks/ Gloria Jean Watkins] greatly admired”. Williams further informs us that the name was put in lowercase letters “to distinguish [herself from] her great-grandmother.” It also signified that it was more important to focus on her works, not her personality, expressed as the “substance of books, not who I am.”

Perhaps the most important insight bell hooks brings is that communication and literacy, defined as the ability to read, write and think critically, are necessary for the feminist movement, because without them people may not recognize gender inequalities.

If there is a single work that would help people understand bell hooks, it is Ain’t I a Woman: Black Women and Feminism (1981). The title is not original. It was used by Sojourner Truth (1797 – 1883), as the publication title of an untitled speech given at a Woman’s convention in 1851 at Akron, Ohio. The fact that 130 years separate these publications suggests that the status of Black women had not improved noticeably in that time.

Racism and sexism have doubly impacted the lives of Black women, so that they have the lowest status and worst conditions of any group in American society. Southern segregationists promoted a stereotype of Black female promiscuity and immorality. According to hooks, white female reformers were more concerned with white morality than the living conditions of Black Americans.

White society stereotyped white women as pure/ goddess/ virginal, in contrast to the stereotypical Black women depicted as seductive whores. This, in turn, justified the rape of Black women by white men. hooks views Black nationalism as patriarchal and misogynist.

The then-current feminist movement (from the 1970s), is seen as a (largely) white middle and upper class movement, unsympathetic to the needs of poor/ non-white women. This feminism actually reinforces existing patterns of sexism/ racism/ classism.

There are two other problematic starts, and legacies affecting Black women. The Nation of Islam dates from 1930. The Nation was started by Wallace Fard Muhammad (c. 1877 – c. 1934) in Detroit. After Muhammad’s disappearance, leadership was taken over by Elijah Muhammad (1897 – 1975), who expanded it to include schools/ banks/ restaurants/ stores/ truck and air based transportation systems/ publishing in 46 American cities. It also owned about 80 square kilometers of farmland in 1970. While all of these may be viewed positively, it was also a patriarchal organization that promoted gendered roles, and denied women leadership opportunities.

The Black Panther Party was started in the mid 1960s by Huey P. Newton (1942 – 1989) and Bobby Seale (1936 – ). They initially developed a 10-point manifesto. The Black Panther Party founded over 60 community support programs (renamed survival programs in 1971) including food banks, medical clinics, sickle cell anemia tests, prison busing for families of inmates, legal advice seminars, clothing banks, housing cooperatives, and their own ambulance service. The most famous of these programs was the Free Breakfast for Children program, which fed thousands of impoverished children daily during the early 1970s. Newton also co-founded the Black Panther newspaper service, which became one of America’s most widely distributed African-American newspapers.

To begin with, not all was well with the Black Panther Party. It too advocated violence, black masculinity and traditional gender roles. Thus, it was not a vehicle for improving the status of Black women. It was patriarchal and misogynist. However, things started to improve, especially from 1968, when women constituted two-thirds of the party.

In Black Looks: Race and Representation (1992), hooks takes an article by Audre Lorde (1934 – 1992) about black womanhood as a structure, then discusses how black women are imprisoned in a stereotype of violence that continues on through the generations. She believes that the narrative can be changed, but that it is hard. Black women are encouraged to discuss Black literature. Yet this does not come with any guarantee of self-actualization. In particular, she refers to Celie, a character in Alice Walker’s (1944 – ) The Color Purple (1982), who escapes an abusive situation, only to return to a similar situation at the end of the novel. What these fiction writers are doing is breaking “new ground in that it clearly names the ways structures of domination, racism, sexism, and class exploitation, oppress and make it practically impossible for black women to survive if they do not engage in meaningful resistance on some level.” (p. 50) Angela Davis (1944 – ) and Shirley Chisholm (1924 – 2005) are presented as examples of Black women breaking the trend and resisting the cycles. Women of color need to engage in feminism and in the “decolonizing of our minds” in order to center “social change that will address the diversity of our experiences and our needs.” (p. 60)

Not being Black, female, queer or gay, it is not my place to pass judgement on the previous two works. At some level there is an intellectual understanding, but no lived experience. This is not the case with the third, and last, book that I would like to discuss: belonging: a culture of place (2009). hooks begins chapter 2, Kentucky is My Fate, with: “If one has chosen to live mindfully, then choosing a place to die is as vital as choosing where and how to live. Choosing to return to the land and landscape of my childhood, the world of my Kentucky upbringing, I am comforted by the knowledge that I could die here.” This was her fate, in 2021.

She regards her upbringing in rural Kentucky, as an exposure to anarchy, where people are enabled to live a relatively free life, despite racial separatism, white exploitation and black oppression. She contrasts this with more general urban experiences, where rules were made, imposed and enforced by unknown others, where “black folks were forced to live within boundaries in the city, ones that were not formally demarcated, but boundaries marked by white supremacist violence against black people if lines were crossed. Our segregated black neighborhoods were sectioned off, made separate. At times they abutted the homes of poor and destitute white folks. Neither of these groups lived near the real white power and privilege governing all our lives.”

In her last chapter, 10: Earthbound: On Solid Ground, hooks discusses the concept of interbeing, “That sense of interbeing was once intimately understood by black folks in the agrarian South. Nowadays it is only those who maintain our bonds to the land, to nature, who keep our vows of living in harmony with the environment, who draw spiritual strength from nature. Reveling in nature’s bounty and beauty has been one of the ways enlightened poor people in small towns all around our nations stay in touch with their essential goodness even as forces of evil, in the form of corrupt capitalism and hedonistic consumerism, work daily to strip them of their ties with nature…. To look upon a tree, or a hilly waterfall, that has stood the test of time can renew the spirit. To watch plants rise from the earth with no special tending reawakens our sense of awe and wonder.”

While I am happy that bell hooks was able to return to Kentucky, it is not always possible for people to return to their own place. For most of my adult life, my home town, New Westminster, on the banks of Sto:lo, the Fraser River, has been economically inaccessible. Thus, I have had to create my own substitute, Cliff Cottage, at Vangshylla, in rural Inderøy, Trøndelag, Norway. This has not happened without my own internal protests! Despite these, it is a place that is suitable for my anarchist self. Rural landscapes make better use of their internal resources, and are closer to sustainable. Prices for housing are lower, so people can work less. The benefits of a rural lifestyle are real.

Urban landscapes, unfortunately, have become dependent on the massive import of external resources for their survival. They are no longer sustainable. People living there feel a need to work excessively just to pay for the basics of housing. The benefits of an urban lifestyle are largely a mirage. At one point I read that in 2020 New Westminster experienced the worst air quality in the world, due to the combined effects of the 2020 Western American wildfires and a fire at the old pier at the quay.

This week, I was sent two listings for houses for sale in Kerrisdale, a residential area in Vancouver, British Columbia, where Trish, my wife, grew up. The prices for these modest houses on smallish lots were between two and three million dollars, Canadian. I would discourage everyone from supporting this form of übercapitalism. Buying such a house is not in the spirit of bell hooks. It is hard to be an anarchist while making monthly mortgage payments! It is hard to be an anarchist while wasting income on unnecessary expenditures.

Plasma Kinetics

This illustration shows some of the applications for Plasma Kinetics hydrogen technology, which include aircraft and assorted types of land vehicles. Presumably, various types of vessels could also use it. Source: Plasma Kinetics.

Hydrogen based storage technology could replace capacitor and battery technology for energy storage in vehicles, vessels and aircraft of various types and sizes. Previously, posts in this weblog have taken up a hydrogen station explosion, and its aftermath. In addition, a flawed report about the economics of hydrogen and methane has been examined.

Plasma Kinetics hydrogen technology was introduced, and patented, in 2008. It was first claimed that the technology was transformational, then disruptive. Almost immediately restrictions were placed on their use of patents, effectively resulting in the technology being banned by the US government. That situation continued until 2017, when it was allowed to be commercialized. There were some restrictions imposed under the International Traffic in Arms Regulations (ITAR), which continues to restrict its export as a missile fuel.

Where Plasma Kinetics technology differs from other providers of hydrogen is that it does not need a compressed-gas infrastructure to capture, move or distribute hydrogen. Instead, one common distribution method is to fill 19 l / 7 kg containers with hydrogen, for sale at assorted local stores. Empty containers can be returned in exchange for recharged containers. Plasma Kinetics systems are slightly larger, and only moderately heavier, than compressed-gas carbon-fiber tanks at 700 bar, but solid storage containers are much easier to manage than compressed gas, and have a lower overall energy cost and a cleaner fabrication process. Other advantages:

  • The stored hydrogen is non-flammable: safe storage of zero-carbon hydrogen in dense, solid form.
  • Containers of hydrogen can be transported via truck, rail or ship without restriction.
  • There is no need to build compressed-hydrogen gas stations, pipelines or fixed-structure pumping stations.
  • No energy or pressure is required to collect and store hydrogen.
  • Cassette, canister and other container systems can be easily recharged.
  • Materials used are non-toxic and readily available worldwide.
  • The entire process is quiet.

The nano-graphite film recharges through 150 cycles and is fully recyclable. The reason for this limit is that the process only works with atomic hydrogen = 1H (where an atom consists of one proton and one electron, but no neutrons). This amounts to 99.98% of hydrogen found in the wild. Deuterium = 2H (where an atom consists of one proton, one neutron and one electron) amounts to 0.02% of the wild hydrogen population. It cannot be used in the energy system, so it accumulates on the film. It can, however, be retrieved when the storage units are recycled, and sold for a profit that exceeds the recycling costs!
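A rough, illustrative estimate of how much deuterium might accumulate over the film's life. Treating the quoted 0.02% as a mass fraction is a simplification: it is an atom fraction, and a deuterium atom weighs roughly twice a protium atom, so the real figure would differ.

```python
# All inputs are the figures quoted in the text; the mass-fraction
# reading of 0.02% is an assumption made for simplicity.
cartridge_h_kg = 7.0         # hydrogen per cartridge
deuterium_fraction = 0.0002  # 0.02%
cycles = 150                 # quoted film lifetime

accumulated_kg = cartridge_h_kg * deuterium_fraction * cycles
print(round(accumulated_kg, 2))  # 0.21 kg over the film's life
```

Even by this crude estimate, a recycled unit carries a non-trivial amount of deuterium, which helps explain why its recovery can be profitable.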

Comparison between different hydrogen storage methods. Source: Plasma Kinetics.

My acquaintance with this technology came from a YouTube video (2021-06-24) on the E for Electric channel, when Sandy Munro was asked by Alex Guberman, what he would do if he became CEO of Toyota for a day? Part of his answer involved Toyota acquiring, or at least developing a relationship with, Plasma Kinetics.

Some weeks later, in an interview with Paul Smith (2021-08-12), Smith explained how the technology works, starting at about 5m00s in. He claimed that 15 lbs provides 20 miles of range in a car. With a severe allergy to imperial units, I would rather say that a 19 l / 7 kg cartridge provides an average car with sufficient energy for about 30 km. Cylinders for trucks would be 20 × larger (140 kg). Four of those would allow a truck to travel 570 miles = ca. 900 km.

One of the main concerns with this technology is the capability of consumers to replace a 19 l/ 7 kg cartridge every 30 km. People expect a modern electric vehicle (EV) to have a range of at least 300 km, which would require a vehicle to carry ten such units, at a weight of 70 kg. It was pointed out that systems were being developed for the automatic removal and insertion of disks (in cars), and presumably cylinders (in trucks and airplanes).
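A quick back-of-the-envelope check of these figures, using only the numbers quoted above:

```python
# All inputs are the numbers given in the text.
cartridge_mass_kg = 7        # one 19 l cartridge
range_per_cartridge_km = 30  # quoted range for an average car

target_range_km = 300
cartridges_needed = target_range_km / range_per_cartridge_km
total_mass_kg = cartridges_needed * cartridge_mass_kg
print(cartridges_needed, total_mass_kg)  # 10.0 70.0

# Truck figures: cylinders are 20x the cartridge mass; four of them
# give the quoted 570 miles.
cylinder_mass_kg = 20 * cartridge_mass_kg  # 140 kg
truck_range_km = 570 * 1.609344
print(round(truck_range_km))  # 917, i.e. ca. 900 km
```

The arithmetic confirms the figures in the text: ten cartridges (70 kg) for a 300 km car range, and roughly 900 km for a truck on four cylinders.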

It was noted that while batteries are extremely efficient, the specific energy of hydrogen, expressed in terms of J/ kg, is three times that of a battery. Except, in some respects, one is comparing avocados with olives! The hydrogen needs to go through a fuel cell for its energy to be converted to electricity.

It should be noted that, before the hydrogen ends up in some container, water (H2O) has been split in an electrolyzer into two parts hydrogen (H2) and one part oxygen (O2). Please do not ask what happens to the oxygen!

Both fuel cells and electrolyzers are becoming smaller, lighter and more reliable. Electrolyzers can be stationed at local wind or photo-voltaic farms, wastewater treatment facilities, or other climate friendly sources.

It was also pointed out that a conventional compressed hydrogen refueling station can cost US$ 2.5 to 3 million. This contrasts with a station for Plasma Kinetics containers that costs about US$ 200 000.

A fuel cell vehicle using this technology should be far cheaper to make than a battery electric vehicle. Some items are eliminated, others are repurposed. For example, the battery cooling system becomes a fuel cell cooling system. Some components remain the same, such as the electric motors. In essence, a heavy battery is being replaced with a much lighter fuel cell and the Plasma Kinetics photo release system for hydrogen. This should give the vehicle improved range.

Paul Smith concludes that interest in the technology is stronger in Asia and Europe, and much weaker in North America. A fab = fabrication facility = factory to make the equipment costs about US$ 100 million.

In EV 2030 predictions, the challenges with fuel cells involve the energy costs of electrolyzing hydrogen from water, which account for somewhere between 25% (DC) and 31% (AC) energy losses. Then, processing the hydrogen in the fuel cell costs another 50%. This means that the energy value available to the motors is somewhere between 36 – 38%. In contrast, the energy value available with a battery is about 77%.
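Chaining the quoted losses can be sketched as follows. This assumes the fuel-cell loss applies to the energy remaining after electrolysis; the results land near, though not exactly on, the 36 – 38% range quoted, so the source figures may use slightly different loss values.

```python
def remaining(*losses):
    """Fraction of energy left after applying each loss in sequence."""
    frac = 1.0
    for loss in losses:
        frac *= 1 - loss
    return frac

# DC electrolysis (25% loss), then fuel cell (50% loss):
print(round(remaining(0.25, 0.50) * 100, 1))  # 37.5
# AC electrolysis (31% loss), then fuel cell (50% loss):
print(round(remaining(0.31, 0.50) * 100, 1))  # 34.5
```

Whatever the exact inputs, the structure of the calculation is the point: each conversion stage multiplies the remaining fraction, which is why the fuel-cell pathway ends up at roughly half the battery pathway's 77%.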

Since my prophecy quotient is already used up, I will only speak of dreams. One of these is that dynamic charging along highways will meet much of the vehicular need for electricity by 2050. Unfortunately, this is not supported by any evidence seen so far. Associated with this dream is the hope that the cost of dynamic-charging technology will be less than that of hydrogen containers and fuel cells, or equivalent battery-based components, in vehicles. An addendum to this dream is that solid-state batteries will become the norm, because of their increased specific energy, energy density and durability. Such batteries would generally be much smaller, reserved for last-mile situations, something a 20 kWh battery would be able to supply.


For people living – possibly even born – in the 21st century, Eurorack is a major approach to acquiring an affordable synthesizer. It is not a specific instrument, but a modular synthesizer format originally specified in 1996 by Doepfer Musikelektronik. It has since grown in popularity, and as of 2018 has become the dominant hardware modular synthesizer format, with over 5000 modules available from more than 270 different manufacturers.

Stated another way: If you, as a synthesizer playing person, want to base your synthesizer on modular components, there is no point in acquiring anything that isn’t Eurorack compatible; If you are a synthesizer module manufacturer, there is no point in offering modules that aren’t Eurorack compatible. Eurorack is the unavoidable standard, the intersection between module consumers and producers. Here, in this weblog post, the Eurorack specifications will be examined in some detail.

The mechanical specifications for Eurorack are derived from Eurocard, but with additional power-supply and bus-board specifications. The power supply is currently specified as A-100 PSU3, updated in 2015. Many cases adhere to the older A-100 PSU2 specification; this allows modules to fit into existing (read: used) rack cases.

The Doepfer bus board allows for a maximum of 14 modules to be plugged in. A standard Doepfer case, either rackmount or portable, consists of two rows of 84hp, 6U high, that contain one PSU and two bus boards.

A Doepfer A-100 modular synthesizer, with two modules from Analogue Solutions, and the remainder from Doepfer. Photo: Nina Richards, 2011-12-02.

Doepfer-style bus boards are rigid circuit boards. An alternative is a flying bus board, which has similar connections but uses a flexible ribbon cable. This is often preferred, as mounting rigid circuit boards can sometimes prove difficult.

The modules themselves have to meet Eurocard specifications for printed circuit board (PCB) cards that can be plugged together into a standard chassis which, in turn, can be mounted in a 19-inch rack. The chassis consists of slotted card guides top and bottom, into which cards are slid so they stand on end. The spine of each card has one or more connectors which plug into mating connectors on a backplane at the rear of the chassis.

Module height is three rack units (3U), and width is measured in the Eurocard-specific horizontal pitch (hp) unit, where 1 hp equals 0.2 inches, or 5.08 mm. The modules were largely low-cost and compact, and some components on their boards were socketed instead of soldered down, so the user could, for example, upgrade to a better op-amp IC.

An unpopulated Doepfer LC6 Eurorack case, with power bus. Photo: Ashley Pomeroy, 2020-12-31.

Nathan Thompson, writing as nonlinearcircuits, has posted 33 laser-cut Eurorack cases, plus rails and some other components, on Thingiverse. Most of the cases date from 2015 and 2016.

Modules connect to a bus board using a 10-to-16 or 16-to-16 pin cable, depending on module design. These 16 pins are arranged in pairs and carry the following signals, from top to bottom: Gate, CV, +5 V, +12 V, GND and -12 V. The bottom 10 pins do most of the work, providing +12 V and -12 V to power the modules. The top two pairs are for Doepfer’s optional internal CV and Gate routing. The +5 V rail is used on some modules that require additional power.

Plugging modules in is not always as simple as it seems. Experienced Eurorack users rigorously check connections before powering up, no matter how long they have been working with the system. Typically, the red stripe on the ribbon cable connecting a module to the bus board must line up with -12 V. This should be labeled on the module, and is always labeled on the bus board. Plugging a module in incorrectly may have expensive ramifications.

The A-100 PSU2 provides 1 200 mA (milliamps) of current on both the +12 V and -12 V rails. This has to be compared with the current drawn by the modules, which must total less than what the PSU specifies. The A-100 PSU3 also provides a +5 V rail.

With the classic Doepfer case, a user needs to consume less than 1 200 mA on each rail, and modules should be split roughly evenly between the two bus boards. If a module requires +5 V, either a PSU3 has to be used, or an adapter that takes current from the +12 V rail; in the latter case, the amperage required on the +5 V rail is subtracted from the current available on the +12 V rail. The power specifications in Eurorack are not technically standardized, but most manufacturers, including Doepfer as of 2015, follow the Doepfer standard.
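This power budgeting can be sketched as a simple tally. The module names and current draws below are invented for the example; the 1 200 mA limit and the convention that a +5 V adapter draws from the +12 V rail come from the text.

```python
PSU_LIMIT_MA = 1200  # A-100 PSU2: 1200 mA on each of +12 V and -12 V

# (name, +12 V draw, -12 V draw, +5 V draw) in mA; illustrative values.
modules = [
    ("VCO",        60, 50,   0),
    ("VCF",        40, 40,   0),
    ("VCA",        30, 30,   0),
    ("Digital FX", 90, 20, 100),
]

plus12 = sum(m[1] for m in modules)
minus12 = sum(m[2] for m in modules)
plus5 = sum(m[3] for m in modules)

# With a +5 V adapter, the +5 V draw is taken from the +12 V rail.
plus12_total = plus12 + plus5

print(plus12_total, minus12)  # 320 140
assert plus12_total <= PSU_LIMIT_MA and minus12 <= PSU_LIMIT_MA
```

The same tally, run against a real module list, shows at a glance whether a planned system fits the PSU, and which rail is closest to its limit.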

Perhaps the most important consideration, but one that may be difficult to answer for someone new to synthesizers and/ or Eurorack, is deciding on the type of rig to make.

Some people prefer a classic analogue synth: a rig capable of generating its own waveforms, with wave-shaping tools to add character, including textures and timbres, to the generated signal. Another approach is to build an effects rack that processes sound generated elsewhere. These can be monophonic, stereo or polyphonic. A third option is a drum machine, focused on rhythm rather than more tonal qualities.

One major advantage of Eurorack is its modular nature, allowing modules to be added and removed. To construct a self-contained instrument one needs: an oscillator, a filter, a voltage-controlled amplifier (VCA), two envelope generators (one for the filter and another for the VCA), an effects unit, and a mixer and/ or an output module.
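As a rough software analogy of that signal chain (not how an analogue Eurorack actually works), the sketch below wires a sawtooth oscillator into a simple one-pole filter swept by one envelope, then into a VCA scaled by a second envelope. All the parameter values are invented for illustration.

```python
import math

SR = 44100  # sample rate, Hz

def vco_saw(freq, n):
    """Sawtooth oscillator: a ramp from -1 to +1 each period."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def envelope(n, decay=0.0005):
    """Exponential decay envelope, starting at 1.0."""
    return [math.exp(-decay * i) for i in range(n)]

def vcf_lowpass(signal, cutoff_env):
    """One-pole lowpass; the envelope opens and closes the filter."""
    out, y = [], 0.0
    for x, a in zip(signal, cutoff_env):
        y += 0.5 * a * (x - y)  # higher envelope value -> brighter sound
        out.append(y)
    return out

def vca(signal, amp_env):
    """Voltage-controlled amplifier: scale the signal by the envelope."""
    return [x * a for x, a in zip(signal, amp_env)]

n = SR // 10  # 100 ms of audio
raw = vco_saw(220.0, n)                    # oscillator
filtered = vcf_lowpass(raw, envelope(n))   # filter with its own envelope
note = vca(filtered, envelope(n, 0.0003))  # VCA with a second envelope
print(len(note))  # 4410 samples
```

The structure mirrors the module list above: one envelope generator modulates the filter, a second one the VCA, and the oscillator knows nothing about either.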

Beginners are often encouraged to choose an analogue oscillator. These are easy to find and use, while still offering opportunities for creative expression.

Voltage-controlled oscillators (VCOs) generate waveforms (sine, triangle, sawtooth, ramp or square) that are slightly unstable, with fluctuations in pitch and timbre as the voltage changes over time. This gives each oscillator a unique character.

Filters impact sounds the most. For better or worse, many modern synths use filters with characteristics that emulate those found on specific vintage synthesizers.

Robert Moog’s (1934 – 2005) lasting impact on synthesizers starts with his 1 V per octave standard: increasing the voltage going into a VCO by 1 volt raises its pitch by one octave. To understand this, consider a piano and how it is tuned. Convention dictates that middle C is referred to as C4. Tuning is based on A4, the first A above middle C, which has a standard frequency of 440 Hz. For convenience, it will be assumed that this is produced by a VCO signal of 4 V. Thus, the relationship between note, voltage and frequency can be expressed by: A0 = 0 V = 27.5 Hz; A1 = 1 V = 55 Hz; A2 = 2 V = 110 Hz; A3 = 3 V = 220 Hz; A4 = 4 V = 440 Hz; A5 = 5 V = 880 Hz; A6 = 6 V = 1 760 Hz; A7 = 7 V = 3 520 Hz; A8 = 8 V = 7 040 Hz. Note: not all VCOs are tuned to A in this fashion. As can be seen, this is an exponential relationship between voltage and frequency, as each change of octave requires a doubling or halving of frequency. Reproducing this exponential curve accurately is difficult in analogue synthesizers, because temperature changes and the ageing of electronic components can shift pitch; the resulting inaccuracies are referred to as tracking errors.
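Assuming, as above, that A0 = 27.5 Hz corresponds to 0 V, the 1 V per octave relationship reduces to a one-line calculation:

```python
# 1 V/octave: each additional volt doubles the frequency.
# Assumes A0 = 27.5 Hz at 0 V, as in the table above.
def volts_to_freq(v):
    return 27.5 * 2 ** v

for v in range(9):
    print(f"A{v} = {v} V = {volts_to_freq(v):g} Hz")
```

This reproduces the table, including A4 = 4 V = 440 Hz.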

An aside: many Japanese synthesizers, such as those made by Yamaha or Korg, use a system where voltage is proportional to frequency. If A1 = 1 V, then A2 = 2 V, A3 = 4 V and A4 = 8 V. In other words, it takes a doubling or halving of the voltage to produce an octave change.
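For comparison, this proportional scheme (commonly called Hz/V) can be sketched the same way, following the text’s example of A1 = 1 V and taking A1 = 55 Hz from the table above:

```python
# Hz/V: voltage is proportional to frequency.
# Assumes A1 = 55 Hz at 1 V, i.e. a scale of 55 Hz per volt.
def freq_to_volts_hzv(freq_hz):
    return freq_hz / 55.0

for octave in range(1, 5):
    f = 55 * 2 ** (octave - 1)
    print(f"A{octave} = {f} Hz = {freq_to_volts_hzv(f):g} V")
```

Here each octave doubles the voltage, so A4 already requires 8 V, which is why Hz/V gear and V/octave gear cannot share pitch control voltages without conversion.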

There are three basic approaches to acquiring modules that can be used with Eurorack. These are 1) assembled systems; 2) DIY from kits; 3) DIY from components. All three of these approaches will be discussed below.

Assembled Systems

In the late 1960s, Moog released the synthesizer modules Ic, IIc and IIIc, followed by the Ip, IIp and IIIp. These were followed in the early 1970s by Systems 10, 12, 15, 35 and 55. These were all extremely expensive, based on discrete transistor designs. The separate modules – such as oscillators, amplifiers, envelope generators, filters, noise generators, ring modulators, triggers and mixers – were connected using patch cords, which also enabled them to control each other. This produced a distinctive sound that made its way into many contemporary recordings. Production of all of these except Systems 15, 35 and 55 had stopped by 1973. These last three lasted until 1981. Moog has released new versions of some of these since 2014, but they typically cost US$ 35 000.

The patents and other rights to Moog’s modular circuits expired in the 1990s. With the expiration of these rights, other manufacturers have been able to offer sound clones of these modules, many in the Eurorack format. Since 2020, Behringer has been one of these.

The Behringer 960 Sequential Controller. Photo: Behringer

The Behringer 960 Sequential Controller replicates the operation of the System 55 sequencer, but uses modern components and is built so that it can fit in a standard Eurorack case. It is also affordable, at about NOK 1 600.

DIY from kits

Dreadbox Dysphonia Eurorack synth module. Photo: Dreadbox

For slightly more money, about NOK 2 200, one could also buy a Dreadbox Dysphonia, which was offered as a kit in 2021-11. As with many kits, it was made as a single run: once the kits from that batch are sold, no additional kits will be made. Dreadbox describes this as “buy now or cry later”. On 2022-01-19, one was being offered for sale for NOK 4 000. Despite the hype, one can usually expect something similar to be offered in the future, though there will be differences, sometimes even improvements.

The main advantage of this kit is that it can be used as a stand-alone desktop synthesizer, or be fitted into a Eurorack case. To facilitate both purposes, it comes with a USB to Eurorack power converter. This type of kit is claimed to be well suited to inexperienced DIY constructors: the instructions are typically easy to understand, and the kit is easy to solder together.

The Dysphonia consists of 13 individual sections that offer an affordable, compact, modular patch system, if one is prepared to build the system from parts. A single analogue oscillator provides 4 waveforms that can be patched independently through 3 VCAs (voltage-controlled amplifiers) and a 3-channel mixer, before being subjected to a 24 dB 4-pole low-pass filter and a 12 dB 2-pole multimode filter. The low-pass filter can also self-oscillate to provide additional tones. In addition to an analogue LFO (low-frequency oscillator) and envelope, there is a digital modulator providing 4 different modes: LFO, envelope, random and CC. A CC (control change) is a MIDI (musical instrument digital interface) message capable of transmitting a range of values, usually 0-127. These are commonly used for controlling volume (#7), pan (#10), data slider position (#6), mod wheel (#1) and other variable parameters. They can enhance music, but over-use of these messages can result in a MIDI log-jam, where the amount of data sent exceeds the supported bandwidth. There is also a MIDI-to-CV (control voltage) module, which provides analogue-to-digital and digital-to-analogue conversions, allowing the module to interact with a keyboard, computer, phone or almost any other device. Finally, there is a Hybrid Echo module.

DIY from components

One useful source of updated electronic information comes from Elektor magazine. A green subscription provides everything digitally, including back issues. Elektor publishes electronic projects, background articles and designs aimed at engineers, enthusiasts, students and others. It also sells printed circuit boards (PCBs), kits and modules. PCB design work is usually available without charge from their website. Microcontroller- and/or computer-based projects normally come with program source code and other necessary files.

This is also a good source of synth designs that take advantage of modern electronic components with methodologies that are suitable for hobbyists.

Gear Acquisition Syndrome

One of the major challenges with Eurorack is that it encourages the acquisition of excessive amounts of gear. Gear acquisition syndrome (GAS) is a real psychological challenge, satirically documented by Steely Dan guitarist Walter Becker (1950 – 2017) in a 1994-04 Guitar Player magazine article (p. 15), where the G originally stood for guitar. Because the many providers of Eurorack offer a wide variety of relatively low-cost components, often with specific but limited characteristics, it is tempting to buy just one more! Some people realize that compulsive shopping should be resisted. Those who need the advice will probably not follow it.

The seven key stages of GAS are discussed in a 2022-08-18 Music Radar article. These are: dissatisfaction, desire, ‘research’, the purchase, guilt, acceptance and relapse. Relapse, this last and “cruellest stage of GAS[,] can hit anywhere between a year to eighteen months after the purchase, although the time passed invariably depends on the amount of cash spent and the amount of meals you’ve had to eat from a tin as a consequence.” Once again, the article refers specifically to guitars, but it also applies to synths, and by extension Eurorack modules.

Another weblog post tentatively titled DIY Synths and currently scheduled for publication 2023-03-25, contains more detailed information about synth circuits, especially from kits.

Elizebeth Friedman (1892-1980)

One hundred and thirty years before the publication of this weblog post, Elizebeth Friedman née Smith (1892-08-26 – 1980-10-31) was born. She was known in later life for her expertise in cryptanalysis.

Cryptanalysis, popularly called code breaking, is an important branch of computer science. All practitioners are expected to have some basic understanding of it, at both practical and conceptual levels. It is a broader topic than many realize, involving the analysis of information systems to understand what lies hidden. As expected, Wikipedia has an article on the topic. People often describe some (often their own) cryptanalytic efforts as reverse engineering. Others may think they know what this term implies, but it is vague, and often deliberately used to cover up the specific techniques involved in ferreting out system information.

American cryptanalysis begins at Riverbank Laboratories in Geneva, Illinois, where a number of documents on the subject were published, some of which are available for download. This was mainly the work of one woman, Elizebeth Friedman (1892 – 1980), sometimes assisted by her husband, William (1891 – 1961), who worked at Riverbank starting in 1915, then went on to lead the research division of the Army’s Signal Intelligence Service (SIS) in the 1930s, and some follow-on services into the 1950s. William’s one major contribution was inventing the term cryptanalysis!

Unfortunately for the world, some scientific practices are shameful. It is not so much the individual facts that are either stumbled upon or ignored that constitute a major problem, but rather how some practitioners (typically male) take credit for work performed by others (typically female). In addition, large numbers of women have been actively discouraged from pursuing scientific careers, simply on the basis of their gender. In the twentieth century, when they were permitted to participate, they were shunted into inferior positions, and their contributions were devalued. Hopefully, in the twenty-first century, this will come to an end.

Elizebeth Friedman was the foremost cryptanalyst in the USA, exceeding in ability the talents of her husband. This is mentioned because, while they worked together, only William Friedman’s name appears on published documents, although many knew that these were sometimes written jointly, and often by Elizebeth alone. It is difficult (if not impossible) for me, a male born more than half a century after her, to understand her situation, let alone her motivation for allowing her husband to receive full credit.

Yet the most outrageous appropriation of her work did not involve her husband, but another American man who should have been regarded as Public Enemy #1: John Edgar Hoover (1895 – 1972), who took credit for much of Elizebeth’s cryptanalysis. As Wikipedia states, “Later in life and after his death, Hoover became a controversial figure as evidence of his secretive abuses of power began to surface. He was found to have exceeded the jurisdiction of the FBI, and to have used the FBI to harass political dissenters and activists, to amass secret files on political leaders, and to collect evidence using illegal methods. Hoover consequently amassed a great deal of power and was in a position to intimidate and threaten others, including multiple sitting presidents of the United States.”

Fake Science and its consequences

Originally, this post ended with the previous paragraph. Then, on 2022-07-22, allegations emerged that part of a key 2006 study of Alzheimer’s disease may have been fabricated. Matthew Schrag (1971 – ) found serious problems with underlying research led by Sylvain Lesné (1974 – ) on a specific protein. Science has now issued a statement about it. Images accompanying and supporting the research seem to have been altered.

One problem with fake research is that it often results in fakers getting undeserved research grants. More importantly, real researchers are denied those grants, which can mean that medical breakthroughs get delayed. This can result in unnecessary suffering, or even premature death. The amount of money involved can reach hundreds of millions of dollars. The wrong papers also get cited: in this particular case, one paper, published in Nature, has been cited 2 300 times.

Schrag told Science, “You can cheat to get a paper. You can cheat to get a degree. You can cheat to get a grant. You can’t cheat to cure a disease.” This is the real essence of the problem: fake research leads society into a cul-de-sac.

Much of the world already treats science with contempt. In particular, there are climate deniers who use falsified science to justify their claims. There are oil companies that have knowingly and publicly denied the climatic impact of the carbon dioxide produced by burning petroleum products, despite knowing about its consequences for about six decades. This is having a negative impact on billions of people, while a few others are provided an unwarranted life of luxury.

The world needs to prevent the publication of falsified reports, such as those by Lesné. Currently, the peer review system lacks a mechanism to prevent the publication of doubtful works. I remember my wife Trish’s cousin, Terry Heaps (1941 – 2017), discussing a paper he had peer reviewed as a resource economist. He had rejected it because it contained bad data that produced incorrect conclusions. He had been alerted to this situation by an illustration. He notified the publication of this fact, and the publication notified the author. Despite this, the paper showed up in another publication, complete with the bad data and incorrect conclusions, but minus the illustration.

In addition to developing a system to prevent inappropriate works from being published, science needs to ensure that people who make valuable contributions, such as Elizebeth Friedman, are fully acknowledged.


Once again, this post ended with the previous paragraph. Then, on 2022-08-21, X-ray evidence emerged that Wyndham Lewis (1882 – 1957) had deliberately destroyed Helen Saunders’ (1885 – 1963) missing artwork, Atlantic City (ca. 1915), by painting Praxitella (1921) on top of it. Students Rebecca Chipkin and Helen Kohn used X-ray and other imaging technology to investigate Praxitella, because of its “uneven texture and glimpses of bright red through cracks in the surface paint.”

Vorticism was an artistic movement, heavily influenced by cubism and futurism, whose artworks typically used bold colours, harsh lines and sharp angles. It originated with the Rebel Art Centre, started in 1914 by Wyndham Lewis and Kate Lechmere (1887 – 1976), who financed it. Helen Saunders and Jessica Dismorr (1885 – 1939) were associated with the movement as practicing painters. The movement also had literary supporters that included Ford Madox Ford (1873 – 1939), Ezra Pound (1885 – 1972), T S Eliot (1888 – 1965) and Rebecca West (1892 – 1983).

The Courtauld Gallery is an art museum in central London, established in 1932. It houses the collection of the Courtauld Institute of Art, a self-governing college of the University of London, specializing in art history. Barnaby Wright, deputy head of the Courtauld Gallery and a 20th-century art specialist, is quoted several times in the Guardian article.

“Saunders was a really interesting figure, but she was largely overshadowed by her male contemporaries. She and Jessica Dismorr were the backbone of the group,”

“In the prewar years, [Saunders] was one of the most radical painters and draughtspeople around. There were only a handful of people in Europe producing that type of hard-edged abstract painting and drawing.”

Atlantic City depicts a fragmented modern metropolis, almost certainly in the vibrant colours associated with the Vorticists. A black and white image of the painting appeared in Blast, the avant garde Vorticist journal.

Starting on 2022-10-14, the Courtauld Gallery will open Helen Saunders: Modernist Rebel, an exhibition of 18 of Saunders’ drawings and watercolours, tracing her artistic development. It will also show Praxitella, loaned from Leeds Art Gallery, alongside the X-ray and a partial colour reconstruction of Atlantic City.

Finally, one has to ask why men deliberately destroy the work of women. From my perspective, social justice demands that the layers of paint constituting Praxitella be removed, to allow Atlantic City to reemerge. Criminal actions cannot be allowed to triumph over legitimate actions.

Back to Elizebeth Friedman.

This post originally started at some forgotten point in 2020. It was based on one simple question: why is Alan Turing (1912 – 1954) remembered as a cryptanalyst, but not Elizebeth Friedman? Of course, Elizebeth was not the first cryptanalyst. The first known recorded explanation of cryptanalysis was given over 1 100 years earlier by Al-Kindi (c. 801 – 873).

Other important women cryptanalysts include Agnes Meyer Driscoll (1889 – 1971) and Joan Clarke (1917 – 1996).

Extreme Heat Belt

The counties marked in red are expected to experience temperatures of 125 °F = 51.67 °C at least one day a year, by 2053. This area is referred to by some as the Extreme Heat Belt. Screenshot of an Axios map, without the underlying data provided at the county level by the First Street Foundation.

The mission statement of the First Street Foundation reads: Make climate risk accessible, easy to understand and actionable for individuals, governments, and industry. A changing climate is impacting the risks facing American properties, communities, and businesses as perils like flood, fire, heat, and drought become more common, and more severe…. First Street Foundation is a non-profit research and technology group dedicated to quantifying and communicating those risks by incorporating world class modeling techniques and analysis with the most up to date science available in order to simply, and effectively, inform Americans of their risk today and into the future from all environmental changes.

Extreme heat refers to a maximum heat index greater than 125 °F, reached on at least one day a year. Currently (that is, in 2022), 8 million Americans are exposed to it. By 2030, some additional coastal areas in the Southeast and Mid-Atlantic may also experience a heat index at or above 125 °F. By 2053, the number of people exposed to such extreme temperatures is expected to increase to 107 million.

Dangerous days have a heat index greater than 100 °F = 37.78 °C. The Gulf Coast and Southeast will see the highest chances and longest durations of exposure to these. While many places already experience more than 20 consecutive days with heat indices above 100 °F, by 2053 these streaks could involve up to 74 consecutive days.

Local hot days are days that exceed the temperatures typically experienced for a particular area. The West will have the highest chance of long durations of these.

Future cooling-driven increases in carbon emissions could aggravate warming further. Texas, Florida, California, Ohio and Missouri are the top 5 states with the greatest expected increase in CO2 emissions from cooling demand between 2022 and 2053.

As a missionary for SI, the international system of units, I find that temperature always presents a quandary. In this official system, temperature is measured in kelvin, with symbol K. Both the kelvin and celsius scales use a 100 K (or °C) difference between the freezing and boiling points of water, at a standard sea-level air pressure.

0 K is set to absolute zero, which is -273.15 °C, while 0 °C, in the celsius system, is set to the freezing point of water. In the Fahrenheit system, water freezes at 32 °F = 0 °C and boils at 212 °F = 100 °C, resulting in a 180 °F difference between these two points. Thus, 125 °F = 324.8167 K.
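These conversions between Fahrenheit, celsius and kelvin can be checked directly:

```python
# Temperature conversions used in the figures above.
def f_to_c(f):
    return (f - 32) * 5 / 9

def c_to_f(c):
    return c * 9 / 5 + 32

def c_to_k(c):
    return c + 273.15

print(f"125 °F = {f_to_c(125):.2f} °C = {c_to_k(f_to_c(125)):.4f} K")
print(f"50 °C = {c_to_f(50):.0f} °F")
print(f"325 K = {c_to_f(325 - 273.15):.2f} °F")
```

The output confirms the values quoted in the text: 125 °F = 51.67 °C = 324.8167 K, 50 °C = 122 °F and 325 K = 125.33 °F.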

SI clergy undoubtedly spend many sleepless nights pondering whether the extreme heat threshold should be increased to 325 K = 125.33 °F, or whether 50 °C = 122 °F should be used. Those prioritizing as little change as possible will support the former. Those wanting rounder values, ending in 0, will opt for the latter. The reason for such a proposal is that the world needs a mechanism to compare extreme heat locations, which will require heat to be expressed in degrees celsius. This is why, personally, 50 °C holds greater appeal, even if more locations in the world will then fall into that category.

Those wishing to be further perplexed by this topic, are invited to read the Wikipedia article on thermodynamic temperature. In an imperfect world, every gram of improved understanding is worth the effort.

Analogue Electric Vehicles

A Woodpecker skateboard, to encourage young experimenters to investigate battery electric vehicles. Photo:

Part 1

On 2021-07-07, Robert N. Charette wrote an article in IEEE Spectrum, How Software Is Eating the Car: The trend toward self-driving and electric vehicles will add hundreds of millions of lines of code to cars. Can the auto industry cope?

As usual, an article on Slashdot (/.) is my main source of biased opinions about a serious technological issue, with one typical comment given a score of 4: interesting. It read: “If you get something pre-1978 then the most sophisticated electronics in the vehicle will probably be the radio kit.” This was then followed by numerous comments about 1976 (and later) Chrysler Cordobas. This type of reasoning reaches its zenith with, “What was the last car without this nonsense? Makes me want to buy a classic car or motorcycle, just for the simplicity.”

Yes, for a long time the trend has been towards increasing numbers of ECUs (electronic control units), based on the design philosophy of, “If you want a new feature, you buy a box from a Tier 1 [a top-level component supplier, such as Bosch] that provides the feature, and wire it in. As a general rule, automakers love outsourcing work; for most of them, their dream scenario is that everyone else does all the work for them and they just slap a badge on it and take a cut.”

Then Rei adds a score 5: informative, but long, comment: “This article actually has it backwards. The first company to break with this philosophy was Tesla, which has from the beginning had a strong philosophy of in-house software design, and built basically a ‘car OS’ that offloads most vehicle software functionality into a handful of computers (with redundancy on safety-critical functionality). … Eventually everyone is going to have to either make their own ‘car OS’ stack or lease one from someone else. The benefits are just too significant[:] Lower hardware costs, lower assembly costs, lower power consumption, simpler cheaper lighter wiring harness, faster iteration time on new functionality, closer integration between different subsystems, you name it. This trend throws into reverse the notion of ever-increasing numbers of ECUs (which quite simply was an unsustainable trend).”

Who could possibly disagree?

Part 2

What is the minimal vehicle a person needs? Of course, there will be as many answers as there are people, and it will all be dependent on what they are doing. There are a lot of vehicles available, but I will not refer to them as choices. Some places lack trams or other forms of public transit. They may exist in other places, but run at inappropriate frequencies. Some communities lack bike lanes, forcing cyclists to compete for space with cars. Some streets are perpetually gridlocked.

Some people need to work, outside of their residences! Does one have to take children to kindergartens or schools? What distance does one have to travel to attain basic health and nutritional needs? Can this be done as part of a commute, or is a separate trip necessary? What about specialty shops? What is the distance to the nearest bus station/ train station/ airport/ international airport? Is there a need for a social life? Is one dependent on driving a car? Could a bicycle do for some items? Are trains or buses an option? So many questions, so few obvious answers.

Perhaps my own situation could be used as an example. Compared to most people, my life is simple: no job is calling me, and I am no longer responsible for looking after young children. Yesterday, I used a vehicle with a mass of about 1.5 megagrams (where 1 Mg = 1 000 kg) to drive 40 km. Admittedly, there are vehicles that weigh less than a car. A bicycle is probably the most efficient device for conveying people, and it can have a mass of about 5 to 20 kg. Yet, I would not feel safe riding one on the roads of rural Norway. There are no buses, but if I plan ahead and contact the appropriate office a day in advance, I might be able to use public transit, essentially a taxi charging bus rates, as long as I am willing to wait up to several hours for a return trip.

The most basic foods, as well as building supplies, can be purchased with a 14 km return trip across Skarnsund bridge in Mosvik, where there is even a coffee bar, with better than acceptable lattes. Basic health care (doctor, dentist, pharmacy, optometrist) and a larger selection of food and basic necessities are met by driving 26 km for a return trip in the opposite direction, into Straumen. More specialty shops are available in Steinkjer involving a 70 km round trip. This all involves driving. However, there is also a train station at Røra, 40 km round trip by car, that will allow one to connect with an international airport (TRD), and the fourth largest city in Norway, Trondheim, about 120 km away – 240 km round trip, with an even larger selection of shops and activities.

Part 3

I agree with Rei that more software (and less hardware) is needed in vehicles. Yet I read this week that General Motors is charging purchasers of many GMC, Buick and Cadillac vehicles, which are shipped with the OnStar and Connected Services Premium Plan by default, $1 500 for the three-year plan that was once optional, but is now required. Other companies are doing the same sort of thing. It is estimated that this revenue stream could give GM an additional $20 to $25 billion per year by 2030. BMW has made similar claims, projecting additional revenue of about $5 billion per year by 2030. I do not want to ensure that a wealthy elite continues to take more of an income pie that is already unfairly divided.

At issue is the right of consumers to direct access to vehicle data, which historically has been obtained from an on-board diagnostics (OBD-II) port in North America, or a European on-board diagnostics (EOBD) port, since 1996 and 2001, respectively. These allowed vehicle owners and technicians access to vehicle data to assist with maintenance and repair. This situation is threatened by vehicle manufacturers, who want to use telematics (the sending of data wirelessly and directly) to restrict vehicle data to manufacturers. In 2021, 50% of new cars had these connected capabilities, but no country has more than 20% of its vehicle fleet equipped; the USA has the most. By 2030, it is estimated that about 95% of new vehicles sold globally will have this connectivity, according to a study by McKinsey.

While this data could provide economic and other benefits to car owners, vehicle manufacturers want to act as gatekeepers, determining who can access it, and at what cost. This is a detriment to consumers, and could result in: increased consumer costs; restrictions on consumer choices for maintenance and repair; safety and security issues involving the use of non-standard data types and formats; and privacy concerns. Automotive mechanics and other aftermarket providers can also be affected.

This has resulted in a consumer backlash, which I associate with the right-to-repair movement. There are already open-source groups working to ensure that consumers retain their rights. In addition, Automotive Grade Linux (AGL) is an open source project hosted by The Linux Foundation that is building an open operating system and framework for automotive applications. It was started in 2012, and currently has 146 corporate members.

I imagine that automotive manufacturers will try to add just enough proprietary software to their vehicles, to profit maximally from their investment. On the other hand, I see that there will be an incentive for ordinary consumers to demand right-to-repair legislation, and for guerilla activists to produce generic software substitutes where this is useful.

In Europe, repair is increasingly regarded as an essential consumer right and an environmental necessity. The main objective of the European Green Deal is to be climate neutral by 2050. The European Commission’s Circular Economy Action Plan (CEAP), published 2020-03, details how this goal is to be reached. To reduce waste, products have to be designed to last. If they don’t last, they shouldn’t be sold. To encourage the development of longer-lasting products, there could be lifespan labels, service manuals, and an EU-wide repairability index. This would encourage the market to compete on repairability and durability.

In 2020-11, the European Parliament voted overwhelmingly in favour of a right to repair, and insisted that the more conservative European Commission, the EU’s administrative arm, implement it. The vote also included repairability labeling.

In 2020-11, voters in Massachusetts approved Question 1, a right-to-repair law, with almost 75 percent in favour. The law requires automakers to provide a way for car owners and their chosen repair shops to access vehicle data, including that sent wirelessly to the manufacturer. The intent of this law is to prevent manufacturers and dealerships from having exclusive access to data.

Massachusetts is the state where the first automotive right-to-repair law was passed in 2012. That law made car makers open up the data inside the car. Rather than create a state by state solution, automakers reached a nationwide agreement with car parts makers/ suppliers and repair shops on how to share the data. This agreement opened the OBD-II port. With this new and improved right-to-repair law, similar transformative actions are required.

There are an increasing number of underpaid programmers and other software and hardware specialists, unable to fully live the American (and Scandinavian) dream. Many of these would undoubtedly be willing to work as guerilla technologists to develop the tools needed for retrofitting vehicles with more consumer friendly components, especially after warranties have ended. There are an increasing number of inexpensive microprocessors and systems on a chip that can be used for these purposes.

Part 4

To put electric vehicles in perspective, one needs to return to 1965-11-05, when President Lyndon Johnson was given a copy of Restoring the Quality of Our Environment, a report by the Environmental Pollution Panel of the President’s Science Advisory Committee. On publication of this post, people have had 20 735 days, or 56 years, 9 months and 8 days, to confront this challenge, but have failed miserably at the task.
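The day count can be checked with simple date arithmetic, assuming a publication date of 2022-08-13 (consistent with the dates in the footnote below):

```python
from datetime import date

# Days between the 1965 report and this post's (assumed) publication date.
report = date(1965, 11, 5)
published = date(2022, 8, 13)
print((published - report).days)  # 20735
```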

One fundamental question is: where can younger people learn more about the construction of appropriate vehicles for the 21st century? Currently the most interesting project is Woodpecker, which describes itself as an: “Open source based Carbon negative Electric Vehicle Platform. Woodpecker is a game changing micromobility vehicle to decrease CO2. Electrical propulsion allows to use solar and renewable power. Production of Wooden frame even decreasing CO2 because it is encapsulated by [wood] while growing. Vehicle built on Circular Economy concept – most parts are recyclable.” It appears to have originated in Latvia, and includes partnerships with many higher-educational institutions in that country. One problem with Woodpecker is that, as an organization, it is too closely bound to commercial enterprises. For example, a good starting point for most open-source projects is to become acquainted with their documentation. In this case, people interested in downloading the technical drawings must have a Trimble account, in order to use SketchUp.


1. This post follows up some aspects of Vehicle Devices, published 2020-11-03. The division between parts is not based on content, but time. Part 1 of this weblog post was originally written 2021-06-18 and saved at 10:49. It had been patiently waiting to be published. On 2022-08-12, outdated content was removed, and Part 2 was added, starting at 20:43. Part 3 was started on 2022-08-13 at about 07:40, while Part 4 was started on the same date at 08:48.

2. Trondheim claims to be the third largest city in Norway, but I would give that title to Stavanger. The challenge with Stavanger is that its metropolitan area is divided between multiple municipalities. Yes, I am aware that I have offended some of my Norwegian readers, because of their origins in Trøndelag. However, Stavanger is the only place in Norway where I have ever been able to find/ buy root beer! This is probably due to Americans working in the oil industry, and living in the Stavanger area.


WBT = wet-bulb temperature. Yes, I appreciate short, cryptic post titles. That said, there is a serious point to this post, with life-saving potential, related to heatwaves.

A digital psychrometer (combined dry and wet-bulb thermometer).
The functions available with a modern psychrometer.

When a wet cloth/ wick covers the bulb of a thermometer, evaporation of the water cools the thermometer. This results in a WBT, which is equal to or below the dry temperature, measured on a thermometer without a wet cloth. The WBT reading reflects the humidity in the atmosphere. Humidity refers to the relative saturation of air with water. Low humidity means there is not much water in the air; high humidity means lots of water in the air. When the air can hold no more water, it is totally saturated, referred to as 100% relative humidity (RH). Air at a higher temperature is able to hold more water vapour than air at a lower temperature.

A related concept is that of dew point = the temperature to which air must be cooled to become saturated with water vapour, at the current relative humidity. At and below this temperature, water vapour condenses to form dew, a liquid state of water.
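For readers who want to experiment, both quantities can be approximated from air temperature and relative humidity using published empirical formulas: the Magnus approximation for dew point, and Stull’s (2011) formula for wet-bulb temperature, which is valid roughly for RH between 5% and 99%. A minimal sketch:

```python
import math

def dew_point(temp_c, rh):
    """Magnus approximation: temperature at which air of the given RH saturates."""
    gamma = math.log(rh / 100) + (17.625 * temp_c) / (243.04 + temp_c)
    return 243.04 * gamma / (17.625 - gamma)

def wet_bulb(temp_c, rh):
    """Stull (2011) empirical wet-bulb temperature; rh in percent."""
    return (temp_c * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(temp_c + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

print(round(dew_point(20, 50), 1))  # ≈ 9.3 °C
print(round(wet_bulb(20, 50), 1))   # ≈ 13.7 °C; at 100% RH, WBT ≈ dry temperature
```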

WBT is important because it can measure heat-stress conditions that affect many people. In fact, a high WBT can kill. This happened mainly from 2021-06-24 to 2021-07-01 in British Columbia. At 100% RH, the WBT will equal the dry temperature.

A combined dry and wet-bulb thermometer is referred to as a psychrometer. While analogue models are available, they require either calculations or the reading of graphs to determine values. One should not overestimate the ability of a person to perform even simple calculations when they are potentially dying of heat stroke. Digital models that do all of the calculations automatically can be purchased at relatively low cost. The model illustrated above costs NOK 255 = US$ 26.38 = CA$ 33.72, including taxes and delivery charges to Norway (as of 2022-08-01). The quality of this particular model has not been evaluated.

At Cliff Cottage, we have recently received temperature and humidity sensors that will be part of our weather station, a subsystem of our home automation system. These components are considerably cheaper than the Habotest model, but require electrical and mechanical work, as well as programming, to implement as a system. Thus, the first iteration will not produce a cost-effective system, and may be frustrating to make, but it will give satisfaction when completed.

From 2021-06-20 to 2021-07-29, the British Columbia Coroners Service reported the heat-related deaths shown in the table below. Some feel that the number of deaths was under-reported. Note that 445 of the 569 deaths (78%) occurred during the transition week between June and July.

Age group    # of deaths
<40          2
40-49        13
50-59        42
90+          76
Total        569
Heat related deaths during the summer of 2021. Source: British Columbia Coroners Service.

British Columbia was only one of many jurisdictions that faced heat challenges in 2021. Temperature records are being broken regularly, throughout the world. In the United States, every state and territory had a maximum temperature that exceeded 37 °C. Of these 46 entities, only six had recorded maximum temperatures below 40 °C. Four states had maximum temperatures at or exceeding 50 °C: New Mexico, 50 °C; Nevada, 52 °C; Arizona, 53 °C; and, the highest, California, 57 °C. In Canada, Lytton, British Columbia, distinguished itself with 49.6 °C on 2021-06-29, the maximum ever recorded in the country. The next day, a wildfire destroyed most of the town. In Norway, the highest temperature recorded is 35.6 °C, at Nesbyen on 1970-06-20.

The challenge with these high temperature values is that they do not take humidity into consideration, and humidity determines how people experience heat. Some locations on planet Earth may be approaching values that prevent human survivability. The countries most affected are Saudi Arabia and the other Arabian Gulf states, Pakistan, India and Australia. What is not fully appreciated is that indoor climates in temperate zones can also create conditions that kill people during heatwaves.

The difference between WBT and dry temperature measures how effectively people can cool themselves by sweating. Admittedly, this is a simplification because, in addition to humidity and temperature, solar radiation and wind speed also affect survivability. Yet WBT is especially important in indoor environments, where deaths often occur during heatwaves.

Above the critical WBT, sweating no longer cools a person; instead, body temperature rises steadily. This is the limit of human adaptability to extreme heat. If a person cannot escape these conditions, their body’s core temperature will rise beyond the survivable range, and organs will start to fail.

The critical WBT value for humans has usually been considered to be 35 °C, indicating a situation that a healthy person could survive for six hours. One representation of this is an air temperature of 40 °C and a relative humidity of 75%. This value comes from a 2010 theoretical study. However, research by Vecellio et al. found that this value only applies to young, healthy people. Real-world data indicate that the critical WBT value is closer to 31.5 °C.

This means that the number of people exposed to potentially deadly combinations of heat and humidity across the world is vastly higher than previously thought. Many older and otherwise compromised people will experience dangerous conditions far below the threshold WBT.

In Canada, the humidex = humidity index has been used since 1965 to describe how hot the weather feels to the average person. It is a nominally dimensionless quantity, calculated from the air temperature in °C and the dew point. Values of 20 to 29 indicate little to no discomfort; 30 to 39, some discomfort; 40 to 45, great discomfort, avoid exertion; above 45, dangerous: heat stroke possible/ probable.
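Environment and Climate Change Canada’s humidex formula is simple enough to compute directly. A small sketch, taking the dew point as input:

```python
import math

def humidex(temp_c, dewpoint_c):
    """Canadian humidex: air temperature plus a dew-point-based adjustment."""
    # Water vapour pressure (hPa) at the dew point
    e = 6.11 * math.exp(5417.7530 * (1 / 273.16 - 1 / (273.16 + dewpoint_c)))
    return temp_c + 0.5555 * (e - 10.0)

print(round(humidex(30, 15)))  # → 34: some discomfort
print(round(humidex(30, 25)))  # → 42: great discomfort, avoid exertion
```

The same 30 °C air temperature yields very different humidex values depending on the dew point, which is exactly the point the paragraph above makes about humidity.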

Humidex plot (Source: Morn, using Matplotlib code)

The American Heat Index (HI) was developed from the Humidex in 1979. It is calculated using °F or °C and relative humidity. It makes assumptions about: human body mass, height, clothing, level of physical activity, individual heat tolerance, sunlight exposure, ultraviolet radiation exposure, and wind speed. Significant deviations from these will result in heat index values which do not accurately reflect the perceived temperature.
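A common way to compute HI is the Rothfusz regression used by the US National Weather Service. It is only intended for heat indices of roughly 27 °C (80 °F) and above, and the NWS applies further adjustment terms at extreme humidity values, omitted in this sketch:

```python
def heat_index_f(temp_f, rh):
    """NWS Rothfusz regression; temperature in °F, rh in percent."""
    t, r = temp_f, rh
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

# 90 °F (32 °C) at 70% RH feels like roughly 105 °F (41 °C)
print(round(heat_index_f(90, 70)))
```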

Heat Index plot (Source: Morn, using Matplotlib code)

In many situations, building construction results in indoor temperatures exceeding outdoor temperatures. Construction methods may prevent water-saturated air from leaving a building. In climates with high humidity, such as along the Gulf of Mexico coast and even on the Atlantic coast of Florida, in the United States, it is common to use a vapour barrier close to the outer wall, with negative consequences during heatwaves. Sometimes the best solution is to omit a vapour barrier. This is the opposite of the approach used in cold climates, such as Canada, Norway and the northern United States, where the vapour barrier is located on the inside of the outer wall.

Heatwave Precautions

Since many of the readers of this weblog are older, it is important for them to know what to do when temperatures rise.

A first step is to realize that an indoor environment can be particularly deadly, in part because there is no wind to increase the evaporation rates needed for effective sweating/ perspiration.

A second step is to track indoor temperatures. Even without a psychrometer or wet-bulb thermometer, one knows that the WBT will be at or below the dry temperature. This means that temperatures should probably not be allowed to rise above, say, 30 °C without some action being taken. Thus, moving to a shady outdoor location may reduce risk, compared with staying indoors.

Air conditioning units are another solution, but not everyone can afford them. Their acquisition typically has to be planned well in advance of a heatwave. Fans can be effective at increasing the quantity of air available for evaporation, but they should usually be used with an open window.

In some places, special shelters have been built/ commissioned, that people can visit without charge to find heat relief. While many people will search for information online about shelters located near them, there are other sources of information available. Public libraries are a great place to find this sort of information.

Candela C-8

A Swedish built Candela C-8 foiling cruiser. Photo: Candela.

As I attempted to write the first paragraph of this weblog post, an observer came by. So I made a comment, hoping for some encouragement: “Beautiful boat, isn’t it?”

“Not particularly,” came the reply. “It looks awfully cold. They even have to wear toques. If someone fell in the water, how would they get back onboard?”

At that point I realized, yet again, that the observer and I live in two different universes. In my universe, the boat motor stops, the foils sink and the hull floats on the water. The person in the water is dragged through the open transom onto the boat. It is probably one of the easiest boats in existence on which to effect a rescue. I replied, “Would you like some tea?”

Tea is one of her passions, while watercraft are one of mine. A major achievement was building a sailing dinghy, a 2.4-metre-long Sabot, at the age of thirteen. In my adult life I have owned two sailboats, including an Eygthene 24 cruiser.

Theoretically, I share the same speed obsession as Toad of Toad Hall in Kenneth Grahame’s (1859 – 1932) The Wind in the Willows (1908), but in a more maritime variant. I appreciate fast sailboats, including America’s Cup AC72 foiling catamarans, AC75 foiling monohulls and even the more affordable foiling Moth dinghies.

Practically, I usually sailed my cruiser from its harbour to a small inlet two nautical miles (2 NM = 3.7 km) away. I would then anchor, and enjoy the tranquility of its relatively remote location. One could make that journey in almost any type of boat, including a kayak or a row boat. The advantages of a cruiser include its galley, bunks, head and shelter from inclement weather.

A glimmer of hope that I might appreciate motorized vessels occurred in 2015. Aspiring to develop a new industry here in Norway, I gave my Technology students an assignment to design an electrically powered, water-jet vessel, based on a surfboard. I introduced the topic by showing a video about river jetsurfing. Now there are foiling boards as well, as this video shows. There are other foiling boards available, but most of them use propellers rather than waterjets, something I find ill-advised.

The Candela C-8 impresses me in several different ways.

First, the hull is constructed out of carbon fibre, using vacuum-moulding techniques to create a rigid platform for mounting the driveline and foils, as well as passenger accommodation. It is also lightweight. However, it is not something that I would like to see come into close contact with sharp rocks.

Second, the driveline is remarkable. The battery is enclosed in a waterproof container, to prevent salt water from damaging it. It is freshwater cooled. Its 40 kWh lithium-ion NMC battery pack (from the BMW i3) could (theoretically) power the vessel for 50 NM = 92.6 km = 57.5 miles. However, even Candela admits that a more probable result is 40 NM at 20 knots = 2 hours. The motor uses 70 kW to take off and start foiling, 16 kW to foil at 23 knots, and 37 kW at full speed = 30 knots.
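Candela’s own figures can be cross-checked with simple arithmetic:

```python
battery_kwh = 40        # usable pack energy, per Candela
foil_power_kw = 16      # consumption while foiling at 23 knots

endurance_h = battery_kwh / foil_power_kw   # 2.5 hours of foiling
theoretical_range_nm = endurance_h * 23     # 57.5 NM at 23 knots
print(endurance_h, theoretical_range_nm)

# The more conservative claim, 40 NM in 2 hours at 20 knots,
# implies an average draw of 40 kWh / 2 h = 20 kW.
```

So the 50 NM theoretical range is consistent with the stated 16 kW foiling consumption, with a margin left for take-off and manoeuvring.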

The C-Pod showing foils and contra-rotating propellers. Photo: Candela.

The motor is housed underwater, which provides cooling and noise reduction. Further, it is equipped with contra-rotating propellers, that is, two propellers that rotate in opposite directions about a common axis, usually to minimize the effect of torque. This approach reduces the size of the propellers needed, but it is a more complicated (read: expensive) system that may require more maintenance. Candela, however, claims that its C-POD requires no maintenance, will operate for 3 000 hours without service, and is built to last a human lifetime. In addition, there is no need to change oil or cooling fluid, as the sealed electric motors are cooled by the flow of seawater. It is important to note that with contra-rotating propellers, hydrodynamic gains may be partially offset by mechanical losses in the shafting.

An exploded view of the C-Pod driveline showing the two shafts, and twin motors. Photo: Candela.

Third, the flight control system uses ten sensors to estimate the position, velocity and acceleration of the boat on all axes, and to determine/ estimate the real-time system state. This allows the vessel to operate in rough seas and make sudden, sharp turns. It is also much quieter than a hovercraft.

Fourth, I suspect there is a brilliant navigation system provided, that will keep those onboard out of danger. In addition, I suspect there is a dead-man switch/ man-overboard button that, when engaged, will automatically maneuver the vessel back to the point where the person fell overboard, or became incapacitated.

With a starting price of €290 000, I cannot afford to buy a C-8. I have never bought lottery tickets, on principle, so I have no prospect of ever being able to afford one. I would like to encourage my younger friends and family to follow the used market. I estimate that a 20-year-old vessel (at about 20% of the price) will offer optimal value.

If any of my offspring are wondering what to get me for my 80th birthday in 2028, a day foiling would be ideal. They can even choose the location, with the Salish Sea, San Francisco Bay or the Stockholm archipelago, three of numerous possibilities.

Material: Carbon fibre
Weight: 1 605 kg (DC version)
Passengers: 8, including driver
Length: 8.50 m
Width: 2.50 m
Speed: 24 kn cruise, 30 kn top
Motor: Candela C-Pod (45/50 kW)
Range: 50+ NM at cruising speed; +3 NM at 4 kn in limp-home mode
Draft: 0.5 m in shallow mode; 0.9 m in planing mode; 0.8 m while foiling; 1.5 m while not foiling, foils extended
Charging: 230 V × 1 × 16 A: 13 h; 230 V × 3 × 32 A: 2.5 h
Interface: 15.4-inch touch screen with Candela’s proprietary navigation and boat integration system. Free software upgrades included. One year of free sea-chart upgrades included.
App: Candela app with position, state of charge, route statistics and more. Optional geo-fence.
Hull shape: The hybrid hull is shaped for frictionless planing in addition to low air resistance when foiling. In planing mode the foils are above the surface, which prevents fouling and corrosion.
Candela C-8 Specifications

For additional propaganda:


British propaganda poster from 1939.


Of course, I am hoping that readers will mistake Prolog for Prologue = an introduction to something. In addition, I have a further hope that the poster, displayed above, will induce a feeling of calmness in these same readers, so that they will be able to approach the real content of this weblog post with detachment, but not indifference. The main problem with the poster is that almost everything about it apart from its wording (especially its signal-red background, but also the large white sans-serif lettering and the British crown) reinforces a feeling of danger!

Wikipedia tells us that Keep Calm and Carry On was a propaganda poster produced by the British government in 1939, in preparation for World War II. The poster was intended to raise the morale of the British public. Although 2.45 million copies were printed, the poster was only rarely publicly displayed and was little known until a copy was rediscovered in 2000 at Barter Books, a bookshop in Alnwick, a market town in Northumberland, in north-east England.

Some topics, toothaches in particular, or dentistry more generally, do not induce calmness. Instead, they increase the flow of adrenaline, and other forms of psychomotor agitation, resulting in psychological and physical restlessness. Thus, before confessing what this topic is really about, I want to reassure readers that it is a topic that can be fun, if approached correctly. Initially, I had thought of dividing the topic into multiple parts and publishing them at the rate of one part a day, over more than a week. The parts are still subdivided, but each reader will have to determine her/ his/ its etc. own consumption rate.


I am used to dealing with actors, people pretending to be someone else. In the process, these people have helped me develop my own acting talents. Some of the actors I had to deal with had failed their auditions, often called court appearances or trials. One of the consequences of such a failure could be imprisonment at the Norwegian low-security prison where I was assigned as their teacher.

Other actors were youth in the final years of their compulsory education, at senior secondary school. They had to attend school, but some of them were better than others at presenting themselves in a positive light. Not that everyone sought positivity. In a Media and Communication English class, I once asked the pupils to write about something they wanted to accomplish in the future, and why they wanted to do so. The reply that created the most work, not just for myself, but for the student, the school principal, the school psychologist and others, was an essay that detailed how this person wanted to become a mass murderer. Afterwards, he claimed that this was a work of fiction.

I have experienced a lot of acting performances by students. The most problematic actors are those who pretend they understand a topic, when they have absolutely no idea about it. The role of the teacher is to channel student activity so that the student finds a route that suits her/ his personality, and is effective at helping the student learn new sets of knowledge and develop new skills. This route-finding skill is the primary talent needed to teach.


This weblog post’s topic is programming, in a specific language. While numbers vary with the situation, perhaps ten percent of actors will delight in learning the programming language they are confronted with. A similar number, give or take, will not master anything. Those remaining in the middle will accept programming languages as a necessary evil in this internet age. Stated another way, a small percentage will find their own route without assistance, another small percentage will never find a route, while most people in the middle will struggle to varying degrees, but ultimately find a route, hopefully one that suits their personality.

The main difficulty in learning to program is that schools begin computer science studies assuming that students will want to learn the particular language being offered. Admittedly, some languages are fairly general, including some designed more for teaching/ learning than for practical applications. Pascal is probably the best example of such a language. However, my contention is that the first computing course a student takes should look at programming principles.

I was fortunate to read Bruce J. MacLennan’s, Principles of Programming Languages: Design, Evaluation and Implementation (1983). A second edition was published in 1987, and a third in 1999. There is not much difference between the three editions, and the same languages are discussed in all three: pseudo-code interpreters, Fortran, Algol-60, Pascal, Ada, Lisp, Smalltalk and Prolog. All the editions of this book explain that computer languages can have different purposes, and ask readers to examine the purpose of each programming language. Not everyone should learn the same one. Before they decide to learn programming, people should know what they want to do with that language after they have learned its basics. Much of the time the answer is: learn a more appropriate language.


Books can have multiple uses.

The Prolog in the title of this post refers to the Prolog programming language. Fifty years ago, in 1972, Prolog was created by Alain Colmerauer (1941 – 2017), a French computer scientist at Aix-Marseille University, based on the work of Robert Kowalski (1941 – ), an American-British computer scientist then at the University of Edinburgh.

Prolog is a logic programming language associated with artificial intelligence and computational linguistics. That doesn’t say much. It might be more understandable to say that students typically learn Prolog by creating a program/ system that shows social relationships between people. Despite their reputation as rather awkward social creatures, even computer scientists have the capability of understanding some social markers: mother, father, daughter, son, at a minimum. Thus, even computer scientists can construct a system that will determine and then show the relationship between any two people. The system can be constructed slowly, so that initially only, say, four relationships are allowed. Outside of those four choices, people will be labelled as having no relationship. However, in subsequent iterations, the number of relationships can be expanded, almost indefinitely.

Prolog consists of three main components: 1) a knowledge base = a collection of facts and rules fully describing knowledge in the problem domain; 2) an inference engine, which chooses which facts and rules to apply when attempting to solve a user query; 3) a user interface, which takes in the user’s query in a readable form and passes it to the inference engine. Afterwards, it displays the results to the user.
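A minimal sketch of these three components in Prolog itself (the predicate and person names are illustrative, not taken from any particular tutorial): the facts and rules below form the knowledge base, the Prolog engine serves as the inference engine, and the interactive ?- prompt acts as the user interface.

```prolog
% Knowledge base: facts
parent(tom, liz).
parent(liz, ann).
male(tom).
female(liz).
female(ann).

% Knowledge base: rules
father(X, Y)      :- parent(X, Y), male(X).
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

% User interface: a query typed at the ?- prompt
% ?- grandparent(tom, Who).
% Who = ann.
```

Each iteration can add further relationships (grandmother, cousin, and so on) simply by adding facts and rules, without touching the inference engine.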


Programming in Prolog, written by William F. Clocksin (1955 – ) & Christopher S. Mellish (1954 – ), is the most popular textbook about the language. Originally published in 1981, a revised (read: readable) second edition appeared in 1984. My copy has my name printed on the colophon page in capital letters in blue ink, considerably faded now, along with the date, 12 iii 1985.

It is not the only book about Prolog in my library. Among the thirteen others are: Dennis Merritt, Building Expert Systems in Prolog (1989); Kenneth Bowen, Prolog and Expert Systems (1991); Alain Colmerauer & Philippe Roussel, The Birth of Prolog (1992); Krzysztof R. Apt, From Logic Programming to Prolog (1997) and even an updated 5th edition of Clocksin and Mellish, subtitled Using the ISO Standard, (2003). Part of the reason for this large number was my use of Prolog to teach expert systems.


Expert systems are not particularly popular, now. In artificial intelligence, popularity contests are being won by machine learning tools. Yet, some people don’t have to be at either the height of fashion or the cutting edge of technological advances, and can appreciate older approaches.

Edward Feigenbaum (1936 – ) constructed some of the first expert systems. He established the Knowledge Systems Laboratory (KSL) at Stanford University. Long words are often strung together to describe his work. A favourite phrase is, “Knowledge representation for shareable engineering knowledge bases and systems.” This was often coded into the phrase expert system. He used it mainly in different fields of science, medicine and engineering. KSL was one of several related organizations at Stanford. Others were: Stanford Medical Informatics (SMI), the Stanford Artificial Intelligence Lab (SAIL), the Stanford Formal Reasoning Group (SFRG), the Stanford Logic Group, and the Stanford Center for Design Research (CDR). KSL ceased to exist in 2007.

The focus of Feigenbaum, and American institutions more generally, was on rules-based systems. Typically, these found their way into shells = computer programs that expose an underlying program’s (including an operating system’s) services to a human user. These shells, produced by for-profit corporations, would sit on top of Lisp, one of the programming languages commented on in two chapters of MacLennan’s book, and one used extensively for artificial intelligence applications. Feigenbaum and his colleagues worked with several of these expert systems, including: ACME = Automated Classification of Medical Entities, which automates underlying cause-of-death coding rules; Dendral = a study of scientific hypothesis formation generally, which resulted in an expert system to help organic chemists identify unknown organic molecules by analyzing their mass spectra and combining the results with an existing but growing chemical knowledge base; and Mycin = an early backward-chaining expert system that identified infection-related bacteria and recommended specific antibiotic treatments, with dosage proposals adjusted for the patient’s mass/ weight. He also worked with SUMEX = Stanford University Medical Experimental Computer. Feigenbaum was a co-founder of two shell-producing companies: IntelliCorp and Teknowledge. Shells are often used by experts lacking programming skills, but fully capable of constructing if-then rules.


Prolog is frequently contrasted with Lisp, and offers a different approach to developing expert systems. Some users are fond of saying that Prolog has a focus on first-order logic. In this context, first-order can loosely be translated as low-level, or even simple. The most important difference between the two languages is that anyone of average intelligence should be able to understand and work with Prolog. Much of the work done with Lisp involves higher orders of logic, often requiring the insights of real logicians with advanced mathematics in their backgrounds. An introductory logic course gives sufficient insight for anyone to work with Prolog.

Prolog is also claimed to be a more European approach. This probably has something to do with the way teaching is organized. In Norway, for example, a (Danish) royal decree from 1675, still valid today, requires all university students to undertake an Examen philosophicum, devised by the royal advisor Peder Griffenfeld (1635 – 1699; the name comes from griffin, the legendary creature, plus field; the family name was originally Schumacher = shoemaker). Under the Danish king, Christian V (1646 – 1699), he became the king’s foremost adviser and, in reality, Denmark’s (and Norway’s) actual ruler. In 1676 he fell into disfavour and was imprisoned. He was sentenced to death for treason, but the sentence was commuted to life imprisonment. He was a prisoner on Munkholmen, outside Trondheim, and about 55 km directly south-east of Cliff Cottage, for 18 years (1680 – 1698), and was released after 22 years of captivity.

Until the end of the 1980s, this exam involved an obligatory course in logic, including mathematical logic, along with other subjects. This means that almost every university student (at that time), no matter what they studied, had the necessary prerequisites to work with Prolog.


Expert systems often involve heuristics, an approach to problem solving using methods that are not expected to be optimal, perfect or rational, but good enough/ satisfactory for reaching an approximate, immediate or short-term goal. George Pólya (1887 – 1985), who worked at Stanford from 1940 to 1953, and beyond, took up this subject in How to Solve It (1945). He advised: 1) draw a picture, if one has difficulty understanding a problem; 2) work backwards, if one can’t find a solution: assume there is one, and see what can be derived from it; 3) develop a concrete example, from an abstract problem; 4) solve a more general problem first – this involves the inventor’s paradox, where a more ambitious plan may have a greater chance of success.

One list of areas where expert systems can be used involves system control, in particular: 1) interpretation, making high-level conclusions/ descriptions based on raw data content; 2) prediction, proposing probable future consequences of given situations; 3) diagnosis, determining the origins/ consequences of events, especially in complex situations based on observable data/ symptoms; 4) design, configuring components to meet/ enhance performance goals, while meeting/ satisfying design constraints; 5) planning, sequencing actions to achieve a set of goals with start and run-time constraints; 6) monitoring, comparing observed with expected behaviour, and issuing warnings if excessive variations occur; 7) repair, prescribing and implementing remedies when expected values are exceeded.

Sometimes one comes across Prolog tutorials that begin with subjective knowledge/ considerations. Music is a good example. Unfortunately, it is not always easy to remember if one has labelled something as thrash metal or punk, and this may have operational consequences. It is much easier to confirm that person X is person Y’s granddaughter, and that person Y is person X’s grandfather, especially if persons X and Y are members of your own family.

It is always hard to know which Prolog expert system implementation will impress readers most. Here are some choices: Bitcoinolog = configures bitcoin mining rigs for an optimal return on investment; CEED = Cloud-assisted Electronic Eye Doctor, for screening glaucoma (2019); Sudoku = solves sudoku problems; an unnamed system constructed by Singla to diagnose 32 different types of lung disease (2013), another for diabetes (2019); an unnamed system by Iqbal, Maniak, Doctor and Karyotis for automated fault detection and isolation in industrial processes (2019); an unnamed system by Eteng and Udese to diagnose Candidiasis (2020). These are just some of hundreds, if not thousands, many of them open source.


One of the challenges/ problems with expert systems is that the scope of their domain can be unknown. In other words, when a person starts using an implemented expert system, it can be unknown just how big or small the range of problems is that can be handled successfully. There can also be challenges with system feedback. What looks like an answer may be a default, because the system has insufficient insights (read: rules) to process the information. Expert systems do not rely on common sense, only on rules and logic. Systems are not always up to date, and do not learn from experience. This means that real living experts are needed to initiate and maintain systems. Frequently, an old system is an out-of-date system, one that may do more harm than good.

This raises the question of responsibility/ liability in case the advice provided by a system is wrong. Consider the following candidates: the user, the domain expert, the knowledge engineer, the programmer of the expert system or its shell, the company selling the software or providing it as an open-source product.


Just before publication, I learned of the death of crime novelist Susie Steiner (1971 – 2022). I decided to mention her in this weblog post, when I read in her obituary that she had spotted a Keep Calm poster on the kitchen wall at a writing retreat in Devon. She was cheered by its message of stoicism and patience.

Speaking of kitchens, at one point my intention was to use Prolog to develop a nutritional expert system that would ensure a balanced diet over a week-long time frame, along with a varied menu for three meals a day. I still think that this would be a useful system. Unfortunately, I do not think that I am the right person to implement it, lacking both the stoicism and the patience to complete the undertaking.

Reflecting on Susie, I am certain that a Prolog system could be made to help writers construct their novels, especially crime fiction. A knowledge base could keep track of the facts, as well as red herrings and other fish introduced to confuse the reader, and prevent them from solving the crime. Conversely, a Prolog system could also be built that would help readers deconstruct these works, and help them solve the crime and find textual inconsistencies.


  1. Readers should be delighted to hear that while writing this post I used my original Clocksin and Mellish book on a daily basis! Yes, it held my laptop open at an angle of about 145°, about 10° further open than without it. When writing on other topics, I also use other books for the same purpose. Note to self: ensure that your next laptop opens at least 180 degrees!
  2. The writer should be dismayed about the length of this post. Patricia reminds me, repeatedly, that shorter is better. She felt last week’s post on Transition One was a more appropriate length. Transition One was written in the course of an hour, with a couple of additional proof-reading sessions. Writing Prolog took more than a year, with multiple writing sessions, each adding several paragraphs.