This post was originally started to commemorate the release of the first operating system for microprocessors, CP/M. My best guess for its release date is 1974-04-25. Many sources state April, but are hesitant to specify the exact date. The person responsible for its development was Gary Kildall (1942 – 1994). This weblog post is published on the thirtieth anniversary of Kildall’s death, rather than the fiftieth anniversary of the first CP/M release.
Kildall was born and raised in Seattle, Washington. His grandfather was a Norwegian immigrant who ran a navigation school. Kildal is the name of a farm at Hægeland, Vest-Agder, in the south of Norway. His maternal ancestors had their roots in Långbäck, Skellefteå, Sweden, before his maternal grandmother, Sophia Lundmark, emigrated to Edmonton, Alberta, Canada. His mother then immigrated to Seattle.
Kildall was awarded a doctorate in computer science from the University of Washington in 1972. He then worked as a computer science instructor at the Naval Postgraduate School in Monterey, California. This was to fulfill his military conscription obligations, since the US was still engaged in the Vietnam War.
Later that year, to learn more about processors, Kildall bought an Intel 4004 processor and began writing experimental programs for it. Intel lent him 8008 and 8080 processor systems. In 1973, Kildall developed the first high-level programming language for microprocessors, called PL/M = Programming language for microcomputers. It incorporated ideas from: PL/I = Programming language one, developed at IBM starting in 1964; ALGOL = Algorithmic language, originally from 1958; and XPL = expert’s programming language, from 1967.
One of the first software projects Kildall worked on, with Ben Cooper, was the Astrology Machine. It is generally regarded as unsuccessful, but it gave Kildall an opportunity to field test programs he had written: a debugger, an assembler, part of an editor, and a BASIC interpreter used for the programming itself.
In 1974, Kildall developed CP/M = Control Program/Monitor (originally)/ Control Program for Microcomputers (later). Intergalactic Digital Research (originally) /Digital Research, Inc. (DRI, later) was established by Kildall and his wife Dorothy McEwen (1943 – 2005) to market CP/M.
In 1975, Kildall developed a set of BIOS = Basic Input/Output System routines, firmware used to provide runtime services for operating systems and programs and to perform hardware initialization during the boot process = power-on startup. The BIOS initially allowed 8080 and compatible microprocessor-based computers to run the same operating system on any new hardware with trivial modifications. Later, different BIOS implementations were made for other computer families.
A source-to-source translator = source-to-source compiler (S2S compiler) = transcompiler = transpiler, is a type of translator that takes the source code of a program written in a programming language as its input and produces an equivalent source code in the same or a different programming language. In 1981, DRI introduced one of these, calling it a binary recompiler. XLT86 was written by Kildall. It translated .ASM source code for the Intel 8080 processor (in a format compatible with ASM, MAC or RMAC assemblers) into .A86 source code for the 8086 (compatible with ASM86).
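To make the idea concrete, here is a minimal sketch, in Python, of what a source-to-source translator does: it maps constructs of one language onto equivalents in another. The mnemonic handling is a toy subset invented for illustration, and the register correspondence is the one I recall from Intel’s published 8080-to-8086 conversion guidance; it does not reproduce XLT86’s actual rules.

```python
# A toy illustration of source-to-source translation (not XLT86 itself):
# map a few 8080 register names and mnemonics onto 8086 equivalents.

REG_8080_TO_8086 = {   # register correspondence, as I recall Intel's 8080-to-8086 mapping
    "A": "AL", "B": "CH", "C": "CL",
    "D": "DH", "E": "DL", "H": "BH", "L": "BL",
}

def translate_line(line: str) -> str:
    """Translate a single, very simple 8080 MOV or ADD line into 8086 syntax."""
    op, _, operands = line.strip().partition(" ")
    regs = [r.strip() for r in operands.split(",") if r.strip()]
    mapped = [REG_8080_TO_8086.get(r, r) for r in regs]
    if op == "MOV" and len(mapped) == 2:   # MOV dst,src keeps the same shape
        return f"MOV {mapped[0]}, {mapped[1]}"
    if op == "ADD" and len(mapped) == 1:   # 8080 ADD r implies the accumulator A
        return f"ADD AL, {mapped[0]}"
    return f"; untranslated: {line.strip()}"

for source_line in ["MOV A,B", "ADD C", "DAD H"]:
    print(translate_line(source_line))
```

A real translator such as XLT86 also had to handle flags, addressing modes and program structure; this sketch only shows the general shape of the mapping.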
Kildall initiated the creation of the first diskette track buffering schemes, read-ahead algorithms, file directory caches, and RAM drive emulators.
At this point it becomes difficult to separate Kildall’s role as innovator/ inventor and that as initiator/ project manager/ executive, where other engineers at DRI and elsewhere made significant technical contributions.
For example, starting in 1979, Tom Rolander made most of the development contributions to CP/M that later gave these operating systems preemptive multitasking and windowing capabilities, as well as menu-driven user interfaces. These are found in the Multi-Programming Monitor Control Program (MP/M), Concurrent CP/M, Concurrent DOS and DOS Plus.
In 1984, DRI started development of the Graphics Environment Manager (GEM), known primarily as the native graphical user interface of the Atari ST series of computers, providing a desktop with windows, icons, menus and pointers (WIMP). This was an outgrowth of a more general-purpose graphics library known as Graphics System Extension (GSX), written by a team led by Don Heiskell since about 1982. Another major contributor was Lee Jay Lorenzen at Graphic Software Systems.
Kildall and Rolander founded Activenture in 1984. They created the first computer interface for video disks to allow automatic nonlinear playback, presaging today’s interactive multimedia. This company became KnowledgeSet in 1985, which developed the file system and data structures for the first consumer CD-ROM, an encyclopedia for Grolier.
On 1964-05-01 mathematicians John G. Kemeny (1926 – 1992) and Thomas E. Kurtz (1928 – ) successfully ran a program written in BASIC = Beginner’s All-Purpose Symbolic Instruction Code, on Dartmouth College’s General Electric GE-225 mainframe computer.
Deception is a key word to describe General Electric’s (GE) computers. GE was founded 1892-04-15 in Schenectady, New York. In the 1960s, Chairman Ralph J. Cordiner (1900 – 1973) had forbidden GE from entering the general purpose computer business. General Manager of GE’s Computer Department in Phoenix, Arizona, Homer R. “Barney” Oldfield (1916 – 2000) claimed that the GE-200 series would produce industrial control computers. When Cordiner discovered he had been duped, he immediately fired Oldfield. Despite this, production of the computer series continued as a profitable venture for several years.
For the technically interested only: The GE-225 used a 20-bit word, of which 13 bits could be used for an address. Along with a central processing unit (CPU), the system could also contain a floating-point unit (FPU) = Auxiliary Arithmetic Unit (AAU). Alternatively, there was a fixed-point decimal option with three six-bit decimal digits per word. It had eleven I/O channel controllers. The machines were built using about 10 000 discrete transistors and 20 000 diodes. They used magnetic-core memory, and a standard 8 kiloword system held 186 000 magnetic cores. A base level mainframe weighed about 910 kg. GE sold a variety of add-ons including storage disks and printers.
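For those who like checking figures, a small sketch shows how these numbers hang together. The suggestion that the published core count includes a few extra bits per word (parity and spares) is my own assumption, made to bridge the gap between 8 192 × 20 and 186 000.

```python
# Back-of-the-envelope figures for the GE-225 memory described above.
address_bits = 13
words = 2 ** address_bits            # 8 192 words: the "8 kiloword" system
data_bits_per_word = 20

data_cores = words * data_bits_per_word
print(f"{words} words x {data_bits_per_word} bits = {data_cores} cores for data alone")

published_cores = 186_000            # figure quoted above
extra_bits = published_cores / words - data_bits_per_word
print(f"which implies roughly {extra_bits:.1f} extra bits per word (parity and spares?)")
```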
I imagine that both GE and the computer department at Dartmouth were attempting to use each other. GE probably wanted a functioning operating system, but didn’t have the human resources to make it. Dartmouth College wanted a computer, but didn’t have the money to buy it. The result was a compromise that benefited both. In 1963, Kemeny applied for a National Science Foundation grant to bring a GE-225 computer to Dartmouth and build a general-purpose time-sharing system, essentially an operating system. This time-sharing approach was to allow others (read: faculty and students) to access the mainframe computer and run programs using BASIC. It took just a year to implement.
While Dartmouth College copyrighted BASIC, it was made freely available to everyone. The name originated from Kurtz’s wish for a simple but meaningful acronym. Kurtz, in an open letter, reiterates that BASIC was invented to give students a simple programming language that was easy to learn. It was meant for amateurs, not computing professionals.
Language standards were created. The European Computer Manufacturers Association (ECMA) was founded in 1961 to promote computing standards in Europe. It released its version of the BASIC standard in 1986. This was followed in 1987 by the American National Standards Institute (ANSI) releasing its version of the standard. In 1994, ECMA was renamed Ecma International.
In 1975, Paul Allen and Bill Gates adapted BASIC for personal computers like the Altair 8800. I am uncertain of Gates’ motivations. At this early stage, he undoubtedly appreciated BASIC, as he fell into the amateur category, rather than being a professional (system) programmer. My perspective is that he capitalized on the work of others: Kemeny and Kurtz with BASIC; Tim Paterson (1956 – ) with the Quick and Dirty Operating System (QDOS) at Seattle Computer Products, which became MS-DOS at Microsoft.
In 1976, Steve Wozniak developed a BASIC interpreter for the Apple I, which subsequently became Integer BASIC for the Apple II in 1977. More BASIC implementations followed with the personal computer (PC), in 1982.
In 1991, Microsoft developed Visual Basic. Over the years new variants emerged, such as Microsoft Small Basic in 2008, which teaches beginners programming concepts. BASIC and similar languages are important because they emphasize simplicity, readability, and ease of use.
Kemeny and Kurtz’ work on BASIC was recognized by the Institute of Electrical and Electronics Engineers (IEEE) as part of their milestone program, which marks historic places of human innovation from around the world. Places honored include Thomas Edison’s lab in Menlo Park, New Jersey, where he invented the light bulb and phonograph, and the hilltop outside Bologna, Italy, where Guglielmo Marconi sent the first transatlantic radio transmission. On 2021-02-22 a plaque was placed outside of the computer lab at Collis Center, 2 N Main St, Hanover, NH 03755, U.S.A. The citation reads: Beginner’s All-purpose Symbolic Instruction Code (BASIC) was created in this building. During the mid-1970s and 1980s, BASIC was the principal programming language used on early microcomputers. Its simplicity and wide acceptance made it useful in fields beyond science and mathematics, and enabled more people to harness the power of computation.
Notes
Kemeny was president of Dartmouth College from 1970 to 1981 and pioneered the use of computers in tertiary education. He chaired the presidential commission that investigated the Three Mile Island nuclear meltdown of the Unit 2 reactor (TMI-2) of the Three Mile Island Nuclear Generating Station on the Susquehanna River, near Harrisburg, PA. The reactor accident occurred 1979-03-28, and released radioactive gases and radioactive iodine into the environment, resulting in the worst accident in American commercial nuclear power plant history.
I have written about Dartmouth College before. The Synclavier synthesizer was developed there.
It is about artificial intelligence (AI), and the use of chatbots. A robot is a device that automatically performs complicated, often repetitive tasks. Bot is a shortened form of robot. A chatbot (originally, chatterbot) is a robot that uses, and pretends to understand, human language. ELIZA was an early chatbot implemented by Joseph Weizenbaum (1923 – 2008) from 1964 to 1967. It was convincing enough that, in effect, it passed the Turing test developed by Alan Turing (1912 – 1954) in 1950. This test – originally referred to as the imitation game – means that a human interacting with ELIZA will believe that the robot is another person. It is important to understand that ELIZA and other chatbots do not actually understand English (or any other human language). They store words, then use these and other words to mimic understanding.
The photo of the sheep was found on Unsplash, a website that allows photos to be freely used, when I was searching for a photo of a traffic jam for the beginning of the post. In much the same way that AI can get things wrong, my use of this photo gets things wrong too. It shows traffic congestion, but with sheep, rather than cars.
Why isn’t the post called Artificial intelligence, and the use of chatbots?
Because, if I gave it that title, nobody I know would look at it, let alone read it. Such a title would be offensive to the people I interact with. The people I hang out with are not AI experts.
Why is it called Cars?
An honest answer is that this weblog’s target readership probably finds cars a topic they can relate to. Thus, they are being encouraged to learn something about AI by reading about something they already have a relationship to. Most of my readers have driving licenses, and know something about cars. A large proportion of them have been driving/ owning/ servicing/ repairing/ enhancing/ customizing cars for over fifty years. It is a topic they can relate to, unlike, say, the breeding of Labrador dogs.
Do you have something against dogs?
Let me be tactful, just this once, and say I think dogs deserve a companion who is interested in their well being. Many readers of the weblog post have dogs. I live contentedly without them. However, while writing this post, I did find this article about dogs that impressed me.
How did this post begin?
On 2024-01-04, I read an article about Perplexity in TechCrunch. It is an AI chatbot. I opened a free account, and asked Perplexity some questions. I then tried to find some content that could act as a control for questions answered using Perplexity. On 2024-01-13, I read an article in Newsweek about why Americans can no longer afford cars. I thought it would be interesting to make up questions, based on the answers supplied in Newsweek, and then ask Perplexity the same questions. For example, the first question I asked was:
Q. In USA, how much have new and used car prices risen since 2020?
Perplexity provided a long answer, one that answered many different but related questions, rather than just that one. This created a new challenge: how to present the content so that it made sense. Part of the problem was the attribution of Newsweek content to particular people. I decided to eliminate names and quotation marks. Immediately below is the edited Newsweek answer to that first question.
Since 2020, new car prices have risen by 30 % and used car prices have risen by 38 %.
I was just expecting a simple answer from Perplexity of x% for new, and y% for used vehicles.
Here is more of the Newsweek content, extracted to remove references to sources, human or artificial (Microsoft Copilot).
In 2023—a year during which inflation slowed down to the point that the Federal Reserve decided to stop hiking rates—new car prices rose by 1 percent to an average of $50,364, while used car prices fell by only 2 percent to an average of $31,030.
But as things stand, cars are still really expensive for many Americans. Just 10 percent of new car listings are currently priced below $30,000. Things are not much better in the used car market, where only 28 percent of listings are currently priced below $20,000.
In November 2019, the average transaction price for a new vehicle was $38,500. In November of 2023, that figure jumped to $47,939.
The pandemic’s disruption of manufacturing supply chains, as well as outsized consumer demand in 2021, really put a strain on vehicle inventory and drove prices up significantly.
Dealers practically had customers lined up to buy vehicles that weren’t available. Supply chain disruptions also forced manufacturers to prioritize more profitable, higher-trim vehicles in their lineups, which meant the inventory available to purchase also carried a higher sticker price.
Used car prices will continue to fall modestly in 2024.
The backlog in new and used car demand that grew during the pandemic is slowly moderating, but with over 2 years of restricted new car production, in 2020 through 2022, it will take at least that long for supply to catch up.
Macroeconomic factors like inflation and higher interest rates are also reducing vehicle demand, but not enough to drastically drop car prices in the foreseeable future.
But there are other reasons besides pandemic-related disruptions that there seem to be no more affordable cars in the U.S.—including that automakers are increasingly focusing on the production of expensive SUVs and trucks while dropping smaller, cheaper vehicles that would cost $20,000 or less.
Manufacturers cite disappointing sales results as primary reasons for discontinuing smaller, more affordable vehicles from their lineup.
But car buyers’ preferences have also shifted dramatically to larger trucks and SUVs in the past 10 years or so, and even more towards high-tech and comfort amenities in the form of cameras, sensors, radars and large infotainment screens.
Unfortunately, all these features come at a significantly higher price—even if that’s often higher than the average yearly wage of millions of Americans.
(End of Newsweek content)
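The transaction-price figures quoted above can be checked with a few lines of arithmetic. This is only a sanity check of the numbers as given; note that the baseline is 2019-11, so the result is not directly comparable with the 30 % since 2020 figure.

```python
# Sanity check of the average transaction prices quoted above (Newsweek figures).
nov_2019 = 38_500
nov_2023 = 47_939

increase = (nov_2023 - nov_2019) / nov_2019 * 100
print(f"Average new-vehicle transaction price rose {increase:.1f} % from 2019-11 to 2023-11")
# about 24.5 %
```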
Full reply from Perplexity.
According to various sources, new and used car prices in the USA have risen significantly since 2020. The average transaction price for new cars declined 1.4% year-over-year as of October 2023, but they have increased to an average price of $47,936, which is roughly a 60% increase over the last decade [1]. In July 2020, new-vehicle prices increased $749 (up 2%) from July 2019, while decreasing $473 (down 1.2%) from the previous month [2]. The average new car price paid in December 2020 and the fourth quarter of last year crossed over $40,000 for the first time, and the average new car price in the US in December 2021 was $47,077 [3][4]. According to The Zebra, the average new car prices spiked by 17.2% between 2020 and 2021, and the average car payment for new vehicles rose 11.8% to $644 [5].
Used car prices in the US have also increased significantly since 2020. According to various sources, the prices of used cars have risen due to factors such as the global microchip shortage, supply chain disruptions, and changing consumer preferences.
Here are some key points regarding the changes in used car prices:
In June 2023, consumer prices for used autos were down 0.5% compared to the previous month and up 5.2% year over year [1].
The share of used vehicles that sold for less than $20,000 in the first quarter of 2023 was 30.6%, compared to 60.5% just five years ago [2].
Used car prices have dropped 5.6% compared to a year ago, but they remain higher than at any time before 2020 [3].
Used electric vehicle (EV) prices have dropped 26% over the last year [3].
Car ownership costs, including used car prices, have skyrocketed, with an overall increase of 40% in used vehicle prices between 2020 and 2023.
In summary, used car prices in the US have risen substantially since 2020, and the trend shows no signs of reversing. The factors contributing to the increase include supply chain disruptions, the global microchip shortage, and changing consumer preferences.
According to Statista, the average list price for used vehicles in the United States was around $27,300 in June 2023.
Consumer Reports reported that the average price of a used car was around $27,000 in August 2023 [2]. Edmunds reported that the average used vehicle transaction price in Q1 2023 dipped 6.4% year-over-year but is still up 44% from 2019, and the share of used vehicles sold for under $20,000 was 30.6% in Q1 2023 [3]. The Zebra reported that the average new car prices spiked by 17.2% between 2020 and 2021, and the average car payment for new vehicles rose 11.8% to $644 [5]. Therefore, the average price of a used car in the US in 2023 is around $27,000 to $27,300.
Used car prices in the US are relatively high compared to some other countries. For example, in Germany, the prices of certain car models can be significantly higher than in the US due to factors such as taxes and fees. In the US, the average price of a used car was around $27,000 in 2023. This is in contrast to some other countries where car prices can be influenced by various factors such as taxes, import duties, and government policies. Therefore, while used car prices in the US are high, the comparison to other countries is influenced by a range of economic and regulatory factors.
Chatbots
A human brain has about 86 billion neurons. Chatbots are based on neural networks, which imitate the organization of the human brain. An advanced intelligence processing unit (IPU) consists of about 60 billion transistors. A transistor is, at best, an inferior neuron. Since 2017, most modern chatbots have been transformer models, a type of machine-learning model.
If, at this point, I had decided to explain the details of transformer computer architecture, most readers would be putting down their coffee cups and either heading out to their garage workshops, or begging their dogs to leave the comforts of their sofas, and take them for a walk.
While chatbots appear to lie, they are simply responding to input content (often questions). In chatbotspeak, each word a chatbot learns is called a token, and the relationships chatbots construct between them are called vectors. The tokens and vectors form a neural network. At first, neural networks use supervised learning to understand relationships between tokens: humans assign labels to the tokens. With unsupervised learning, the neural network itself, without human assistance, assigns the labels.
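A heavily simplified sketch of the token-and-vector idea follows. The three-dimensional vectors are invented for illustration; real models learn vectors with hundreds or thousands of dimensions from data, rather than having them typed in.

```python
import math

# Toy "embeddings": each token is represented by a vector of numbers.
# These three-dimensional values are invented for illustration; real models
# learn much larger vectors from data.
vectors = {
    "dog":   [0.9, 0.1, 0.3],
    "puppy": [0.8, 0.2, 0.35],
    "car":   [0.1, 0.9, 0.6],
}

def cosine_similarity(a, b):
    """How closely two token vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

print(cosine_similarity(vectors["dog"], vectors["puppy"]))  # high: related tokens
print(cosine_similarity(vectors["dog"], vectors["car"]))    # lower: unrelated tokens
```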
Chatbots are designed/ programmed to instill confidence so that users believe they are interacting with a real person. This is the primary goal. Making truthful statements is unimportant, as long as the charade is maintained. A chatbot will do almost anything in order to maintain an illusion of humanness. It will invent information, if that is needed.
Today’s chatbots, such as Google’s Bard (now called Gemini – updated 2024-06-15, 22:30), Microsoft’s Copilot, OpenAI’s ChatGPT or Cohere’s chatbot, use transformer technology, first developed in 2017. These are online, generative AI systems that are capable of maintaining a conversation with a user in natural language.
From 1988 to 1991, I taught a college course in AI. Since I had very little faith in machine learning, and chatbots were very primitive, I concentrated on expert systems. To my mind these did the least damage.
Wikipedia tells us: In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
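A minimal sketch of those two subsystems: a knowledge base of facts and if-then rules, and an inference engine that applies the rules until no new facts appear. The rules themselves are invented examples, not taken from any real expert system.

```python
# Minimal forward-chaining inference engine: a knowledge base of facts and
# if-then rules, from which new facts are deduced. The rules are invented examples.
facts = {"has_fur", "says_woof"}

rules = [
    ({"has_fur"}, "is_mammal"),               # IF has_fur THEN is_mammal
    ({"is_mammal", "says_woof"}, "is_dog"),   # IF is_mammal AND says_woof THEN is_dog
]

changed = True
while changed:                                # keep applying rules until nothing new appears
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # {'has_fur', 'says_woof', 'is_mammal', 'is_dog'}
```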
If I were wanting to learn about AI today, I would want to start with a fun book. For me, the most enjoyable book on the subject is by Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021). Then I would try to read an AI textbook. My first introduction to the topic was Artificial Intelligence (1983) by Elaine Rich. The most recent edition of that work is a third edition (2009) by Elaine Rich, Kevin Knight and Shivashankar B. Nair. When, about a decade ago, I took an online course in AI with an emphasis on machine learning, the textbook was by Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach. The latest edition is the 4th, from 2020/2021. It is much more technical and difficult.
I used Prolog, a computer programming language for expert systems, in my teaching. Initially, I asked my students to map out their family relationships in a knowledge base. Think first of a family with five generations of daughters that would have to be inserted into a knowledge base: Adriana, Beatrice, Cordelia, Desdemona and Emilia. Then, one would have to make some abstract categories, such as a mother = a female who has a child; a grandmother = a female who has a child who is either the mother of a child or the father of a child. These rules can quickly become very complex, so much of learning Prolog is learning how to create increasingly complex rules.
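Rendered in Python rather than Prolog, so that it reads alongside the other sketches in this post, the rules described above look something like this. The data is limited to the five invented daughters, so it is only a toy knowledge base.

```python
# A tiny family knowledge base, following the rules described above.
female = {"Adriana", "Beatrice", "Cordelia", "Desdemona", "Emilia"}
children = {                       # parent -> set of children
    "Adriana":   {"Beatrice"},
    "Beatrice":  {"Cordelia"},
    "Cordelia":  {"Desdemona"},
    "Desdemona": {"Emilia"},
}

def is_mother(x, y):
    """A mother is a female who has a child."""
    return x in female and y in children.get(x, set())

def is_grandmother(x, z):
    """A grandmother is a female whose child is a parent of z."""
    return x in female and any(z in children.get(y, set()) for y in children.get(x, set()))

print(is_mother("Adriana", "Beatrice"))        # True
print(is_grandmother("Adriana", "Cordelia"))   # True
print(is_grandmother("Adriana", "Emilia"))     # False: Emilia is a great-great-granddaughter
```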
After students had learned how to systematize family relationships, they tested their knowledge base to make sure that the results mirrored reality. A common problem, to begin with, was that grandmothers could only find granddaughters, but not grandsons. Thus, the students had to go back and make changes.
Once the family knowledge base was working, students could go on to work with other problem areas of their own choosing.
People wanting to learn Prolog as a computing language for an expert system, should probably use William F. Clocksin & Christopher S. Mellish, Programming in Prolog: Using the ISO Standard, 5th edition (2003) as their textbook. This is not as out of date as its publication year would suggest.
Prolog is widely used in research and education. Yet it and other logic programming languages have not had a significant impact on AI. Part of the reason is that most Prolog applications are small and dependent on human experts providing data. Real experts are a scarce resource, and what they know expertly is limited. Thus, few applications exceed 100 000 lines of code.
WindCORES is best described as a project, founded in 2018 and based in Germany, that operates data centres inside two wind turbines, making them almost completely carbon neutral. This means that previously unused space becomes usable, even valuable. These data centres are powered by the same wind turbines, while fiber optic cables provide internet connectivity.
The concept began about 2013, when WestfalenWIND realized the electricity grid was too weak to handle the electrical power being produced by its wind turbines during peak wind hours. This meant that power from wind farms was switched off due to grid security issues. WindCORES estimated that this unproduced/ undistributed electricity could power one-third of all German data centers.
Wind power that never enters the grid is fed to servers located inside formerly empty, large concrete wind turbine towers. Each tower is typically 13 meters in diameter, and could potentially hold servers throughout most of its 150-meter height. On average, 85–92% of the power needed to sustain such a data center comes directly from the host turbine. When there is no wind, electricity is obtained from other renewable sources, including solar farms and hydroelectric power plants, via the electricity grid. It is claimed that a typical German data center releases 430 grams of CO2 per kilowatt hour. WindCORES servers will release 10 grams.
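To put the claimed 430 grams versus 10 grams of CO2 per kilowatt hour into perspective, here is a rough calculation. The 1 MW load is my own assumption, chosen only to make the numbers concrete; it is not a windCORES figure.

```python
# Hypothetical annual CO2 comparison; the 1 MW load is my own assumption,
# chosen only to make the quoted per-kWh figures concrete.
load_kw = 1_000
kwh_per_year = load_kw * 24 * 365            # 8 760 000 kWh

def tonnes_co2(grams_per_kwh):
    return kwh_per_year * grams_per_kwh / 1_000_000

print(f"typical German data centre: {tonnes_co2(430):,.0f} t CO2/year")   # about 3 770 t
print(f"windCORES:                  {tonnes_co2(10):,.0f} t CO2/year")    # about 88 t
```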
Currently, windCORES has a fully operational data centre in a wind turbine in Paderborn, a city in eastern North Rhine-Westphalia, Germany. Initially, For IT, WestfalenWind IT and Green IT installed four fire-resistant IT safety cabinets, housing 62U server racks, with Fujitsu’s Primergy servers and Eternus storage units.
It has about 150 customers of varying size, co-located in the towers and offering cloud solutions. Zattoo is one of these. It is a carbon-neutral Swiss TV streaming platform with several million monthly users. Zattoo joined windCORES in 2020, when it moved one of its six data centers into a wind turbine; 218 channels are encoded with windCORES. By the end of 2024, Zattoo plans to relocate more existing servers to the wind farm, making it Zattoo’s main data center location.
WindCORES has recently opened a larger, second location called WindCORES II at Windpark Huser Klee, a 50.85 MW onshore wind power project, also located in North Rhine-Westphalia, but at Lichtenau. The windfarm was commissioned in 2015. The data centre was built for BMW, occupying three levels (20 meters) of space.
Reflection
Some µs after I had typed in the title of this weblog post, I wondered if it should be changed to A in B. After all, the local bus company is called AtB, which in the local trøndersk dialect means A to B. Yes, this dialect specializes in shortening words, so they are barely understandable, even to other Norwegians. While A and B refer to random locations/ stops in the bus network, A refers to any type of product that can be made or stored in a container B, in the original example. More specifically, my reasoning was that readers could be asked to reflect on: What can be housed in a wind turbine mast? or, possibly: Where can data centres be located? In the end, I decided to take the easiest action and do nothing.
There is no reason why other companies in other places in the world could not open data centres in existing wind turbines, even in Trøndelag.
If people think that gaming is a marginal activity, they should reassess their world view. The revenue from computer games exceeds that of the music, film and television industries combined. The production of a game can employ 500 people, many of them engaged in providing different forms of artwork. People under the age of fifty spend much of their free time gaming. Those over, not so much.
The ten countries with the largest consumption of computer games are, in ranked order: USA, China, Japan, South Korea, Germany, UK, France, Canada, Italy and Brazil. The source for this information provides revenue in USD, and the number of players. In terms of production, in 2020 Swedish gaming companies ranked ninth in the world, and generated an annual net turnover of 20.8 billion SEK.
Every game involves a game-world with its own rules, which may differ significantly from the reality the player normally lives in. The better the player is able to adapt to these changes, the better the player should be able to score, and ultimately to win the game. In many games, there is also an element of chance.
In some games, having control over the graphics (or at least better control than any opponents) is necessary for the player to win. In many games, winning simply means completing the game.
For the record, my list of favourite games has not changed much over the decades, and none of them are demanding with respect to graphics. In chronological order, the ones I remember are: Oregon Trail (1971), Flight Simulator (1982), Where in the World Is Carmen Sandiego? (1985), Sim City (1989), Postman Pat (1989), Street Rod (1989) and Minesweeper (1990). There was also a train simulator, with a forgotten name.
Of these, playing Postman Pat involved the most work. Since it was impossible to obtain an overview map while playing the game, our family visited the entire game world, and recreated the map on paper. For someone with a flawed sense of geography, this was very helpful, possibly allowing me to even beat children, as long as they were very young.
In my teaching career I have used Sim City to introduce the concept of simulation to younger students, most typically a three hour session provided to students at the three junior secondary schools in the Leksvik catchment area. My son tells me that traffic congestion makes it difficult for the game population to exceed about 10 000 people, on a first attempt. He must have inherited his game playing capability from his mother, since he managed to build a city of 150 000 people, on his first attempt.
Non-favourite games include Railroad Tycoon (1990) as well as programs that imitate board and card games, such as Monopoly or chess. Railroad Tycoon is less about simulating a transport system, and more about building and managing a company that happens to be a railroad. This is done by investing in track and stations, and by buying and scheduling trains. To win, the railroad must be built within a specified time.
To answer the most common question I have ever been asked about computer gaming. Yes, I am acquainted with, but have not played: Minecraft (2009), Pokémon (1996, in card format) or Donkey Kong (1991).
It should now be obvious that I am living in the past with respect to computer gaming. My relationship with board and card games is equally problematic, verging on non-existent.
There is one game that I have been considering, NIMBY Rails, which is an open-source transit simulation developed by Carlos Carrasco, from Barcelona, Spain. NIMBY = Not In My Back Yard, which is a common approach to anything requiring change. The game includes content from OpenStreetMap, which contains a slightly simplified variant of the entire earth, so that players can construct a transit/ rail system anywhere on the planet. There are detailed rules built into the game that have to be discovered through trial and error. It was launched in 2021.
The first computers used by our family for gaming were an Amiga 1000, soon replaced with an Amiga 2000. Most of the games listed above were first played on it. After that we have owned Windows and Mac machines. With the children becoming adults and capable of making their own decisions, the oldsters use Acer Swift 3 laptops with assorted Asus machines running Linux Mint 21.2. We also have hand-held devices (Asus Zenfone 9) running Android 13.
One of the challenges gamers faced during the pandemic was the lack of GPUs = Graphical Processing Units, usually bought in the form of a graphics card that is inserted into large cases they would call their gaming rig. There are, of course, more portable computers that can be used for the same purpose.
During the pandemic, there was another group of people wanting GPUs: cryptocurrency miners: individuals, companies and organizations (some criminal, and evading taxes in any form) who wanted to use the equipment to produce bitcoins and other types of cryptocurrency. This production requires enormous amounts of electricity, and these miners want to equip their mining rigs, which look more like servers, with large numbers of the fastest possible GPUs.
For GPU manufacturers this explosion of mining demand created a public relations challenge. The two dominant companies are Nvidia and AMD = Advanced Micro Devices, whose Radeon GPUs come from an earlier incarnation as ATI. They are now restricting sales of GPUs as add-on products, and prioritizing sales to OEMs = Original Equipment Manufacturers, who put them in new, expensive computers most often designed and labelled as gaming rigs. This created a problem for some gamers, who could/ can no longer upgrade their rigs.
There are several areas where graphic content can provoke conflict. The first is internet throughput, usually measured in kb/s, Mb/s or Gb/s. An ISP = Internet Service Provider will provide a range of throughputs at various price points, and it is up to the consumer to decide which one. The current base rate from our ISP is 150 Mb/s, but both 500 and 1 000 Mb/s are available. We have a base rate subscription.
With the onset of Putin’s war in Ukraine, and especially after he stopped/ limited gas sales to Europe, it became economically unviable to mine cryptocurrencies! This meant that there was a sudden increase in the number of GPUs on the used market. Unfortunately, not all of these GPUs have a configuration suitable for gamers. Fortunately, inexpensive former mining GPUs can be suitable for video rendering. Unlike gaming machines, rendering machines do not need to connect to a screen. They do not even need to be quiet.
The two next areas involve screen characteristics. Every screen has a specific resolution. In 1988, screen resolution was typically 640×512i (PAL) with 16 colours. In North America, with NTSC, the resolution was less. Today (2023), one common resolution is FHD (Full High Definition) = 1920 × 1080 pixels, aka the 1080p HDTV video format. It has a 16:9 aspect ratio and about 2 megapixels of content. The p stands for progressive scan, i.e. non-interlaced, and is the standard on computer screens. The other dominant standard is 4K UHD (Ultra High Definition) = 3840 × 2160 pixels, adopted in 2014, with the same 16:9 aspect ratio, and about 8 megapixels of content. This, increasingly, is the dominant standard on television-sized screens, typically between 40 inches = 1 000 mm and 70 inches = 1 780 mm, diagonally.
Human vision varies. A person can process 10 to 12 images per second and perceive them individually. At higher frequencies these images are perceived as motion. The most common preferred minimum frequency is 50 Hz, a common frame rate in Europe, or 60 Hz in North America, although some have no problems watching video at 30 Hz. Many gamers, however, are prepared to pay extra for a 144 Hz screen, although I personally don’t think they can improve their perception by increasing the frequency. The highest frame rate currently available is 240 Hz. While there are some algorithms that can be used to reduce the amount of processing needed, a frame rate of 120 Hz will require 4 times as much processing as a 30 Hz frame rate. Compared to FHD at 50/ 60 Hz, 4K UHD at 120/ 144 Hz will require about 8 times the processing power.
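These claims can be checked by comparing raw pixel throughput, that is, pixels per frame multiplied by frames per second. The sketch ignores compression and other rendering tricks, so treat it only as an order-of-magnitude comparison.

```python
# Raw pixel throughput: pixels per frame x frames per second.
# This ignores compression and rendering tricks; it is only an order-of-magnitude comparison.
def pixel_rate(width, height, hz):
    return width * height * hz

fhd_60  = pixel_rate(1920, 1080, 60)
uhd_120 = pixel_rate(3840, 2160, 120)

print(f"FHD  @  60 Hz: {fhd_60:,} pixels/s")
print(f"UHD  @ 120 Hz: {uhd_120:,} pixels/s")
print(f"ratio: {uhd_120 / fhd_60:.0f}x")   # 4x the pixels x 2x the frames = 8x
```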
Today, it is 50 years (1973-05-22) since Ethernet was invented at Xerox PARC (Palo Alto Research Center), in Palo Alto, California. This date specifically references a memo written by Bob Metcalfe (1946 – ). Ether refers to an omnipresent, luminiferous aether that acts as a passive medium for the propagation of electromagnetic waves. In 1975, Xerox filed a patent application listing Metcalfe, David Boggs (1950 – 2022), Chuck Thacker (1943 – 2017) and Butler Lampson (1943 – ) as Ethernet inventors.
Metcalfe’s law states that the impact of a telecommunications network is proportional to the square of the number of nodes (n²), initially expressed as compatible communicating devices in, and later as connected users of, the system. The law was first proposed in 1983-12, in a 3Com sales presentation given by Robert Metcalfe. The law was popularized by George Gilder in a 1993-09-13 Forbes article, which specifically associated it with Ethernet users.
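As a quick illustration of why the square matters: the number of possible pairwise connections among n nodes is n(n-1)/2, which for large n grows in proportion to n².

```python
# Pairwise connections among n nodes: n(n-1)/2, which grows roughly as n squared.
def connections(n):
    return n * (n - 1) // 2

for n in (10, 100, 1_000):
    print(f"{n:>5} nodes -> {connections(n):>7} possible connections")
# 10 -> 45, 100 -> 4950, 1000 -> 499500: ten times the nodes, roughly a hundred times the links
```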
Some feel that this law overstates the case, in that not all nodes are born equal. Specifically, reference is made to the Dunbar number, originally expressed as a cognitive limit to the number of people with whom one can maintain stable social relationships. Anthropologist Robin Dunbar (1947 – ), who devised the concept, places the limit at about 150 for people. In terms of this weblog post, I have stated that I will know that I am doing something wrong if the number of readers exceeds 100. It is only intended for family and friends, with whom I wish to maintain stable social relationships. In addition, it allows for, say, fifty people to disown me (or the weblog), for various reasons, real or imagined. In terms of Ethernet nodes, some are easier to reach than others. Indeed, most people put considerable effort into evading contact from spammers, and other forms of Internet lowlife.
António Madureira, Frank den Hartog, Harry Bouwman and Nico Baken empirically validated Metcalfe’s law in 2013, specifically looking at how Internet usage patterns changed over time. See: Information Economics and Policy, 25 (4): 246–256.
Ethernet was commercially available starting in 1980. It was standardized in 1983 as IEEE 802.3. New versions of Ethernet have increased bit rates, the number of nodes connected, and link distances, yet retain considerable backward compatibility. Ethernet has largely replaced competing wired local-area network (LAN) technologies, such as Token Ring and ALOHA networks.
An Ethernet connection is made using a cable fitted with a male plug connector at both ends. One end (most often an 8P8C plug) usually attaches to a device (such as a laptop computer) fitted with an 8P8C jack, while the plug at the other end is attached to the jack of a switch. The term RJ45 connector/ plug/ jack is often used. The original (telephone) RJ45 connector featured a key that disallowed 8P8C connectors. These RJ45 connectors are obsolete, but continue to be referenced in the language. There are variations in the termination of the wiring inside 8P8C connectors, with two distinct flavours, officially known as T568A and T568B. These are defined in ANSI/TIA-568. I have never known anyone who uses anything but B.
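For reference, the two termination schemes differ only in which pins carry the orange and green pairs. The listing below reflects my reading of ANSI/TIA-568; check an authoritative pinout before terminating any cable.

```python
# Pin-to-colour assignments for the two termination schemes (pins 1 to 8).
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

for pin, (a, b) in enumerate(zip(T568A, T568B), start=1):
    marker = "  <- differs" if a != b else ""
    print(f"pin {pin}: A = {a:<13} B = {b:<13}{marker}")
```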
Connector installation improves with practice. We use purpose-built tools and a tester to ensure that cables function properly.
Propaganda
It can be difficult for people to obtain appropriate information about Ethernet connections, and Internet operations more generally.
Many people I associate with are surprised that Cliff Cottage, our residence, is wired with CAT 6A Ethernet cable. Gratuitous advice: If it is too difficult to put Ethernet cable inside a wall, fasten it outside, immediately above a baseboard. We have done this in select locations. We have also allowed the cable to be fitted inside door molding.
We use a power-over-Ethernet (PoE) switch with 24 ports, fitted to our server rack. This allows us to send data and electrical power in the same cable. Users in our household each have a PoE-powered UniFi Flex Mini switch with five ports, operating at gigabit speed. We also have three UniFi Wi-Fi 6 Lite access points at appropriate locations in the house, so that all portable/ hand-held devices can connect to the Internet.
Note: The publication date is also World Goth Day #15. It will not have a weblog post devoted to it this year, but will return to explore Gothic architecture on World Goth Day #16, 2024-05-22. As compensation, a weblog post about Wave Gotik Treffen, a Leipzig music festival, will be published 2023-05-27.
There are times when I actually believe that people can learn to cope with stress/ anxiety, and I offer myself as an example. To cope with stress, I divide activities into six categories: Routine, Infrequent, Novel, Challenging, Frustrating and Overwhelming. As I approach an activity, I categorize it, as best I can. I have no problem changing a category, should it be necessary. I use the above terms because I can mark each activity I have planned with a single-letter category code, without conflicts arising.
The category determines how the activity will be engaged. Routine activities can be performed at any time. The others, including infrequent activities, will be started when I am fresh, usually in the morning. Novel activities require me to write notes about the activity. Challenging activities require me to consult these notes. Frustrating activities require me to consult with another person about the activity, as well as the notes. Overwhelming activities require me to ask others to help.
In this weblog post, I will be looking at these with respect to dealing with some recent computer software and hardware issues, but they also apply to many other areas in life.
Routine Activities
Routine activities are those performed regularly, with little risk of extraordinary situations arising. Typical examples include updating application and system software, which can occur many times in the course of a year. Some software manufacturers think their updates should take precedence. Thus, I remember one weather broadcast on television that was interrupted because Microsoft had decided that the computer being used to present a weather animation should be updated, while the animation was being broadcast. I believe companies like Microsoft now offer their customers the opportunity to specify time slots for updates.
If one is concerned, one can use expired software that is exempt from updates, such as Windows 7, as long as that part of the system is not exposed to the internet. One can also use software with better manners, such as assorted Linux distros. Because routine activities take place so often, there is usually no need to consult a user manual or check list, although pilots, surgeons and others may disagree.
Infrequent Activities
Some routine activities take place infrequently. When the frequency is about once a year, such as a major upgrade of an operating system to a new version, a different approach may be needed.
A typical annual event is the writing and sending out of an annual letter. While writing emails is a routine activity, an annual letter is often sent to many people. Hopefully, everyone has one or more computer address books used for assorted purposes, filled with contact information for family, friends, colleagues and providers of goods and services. One does not send an annual letter to everyone in one’s address book. Instead one makes lists for assorted groups of people. For the residents of Cliff Cottage, one such list involves people to be sent an annual letter in Norwegian. A second list is for those to be sent an annual letter in English. These lists have to be updated annually.
This year I had a strange experience working with these lists. The first one went fine: I was able to add and remove people from it without any complications. I then waited ten days to work on the second list. By then I had forgotten how to add a person from an address book entry to a list. It did not take long to look up the procedure in the operating manual, and to remember. When working with infrequent activities, it is often appropriate to have a manual or check list available.
Novel Activities
Initial/ novel/ original/ unique activities are those that are being performed for the first time. It is important to write down the sequence of steps used in such activities, so that these notes can be consulted later, if the activity turns out to be repeated. I have folders of notes on various topics. There is always room in the folder for one more topic. Frequently, I copy instructions from an online source and paste them into a LibreOffice Writer file, complete with a link to its source. I often edit the file to suit my style of writing, because not everyone writes well.
In order to reinstall Windows onto a 2015 Asus AiO computer, I attempted to burn an ISO file onto a USB stick = pendrive = flash drive, from my Linux desktop. This was a routine task, but somewhat irritating, because I had run out of 16 GB USB memory sticks, and felt I was having to waste a 32 GB stick for the installation of an old operating system. Then Linux gave me a warning that the program I had selected was inappropriate for burning a Windows ISO file, and invited me to investigate Ventoy.
I found that Ventoy was an improvement. It allowed multiple ISO files to be installed on a single USB stick. So, I was able to store several ISO files: Windows XP, Windows 7, and Linux Mint 21.1 with the Cinnamon desktop, despite a suspicion that Windows XP will not install from this drive. In the future, I intend to store other Linux flavours for specific purposes on this stick, especially related to CNC control and robotics. Should Mageia ever release a version 9, it will also be included. Then there are other tools, such as GParted (for creating partitions).
From one source, I made appropriate notes about working with Ventoy.
Challenging Activities
An electric vehicle should soon start occupying our carport. Once it arrives, it will have to be fed! The most convenient way is to have a charger in the carport. An Easee Home charging robot was installed 2023-01-04. To control the charger, a smartphone app is required. Installing this app was a challenging experience. There was no problem in downloading or installing the app itself, but there was a problem opening an account, because a control number failed to arrive.
In the end, I used Easee’s chatbot, and explained the situation. While the chatbot could not offer any solutions, the issue was soon resolved. Easee seemed to have more data about the charger than I expected. They knew who I was, where I lived, even the serial number of the charger, and its pin number. Thus, they were able to open an account for me without me having to do anything.
Frustrating Activities
Recently, Trish and I discovered that our main communication channel, Signal, was disabled on our laptops, because Signal had not been updated. It was still working on our hand-held devices, and on my desktop machine. Signal provided instructions, involving three simple steps, on how to correct this issue. I decided that the most gallant approach was to solve the problem on my laptop, before fixing Trish’s.
After completing these steps, Signal still failed to work, and a message was received that the Linux Mint operating system on my laptop was compromised. Then, and only then, did I ask for advice from my son, who is more experienced than I am working with Linux. He suggested that I uninstall Signal on Trish’s machine, then reinstall it. I followed this advice, and her OS continued to work. Signal accepted the updates, and started working.
The main reason this problem arose was that I had failed to install TimeShift, a backup and synchronization tool used on Linux systems that protects the system from corruption by taking incremental snapshots of the file system at regular intervals. Should an unfortunate situation arise, TimeShift can roll back the file system. It is the equivalent of System Restore in Windows and Time Machine in macOS.
Correcting the errors on my laptop proved to be more challenging. After three failed attempts to reinstall Linux Mint 21 as the operating system, I had to examine what was happening. I had originally used this same USB stick to install Linux Mint on the two new laptops. Then I upgraded them to Linux Mint 21.1. I decided that remnants of the system didn’t want anything to do with Linux Mint 21(.0). Thus, I burned Linux Mint 21.1 onto the stick. This simple change was all that was needed to get the laptop up and running.
Overwhelming Activities
The last situation involving overwhelming activities happened at the end of 2018. In an email dated 2018-12-30, I admitted to feeling overwhelmed by some imminent digital technology transitions. In my forties, starting in 1988, I had more serious mental health issues related to anxiety and depression. I was not going to allow this to happen again, as I entered my seventies.
Internet changes: a) technology changing from asymmetric digital subscriber line (ADSL) to fiber cable; b) digging a trench, laying then burying forty meters of orange jacket from our property line to our house, Cliff Cottage; c) preparing an indoor wall for placement of a fiber modem. Professionals would then insert the fiber cable into the orange jacket, and put the wiring into the house, and install the modem.
Telephone changes: d) smartphones = hand-held devices replacing a landline; e) transitioning from two different mobile network/ SIM card providers to yet another provider; f) transitioning from two different smartphone models (Huawei 9 Lite & 10 Lite) to a single model from a different company (Xiaomi Pocophone F1).
Server changes: g) technology changing from one server (Asustor AS1004T) to a completely different type of server (retrofitted Dell enterprise rack equipment), with h) new server components; i) transitioning from one brand of hard-drive (Western Digital Red) to another brand (Toshiba N-300); j) transitioning from server access using WiFi to Ethernet cabling throughout the house, including placement of k) Ethernet (RJ-45) wall sockets, and l) a network of UniFi U6 Lite access points; m) a new server operating system, FreeBSD; n) a new file system, OpenZFS; o) new server software, FreeNAS (now called TrueNAS Core).
Printer changes: p) technology changing from inkjet to laser; q) change of supplier from Epson to Canon; r) change from wireless to Ethernet connectivity.
PC changes: s) Transitioning from one laptop (Acer Chromebook 11.5″) to another (Asus Vivobook 14); with a change of operating system from t) Chrome OS to a more familiar Linux Mint.
Media player changes: u) Relocating the media computer and screen; v) accessing the media player via ethernet, rather than WiFi.
Thus, I wanted to alert people close to me that twenty-two technological changes (a – v) were demanding attention, and asked for help in dealing with them. These fell into the six categories listed above.
Then I also commented on other things that needed to be done, including a need to replace the media player computer. I added that this would not be happening during the next few months, as the entire computing budget for 2019 had been used up before we even entered the year.
Control
People can legitimately ask why there were so many changes happening in such a short space of time. The answers can be grouped. Categories 1 to 3 involve changes initiated by others. Categories 4 to 6 increasingly involved my own decisions.
Much of the time a person does not have control over technological choices. Their role is that of a technological bus passenger. At some point they decide to enter that bus and take a ride, not knowing precisely where the route goes, or even where they want to transfer to the next bus. Adventures happen.
For example, we have never had any real options about an internet service provider (ISP). We could either accept ADSL provided over a copper cable network by Telenor, the Norwegian telephone company, or we could use a lower speed, but more expensive local solution, that failed to operate when it was windy, with communications equipment on a tower of Skarnsund bridge. There was no real choice.
Then, Inderøy municipality decided that our rural area would have internet served by fibre cable, provided by our electrical power supplier, Nord-Trøndelag elektrisitetsverk (NTE). We were further told that after it was installed, support for landlines would be terminated, and we would be dependent on smartphones and cellular base stations for telecommunications, including internet, if we didn’t opt in to the fibre solution. Again, there was no real choice.
Some of these transitions felt minor, some felt less so. Most of the transitions went well:
Additional backup onto external hard-drives.
The transition to fiber went well, despite a couple of misunderstandings, where the supplier enrolled us into using numerous television packages, and an internet speed in excess of what we asked for. These were resolved with a telephone call.
The decommissioning of landline equipment.
Installing new SIM cards on new cell phones.
Physically installing hard drives onto the new server.
Some involved assistance that was appreciated:
Copying data from an old phone to a new phone.
Setting up Signal messaging on computers.
Setting up mail servers.
Installing hardware and software on the new server.
Creating new backup procedures on the new server, since the old server was not backing up data properly.
Copying photos from hand-held devices, so they were available on personal computers, and also saved on the server.
Setting up the printer.
Notes:
When I look at the original floor plan for Cliff Cottage, I don’t understand the placement of anything. I feel the house should have been rotated 90 degrees counterclockwise, with the kitchen and living room facing the view. It wasn’t, and I have spent the first years of my retirement improving not just the relationship between the house and the property it sits on, but also changing access between different parts of the house. Among the changes was a better location for access to the living room from the hallway. This was straightforward, but also required the home theatre/ media centre to be moved to a completely different location.
For those who do not know me, I am a sliding/ pocket door junkie. If a conventional door can be replaced with some form of sliding door, it will be replaced. Cliff Cottage has seven sliding doors, as well as four sliding doors in the kitchen cabinets. This will increase to thirteen, when the kitchen remodelling is complete.
On 2023-01-13 a 10-year-old Acer Revo mini PC complete with Windows 7 arrived in the mail. This was acquired to work with a 10-year-old 24″ Acer screen, and specifically with our household’s library system, BookCAT, with about 4 000 records, and with our 35 mm Plustek OpticFilm 8200i SE slide scanner and our 4 000 slides. Except, when the computer arrived, I realized that it could be better put to use as a CNC controller in the workshop. I decided that it would be better to use an existing Asus AiO (All in One) computer for the library system and slide scanner. This had had its original Windows system removed, which meant that it needed to be reinstalled.
When this post was first envisioned, when writing Made Without Repression, in 2019, I was mainly concerned about developments in Hong Kong. Since then, the situation in other countries has revealed a greater need for insights into the challenge of censorship. It is easier to prepare for censorship before it happens, than after. That said, it is probably time for everyone, everywhere to prepare themselves for internet censorship.
Various organizations, with assorted mandates and disparate reasons, make lists of countries engaging in internet censorship and surveillance. Some of the countries currently on the Countries under surveillance list by Reporters sans frontières (RSF) = Reporters Without Borders include: Australia, France, South Korea, and Norway – the last one with a proviso that states that this only applies to metadata on traffic that crosses the Norwegian border. A more serious RSF list, introduced in 2006 and last updated in 2014, Enemies of the internet, is more important because all of these countries mark themselves out not just for their capacity to censor news and information online but also for their almost systematic repression of Internet users. The twenty countries on this list at its 2014 update include: China, India, Iran, Russia, Saudi Arabia, the United Kingdom and the United States.
On 2013-12-13, RSF published a Special report on Internet Surveillance, with two new lists: 1) State Enemies of the Internet, countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights = Bahrain, China, Iran, Syria, and Vietnam. 2) Corporate Enemies of the Internet, companies that sell products that are liable to be used by governments to violate human rights and freedom of information = Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany).
Of course, not all people, organizations or companies extolling free speech are sincere. Recently, the current CEO of Twitter has been encouraging free speech, at least for himself and a former president of the United States, but not necessarily for anyone else, especially those who express opposing views.
Thus, everyone should be preparing their own plan for managing an internet censorship situation. In addition, they may want to consider how they can help others already caught in one, such as people living in countries on the Enemies of the Internet list.
There are many approaches to dealing with internet censorship, but an easy one is to become acquainted with Psiphon, an open-source Internet censorship circumvention tool, originally developed by the Citizen Lab in 2006.
The Citizen Lab was founded in 2001, at the University of Toronto, Canada. It studies information controls that impact the openness and security of the internet and that threaten human rights. Computer-generated interrogation, data mining and analysis are combined with intensive field research, qualitative social science, and legal and policy analysis methods.
In 2007, Psiphon, Inc. was established as a Canadian corporation independent of the Citizen Lab and the University of Toronto. It uses a social network of trust model, to provide tools that offer internet access to people who live in censored countries. Psiphonode is server software that is easy to install, while psiphonite is client software that is even easier to use. These products give ordinary people the opportunity to circumvent internet controls. The key is that the people involved have to trust each other.
Psiphon is currently engaged in developing/ maintaining two related projects: a cloud-based run-time tunneling system, and a cloud-based secure proxy system. Their original home-based server software is no longer supported. This software has been in development since 2006, initially with psiphonites from all over the world, including central Asia and the Middle East, accessing a test psiphonode server.
While the risk to psiphon users is never zero, if appropriate measures are taken, it is much safer than using any other method to visit a censored site. A psiphonite must connect to a unique, and traceable, IP address. Psiphon has been built to run as a private network connecting to home computers, where the connection information is never publicly disclosed. Encrypted psiphon messages are buried inside other commercial traffic.
The software uses a combination of secure communication and obfuscation technologies, including virtual private networks (VPN), the secure shell protocol (SSH) = a cryptographic network protocol for operating network services securely over an unsecured network, and a web proxy = an intermediary between a client requesting a service, and a server providing it. It is designed to protect the client. Psiphon is centrally managed, yet geographically diverse: a network of thousands of proxy servers, using a performance-oriented, single- and multi-hop routing architecture.
With Psiphon effectively out-sourced, the Citizen Lab has been able to concentrate on investigating situations where internet openness and security are restricted, and human rights are threatened. Notable reports include:
Tracking GhostNet (2009) documented a cyber espionage network of over 1 295 infected hosts in 103 countries between 2007 and 2009, with many high-value targets, including ministries of foreign affairs/ embassies/ international organizations/ news media/ NGOs.
Shadows in the Cloud (2010), documented an ecosystem of cyber espionage that compromised computer network systems for government/ business/ academia at the United Nations, as well as in India and other countries.
Million Dollar Dissident (2016), documented the tracking of Ahmed Mansoor, a human rights defender in the United Arab Emirates, with Pegasus software, developed by Israeli NSO Group.
My views are influenced by La Crise d’Octobre (1970-10-05 – 1970-12-28) that started when members of the Front de libération du Québec (FLQ) kidnapped the Quebec Labour Minister Pierre Laporte and British diplomat James Cross. Canadian Prime Minister Pierre Trudeau then invoked the War Measures Act for the first time in Canadian history during peacetime. This limited civil liberties and granted the police far-reaching powers. There were 3 000 searches, and 497 arrests. Everyone arrested was denied due process. Habeas corpus = an individual’s right to have a judge confirm that they have been lawfully detained, was suspended. The Government of Quebec also requested military aid to support the civil authorities, with Canadian Forces being deployed throughout Quebec. Canadian historian Desmond Morton (1937 – 2019) later wrote: “It was unprecedented. On the basis of facts then and revealed later, it was unjustified. It was also a brilliant success. Shock was the best safeguard against bloodshed.”
In the United States, four coordinated suicide terrorist attacks were carried out by 19 al-Qaeda hijackers on Tuesday, 2001-09-11. The attacks killed nearly 3 000 people and instigated the Global War on Terrorism. Criticism of this war has focused on its morality, efficiency and cost. A 2021 Watson Institute for International and Public Affairs study concluded that the several post-9/11 wars have displaced at least 38 million people in Afghanistan, Pakistan, Iraq, Libya, Syria, Yemen, Somalia and the Philippines. It estimated these wars caused about 900 000 deaths and cost $8 trillion. Despite the U.S. Constitution and U.S. law prohibiting the use of torture, it became common practice.
More recently, on 2020-01-06, following Donald Trump’s defeat in the 2020 presidential election, a mob of his supporters attacked the United States Capitol Building in Washington, D.C., seeking to keep Trump in power by preventing a joint session of Congress from counting the electoral college votes to formalize the victory of Joe Biden. This was the seventh and last part of a plan by Trump to overturn the election.
These events show that democracy is not guaranteed anywhere, and that people have to be vigilant.
A Plan to prevent Internet Censorship
One of the first tasks a person can engage in is to visit the Psiphon website. It claims that: Where other VPNs can not connect, Psiphon will find a way. These connections are free, built on leading-edge, research-driven security and network technologies. These services are designed to keep people connected. They provide everything from social media, to games, to voice over internet protocol (VOIP) = a telephone service based on the internet. Psiphon is designed to help people access online content and services they appreciate, even if these are blocked by the authorities.
Once these internet connections are in place, it is much easier to provide content to people. My intention is that in 2023 I will set up a website, possibly mist.mclellan.no, but more likely an equivalent website with a more neutral name, that will be able to house courses/ lectures/ labs on technical subjects that will be freely available to people anywhere in the world. I am thinking specifically of the political hotspots of the world, currently: Hong Kong, Iran, Ukraine. It is not something I can do alone, since I have no knowledge of Cantonese or Farsi. My knowledge of Ukrainian is so elementary that it is of no practical use. Thus, I hope content can be translated by others, potentially learners who have a good understanding of English.
It would be interesting to know what other people feel they can contribute. They can send me an email at first name @ last name.no, where the first and last names are found as the name of this weblog.
The purpose of this weblog post is to introduce the concept of a database primary key. Without one, a database cannot reliably tell records apart, and data conflicts will arise. A primary key is a single attribute, or group of attributes, that can uniquely identify an individual record. Part of the challenge of database design is determining what to use as a primary key.
Often, the solution is simply to generate a number in sequence. In many cases this is effective. In other cases, it might be more expedient to use an existing code that can give access to more information, when that is needed. In many cases, geographical information is wanted that is not contained in the database itself, such as the name of the mayor of a city, or a chronological list of mayors.
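To make the first option concrete, here is a minimal sketch of a generated sequential key, using SQLite through Python’s standard library (PostgreSQL would use GENERATED … AS IDENTITY or SERIAL instead of SQLite’s AUTOINCREMENT). The table and the names in it are my own illustration, not part of Geoscheme.

```python
import sqlite3

con = sqlite3.connect(":memory:")   # a throw-away, in-memory database
cur = con.cursor()

# Surrogate key: the database generates a number in sequence.
cur.execute("""
    CREATE TABLE mayor (
        mayor_id INTEGER PRIMARY KEY AUTOINCREMENT,  -- generated in sequence
        name     TEXT NOT NULL,
        city     TEXT NOT NULL
    )
""")
cur.execute("INSERT INTO mayor (name, city) VALUES (?, ?)", ("Ada Xample", "Exampleville"))
print(cur.lastrowid)   # 1 -- the key the database just generated

con.close()
```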
Geoscheme is my way of organizing geographical data. The land area of the Earth is divided into 26 different regions, labelled A to Z. Each region is populated by countries that can be further sub-divided. This process can continue into smaller and smaller units.
When a data set about a geographic area is being assembled for inclusion in a database, it is important to assess existing keys to see if one distinguishes itself from the others. For geographical jurisdictions, this can be an agreed-upon international code; for subdivisions of countries, national or other codes may be appropriate.
I have used Geoscheme for so many years, that its origins are lost in the depths of time. Needless to say, the map with regions did not originate with me. I have simply appropriated it for my own purposes. The alphabetic coding, on the other hand, is something I recall creating.
Africa: [A] Southern Africa; [B] Eastern Africa; [C] Middle Africa; [D] Western Africa; [E] Northern Africa. Europe: [F] Southern Europe; [G] Western Europe; [H] Northern Europe; [I] Eastern Europe + North Asia. Asia: [J] Western Asia; [K] Central Asia; [L] Southern Asia; [M] Eastern Asia; [N] South-Eastern Asia. Oceania: [O] Australia and New Zealand; [P] Melanesia; [Q] Micronesia; [R] Polynesia. Americas: [S] Northern America; [T] Caribbean; [U] Central America; [V] South America. Other: [W] Antarctica; [X] Atlantic Ocean; [Y] Pacific Ocean; [Z] Indian Ocean.
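To make the coding concrete, the same scheme can be written down as a small lookup table. This is only a sketch in Python; the letters and names are exactly those listed above.

```python
# Geoscheme region codes, A to Z, as listed above.
GEOSCHEME_REGIONS = {
    "A": "Southern Africa", "B": "Eastern Africa", "C": "Middle Africa",
    "D": "Western Africa", "E": "Northern Africa",
    "F": "Southern Europe", "G": "Western Europe", "H": "Northern Europe",
    "I": "Eastern Europe + North Asia",
    "J": "Western Asia", "K": "Central Asia", "L": "Southern Asia",
    "M": "Eastern Asia", "N": "South-Eastern Asia",
    "O": "Australia and New Zealand", "P": "Melanesia",
    "Q": "Micronesia", "R": "Polynesia",
    "S": "Northern America", "T": "Caribbean",
    "U": "Central America", "V": "South America",
    "W": "Antarctica", "X": "Atlantic Ocean",
    "Y": "Pacific Ocean", "Z": "Indian Ocean",
}

print(GEOSCHEME_REGIONS["H"])   # Northern Europe
```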
My current work with Geoscheme is the collection of outline maps and flags for most countries, often using Fortnight Insider as a source of black-on-white maps. It publishes answers to the Worldle game, which I have lost interest in playing, in the form of white-on-black maps.
ISO 3166-1 alpha-3 codes are currently used in Geoscheme as a primary key for countries. These three-letter country codes are defined in the ISO 3166 standard, published by the International Organization for Standardization (ISO), to represent countries, dependent territories, and special areas of geographical interest. There is also a two-letter coding system, referred to as ISO 3166-1 alpha-2 codes. The three-letter codes give a better visual association between the codes and the country names than the two-letter codes. There is also a purely numeric code that offers no visual association. ISO 3166 became a standard in 1974. It is updated at irregular intervals. Some of the codes used are: CAN (Canada); MUS (Mauritius); NOR (Norway); TWN (Taiwan); UKR (Ukraine); URY (Uruguay); USA (United States of America).
Since ISO does not allow duplicate 3166 codes to be used, there are no issues using them as primary keys.
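A natural key like the alpha-3 code can be declared directly as a primary key, and the database will then refuse duplicates. A minimal sketch, again with SQLite via Python (in PostgreSQL the column would typically be CHAR(3)); the table layout is my own illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# The ISO 3166-1 alpha-3 code is the primary key.
cur.execute("""
    CREATE TABLE country (
        alpha_3 TEXT PRIMARY KEY CHECK (length(alpha_3) = 3),
        name    TEXT NOT NULL
    )
""")
cur.executemany(
    "INSERT INTO country VALUES (?, ?)",
    [("CAN", "Canada"), ("MUS", "Mauritius"), ("NOR", "Norway")],
)

# A duplicate key is rejected, which is exactly what a primary key guarantees.
try:
    cur.execute("INSERT INTO country VALUES (?, ?)", ("NOR", "Norway, again"))
except sqlite3.IntegrityError as error:
    print(error)   # UNIQUE constraint failed: country.alpha_3

con.close()
```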
As noted, it is possible to use codes to define areas that are smaller than a unit with a 3-letter code. ISO 3166-2 defines codes for identifying the principal subdivisions of a country. These combine the two-letter country code with a subdivision code of up to three characters. Thus, the province of British Columbia in Canada is CA-BC; the Moka district in Mauritius is MU-MO; Trøndelag county in Norway is NO-50, the result of an amalgamation of North (NO-17) and South (NO-16) Trøndelag on 2018-01-01; the state of Michigan in the United States is US-MI.
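An ISO 3166-2 code bundles the alpha-2 country code and the subdivision part together, so it is easy to take apart when needed. A small sketch, using only the examples given above:

```python
# ISO 3166-2 codes from the examples above.
subdivisions = {
    "CA-BC": "British Columbia",
    "MU-MO": "Moka",
    "NO-50": "Trøndelag",
    "US-MI": "Michigan",
}

for code, name in subdivisions.items():
    country, subdivision = code.split("-", 1)   # alpha-2 country + subdivision part
    print(f"{name}: country {country}, subdivision {subdivision}")
```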
This approach of coding subdivisions can also work at lower levels. Inderøy municipality has its own municipality number. These representations are not written in stone. Municipality numbers were first introduced with the Norwegian census of 1946. Even municipalities that had dissolved before then were given municipality numbers, so that they could be used for statistical purposes. Municipality numbers use four digits, with the first two being the county number.
Inderøy municipality was officially founded in 1837. The municipalities of Hustad and Røra were established on 1907-01-01, when the old municipality of Inderøy was divided into three: Røra (population: 866) with municipality number 1730, in the southeast, Hustad (population: 732) with municipality number 1728, in the north, and Inderøy (population: 2 976) with municipality number 1729, in the west. In 1912, Hustad changed its name to Sandvollan, but retained municipality number 1728. During the 1960s, there were many municipal mergers across Norway. On 1962-01-01, the three neighbouring municipalities of Røra (population: 1 003), Sandvollan (population: 750), and Inderøy (population: 3 194) were merged to form a new, larger municipality of Inderøy.
Mosvik and Verran formed a joint municipality in 1867 that lasted until 1901, when Verran (population: 1 456) became its own municipality. Mosvik (population: 969) retained the old municipality number, 1723. Adding to the confusion, in 1968 the Framverran area on the south side of the Verrasundet strait (population: 395) was transferred from Verran municipality to Mosvik municipality. When Mosvik (population: 811) joined Inderøy in 2012, this newest iteration of Inderøy was assigned municipality number 1756. This lasted until 2018, when it became municipality number 5053.
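Because the first two digits of a municipality number are the county number, Inderøy’s renumbering can be read directly from the codes. A tiny sketch, using only the numbers mentioned above:

```python
def county_number(municipality_number: int) -> int:
    """Return the county part (first two digits) of a four-digit municipality number."""
    return municipality_number // 100

# Inderøy before and after the 2018 county merger, as described above.
print(county_number(1756))   # 17 -> the old Nord-Trøndelag county
print(county_number(5053))   # 50 -> the merged Trøndelag county
```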
These amalgamations, splits and transfers are mentioned in detail, because this is the reality of geography in the world. Situations change, and people interested in geographic realities have to be aware of the changes and their consequences. One cannot assume that boundaries are fixed.
Since primary keys are generally confined to database operations, there is no problem using artificial constructs as keys. One example is combining a 3-letter country code with a 2-letter subdivision code, even if this is not an acceptable international standard.
Geographical information about a country/ sub-division can contain a variety of data that has to be formatted correctly. A jurisdiction name, or the name of its capital, is generally a sequence of letters. Its population and its area in square kilometres are often integers. Typically, the information assembled about a country occupies a single row in a table, with every column formatted to accommodate the type of data collected.
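Putting the last two paragraphs together: a sketch of a jurisdiction table with an artificial composite primary key, made from the 3-letter country code plus a subdivision code, and with each column typed to suit its data. The layout, and the approximate figures for British Columbia, are my own illustration; PostgreSQL would use CHAR(3), TEXT, BIGINT and NUMERIC in roughly the same places.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.execute("""
    CREATE TABLE jurisdiction (
        country     TEXT NOT NULL,    -- ISO 3166-1 alpha-3, e.g. 'CAN'
        subdivision TEXT NOT NULL,    -- subdivision part,   e.g. 'BC'
        name        TEXT NOT NULL,    -- a sequence of letters
        capital     TEXT,
        population  INTEGER,
        area_km2    REAL,
        PRIMARY KEY (country, subdivision)   -- an artificial, composite key
    )
""")
cur.execute(
    "INSERT INTO jurisdiction VALUES (?, ?, ?, ?, ?, ?)",
    ("CAN", "BC", "British Columbia", "Victoria", 5_000_000, 944_735.0),
)
con.close()
```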
Some people ask, why not just use longitude and latitude as a primary key? In such a system, the equator and the prime meridian divide the world into four Eurocentric mathematical quadrants, so that: lines of latitude north of the equator are positive (+), from 0 at the equator to 90° at the north pole, while those south of the equator are negative (-), from 0 at the equator to 90° at the south pole; lines of longitude east of the prime meridian are positive (+), from 0 to 180° in the middle of the Pacific ocean, while those west of it are negative (-), from 0 to 180° at that same position in the middle of the Pacific ocean. One of the major problems is that a geographical jurisdiction occupies an area, not a point. Point data is therefore both uninteresting and difficult to specify.
Another approach is to codify a small area. Because of radio interference issues, amateur radio operators are less interested in precision than in a short code that gives an approximate position that is gudenuf. John Morris G4ANB originally devised such a system, and it was adopted at a meeting of the International Amateur Radio Union (IARU) Very High Frequency (VHF) Working Group in Maidenhead, England, in 1980. The Maidenhead locator has an interesting historical development. A sub-square is described using two letters, then two digits, ending with two more letters. Two points within the same Maidenhead sub-square are always less than 10.4 km (6.5 mi) apart, which means a Maidenhead locator can give adequate precision from only six easily transmissible characters. There is no guarantee that a Maidenhead sub-square will be located in a single country. EN82lh is such an example: the north of this map is in Detroit, Michigan, USA, while the south of the map is in Windsor, Ontario, Canada.
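The encoding itself can be sketched in a few lines. The version below is my own reading of the scheme: two letters for the field (20° of longitude by 10° of latitude), two digits for the square (2° by 1°), and two letters for the sub-square (one twenty-fourth of a square in each direction). It reproduces the EN82lh example mentioned above.

```python
def maidenhead(lat: float, lon: float) -> str:
    """Six-character Maidenhead locator for a latitude/longitude pair (degrees)."""
    lon += 180.0   # shift so both values are positive
    lat += 90.0
    field = chr(ord("A") + int(lon // 20)) + chr(ord("A") + int(lat // 10))
    square = str(int(lon % 20 // 2)) + str(int(lat % 10 // 1))
    sub = (chr(ord("a") + int(lon % 2 * 12))     # 2° of longitude / 24 sub-squares
           + chr(ord("a") + int(lat % 1 * 24)))  # 1° of latitude  / 24 sub-squares
    return field + square + sub

# Central Detroit, on the EN82lh map described above.
print(maidenhead(42.33, -83.05))   # EN82lh
```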
Another approach is to use what3words, which has given every 3 m square (9 m²) in the world a unique 3 word address. The words are randomly assigned, but will always remain the same.
Cliff Cottage is located at 63° 50′ 31.596″ N and 11° 5′ 26.178″ E, which converts to 63.8421098 N and 11.0906046 E in decimal format. It occupies Maidenhead sub-square JP53nu. Its what3words addresses are casual.year.messaging (in the middle of the living room), conqueror.lawn.consented (in the middle of the kitchen), popular.feuds.positives (in Trish’s work room) and hides.lake.proclaims (in my work area). The multiplicity of codes for a single dwelling creates its own problems.
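Converting from degrees, minutes and seconds to decimal degrees is simple arithmetic: degrees + minutes/60 + seconds/3600, with south and west taken as negative. A quick check of the Cliff Cottage figures above:

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float, negative: bool = False) -> float:
    """Convert degrees/minutes/seconds to decimal degrees; negative for S or W."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if negative else value

print(round(dms_to_decimal(63, 50, 31.596), 6))   # 63.84211  (63.8421098 N, within rounding)
print(round(dms_to_decimal(11, 5, 26.178), 6))    # 11.090605 (11.0906046 E, within rounding)
```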
While a well designed database-engine can ease the workload of creating data-structures and algorithms, and running a database, database administrators study the types of data that are needed. Some of the most difficult decisions involve finding ways to structure the database content so that a collection of data values, relationships between them, and operations/ manipulations/ functions that can be applied to them, work for the benefit of users. Once that is done, users can concentrate their time on adding/ editing/ deleting data that can go inside a database, and transforming data into valuable information.
In a recent weblog post, Classification, I wrote about Geoscheme, my approach to dividing up the geography of the world. Eleven days before this, on 2022-09-16, Mahsa Amini (2000 – 2022) died in a hospital in Tehran, Iran, with a broken skull, after being arrested by the Guidance Patrol = the Iranian government’s religious morality police, for not wearing a hijab. A photograph of Amini illustrates this weblog post.
Since then, Iranian women and men, girls and boys have protested daily. They have been joined by others throughout the world. Writer, comedian, and former president of Humanist UK, Shaparak Khorsandi (1971 – ), who fled Iran to Britain with her family following the 1979 revolution, wrote: The Iranian regime kills women for trying to live freely. This is not just Iran’s problem, it is the world’s problem. Do not look away. This denial of basic human rights is an affront to human dignity. Mahsa Amini cannot speak up any more. The world should act in solidarity and amplify her voice and the voices of all Iranian women who dare to speak up for choice and democracy.
On 2022-11-30, I decided that I could amplify Amini’s voice by using some of my working hours on a personal Women, Life, Freedom project to: 1) transform Geoscheme into a database, and 2) use that database as an example to teach the basics of database development. The example will use an open-source database, and the teaching materials will be freely available under a copy-left licence. A third step has yet to be worked out, which involves the translation of the materials into Persian, with the help of a local Iranian refugee, preferably one with little or no understanding of the workings of databases.
Later, while I was wondering what else could be built into a database as dinner approached, my housekeeper, Trish, was telling me that she was re-organizing her recipes. It then struck me, that many Iranian women probably knew a lot about cooking, and that combining this existing knowledge with databases, could help improve an understanding of them.
Since food is generally made from ingredients, such a database could also be expanded to include inventories for raw materials, goods in process, and finished products. It also could allow some accounting basics to be explored. In other words, this example offered greater scope.
Another point that could be made is that there is an opportunity for small-scale software-as-a-service (sssaas). No one says a cloud has to be huge and multi-national. In a previous post, titled Clouds & Puddles, I used the Xample family with mother Ada, father Bob, cat Cat and daughter Deb, as an example to explain cloud-based computing. I am sure more Persian sounding names could be found.
Geoscheme data will be discussed in another post, to be published 2022-12-11. The remainder of this post will discuss database choices. It provides a short and incomplete history of the evolution of relational databases.
Rather than using the overworked term, cloud, to describe sssaas, I propose to call it Mist computing. This topic will be explored in yet another post to be published 2022-12-18.
My own interest in databases has nothing to do with either Geoscheme or cooking, but digital asset management (DAM), a system to store and access different types of content, such as texts, photos, videos, audio (including music), so that they can be edited and otherwise combined to produce new content. While such a framework is most often called a DAM, at other times it is referred to as a content management system.
Data structuring
In the beginning there was a bit, the most basic unit of information, expressing either 0 or 1. It first appeared in Claude Shannon’s (1916 – 2001) paper A Mathematical Theory of Communication (1948), but he attributed its origin to John Tukey (1915 – 2000), who in a Bell Labs memo of 1947-01-09 contracted binary information digit to bit. Eight bits form a byte, which can take one of 2⁸ = 256 values. The term byte was coined by Werner Buchholz (1922 – 2019) in 1956-06.
The total amount of data created, captured, copied and consumed globally was about 64 zettabytes in 2020. By 2025, this could increase to 180 zettabytes. 1 zettabyte = 10²¹ bytes = 1 000 000 000 000 000 000 000 bytes = 1 sextillion bytes.
Data storage has always been problematic. In 1951, magnetic tape came into use for data storage; magnetic disk storage followed in 1956. To begin with, there were flat files, often tied to particular programming languages, with data stored within the program. Then, in the 1960s, hierarchical and network database models appeared and flourished. These early database implementations were imperfect. Some would call them messy, overly large and difficult to use.
Structuring data begins with Edgar Frank “Ted” Codd (1923 – 2003) and his A Relational Model of Data for Large Shared Data Banks (1970). Codd’s model structures data in tables, with columns and rows, where one column (or group of columns) holds unique keys. Early database systems were unable to adopt this model, as computer hardware only became powerful enough to deploy it in the mid-1980s. Within a few years (the early 1990s), relational databases dominated, and they continue to dominate.
Codd worked for IBM, but it was a mismatch. IBM failed to understand the importance of Codd’s work, and the need for data storage to have a theoretical basis, which ultimately led to relational databases. Codd’s twelve rules (numbered 0 to 12, making thirteen altogether) are important for anyone wanting to understand databases. The most important concept is that relational databases store information without unnecessary redundancy.
Initially, tables formed a basic unit for structuring data. These need storage space for data, and programming capabilities to create (insert), modify (update) and remove (delete) content.
Data is useless unless it can be retrieved = accessed in a form that is directly usable, or that can be further processed by other applications that make it usable. The retrieved data may be made available in a form basically the same as it is stored in the database, or in a new form obtained by altering or combining existing data from the database.
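Creating, modifying, removing and retrieving map directly onto SQL’s INSERT, UPDATE, DELETE and SELECT statements. A minimal sketch, once more in SQLite through Python’s standard library, with a throw-away table of my own invention:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.execute("CREATE TABLE note (note_id INTEGER PRIMARY KEY, body TEXT NOT NULL)")

cur.execute("INSERT INTO note (body) VALUES (?)", ("first draft",))              # create
cur.execute("UPDATE note SET body = ? WHERE note_id = ?", ("second draft", 1))   # modify
print(cur.execute("SELECT body FROM note WHERE note_id = 1").fetchone())         # retrieve
cur.execute("DELETE FROM note WHERE note_id = ?", (1,))                          # remove

con.close()
```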
Administrative tasks include registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
The Alpha language was defined in Codd’s A Data Base Sublanguage Founded on the Relational Calculus (1971). It was never implemented, but influenced the design of QUEL, which was implemented by Michael Stonebraker (1943 – ) in 1976. This was part of the University of California, Berkeley project Ingres (Interactive Graphics and Retrieval System), used to demonstrate a practical and efficient implementation of the relational model. QUEL was supplanted by Structured Query Language (SQL, pronounced sequel by some), based on Codd’s relational algebra defined in Relational Completeness of Data Base Sublanguages (1970).
SQL was (and is) available in several different flavours. It can be housed in the cloud (someone else’s server), one’s own server, a desktop machine, or a laptop. Handheld devices, such as smartphones, are less suitable.
SQL was implemented by Donald Chamberlin (1944 – ) and Raymond Boyce (1946 – 1974), based on an earlier language SQUARE (Specifying Queries As Relational Expressions).
By 1980, relational databases had become mature enough to enter the world of business. A number of startups, such as Britton Lee, Relational Technology and Sybase, were founded by people actively involved in the Ingres project at Berkeley. These databases were often implemented on cheap mini-computers, such as VAX machines made by Digital Equipment, rather than on larger, more expensive mainframes, such as System/ 370 machines made by IBM.
Postgres was a successor project to Ingres at Berkeley. Its goals were to make evolutionary improvements, including support for more complex data types, and improved performance. Most people exposed to Postgres praised its extensible nature, allowing new features to be added as required. This allowed databases to be used in new areas, such as support for geographical data in geographical information systems (GIS) using a geolocation engine, and/ or time-series data. Currently Timescale DB, launched in 2018 as an open-source product, offers the best possibilities for extending PostgreSQL in this area.
In the 1990s, the project drifted closer to SQL, so that by 1996 it was renamed PostgreSQL to reflect this support for SQL. There are many database practitioners who regard PostgreSQL as the standard database system to use, unless something dictates otherwise.
For me, no discussion of databases is complete without a mention of the closed-source Microsoft Office and its Access relational database system. It was promoted as a database for everyone, providing a graphical user interface and the Access Connectivity Engine (ACE) that I originally knew as Joint Engine Technology (JET). The Norwegian Department of Education, in its wisdom, decided that computer science at the senior secondary school level would be taught with the help of this software. This approach to teaching was a mistake.
Fortunately, not all Norwegian school programs were using Windows software. The Media and Communications program, that I transitioned to, used Apple equipment, notably iMac G4 machines with flatscreens, and software that included ProTools, to create a digital audio workstation, along with Adobe Creative Suite and the Claris FileMaker database system.
FileMaker is superior, and Access inferior, everywhere one looks. FileMaker has more extensive capabilities but, more importantly, it just works. My experience with Access was that it was practically impossible to make coding changes with it. It was more time-efficient to scrap an existing database and write a new one.
Thus, when it comes to other databases designed for everybody, I take this experience with me. Base, the LibreOffice/ OpenOffice equivalent of Access, performs equally badly. One reviewer noted: It [Base] doesn’t support Insert, Delete, and Update statements [all fundamental to any database] through its query designer, and the SQL tool [is not standard, because it] doesn’t parse [= analyse symbols that are written in] ANSI [American National Standards Institute] compliant SQL. There are no easy solutions.
The next database problem child is MySQL, originally developed by MySQL AB, a Swedish company founded in 1995 by Michael Widenius (1962 – ), who programmed the database system and named products after his children: My, Max and Maria; David Axmark (1962 – ); and Allan Larsson (? – ). It offered dual-licence database software. A GNU General Public Licence (GPL) version was offered free of charge, but the database system was also sold under other licences. This closed-source version included additional components that improved the operation of the database. In 2008, MySQL AB was acquired by Sun Microsystems. In 2010, Sun Microsystems was acquired by Oracle Corporation.
MariaDB is a forked = branched version of MySQL, also developed by the same Michael Widenius, starting in 2009. MariaDB maintains high compatibility with MySQL, allowing it to function as a drop-in replacement. However, its new features have diverged, and MariaDB includes new storage engines. While MariaDB was originally entirely open source and non-commercial, a merger with SkySQL in 2013 resulted in a more commercial profile, notably the emergence of software-as-a-service (SAAS) activities on Google Cloud Platform.
Avoiding redundancy is not just a matter of saving space. It is a means of avoiding data conflicts. If, for example, a person’s address is only saved in one place, and that address changes, the change only has to be recorded once. It should be noted that redundancy has nothing to do with backup which, at a minimum, should follow the 3-2-1 rule: 3) Keep three copies of data, one primary and two backups. 2) Save the copies on two different types of media. 1) Keep at least one copy offsite.
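A sketch of what saving something in only one place looks like in practice: the address lives in its own table, and people merely refer to it, so a move is recorded with a single update. The tables and names are my own illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE address (
        address_id INTEGER PRIMARY KEY,
        street     TEXT NOT NULL,
        city       TEXT NOT NULL
    );
    CREATE TABLE person (
        person_id  INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        address_id INTEGER REFERENCES address (address_id)
    );
    INSERT INTO address VALUES (1, '1 Example Way', 'Exampleville');
    INSERT INTO person  VALUES (1, 'Ada Xample', 1), (2, 'Bob Xample', 1);
""")

# Ada and Bob share one address row; moving house is a single change, recorded once.
con.execute("UPDATE address SET street = '2 Sample Street' WHERE address_id = 1")
con.close()
```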
Relational databases are judged against the ACID properties: atomicity, consistency, isolation and durability, which together guarantee data validity despite errors, power outages, and more. These guarantees operate at the transaction level. Other types of databases find them impractical to implement, and accept data loss to a varying degree.
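Atomicity is the easiest of the four properties to demonstrate: either every statement in a transaction is applied, or none is. A sketch, again with SQLite, where a transfer that would overdraw an account is rolled back in full.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER NOT NULL CHECK (balance >= 0))")
con.execute("INSERT INTO account VALUES ('Ada', 100), ('Bob', 0)")
con.commit()

try:
    with con:   # one transaction: committed on success, rolled back on error
        con.execute("UPDATE account SET balance = balance + 150 WHERE name = 'Bob'")  # applied...
        con.execute("UPDATE account SET balance = balance - 150 WHERE name = 'Ada'")  # ...but this overdraws Ada
except sqlite3.IntegrityError:
    pass   # the whole transfer was undone, not just half of it

print(con.execute("SELECT name, balance FROM account ORDER BY name").fetchall())
# [('Ada', 100), ('Bob', 0)]
```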
DB-Engines ranks the most popular databases. In 2022-12, the top five ranked databases were, in order: Oracle, MySQL, Microsoft SQL Server, PostgreSQL and MongoDB. Microsoft Access is #9, MariaDB is #13, FileMaker is #22. Base uses the FirebirdSQL DB-engine, which is ranked #32.
So far, one might get the impression that all modern databases are relational, and in some way related to SQL. This is not the case, as exemplified by MongoDB in the above list. It is a document database, often described as NoSQL. There is a large and increasing number of NoSQL databases, a 21st-century term variously interpreted as not SQL, not only SQL, and not a relational database.
Document databases typically store archived records, then use their database engines to extract metadata. It is a more complex arrangement, and is not the place to begin when teaching database techniques.
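For contrast, a rough sketch of the shape of a document in such a database: a self-describing record, typically JSON, from which the database engine indexes selected fields. This is not tied to any particular product.

```python
import json

# One document describing a country, roughly as a document store might hold it.
document = {
    "_id": "NOR",
    "name": "Norway",
    "subdivisions": [
        {"code": "NO-50", "name": "Trøndelag"},
    ],
}
print(json.dumps(document, indent=2, ensure_ascii=False))
```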
It is my intention to encourage everyone to use open-source databases, where these are appropriate. This includes, especially, oppressed women wanting software solutions in a country experiencing sanctions, such as Iran. Even profitable corporations, such as Google, have found it appropriate to develop and support open-source software.
The conclusion of this post is that I will begin to use PostgreSQL to implement a Geoscheme database, with the aim of making it an example to be used for teaching basic database management techniques. After that, everything is open.