On 2021-07-07, Robert N. Charette published an article in IEEE Spectrum, How Software Is Eating the Car, with the subtitle: The trend toward self-driving and electric vehicles will add hundreds of millions of lines of code to cars. Can the auto industry cope?
As usual, an article on Slashdot (/.) is my main source of biased opinions about a serious technological issue, with one typical comment given a score of 4: interesting. It read: “If you get something pre-1978 then the most sophisticated electronics in the vehicle will probably be the radio kit.” This was followed by numerous comments about 1976 (and later) Chrysler Cordobas. This type of reasoning reaches its zenith with, “What was the last car without this nonsense? Makes me want to buy a classic car or motorcycle, just for the simplicity.”
Yes, for a long time the trend has been towards an increasing number of ECUs = electronic control units, based on the design philosophy of: “If you want a new feature, you buy a box from a Tier 1 [a top-level component supplier, such as Bosch] that provides the feature, and wire it in. As a general rule, automakers love outsourcing work; for most of them, their dream scenario is that everyone else does all the work for them and they just slap a badge on it and take a cut.”
Then Rei adds a long comment, scored 5: informative: “This article actually has it backwards. The first company to break with this philosophy was Tesla, which has from the beginning had a strong philosophy of in-house software design, and built basically a ‘car OS’ that offloads most vehicle software functionality into a handful of computers (with redundancy on safety-critical functionality). … Eventually everyone is going to have to either make their own ‘car OS’ stack or lease one from someone else. The benefits are just too significant[:] Lower hardware costs, lower assembly costs, lower power consumption, simpler cheaper lighter wiring harness, faster iteration time on new functionality, closer integration between different subsystems, you name it. This trend throws into reverse the notion of ever-increasing numbers of ECUs (which quite simply was an unsustainable trend).”
Who could possibly disagree?
Part 2
What is the minimal vehicle a person needs? Of course, there will be as many answers as there are people, and it will all be dependent on what they are doing. There are a lot of vehicles available, but I will not refer to them as choices. Some places lack trams or other forms of public transit. They may exist in other places, but run at inappropriate frequencies. Some communities lack bike lanes, forcing cyclists to compete for space with cars. Some streets are perpetually gridlocked.
Some people need to work outside of their residences! Does one have to take children to kindergartens or schools? What distance does one have to travel to meet basic health and nutritional needs? Can this be done as part of a commute, or is a separate trip necessary? What about specialty shops? What is the distance to the nearest bus station/ train station/ airport/ international airport? Is there a need for a social life? Is one dependent on driving a car? Could a bicycle do for some items? Are trains or buses an option? So many questions, so few obvious answers.
Perhaps my own situation could be used as an example. Compared to most people, my life is simple: no job is calling me, and I am no longer responsible for looking after young children. Yesterday, I used a vehicle with a mass of about 1.5 megagrams (where 1 Mg = 1 000 kg) to drive 40 km. Admittedly, there are vehicles that weigh less than a car. A bicycle is probably the most efficient device for conveying people, and it can have a mass of from about 5 to 20 kg. Yet, I would not feel safe riding one on the roads of rural Norway. There are no buses, but if I plan ahead and contact the appropriate office a day in advance, I might be able to use public transit, essentially a taxi charging bus rates, as long as I am willing to wait up to several hours for a return trip.
The most basic foods, as well as building supplies, can be purchased with a 14 km return trip across Skarnsund bridge in Mosvik, where there is even a coffee bar, with better than acceptable lattes. Basic health care needs (doctor, dentist, pharmacy, optometrist) and a larger selection of food and basic necessities are met by a 26 km return trip in the opposite direction, into Straumen. More specialty shops are available in Steinkjer, a 70 km round trip. All of this involves driving. However, there is also a train station at Røra, a 40 km round trip by car, that allows one to connect with an international airport (TRD) and the fourth largest city in Norway, Trondheim, about 120 km away (240 km round trip), with an even larger selection of shops and activities.
Part 3
I agree with Rei that more software (and less hardware) is needed in vehicles. Yet, I read this week that General Motors is charging purchasers of many GMC, Buick, and Cadillac vehicles that are shipped with OnStar and Connected Services Premium Plan by default, $1 500 for the three-year plan that was once optional, but is now required. Other companies are doing the same sort of thing. It is estimated that this revenue stream could give GM an additional $20 to 25 billion per year by 2030. BMW has made similar claims, projecting additional revenue of about $5 billion per year by 2030. I do not want to ensure that a wealthy elite continues to take more of an income pie that is already unfairly divided.
At issue is the right of consumers to direct access to vehicle data, which historically has been obtained from an on-board diagnostic (OBD-II) port (North America) or European on-board diagnostic (EOBD) port, since 1996 and 2001, respectively. These allowed vehicle owners and technicians access to vehicle data to assist with maintenance and repair. This situation is threatened by vehicle manufacturers, who want to use telematics = the wireless and direct transmission of data, to restrict vehicle data to themselves. In 2021, 50% of new cars have these connected capabilities, but no country has more than 20% of its vehicle fleet equipped; the USA has the most. By 2030, it is estimated that about 95% of new vehicles sold globally will have this connectivity, according to a study by McKinsey.
While this data could provide economic and other benefits to car owners, vehicle manufacturers want to act as gatekeepers, determining who can access it, and at what cost. This is to the detriment of consumers, and could result in: increased consumer costs; restrictions on consumer choices for maintenance and repair; safety and security issues involving the use of non-standard data types and formats; and privacy concerns. Automotive mechanics and other aftermarket providers can also be affected.
This has resulted in a consumer backlash, which I associate with the right-to-repair movement. There are already open-source groups working to ensure that consumers retain their rights. In addition, Automotive Grade Linux (AGL) is an open source project hosted by The Linux Foundation that is building an open operating system and framework for automotive applications. It was started in 2012, and currently has 146 corporate members.
I imagine that automotive manufacturers will try to add just enough proprietary software to their vehicles, to profit maximally from their investment. On the other hand, I see that there will be an incentive for ordinary consumers to demand right-to-repair legislation, and for guerilla activists to produce generic software substitutes where this is useful.
In Europe, repair is increasingly regarded as an essential consumer right and an environmental necessity. The main objective of the European Green Deal is to be climate neutral by 2050. The European Commission’s Circular Economy Action Plan (CEAP), published 2020-03, details how this goal is to be reached. To reduce waste, products have to be designed to last. If they don’t last, they shouldn’t be sold. To encourage the development of longer-lasting products, there could be lifespan labels, service manuals, and an EU-wide repairability index. This would encourage the market to compete on repairability and durability.
In 2020-11, the European Parliament voted overwhelmingly in favour of a right to repair, and insisted that the European Commission, the more conservative administrative arm, implement it. The resolution also included repairability labelling.
In 2020-11, voters in Massachusetts approved Question 1, involving a right-to-repair law, with almost 75 percent in favour. The law requires automakers to provide a way for car owners and their chosen repair shops to access vehicle data, including data sent wirelessly to the manufacturer. The intent of this law is to prevent manufacturers and dealerships from having exclusive access to such data.
Massachusetts is the state where the first automotive right-to-repair law was passed, in 2012. That law made car makers open up the data inside the car. Rather than create a state-by-state solution, automakers reached a nationwide agreement with car parts makers/ suppliers and repair shops on how to share the data. This agreement opened up the OBD-II port. With this new and improved right-to-repair law, similar transformative actions are required.
There are an increasing number of underpaid programmers and other software and hardware specialists, unable to fully live the American (and Scandinavian) dream. Many of them would undoubtedly be willing to work as guerilla technologists to develop the tools needed for retrofitting vehicles with more consumer-friendly components, especially after warranties have ended. There are an increasing number of inexpensive microprocessors and systems-on-a-chip that can be used for these purposes.
Part 4
To put electric vehicles in perspective, one needs to return to 1965-11-05, when President Lyndon Johnson was given a copy of Restoring the Quality of Our Environment, a report by the Environmental Pollution Panel, President’s Science Advisory Committee. On publication of this blog, people have had 20 735 days, or 56 years, 9 months and 8 days, to confront this challenge, but have failed miserably at this task.
One fundamental question is, where can younger people learn more about the construction of appropriate vehicles for the 21st century? Currently the most interesting project is Woodpecker, which describes itself as an: “Open source based Carbon negative Electric Vehicle Platform. Woodpecker is a game changing micromobility vehicle to decrease CO2. Electrical propulsion allows to use solar and renewable power. Production of Wooden frame even decreasing CO2 because it is encapsulated by [wood] while growing. Vehicle built on Circular Economy concept – most parts are recyclable.” It appears to have originated in Latvia, and includes partnerships with many higher-educational institutions in the country. One problem with Woodpecker is that, as an organization, it is too closely bound to commercial enterprises. For example, a good starting point for most open-source projects is to become acquainted with their documentation. In this case, people interested in downloading the technical drawings must have a Trimble account, in order to use SketchUp.
Notes:
1. This post follows up some aspects of Vehicle Devices, published 2020-11-03. The division between parts is not based on content, but time. Part 1 of this weblog post was originally written 2021-06-18 and saved at 10:49. It had been patiently waiting to be published. On 2022-08-12, outdated content was removed, and Part 2 was added, starting at 20:43. Part 3 was started on 2022-08-13 at about 07:40, while Part 4 was started on the same date at 08:48.
2. Trondheim claims to be the third largest city in Norway, but I would give that title to Stavanger. The challenge with Stavanger is that its metropolitan area is divided between multiple municipalities. Yes, I am aware that I have offended some of my Norwegian readers, because of their origins in Trøndelag. However, Stavanger is the only place in Norway where I have ever been able to find/ buy root beer! This is probably due to Americans working in the oil industry, and living in the Stavanger area.
Of course, I am hoping that readers will mistake Prolog for Prologue = an introduction to something. In addition, I have a further hope that the poster, displayed above, will induce a feeling of calmness in these same readers, so that they will be able to approach the real content of this weblog post with detachment, but not indifference. The main problem with the poster is that almost everything about it apart from its wording, especially its signal-red background, but also the large sans-serif white lettering and the British crown, reinforces a feeling of danger!
Wikipedia tells us that Keep Calm and Carry On was a propaganda poster produced by the British government in 1939 in preparation for World War II. The poster was intended to raise the morale of the British public. Although 2.45 million copies were printed, the poster was only rarely publicly displayed and was little known until a copy was rediscovered in 2000 at Barter Books, a bookshop in Alnwick, a market town in Northumberland, in north-east England.
Some topics, toothaches in particular, or dentistry more generally, do not induce calmness. Instead, they increase the flow of adrenaline, and other forms of psychomotor agitation, resulting in psychological and physical restlessness. Thus, before confessing what this topic is really about, I want to reassure readers that it is a topic that can be fun, if approached correctly. Initially, I had thought of dividing the topic into multiple parts and publishing them at the rate of one part a day, over more than a week. The parts are still subdivided, but each reader will have to determine her/ his/ its etc. own consumption rate.
One
I am used to dealing with actors, people pretending to be someone else. In the process, these people have helped me develop my own acting talents. Some of the actors I had to deal with had failed their auditions, often called court appearances or trials. One of the consequences of such a failure could be imprisonment at the Norwegian low security prison where I was assigned as their teacher.
Other actors were youth in the final years of their compulsory education, at senior secondary school. They had to attend school, but some of them were better than others at presenting themselves in a positive light. Not that everyone sought positivity. In a Media and Communication English class, I once asked the pupils to write about something they wanted to accomplish in the future, and why they wanted to do so. The reply that created the most work, not just for myself, but for the student, the school principal, the school psychologist and others, was an essay that detailed how this person wanted to become a mass murderer. Afterwards, he claimed that this was a work of fiction.
I have experienced a lot of acting performances by students. The most problematic actors are those who pretend they understand a topic, when they have absolutely no idea about it. The role of the teacher is to channel student activity so that the student finds a route that suits her/ his personality, and is effective at helping the student learn new sets of knowledge and develop new skills. This route-finding skill is the primary talent needed to teach.
Two
This weblog post’s topic is programming, in a specific language. While numbers vary with the situation, perhaps ten percent of actors will delight in learning the programming language they are confronted with. A similar number, give or take, will not master anything. Those remaining in the middle will accept programming languages as a necessary evil in this internet age. Stated another way, a small percentage will find their own route without assistance, another small percentage will never find a route, while most people in the middle will struggle to varying degrees, but ultimately find a route, hopefully one that suits their personality.
The main difficulty in terms of learning to program is that schools begin computer science studies assuming that students will want to learn the particular language being offered. Admittedly, some languages are fairly general, including some that are designed more for teaching/ learning than for practical applications. Pascal is probably the best example of such a language. However, my contention is that the first computing course a student takes should look at programming principles.
I was fortunate to read Bruce J. MacLennan’s Principles of Programming Languages: Design, Evaluation and Implementation (1983). A second edition was published in 1987, and a third in 1999. There is not much difference between the three editions, and the same languages are discussed in all three: pseudo-code interpreters, Fortran, Algol-60, Pascal, Ada, Lisp, Smalltalk and Prolog. All the editions of this book explain that computer languages can have different purposes, and ask readers to examine the purpose of each programming language. Not everyone should learn the same one. Before they decide to learn programming, people should know what they want to do with that language, after they have learned its basics. Much of the time, the answer is to learn a more appropriate language.
Three
The Prolog in the title of this post refers to the Prolog programming language. Fifty years ago, in 1972, Prolog was created by Alain Colmerauer (1941 – 2017), a French computer scientist, at Aix-Marseille University, based on the work of Robert Kowalski (1941 – ), an American-British computer scientist, at the time at the University of Edinburgh.
Prolog is a logic programming language associated with artificial intelligence and computational linguistics. That doesn’t say much. It might be more understandable to say that students typically learn Prolog by creating a program/ system that shows social relationships between people. Despite their reputation as rather awkward social creatures, even computer scientists have the capability of understanding some social markers: mother, father, daughter, son, at a minimum. Thus, even computer scientists can construct a system that will determine and then show relationships between any two people. The system can be constructed slowly, so that initially only, say, four relationships are allowed. Outside of those four choices, people will be labelled as having no relationship. However, in subsequent iterations, the number of relationships can be expanded, almost indefinitely.
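As a minimal sketch of such a system, with invented family members, the knowledge base could start with facts about parenthood and gender, plus one rule for each of the four relationships:

% Facts (invented people): parent(Parent, Child), plus gender.
parent(marie, anna).
parent(marie, per).
parent(olav, anna).
parent(olav, per).
female(marie).
female(anna).
male(olav).
male(per).

% Rules: the four basic relationships.
mother(M, C) :- parent(M, C), female(M).
father(F, C) :- parent(F, C), male(F).
daughter(D, P) :- parent(P, D), female(D).
son(S, P) :- parent(P, S), male(S).

Expanding the system in later iterations is then a matter of adding more facts and more rules, not rewriting what is already there.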
Prolog consists of three main components: 1) a knowledge base = a collection of facts and rules fully describing knowledge in the problem domain; 2) an inference engine, which chooses which facts and rules to apply when attempting to solve a user query; 3) a user interface, which takes in the user’s query in a readable form and passes it to the inference engine. Afterwards, it displays results to the user.
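In most Prolog implementations, the user interface is simply an interactive prompt. Assuming the sketch of facts and rules above has been loaded, a session might look something like this, with the inference engine searching the knowledge base to answer each query:

?- mother(marie, per).
true.

?- daughter(D, olav).
D = anna.

?- father(anna, per).
false.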
Four
Programming in Prolog, written by William F. Clocksin (1955 – ) & Christopher S. Mellish (1954 – ), is the most popular textbook about the language. Originally published in 1981, a revised (read: readable) second edition appeared in 1984. My copy has my name printed on the colophon page in capital letters in blue ink, considerably faded now, along with the date, 12 iii 1985.
It is not the only book about Prolog in my library. Among the thirteen others are: Dennis Merritt, Building Expert Systems in Prolog (1989); Kenneth Bowen, Prolog and Expert Systems (1991); Alain Colmerauer & Philippe Roussel, The Birth of Prolog (1992); Krzysztof R. Apt, From Logic Programming to Prolog (1997) and even an updated 5th edition of Clocksin and Mellish, subtitled Using the ISO Standard (2003). Part of the reason for this large number was my use of Prolog to teach expert systems.
Five
Expert systems are not particularly popular, now. In artificial intelligence, popularity contests are being won by machine learning tools. Yet, some people don’t have to be at either the height of fashion or the cutting edge of technological advances, and can appreciate older approaches.
Edward Feigenbaum (1936 – ) constructed some of the first expert systems. He established the Knowledge Systems Laboratory (KSL) at Stanford University. Long words are often strung together to describe his work. A favourite phrase is, “Knowledge representation for shareable engineering knowledge bases and systems.” This was often coded into the phrase expert system. He used it mainly in different fields of science, medicine and engineering. KSL was one of several related organizations at Stanford. Others were: Stanford Medical Informatics (SMI), the Stanford Artificial Intelligence Lab (SAIL), the Stanford Formal Reasoning Group (SFRG), the Stanford Logic Group, and the Stanford Center for Design Research (CDR). KSL ceased to exist in 2007.
The focus of Feigenbaum, and American institutions more generally, was on rules-based systems. Typically, these found their way into shells = computer programs that expose an underlying program’s (including an operating system’s) services to a human user. These shells were produced by for-profit corporations, and would sit on top of Lisp, one of the programming languages commented on in two chapters of MacLennan’s book, and used extensively for artificial intelligence applications. Feigenbaum and his colleagues worked with several of these expert systems, including: ACME = Automated Classification of Medical Entities, which automates underlying cause-of-death coding rules; Dendral = a study of scientific hypothesis formation generally, but resulting in an expert system to help organic chemists identify unknown organic molecules, by analyzing their mass spectra and combining this with an existing but growing chemical knowledge base; and Mycin = an early backward-chaining expert system that identified infection-related bacteria and recommended specific antibiotic treatments, with dosage proposals adjusted for the patient’s mass/ weight. He also worked with SUMEX = Stanford University Medical Experimental Computer. Feigenbaum was a co-founder of two shell-producing companies: IntelliCorp and Teknowledge. Shells are often used by experts lacking programming skills, but fully capable of constructing if-then rules.
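The flavour of such if-then rules can be sketched in a few lines of Prolog. This is not Mycin, just an illustration of backward chaining, with invented symptoms and an invented organism:

% Illustration only: invented symptoms, not medical advice.
symptom(patient1, fever).
symptom(patient1, stiff_neck).

suspected(Patient, some_bacterium) :-
    symptom(Patient, fever),
    symptom(Patient, stiff_neck).

recommend(Patient, some_antibiotic) :-
    suspected(Patient, some_bacterium).

The query ?- recommend(patient1, Treatment). succeeds with Treatment = some_antibiotic, because Prolog chains backwards from the goal, through the suspected/2 rule, to the recorded symptom facts.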
Six
Prolog is frequently contrasted with Lisp, and offers a different approach for developing expert systems. Some users are fond of saying that Prolog has a focus on first-order logic. First-order is most appropriately translated as low-level, or even simple. The most important difference between the two languages is that anyone with average intelligence should be able to understand and work with Prolog. Much of the work done with Lisp involves higher orders of logic, often requiring the insights of real logicians, with advanced mathematics in their backgrounds. An introductory logic course gives sufficient insight for anyone to work with Prolog.
Prolog is also claimed to be a more European approach. This probably has something to do with the way teaching is organized. In Norway, for example, a (Danish) royal decree from 1675, still valid today, required all university students to undertake an Examen philosophicum, devised by advisor Peder Griffenfeld (from griffin, the legendary creature, plus field, but originally Schumacher = shoemaker, 1635 – 1699). Under the Danish king, Christian V (1646 – 1699), he became the king’s foremost adviser and in reality Denmark’s (and Norway’s) actual ruler. In 1676 he fell into disfavour and was imprisoned. He was sentenced to death for treason, but the sentence was commuted to life imprisonment. He was a prisoner on Munkholmen, outside Trondheim, about 55 km directly south-east of Cliff Cottage, for 18 years (1680–1698), and was released after 22 years of captivity.
Until the end of the 1980s, this exam involved an obligatory course in logic, including mathematical logic, along with other subjects. This means that almost every university student (at that time), no matter what they studied, had the necessary prerequisites to work with Prolog.
Seven
Expert systems often involve heuristics, an approach to problem solving using methods that are not expected to be optimal, perfect, or rational, but good enough/ satisfactory for reaching an approximate, immediate or short-term goal. George Pólya (1887 – 1985), who worked at Stanford from 1940 to 1953, and beyond, took up this subject in How to Solve It (1945). He advised: 1) draw a picture, if one has difficulty understanding a problem; 2) work backwards, if one can’t find a solution, assuming there is one, and see what can be derived from it; 3) develop a concrete example, from an abstract problem; 4) solve a more general problem first – this involves the inventor’s paradox, where a more ambitious plan may have a greater chance of success.
One list of areas where expert systems can be used involves system control, in particular: 1) interpretation, making high-level conclusions/ descriptions based on raw data content; 2) prediction, proposing probable future consequences of given situations; 3) diagnosis, determining the origins/ consequences of events, especially in complex situations based on observable data/ symptoms; 4) design, configuring components to meet/ enhance performance goals, while meeting/ satisfying design constraints; 5) planning, sequencing actions to achieve a set of goals with start and run-time constraints; 6) monitoring, comparing observed with expected behaviour, and issuing warnings if excessive variations occur; 7) repair, prescribing and implementing remedies when expected values are exceeded.
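Monitoring (point 6) is perhaps the easiest of these to sketch in Prolog: compare an observed value with an expected one, and issue a warning when the deviation exceeds a tolerance. The sensor name and the numbers below are invented:

% Invented readings: expected value, observed value, and tolerance.
expected(boiler_temperature, 80).
observed(boiler_temperature, 97).
tolerance(boiler_temperature, 10).

warning(Sensor) :-
    expected(Sensor, E),
    observed(Sensor, O),
    tolerance(Sensor, T),
    abs(O - E) > T.

The query ?- warning(boiler_temperature). succeeds, since the 17 degree deviation is larger than the 10 degree tolerance.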
Sometimes one comes across Prolog tutorials that begin with subjective knowledge/ considerations. Music is a good example. Unfortunately, it is not always easy to remember if one has labelled something as thrash metal or punk, and this may have operational consequences. It is much easier to confirm that person X is person Y’s granddaughter, and that person Y is person X’s grandfather, especially if persons X and Y are members of your own family.
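Such objective relationships do not even need to be stored directly; continuing the earlier family sketch, they can be derived from the parenthood and gender facts with a few extra rules:

% Derived relationships, building on parent/2, male/1 and female/1.
grandparent(G, C) :- parent(G, P), parent(P, C).
grandfather(G, C) :- grandparent(G, C), male(G).
granddaughter(D, G) :- grandparent(G, D), female(D).

With one more generation of parent/2 facts added to the knowledge base, the query ?- granddaughter(X, Y). enumerates every granddaughter-grandparent pair it can prove.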
It is always hard to know which Prolog expert system implementation will impress readers most. Here are some choices: Bitcoinolog = configures bitcoin mining rigs for an optimal return on investment; CEED = Cloud-assisted Electronic Eye Doctor, for screening glaucoma (2019); Sudoku = solves sudoku problems; an unnamed system constructed by Singla to diagnose 32 different types of lung disease (2013), another for diabetes (2019); an unnamed system by Iqbal, Maniak, Doctor and Karyotis for automated fault detection and isolation in industrial processes (2019); an unnamed system by Eteng and Udese to diagnose Candidiasis (2020). These are just some of hundreds, if not thousands, many open source.
Eight
One of the challenges/ problems with expert systems is that the scope of their domain can be unknown. In other words, when a person starts using an implemented expert system, it can be unclear just how large or small the range of problems is that the system can handle successfully. There can also be challenges with system feedback. What looks like an answer may be a default, because the system has insufficient insights (read: rules) to process information. Expert systems do not rely on common sense, only on rules and logic. Systems are not always up to date, and do not learn from experience. This means that real living experts are needed to initiate and maintain systems. Frequently, an old system is an out-of-date system, one that may do more harm than good.
This raises a question of responsibility/ liability in case the advice provided by a system is wrong. Consider the following candidates: the user, the domain expert, the knowledge engineer, the programmer of the expert system or its shell, the company selling the software or providing it as an open-source product.
Infinity
Just before publication, I learned of the death of crime novelist Susie Steiner (1971 – 2022). I decided to mention her in this weblog post, when I read in her obituary that she had spotted a Keep Calm poster on the kitchen wall at a writing retreat in Devon. She was cheered by its message of stoicism and patience.
Speaking of kitchens, at one point my intention was to use Prolog to develop a nutritional expert system, one that would ensure a balanced diet over a week-long time frame, along with a varied menu for three meals a day. I still think that this would be a useful system. Unfortunately, I do not think that I am the right person to implement it, lacking both the stoicism and the patience to complete the undertaking.
Reflecting on Susie, I am certain that a Prolog system could be made to help writers construct their novels, especially crime fiction. A knowledge base could keep track of the facts, as well as red herrings and other fish introduced to confuse the reader, and prevent them from solving the crime. Conversely, a Prolog system could also be built that would help readers deconstruct these works, and help them solve the crime and find textual inconsistencies.
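As a hedged sketch of what such a writer’s (or reader’s) knowledge base might record, with invented characters and clues:

% Invented story elements, for illustration only.
clue(candlestick, library).
clue(muddy_boots, conservatory).
red_herring(muddy_boots).
opportunity(colonel, library).
opportunity(butler, kitchen).

% A suspect is plausible if a genuine clue places them at its location.
plausible_suspect(Suspect) :-
    clue(Clue, Location),
    \+ red_herring(Clue),
    opportunity(Suspect, Location).

Here ?- plausible_suspect(X). yields only X = colonel, since the muddy boots have already been flagged as a red herring.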
Confessions
Readers should be delighted to hear that while writing this post I used my original Clocksin and Mellish book on a daily basis! Yes, it held my laptop open at an angle of about 145°, about 10° further open than without it. When writing on other topics, I also use other books for the same purpose. Note to self: ensure that your next laptop opens at least 180 degrees!
The writer should be dismayed about the length of this post. Patricia reminds me, repeatedly, that shorter is better. She felt last week’s post on Transition One was a more appropriate length. Transition One was written in the course of an hour, with a couple of additional proof-reading sessions. Writing Prolog took more than a year, with multiple writing sessions, each adding several paragraphs.
Every time an image is displayed on a hand-held device (cellphone) or other variant of a computer, someone has decided its format. The people who make up the webpage, program or whatever else is being made, have procedures to help them decide what to use. Users have no choice, they simply experience the consequences of choices made by others. The speed at which an image decodes introduces a delay (sometimes called latency) that can be annoying.
QOI = the Quite OK Image format for fast, simple, lossless compression. Compared to PNG = Portable Network Graphics format, it provides 20 – 50 times faster encoding, and 3 – 4 times faster decoding. Lossless images retain their fidelity. The alternative, lossy images, gradually lose their quality each time an image is re-encoded. The simplicity of QOI is found both in its code, which uses about 300 lines of C, a common programming language, and in its file format specification, which occupies a single page in PDF = Portable Document Format, a file format developed by Adobe in 1992 to describe documents, including text and image formatting information.
Dominic Szablewski developed this file format. It is much better than quite OK because almost all other file formats in current use, including JPEG, MOV, MP4, MPEG and PNG, “burst with complexity at the seams.” He adds that they “scream design by consortium” and “require huge libraries, are compute hungry and difficult to work with.”
Szablewski proposed the idea on GitHub, and paid attention to the more than 500 comments generated.
QOI implementations are found for many different languages/ libraries, including C, C#, Elixir, Go, Haskell, Java, Pascal, Python, Rust, Swift, TypeScript and Zig, among others. There are native applications, meaning that they can be run without any external software layers, as well as plugins for Gimp, Paint.NET and XnView MP. Szablewski does not expect it to appear in web browsers anytime soon. It will probably end up in games and other applications where there are performance issues.
The QOI-Logo is released as public domain under the CC0 License and may be freely used.
Note: On 2021-01-02, the content of this post was changed to eliminate references to gaming. A separate post about rendering content for video games will be written and published, later in 2022.
Forth is not a mainstream programming language. Whenever it is compared to something, the most operative word is different. It is almost like assembly language, which is how a machine would interpret code if it used English, rather than 0s and 1s, to calculate and communicate. Some refer to Forth as a virtual machine, which is software pretending to be a physical machine. In part, this is because it is not just a programming language, but also an operating system. Despite this, Forth is simple. It can run on a few kilobytes (kB) of memory. When coded appropriately, it seems to be its own independent language, but with a lot of English-like words.
Forth was invented by Charles (Chuck) Havice Moore II (1938 – ) in 1970. It was operationalized by Elizabeth (Bess) Rather (1940 – ), who – with Moore – started Forth, Inc. in 1973. Rather refined and ported Forth to numerous platforms throughout the 1970s. She also chaired the ANSI Technical Committee that produced the ANSI Standard for Forth (1994).
Forth was made specifically for the real-time control of telescopes at the United States National Radio Astronomy Observatory and, later, at Kitt Peak National Observatory. A real-time response is one that guarantees that something will happen within a specified time period. In other words, it sets a deadline for something to happen, usually one that is relatively short. Thus, a real-time process is one that happens within defined time steps of some maximum duration.
Forth is the antithesis of Ada. Wikipedia defines Ada as “a structured, statically typed, imperative, and object-oriented high-level programming language, extended from Pascal and other languages.” In its purest form, Forth is none of these, with the exception of being imperative. Most computer languages are imperative. They use statements/ commands to change a program’s state. Ada originated in the 1970s because of US Department of Defense (DoD) concerns about the high number of programming languages being used in embedded computer systems. They wanted one language to do everything. Unfortunately, with Ada they created a monster that was far too large and complex, that was slow to compile and difficult to run. A compiler (used with a compiled language) requires code to be translated into machine language, before it can be run. In contrast, an interpreter (used with an interpreted language) directly executes instructions without requiring them to have been translated into a machine language, in advance.
Allegedly, a Forth program can be compiled, but not if it contains words that are only evaluated at runtime: DOES>, EVALUATE and INTERPRET are three such words. If even one word has to be interpreted, the entire Forth dictionary would have to be embedded inside the program. Thus, Forth should always be treated as an interpreted language.
Forth is an appealing language because of its one and only guiding principle, Keep it simple! Part of this simplicity involves how the language is used. Leo Brodie – a third main contributor to the language – explains, in Starting Forth, E2 (1987): The interpreter reads a line of input from the user input device, which is then parsed for a word using spaces as a delimiter. When the interpreter finds a word, it looks it up in the dictionary. If the word is found, the interpreter executes the code associated with the word, and then returns to parse the rest of the input stream. If the word isn’t found, the word is assumed to be a number and an attempt is made to convert it into a number and push it on the stack; if successful, the interpreter continues parsing the input stream. Otherwise, if both the lookup and the number conversion fail, the interpreter prints the word followed by an error message indicating that the word is not recognised, flushes the input stream, and waits for new user input. (p. 14) While the use of unusual words may make the above description seem complex, this is a much simpler approach than that used in most other computer languages. A graphic version is shown in the flowchart above. Parse is a word used extensively by people who construct compilers. It refers to the process of dividing a sentence (or in computing, a statement) into words/ grammatical parts and identifying the parts and their relationships to each other.
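Since Prolog is the other language discussed at length in this post, here is a hedged sketch, in Prolog rather than in Forth itself, of the decision flow Brodie describes: look the word up in the dictionary and execute it, otherwise try to convert it to a number and push it on the stack, otherwise print an error. The three dictionary entries are invented, and atom_number/2 is the conversion predicate provided by SWI-Prolog:

% A toy model of the Forth outer interpreter, with the stack as a list.
dict(dup,  [X|S], [X, X|S]).
dict(drop, [_|S], S).
dict('+',  [A, B|S], [C|S]) :- C is A + B.

interpret(Word, StackIn, StackOut) :-
    dict(Word, StackIn, StackOut), !.        % found in the dictionary: execute it
interpret(Word, StackIn, [N|StackIn]) :-
    atom_number(Word, N), !.                 % otherwise: try number conversion, push it
interpret(Word, Stack, Stack) :-
    format("~w ? not recognised~n", [Word]). % otherwise: report the error

With these clauses loaded, ?- interpret('+', [2, 3], S). leaves S = [5], ?- interpret('7', [], S). pushes the number 7, and an unknown word falls through to the error clause, mirroring the flowchart.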
One major problem with Forth is that its dictionary, more often referred to as a library in other languages, is not uniform. Some implementations come with an adequate dictionary, others less so. Some use words the same way, others give the same word different meanings. This means that Forth implementations can produce very different results, depending on dictionary content. This weakness is probably the main reason why Forth is not treated seriously, and has not been extensively used.
Forth is a stack machine, a computer where the primary interaction is moving short-lived temporary values to and from a storage location that follows the rule: last in, first out. A stack significantly reduces the complexity of a processor. Tasks are expressed as words. Simple tasks usually involve single words. More complex tasks connect together many smaller words that each accomplish a distinct sub-task. Thus, a large Forth program is almost like a sentence that involves a hierarchy of words, distinct modules that communicate using a stack. Data is only added to the top of the stack, and removed from the top of the stack. Each word is built and tested independently. Provided that words are chosen appropriately, a Forth program resembles an English-language description of the program’s purpose.
Forthwright
Forthright is an adjective, used to describe a plainspoken/ frank/ blunt person. A person who develops/ modifies/ corrects/ improves/ uses Forth programs is referred to as a forthwright, a noun. Both words are pronounced the same way. A wright is a person who makes or repairs something. The original ca. 700 AD Old English wryhta, referred to someone working with wood. Since then, the term has expanded to include many different occupations. Carpentier, now carpenter, was introduced into England only after the Norman conquest in 1066, effectively replacing wright to describe this role. In Scotland, wright is used much more extensively.
Raspberry Pi Pico
The traditional strength of Forth is its minimalist use of resources. This is more important than it may seem. Gordon Earle Moore (1929 – ) formulated an expectation in 1965, later termed Moore’s law by others, that computing capacity would double every year, some say every 18 months. This doubling cannot continue indefinitely. Many, including Moore, expect it to become invalid from about 2025, giving it a life span of 60 years. Even so, this means that even the most primitive of microprocessors made today has many magnitudes more capacity than anything made in, say, 1970. This is why many people are content to use computer languages that are less than optimal.
In contrast to Moore’s law, Niklaus Emil Wirth (1934 – ) formulated a very different expectation in his 1995 article A Plea for Lean Software, later termed Wirth’s law by others: that software is getting slower more rapidly than hardware is getting faster. Most computer scientists are no longer making software that optimizes/ minimizes resource use, because they know that ample resources are available.
The reason some few people continue to use Forth is because of their acute awareness of Wirth’s law, where they see the negative impact of software bloat, on a regular basis.
General Public Licence (GPL) and public-domain Forths exist for most modern operating systems including Windows, Linux, MacOS, Android and some virtual machines. Such implementations include gForth and bigForth. Dale Schumacher forked the Raspberry Pi/ARM port of JonesForth around 2014, and removed its dependency on Linux. It now runs bare-metal on a Raspberry Pi, booting directly into the Forth interpreter. Many important words have been re-implemented in assembly, or as part of the built-in definitions. Note: In computing, bare or bare metal refers to a computer executing instructions directly on a processor aka logic hardware, without an intervening operating system.
Iteration #2 of Unit One (#2U1), my personal workshop, will officially commence on 2023-11-01, less than two years away. It will transform a construction-support workshop into a fabrication shop, as my career as a wright/ building constructor/ carpenter comes to an end, and my career as a millwright/ machinist begins. My primary emphasis is broad, mechatronics, but the workshop’s role is limited to fabrication. Electronics and programming will probably be done inside Cliff Cottage, while much of the thinking will take place wandering about in the woods.
The purpose of the workshop is for an old man to have fun, to build upon skills learned in the past, and to learn new 21st century skills, to keep his brain and body active. Hopefully, some useful and environmentally sensitive products will be made at it.
There are plans to use Forth as the official shop language/ operating system for computer numerical control (CNC), the automated use of machine tools, controlled by a computer. I expect to have one primary machine that can move in three dimensions, and change heads as required. The two most important heads will be a router, which can shape materials as well as drill holes, and a laser cutter that cuts more accurately and with less waste than a saw. I expect to concentrate on various types of hardwoods as my primary material focus, but not to the exclusion of other materials. These are subtractive processes that remove material. In contrast, 3D-printers are additive.
There is no need to waste money on expensive silicon if cheap silicon will do. The silicon needed to control a CNC mill will be a Raspberry Pi Pico microcontroller. It costs NOK 55 = US$ 6.05 = CA$ 7.75, on the day before publication. Any money saved on silicon will be put into better bearings, and improved versions of other machine components.
Forth is not for everyone. It is useful where there is a need for a real-time system involving mechanical movements. After milling machines, and other types of tools, robots come to mind first, including unmanned underwater vehicles and drones. It should be mentioned in all fairness that Forth is not the only language I intend to use in the future. Two others are Prolog and Lua. Prolog is a logic programming language developed in France in 1972, with a number of artificial intelligence applications. Lua is a multi-paradigm scripting language, developed in Brazil in 1993. It has a basic set of features that can be extended to suit different problems.
Today (2021-11-15) is the 50th anniversary of the Intel 4004 microprocessor. This featured a 4-bit central processing unit (CPU). It was the first microprocessor to be sold as an electronic component. At the time of its development, Intel considered itself a memory chip manufacturer. At about the same time, three other CPU designs were being developed, but for specific projects. These were: Four-Phase Systems AL1, (1969); American Microsystems MP944 (1970); and Texas Instruments TMS-0100 (1971).
The Intel 4004 project began in 1969, when Japanese adding machine manufacturer Busicom approached Intel to manufacture a chip it had designed. Intel was a start-up, so small that it didn’t have the staff to design the logic required. Thus, they came back with a counterproposal: to build a general purpose computer-on-a-chip and to emulate the calculator architecture using a read-only memory (ROM) byte-code interpreter.
Federico Faggin (1941 – ) was assigned responsibility for the project. He was able to design a customer-programmable microprocessor. The work included logic design, circuit design, chip layout, tester design and test program development. His initials F.F. were incorporated into the chip design. Assisting in the development process was Masatoshi Shima (1943 – ), a Busicom software and logic designer, but without any chip design experience. The chip was first used in the Busicom 141-PF adding machine.
Faggin is known for several microprocessor inventions. These include the buried contact, and the bootstrap load. He also created the basic methodology for random logic design using silicon gate technology. He was particularly vocal inside Intel in advocating the 4004 as a general purpose microprocessor, with a huge market potential. He subsequently led the design of the 4040, 8008 and 8080 processors.
Faggin was presented with the engineering prototype of the Busicom calculator containing the first 4004. This was subsequently donated to the Computer History Museum.
Faggin and Ralph Ungermann (1942 – 2015) left Intel in 1974 to start Zilog. Intel’s reaction was to disown Faggin, and to rewrite company history. In particular, it credited more loyal, but less competent, employees, with the 4004 design.
Today is Tuesday, 2021-10-12. Because it is the second Tuesday in October, it is Ada Lovelace Day.
The micro-story behind this posting is that Ada Lovelace (1815 – 1852) collaborated with Charles Babbage (1791 – 1871) on his Analytical Engine. In 1843, she was the first person to publish a computer program. It generated Bernoulli numbers. Lovelace is also considered the first person to foresee the creative potential of the Analytical Engine, especially its ability to create music and art. The date selected for Ada Lovelace Day is arbitrary. This day is one that could be used by people with programming skills to serve humankind in various ways. In many places, it is also a school day, although not this year, and many other years, where I live, as a week-long autumn school break is being held.
For those wanting more information about Ada Lovelace, one place to begin is her Wikipedia article. In addition to a biography, it also provides other sources of information about her, including books, plays and videos.
At one level, this day attempts to raise the profile of women in Science, Technology, Engineering and Mathematics (STEM). Some want to use STEAM, by adding Art. In my time as a teacher of technology, Ada Lovelace Day was an opportunity to encourage female students to investigate STEAM, where they might be able to bypass some of those headstrong members of another, weaker gender. This day does not supersede or in any way compete with International Women’s Day on 03-08.
In terms of the more technical aspects of computing, there are many other days that can be celebrated. World Computer Day is 02-15. It was first celebrated in 2021, with a focus on the 75-year-old ENIAC, described by some as the first programmable, electronic, general-purpose digital computer. At a more practical level, the second Monday in February is designated the (American) National Clean Out Your Computer Day. Many people have issues regarding the storage of data on their computers, including the taking of regular backups. However, there is also a World Backup Day on 03-31, which could be a better day to focus on such issues.
For those who need more computing days: (Apple) Macintosh Computer Day = 01-24; World Password Day = 05-05; System Administrator Appreciation Day = 07-30; Computer Security Day = 11-30; Computer Literacy Day = 12-02, and National Download Day = 12-28.
Dates in this weblog follow International Standard ISO 8601 formats, generally of the form YYYY-MM-DD; however, in this specific post there are many in the MM-DD format. ISO 8601 is the only format that the Government of Canada and the Standards Council of Canada officially recommend for all-numeric dates. It is my experience that about half the Canadian population uses the American MM-DD-YYYY format, while the other half uses DD-MM-YYYY, which makes ISO 8601 a necessity. However, usage differs with context. See: https://en.wikipedia.org/wiki/Date_format_by_country
Sometimes, it can be difficult to determine start dates. Take the average person. The exact time of conception may be difficult to know. Thus, it may be easier to use some other event as a proxy for the start of life, such as a person’s birth. Even if a child has no recollection of her/ his own birth, this is an event most mothers will remember.
Something similar has happened with Linux. There have been a lot of different start dates proposed. In part, this is because an operating system is complex, consisting of many different but integrated parts. Linux has a tendency to borrow these components from other projects, including/ especially Unix. Thus, the very first start could have been a day in 1970 when Ken Thompson (1943 – ) and Dennis Ritchie (1941 – 2011) started their Unix project. It could have been a day in 1977, when the Berkeley Software Distribution (BSD) was developed by the Computer Systems Research Group (CSRG) at the University of California, at Berkeley.
If you prefer the 1980s, there are several more dates to choose from. In 1983, Richard Stallman (1953 – ) started the GNU project to make a free Unix-like operating system. He wants Linux to be called GNU, but is willing to compromise on GNU/Linux. Others might prefer 1986, when Maurice J. Bach (1952 – ) published The Design of the UNIX Operating System, a basic source of Unix information. Many computer science students at the time would recognize 1987, and MINIX, a Unix-like system released by Andrew S. Tanenbaum (1944 – ) based on principles found in Operating Systems: Design and Implementation (1987) by Tanenbaum and Albert Woodhull.
Appropriately, people have concentrated on 1991-08-25, and an announcement by Linus Torvalds (1969 – ), about an operating system kernel, based on Minix principles, but free of Minix code. The launch date of Linux was 1991-09-17. However, even Torvalds refers to 1992 when the X Window System (not to be confused with the Microsoft Windows operating system) was ported to Linux by Orest Zborowski, which allowed Linux to support a graphical user interface (GUI) for the first time. By 1996, Linux came of age, having its own brand identity in the form of the penguin, Tux. The term mascot is discouraged because, while there were three competitions to find a mascot, Tux won none of them!
Linux is not an entire operating system. Rather, it is just the kernel, as developed by Torvalds. Some corporations and communities build their own operating systems directly on top of the kernel. This applies to Red Hat, Debian, SUSE, Arch and other distributions, which have each independently chosen the software that is to make up their operating system, above the kernel. Other companies/ groups build distributions, such as Fedora, Ubuntu, openSUSE Tumbleweed and EndeavourOS, on top of these, respectively. Several iterations of building can take place. Linux Mint is built on top of Ubuntu, which is built on top of Debian, for example. In the Linux world it is easier to modify something that exists, rather than to start from scratch. This is allowed because of the Linux licencing conditions, which permit development forks, that is, development branches. This also means that most Linux operating system distributions resemble one another, but in many different ways. Most of these distributions come bundled with a variety of open-source applications, although the number and specific applications included vary.
BSD is organized differently. BSD is both a kernel and a complete operating system, maintained as a single project. That said, much of the software layered on top of the Linux kernel by the many distributions is the same software as used on BSD. It is often more correct to refer to Linux and BSD operating systems as Unix-like operating systems. Linux and BSD have different lineages, resulting in the use of somewhat different components.
The first Linux distribution I used was Mandrake, developed by Gaël Duval (1973 – ), and released in 1998. It was based on Red Hat Linux and the K Desktop Environment (KDE). Its current reincarnation as Mageia is fondly, but possibly irrationally, regarded by this writer. More recently, I have supported Gaël Duval in his development of /e/, a privacy-oriented fork of the Android-based LineageOS, with some online services, part of the E Foundation since 2018. Android itself could be considered a Linux variant, as it is based on a modified Linux kernel.
Linux Mint 20.2 Uma is my current daily driver. Since 2014, Mint has been a community-driven Linux distribution. Almost all distributions used on tablets, laptops and desktops have a GUI desktop environment. The standard one with Mint is Cinnamon. It works appropriately, and without issues. Similarly, there are other applications that are used more than others in a wide variety of distributions. For example, the office package used here at Cliff Cottage is LibreOffice. Firefox is used as a web browser.
Looking no further than across the kitchen table, there are many different types of people who want distinctly different things from their operating systems. One person is content with her 2016 model Asus Zenbook laptop, which for the past couple of years has run Linux Mint and the other programs mentioned. She has no desire to replace the machine, for that would mean adapting to something new. As long as there are no hardware issues with the machine, it is probably good enough to last until at least 2026. Another, older user, who does almost nothing more than read The Guardian newspaper with a Firefox browser and write a few weblog posts using WordPress, still finds excuses to own several machines, most currently running Linux Mint, at least most of the time. Another, younger user regularly uses four computers. These are a work-supplied Acer Swift running Windows 10, a personal Asus TUF gaming laptop, and two (2) scratch-built gaming rigs, each filling a rather large tower with an assortment of fans. Each of these last three runs Linux Mint.
However, there may be changes ahead at Cliff Cottage. While Mint is user friendly, it updates on a two-year cycle. For many people, especially the gamer and de facto support person, this is not fast enough. Enter Arch Linux, first released in 2002. While not trying to be all things to all people, it does aim to be simple, modern, pragmatic, user centric and versatile. It succeeds at some of these, but is probably more complicated and less user centric than its adherents want to admit. However, it uses a rolling (rather than fixed) update cycle, which means that it is considerably more up to date than Mint. It also provides excellent documentation.
EndeavourOS is a rolling release Linux distribution, based on Arch Linux. Setup is much easier than that of the Arch base. It is currently installed on Eerie, our experimental Ryzen 7 desktop machine. After further testing, it will probably be deployed as a replacement for Linux Mint. Because so much of an operating system’s activity is related to the specific desktop environment being used, it is probably advantageous to retain Cinnamon. During testing, several other desktop environments were tried, but offered no apparent advantage. The main challenge with Cinnamon is a lack of appropriate documentation. LibreOffice, Firefox, and many other programs used regularly will also be retained. Any transition to EndeavourOS has to be close to invisible, for at least one (and potentially two) of the Linux Mint users, as the ageing process makes accepting changes more difficult.
On 2021-06-24, Microsoft announced Windows 11, which will probably be released late in 2021, although preview versions are available now. The main disadvantage of Windows 11 is that it only runs on devices with a Trusted Platform Module 2.0 security coprocessor, which offers protection against firmware and hardware attacks. In addition, virtualization-based security (VBS), hypervisor-protected code integrity (HVCI), and Secure Boot must be built-in and enabled by default. This means that devices as recent as 2019 may not be able to run Windows 11. In the short term this will not be a problem, because they will be able to run Windows 10. However, most versions of Windows 10 will lose support on 2025-10-14. After this date, they should not be used online, for security reasons.
One way to future proof an older system is to install a Linux distribution along with Windows 10, in a dual-boot configuration. This allows the machine to use both operating systems. It will also give users about four years to become accustomed to a Linux operating system, should they choose to go that route. Currently, people are encouraged to use EndeavourOS (or Linux Mint) with a Cinnamon desktop environment, if they want any meaningful help from this user.
While Microsoft Windows is demanding in terms of processor speed, random access memory (RAM) size and other features, Linux is more accepting. It is possible to thrive with Linux installed on ten-year-old, limited-capacity equipment. Many distributions flourish with 4 GB of RAM, some with even less.
At Cliff Cottage we use a variety of special-purpose operating systems. The large screen some people might mistake for a television runs the Kodi media player on LibreELEC = Libre Embedded Linux Entertainment Center, on an Asus PN40. Yet, not all of our open-source operating systems are Linux systems. Server equipment is often BSD based. The reason is that we want our network attached storage (NAS) server to use ZFS = Zettabyte File System (its name in a previous life). Some Linux distributions handle this adequately, but BSD is designed to use it. Another advantage of BSD is its ports system, which provides a way of installing software packages by compiling them from their original source code. Packages can also be installed from pre-compiled binaries, if that is preferred.
There are situations where Linux is not the answer. Real-time operating systems (RTOS) are used in environments where a large number of events must be processed quickly and predictably. An example would be the control of your neighbourhood nuclear power plant. When a hazardous situation occurs, you want the operating system to respond immediately and appropriately. Linux typically does not respond fast and consistently enough to be used in demanding control situations. Part of the problem is that the Linux kernel is bloated. An upcoming weblog post will discuss RTOS in more detail.
More generally, there are legitimate criticisms of desktop Linux, including: an excessively large number of distributions to choose from; poor open-source support for some hardware, in particular drivers for 3D graphics chips; and the failure of software providers such as Adobe and Microsoft to provide Linux versions of widely used commercial applications. Sometimes this last problem can be solved by running the Windows versions of these programs in a virtual machine, a software version of a physical computer, or by using Wine, an open-source compatibility layer that allows application programs, including computer games, developed for Microsoft Windows to run on Linux.
Returning to the kitchen table, the Asus Zenbook user would probably have been happy with Windows XP, and content with Windows 7. Problems arose with Windows 10, when Microsoft demanded more control than the lady (or was it the lady’s husband?) was willing to give. In addition, her support team had already migrated to Linux, and was not particularly helpful in solving her Windows 10 related challenges. In the end, she had no choice but to go over, reluctantly, to Linux herself.
Today is the day many have chosen to acknowledge 30 years of Linux. For some it will be a celebration. For many others it will only be a statement of fact. While few use it on their personal computers (estimates are 2% or lower), Linux-based distributions are at the heart of servers: all of the world’s 500 fastest supercomputers run on Linux; of the 25 most popular websites in the world, only 2 are not using Linux; about 95% of the world’s server activity runs on Linux; 90% of all cloud infrastructure operates on Linux; and 85% of smartphones are based on Linux.
There are many sources of information about Linux. Wikipedia can be a good place to begin, with links to other sources. DistroWatch allows people to discover new and exciting, as well as old and boring, distributions.
A fanboy is a boy or man who is an extreme or overt enthusiast of someone or something. The term very specifically excludes half of the population: those people who frequently become mothers, and who excel at multi-tasking. If there are complaints about the sexist nature of the title, I will graciously allow fanperson, or even fanchild, to be used. However, I wonder if there is a good reason why this description applies to one gender, and not the other.
Some definitions of fanboy use ardently devoted, or even obsessed. While some restrict this obsession to a more general “single hobby or interest”, another specifies categories and includes comic books, science fiction, video games, music or electronic devices and mentions Apple and the iPhone specifically.
While the number of fanboy sites checked is limited, none of them mentioned vehicles, either generally (cars, locomotives and other forms of railcar, aircraft, vessels) or, more specifically, road machines in the form of motorcycles, muscle cars or sports cars. Here, I admit an unnatural attraction to panel vans and multi-purpose vehicles (MPVs), most specifically the Citroën Berlingo, although if Citroën does not soon produce an EV version with a ca. 300 km range, it may be dumped for another E-MPV: a Renault Kangoo or even a Kia Soul.
In terms of clothing, there were a number of products that I purchased because of their fit. These included McGregor Weekend socks and Ecco shoes, especially. Unfortunately, once McGregor started acquiring socks from China, they just didn’t last, so I stopped buying them. Instead, I have relied on Trish to make most of my socks. She also produces all of my shirts and pajamas. Ecco started using narrower lasts, which meant that their shoes no longer fit. Fortunately, my daughter, Shelagh, introduced me to Allbirds, which have now become my preferred shoe brand.
Owning a large number of products produced by the same manufacturer does not make one a fanboy. Take Jula’s Meek range of battery tools. While I own a large number of them, it is primarily because the tools are good enough, and they have battery compatibility. Similarly, the Scheppach woodworking power tools I have were purchased because of their low cost. Some are being replaced with Bosch (and other brand-name) equivalents. I want tools that do the job. I am a hobbyist, not a woodworking professional.
Where I am a fanboy relates to computing equipment, hardware more than software. Yet, because of my education, software is important. In terms of programming languages, I describe myself as a member of the Algol tribe, with Simula the closest to my heart. Unfortunately, it has been 35 years since I last programmed in it. One of the most important clans of the Algol tribe is Pascal. Outside of this tribe there is Smalltalk, which is appreciated more for its origins at Xerox Palo Alto Research Center (PARC) than for its utility. I am also attracted to Forth, Lua, Node-RED, Processing and Prolog, each for specific purposes. Yet, in the real world, if I have to program, I stick to C and C++. More modern languages, such as Python, arrived too late for me to work in, although Python is a language I have played with, and one I recommend younger people learn.
When it comes to desktop/ laptop operating systems, I will probably stick to Linux Mint with a Cinnamon desktop. This is not because I have mastered it. Rather, it allows me to muddle through. Other operating systems, such as LibreELEC, FreeBSD, Robot Operating System (ROS) and FreeRTOS = free real-time operating system, will be used to meet specific needs. Yet, I have an emotional attachment to another Linux distribution, Mandrake, the first Linux distribution I used, or at least to its descendant, Mageia. The first family computer we owned was an Amiga, and AmigaOS still finds a place in my heart.
Most of the software listed above is either unavailable (Simula being the best example) or freely available (most other products). Thus, if there is a specific need, most of the above operating systems can be put on a memory stick and be running on a computer within a few minutes. Then I can spend an hour or two indulging my (software) desires.
This is not the case with hardware. Most of the time, real money has to be spent buying gear. I have already wasted sufficient money buying computer components that break down far too early. Thus, the primary reason for being a fanboy is to secure reliable products.
In alphabetical order, some products are:
Advanced Micro Devices (AMD): microprocessors, currently Ryzen models
Asus: computers, motherboards
Canon: printers
Logitech: keyboards, mice, headphones
Noctua: fans, cooling systems
Raspberry Pi: single board computers, and peripheral interfaces (HATs)
Wacom: graphic tablets
Some products have almost attained fanboy status. One is Native Instruments, for its musical/ sound equipment. A change in status is dependent on support for Linux operating systems, something currently lacking. Similarly, I am waiting for RISC-V = reduced instruction set computer, version 5, processors to arrive at some point in the near future. Hopefully, this open instruction set architecture will improve security, reliability and durability.
There are a number of products that at one time had fanboy status, but lost it. Western Digital is the best example. Previously, almost all of the hard disk drives (HDDs) in use here were made by that company. Now, Toshiba is the most popular brand, although I did purchase a Samsung solid-state drive (SSD) for my latest system. I also buy a number of Kingston products, in the form of memory sticks.
The opposite of a fanboy is a h8er, an adversary of particular products. While others may be h8ers out of prejudice, the term here is restricted to unreliable products. Printers often fall into this category with those provided by HP and Epson particularly notable.
The one other manufacturer that should be commented upon is Apple. Their Macintosh Performa series was a disaster, and countless times since they have made decisions that negatively impact users. This may be fine for the class of user that can upgrade regularly without thinking of the cost, but Apple products are not for people in the lower echelons of society. I have previously commented on my experiences with Apple products.
Final comment: Nikki Gordon-Bloomfield, in a YouTube video on her Transport Evolved channel, 2020-11-25 at 20:32, used the term fangirl to describe what she wasn’t. This is the first time I have been exposed to a gendered variant of the term. Admittedly, I lead a sheltered life.
2023-07-07 Notes:
The Berlingo was permanently dumped for a Volkswagen ID.Buzz on 2023-02-13.
Currently, I have six pairs of Allbirds: in blue, olive green, bright green, pink, red and yellow.
Bosch is my current preferred brand of electrical tools, with a compound mitre saw my latest purchase, and a plunge saw pending.
The next brand to be replaced will be Raspberry Pi. Some RISC-V = reduced instruction set computer, version 5, microcontroller/ -processor will be the replacement.
A workstation is a computer that acts as an attachment site for a wide range of tools (software as well as hardware), that a particular operator uses on a regular basis. In this weblog post, the history of computing will be examined, with an emphasis on its gradual expansion into new areas, as new capabilities emerged. This expansion results in the evolution of computers into workstations.
Military purposes came first. Colossus, designed and built starting 1943-02, was delivered to Bletchley Park on 1944-01-18, and was operational by 1944-02-05. It was the world’s first electronic digital programmable computer. It used 1 500 vacuum tubes, had paper-tape input and could be configured to perform a variety of Boolean logical operations on data, typically breaking messages encrypted by German Lorenz (Tunny) cipher machines.
After the Second World War, electronic data processing (EDP) became the new buzzword (or, more correctly, buzz phrase and abbreviation). Between about 1950 and 1970 it referred to automated methods of processing data, most often business related. A data processing system consists of four components: hardware, software, procedures and personnel.
Data was prepared by keypunch operators who created punch cards, typically in the IBM card format introduced in 1928, with rectangular holes, 80 columns and 12 rows. The card size was 7 3⁄8 by 3 1⁄4 inches (187.325 mm × 82.55 mm). There were about 143 cards to the inch, or 56 per cm, and a box held 2 000 cards. These cards were fed into a card reader attached to a mainframe computer. Typical of the era was the IBM System/360 family of computer systems, delivered between 1965 and 1978. The model 195 was the most powerful, costing between US$ 7 and 12 million.
Mini was another buzzword of the 1960s. It could refer to skirts (and dresses), cars and – for the discussion here – a class of computers, the minimachine. These had their own operating systems and software architectures that distinguished them from mainframes. Minis were designed for control, instrumentation, human interaction and communication switching, as distinct from calculation and record keeping. They had a two-decade lifetime, from 1965 to 1985, during which almost 100 companies were formed. My personal experience was with Digital Equipment Corporation VAX-11/780s, and later with Norsk Data Nord 500 machines.
Workstations were small scientific computers designed to be used interactively by a single person. Perhaps the first workstation was the IBM 1620, launched in 1960. More began to emerge as minimachines became more popular and increasingly available. Most workstations of this early period were minimachines, repurposed for a single user.
With the emergence of microprocessors (in the mid 1970s), and personal computers (in the early 1980s), a more modern version of the workstation began to take shape.
A 3M workstation was an ideal for many computer professionals in the early 1980s. While the name was a word play on the 3M = Minnesota Mining & Manufacturing company, it referred to at least a megabyte of memory, a megapixel display and a million instructions per second (MIPS) of processing power. It could be upgraded to a 4M machine if it also cost less than a megapenny = US$ 10 000.
The closest most people could come to a workstation in the mid 1980s was a Commodore Amiga 2000. It was a bargain machine at less than NOK 20 000. It had 1 MB of memory, but otherwise failed to meet the 3M criteria. It was more powerful, but less expensive, than an Apple Macintosh, which had come onto the market in 1984. It was also fitted with two 3.5″ floppy drives, five Zorro II expansion slots, two 16-bit and two 8-bit ISA slots, a CPU upgrade slot, a video slot and a battery-backed real-time clock. It came with an IBM PC compatible bridgeboard with its own 5.25″ floppy disk drive, which allowed it to run MS-DOS and compatible programs.
AmigaOS was a single-user operating system. Its firmware was referred to as Kickstart. There was a multitasking kernel, called Exec. Like most modern computers – but unlike many of its contemporaries – this was pre-emptive, allowing interrupts to disrupt processing flows. It also provided a disk operating system, AmigaDOS; a command-line interface (CLI), AmigaShell; a windowing application program interface (API), Intuition; and a desktop file manager, Workbench.
Starting with AmigaOS 3.1, Workbench referred to what is now called a Desktop. Directories were referred to and depicted as drawers, executable files were tools, data files were projects and GUI widgets were gadgets.
Unfortunately, while there was software for 3D design, it did not extend far enough to handle industrial-strength computer aided design (CAD) and other engineering tasks. Thus, the machine in some respects failed to live up to its workstation expectations. The Amiga did come with a two-button mouse, unlike the Macintosh, which had only a single button.
When the Amiga arrived, many people expected it to last into “the next century” by regularly upgrading hardware as well as software. Unfortunately, by the early 1990s, it was out of date, and the promised hardware never arrived.
Today, the computing power of any of the above machines is exceeded by an inexpensive (US$ 5–10) single board computer, such as a Raspberry Pi Zero W. Even the smallest computer today is a powerful processing machine compared to those of the past. For example, the considerably more powerful Raspberry Pi 4 can provide 8 GB = 8 000 MB of RAM, can drive two 4k (3840 x 2160 pixel) screens = about 16.6 Mpixels, and operates at 8 176 Dhrystone MIPS.
In terms of operating systems, most versions of Linux are able to match (or exceed) anything and everything offered by an Amiga, or any other operating system from that period. For readers preferring to live in the past, PiMIGA 1.3 clones AmigaOS so that it works on a Raspberry Pi 4, while AROS (originally Amiga Research Operating System (1995), now AROS Research Operating System) runs on x86 (conventional PC) architectures.
Bill, at the Dronebot Workshop, defines a computer as: “Not a tablet. Not a phone. Not a Chromebook.” This is a good starting point for a definition of a workstation, but in addition there have to be some positive attributes. It is some sort of container filled with a microprocessor and various forms of memory; it is typically equipped with or attached to input devices, usually a keyboard and mouse, and output devices, such as a display. Other devices may also be plugged into the machine, as required.
Hobby Electronics: An Example
With a massive amount of computing power available in a box 100 x 100 x 50 mm (4″ x 4″ x 2″), there is a decreasing need for electronic hobbyists to buy dedicated hardware. An AMD Ryzen 5/ Intel i5 computer with 16 GB RAM and a 500 GB SSD, attached to a Red Pitaya STEMlab = Science, Technology, Engineering, Mathematics laboratory kit, an open-source hardware project intended to be an alternative to many expensive laboratory measurement and control instruments, covers most needs. The kit can act as an oscilloscope, signal generator, spectrum analyzer, Bode analyzer, logic analyzer, LCR meter (a type of electronic test equipment used to measure the inductance (L), capacitance (C) and resistance (R) of electronic components) and a vector network analyzer, used to test component and system specifications, to verify designs and to ensure these components and systems work properly together.
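As an aside, instruments like this are usually driven from the workstation over the network. The following is a minimal Python sketch, assuming the STEMlab’s SCPI server has been started and that 192.168.1.50 is its (hypothetical) address on the local network; the signal-generator command strings are illustrative and should be checked against the STEMlab documentation before use.

```python
import socket

# Hypothetical LAN address of the STEMlab; the SCPI server is assumed
# to be running on the device and listening on TCP port 5000.
HOST, PORT = "192.168.1.50", 5000

def scpi(sock: socket.socket, command: str) -> None:
    """Send one SCPI command, terminated with CR+LF as SCPI-over-TCP expects."""
    sock.sendall((command + "\r\n").encode("ascii"))

with socket.create_connection((HOST, PORT), timeout=5) as s:
    scpi(s, "*IDN?")                 # standard SCPI identification query
    print(s.recv(4096).decode().strip())

    # Illustrative generator commands: a 1 kHz sine wave on output 1.
    scpi(s, "GEN:RST")
    scpi(s, "SOUR1:FUNC SINE")
    scpi(s, "SOUR1:FREQ:FIX 1000")
    scpi(s, "OUTPUT1:STATE ON")
```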
Additional software, such as KiCad, a computer aided design (CAD) program for electronic design, and Thonny, an integrated development environment (IDE) for Python, as well as editors, file management and communication tools, including office tools, transforms the computer from something that is nice to have into an indispensable tool: a workstation.
Many of the tools mentioned above could be purchased as separate/ independent instruments. However, the total cost would be many times higher, since each stand-alone instrument duplicates many of the same components. One other advantage is that this configuration takes up far less desk and shelf space than the seven (or more) tools it replaces.
This series about computing has consisted of 24 parts, published through much of 2020. Its goal was to help people make appropriate choices as they struggle through the maze of computer component/ device/ system acquisition opportunities.
It is my intention in 2021 and beyond to update the posts that constitute this series, at approximately annual intervals, somewhat close to their original date of publication. At the bottom of each updated post, there will be a statement providing a version history, and a summary of content changes. Subscribers will not be notified about these changes.
One person, who I know reasonably well, has never used a computer, and never made the transition to a smartphone, but relies on a clamshell mobile phone, anno 2020. This has serious consequences. For example, it means that common banking services are unavailable, medical appointments have to be made using a living intermediary, and there are no opportunities to buy anything online. One is dependent on a printed newspaper and television/ radio broadcasts for information. Unfortunately, there is very little people can do to help this person enter the digital age.
Today’s weblog post is personal. It looks at the wants, needs and thought processes of a single person. It attempts to show what this person takes into consideration before making an acquisition. Writing this series was an opportunity for me to learn how to make smarter choices when it came to purchasing equipment! It is written through the prism of an older person.
Budgeting: Simple
The world is unfair. In this context it has to do with disposable income and apportioning some of that to buy computing infrastructure related services (web, broadband and cell-phone subscriptions) and products (cell-phones, computers, printers, servers, memory sticks, etc.). Some people can afford a lot, while others will have to consider the relative merits of every proposed expenditure.
At the end of 2020, every adult (and almost every child) needs a smartphone. In Norway these cost between NOK 2 000 and 10 000. They need to be replaced about every three years, although some replace them twice as often. We kept our first smartphones for almost five years. Thus, this expenditure can vary from about NOK 500 a year, to over NOK 5 000. To put it another way, frugality can provide considerable cost savings.
Apart from broadband connections and cell-phone subscriptions, almost all other computer expenditures are voluntary. The most impoverished with a need for a computer should consider buying a five year old laptop, with a price of NOK 1 000, and keep it for another five years. In addition, they may need to install and use an operating system suitable for older equipment, which in most cases means Linux. This machine needs to be augmented with at least one (preferably two) external hard-drives, for backup. Here, I would not compromise, but purchase new equipment. Two of these could cost as little as NOK 1 300, and last five years. Currently, our new external drives are Toshiba Canvio Advance units. Basic level units are cheaper, and have almost the same functionality, except for hardware encryption.
With a minimal solution, there is no need for a printer, or other peripherals. As one ages, there may be an increased need for ergonomic peripheral equipment. This will also have to be considered in terms of a budget. Even here, there is the possibility of buying used equipment, and keeping it for many years.
There are people in Norway who have sub-minimal solutions. They have no broadband, and rely on prepaid cards instead of cell-phone subscriptions. They may own only a clam-shell dumb-phone and nothing more. This is also one reason why I have a low threshold for giving away equipment, especially to people who are unemployable or underemployed, or who live on minimal disability or old-age pensions.
Budgeting: Complex
Sometimes, it is useful to have a budget that can take the form of an equation. It looks scientific, though it isn’t. However, it might still express a relationship between budget items. I discovered that the following fits gudenuf for the past four years: z = a (x + 1), where z = total budget, a = annual web, internet and telephone subscription costs, and x = the number of people in the household.
At Cliff Cottage broadband costs NOK 619 per month, while our telephone subscriptions cost NOK 118 per month each. The web-related subscriptions cost NOK 1 436 per year. Subscriptions amount to NOK 11 700 per year, in total.
If this budget for subscriptions seems excessive, note that we have the cheapest broadband rate (50 Mbps) available locally, and the cheapest cell-phone subscriptions (with 5 GB of data) available in Norway. Others may have 500 Mbps of broadband, with multiple television channels, and pay almost NOK 1 300 per month. Some pay NOK 1 700 for twice as many channels and 1 000 Mbps = 1 Gbps broadband. Cell phone subscriptions range up to about NOK 450 per month for unlimited voice and up to 100 GB of data. That would give a monthly cost of NOK 2 600, or NOK 31 200 a year, over twice what we are paying now.
In this household, x = 2; that is, there are two people. The total budget, z, amounts to NOK 35 100 per year. Apart from the subscription payments, the other NOK 23 400 goes to pay for computer infrastructure, such as NAS server components, Ethernet cabling, printers or home cinema components, not to mention paper and ink/ toner. Then there are personal devices for each resident. Personal devices may consist of laptop/ desktop machines, hand-held devices such as smartphones and tablets, as well as memory sticks, solid-state drives, etc.
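To make the arithmetic explicit, here is a back-of-the-envelope Python check of the z = a (x + 1) rule, using only the figures quoted above; the rounding to NOK 11 700, 35 100 and 23 400 is the same as in the text.

```python
# Budget rule: z = a * (x + 1), where
#   a = annual web, internet and telephone subscription costs (NOK)
#   x = number of people in the household
#   z = total annual computing budget (NOK)

broadband = 619 * 12        # NOK 619 per month
phones    = 118 * 12 * 2    # two subscriptions at NOK 118 per month each
web       = 1_436           # web-related subscriptions per year

a = broadband + phones + web
x = 2                       # two people at Cliff Cottage
z = a * (x + 1)

print("subscriptions a:", a)      # 11 696, rounded to NOK 11 700
print("total budget  z:", z)      # 35 088, rounded to NOK 35 100
print("infrastructure :", z - a)  # 23 392, rounded to NOK 23 400
```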
The great advantage of having a budget is that it forces one to think through expenditures and their economic implications. It also shows important people in the household that one actually has a plan, and that the plan is being followed. The great disadvantage is that costs do not always follow a linear curve.
Replacements
At some point smartphones will have to be replaced. The 3.5 mm headphone/ microphone jack on my phone has been damaged. I have found a temporary fix, but it will not last forever. So, the frequency of this type of purchase may increase. In addition, I am considering buying a Fairphone 3+, which, at about NOK 6 000, is almost twice as expensive as my current phone. This is not a final decision. Fairphones may be easy to repair, but they also seem to need more repairs than many other phones.
Laptops are increasing in price. What used to cost NOK 6 000 in 2016 costs NOK 10 000 in 2020. The most popular laptop in Norway is currently a MacBook Air with an M1 processor, which costs almost NOK 13 000. The most popular non-Apple PC is a Huawei MateBook that costs NOK 20 000. The most popular Asus computer, a Zenbook, costs NOK 23 000. Gaming laptops can cost in excess of NOK 40 000. At some point a five year old Asus Zenbook laptop will have to be replaced. A suitable replacement will cost somewhere between NOK 10 000 and 12 000.
In the coming year, 2021, I already know that I will have an expenditure of approximately NOK 4 500 for the network attached storage (NAS) server, in the form of two Toshiba N300 8 TB drives. This is because the NAS is already 73% full, with its current 4 x 10 TB drives. This figure should never exceed 80%, so something will have to be done. Producing less data does not seem to be an option. Fortunately, the NAS holds up to 12 drives, so only half of the drive bays will be occupied after this upgrade. Two additional external 4 TB drives will also be purchased, at a cost of about NOK 2 500. Thus, I expect to spend at least NOK 7 000 on backup each and every year going forward. Once all 12 drive bays on the NAS are filled, the oldest drives will have been in place for 6 years, and will probably need replacing. The same is also true of the external drives, which are stored outside of Cliff Cottage.
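A rough Python check of these capacity figures, using raw drive sizes only (it ignores ZFS redundancy and filesystem overhead, so the percentages are indicative rather than exact):

```python
# Raw-capacity check for the NAS upgrade (TB; ignores redundancy/overhead).
current_capacity = 4 * 10              # four 10 TB drives
used = 0.73 * current_capacity         # the pool is 73 % full
threshold = 0.80                       # utilisation should stay below 80 %

print(f"now:  {used:.1f} of {current_capacity} TB "
      f"({used / current_capacity:.0%}), limit {threshold:.0%}")

upgraded_capacity = current_capacity + 2 * 8   # add two 8 TB Toshiba N300s
print(f"next: {used:.1f} of {upgraded_capacity} TB "
      f"({used / upgraded_capacity:.0%})")     # roughly 52 %, well below 80 %
```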
Taking these purchases into consideration, an equation that has been useful for several years, may prove to be inadequate in the future.
Operating Systems: An Aside
Computer operating systems are all the same, yet each one is different. A consensus emerges among developers, so that systems start resembling one another. At the same time, developers want to assert their independence.
Since 2016, I have used Linux Mint as my primary operating system (OS), with a Cinnamon desktop environment. This is probably about as close as one can get to an updated version of the Microsoft Windows XP OS. XP was released in 2001, and its support ended in stages between 2009 and 2019. XP received acclaim for its performance, stability, user interface, hardware support and multimedia capabilities.
At Cliff Cottage, we have used many other OSes. Our first home computer used Amiga OS. Then we had machines with Windows, Macintosh, Linux and even Chrome OS. The other resident at Cliff Cottage used Windows, until 2020-08, when she went over to Linux Mint. She claims that the transition did not involve any significant trauma.
While Linux Mint will probably continue to be the main OS at Cliff Cottage, each machine also allows other OSes to be installed, for experimental or other purposes. This includes Windows 10, if it is needed. Mageia 8, when it is launched, will be installed on one machine for sentimental reasons.
We have used smartphones since 2011, with iOS as well as Android on them. While I have talked, and written, about a de-googlized Android OS from the /e/ Foundation, I realize that this will have to await the next purchase of a handheld device.
If I could encourage one change, it would be for current Windows 10 users who are unhappy with their OS to try a user friendly version of Linux, to see if they feel more comfortable with it. One such OS is the latest version of Linux Mint with the Cinnamon desktop. This can be done by making a live version, which means copying a bootable version of it onto a USB flash drive/ memory stick/ thumb drive. Booting from this drive makes Linux available without touching the installed system. Those who, after this trial, feel uncomfortable using Linux do not have to do anything, except avoid booting from the USB drive again. Those who find they prefer Linux can, at some point, install it on their machine, either alone or as part of a dual-boot system with their original OS. The memory stick can then be used to boot Linux on other computers. Linux is particularly well suited to older hardware.
Buying computer equipment
The acquisition of computer equipment faces three major challenges. First, equipment (hardware as well as software) is continuously evolving. Yet, while computing power has increased significantly over the past years, changes are more evolutionary than before. Today, there is a greater emphasis on performance per watt than on raw processing power. This applies to personal machines, as well as servers. While hand-held devices (smartphones and tablets) have become more dominant, there is still a need for personal computers – laptops as well as desktop machines. Servers may be hidden in a cloud, or in an attic/ basement/ closet, but they too are performing more work.
Keyboards and mice are the most important input devices, as they have been since 1984. The screen is the most important output device. It has become thinner, with improved resolution. Broadband, and other forms of communication, increasingly allow large quantities of data to move throughout cyberspace.
Second, people continuously age. This may be seen as something positive by a fifteen-year-old looking forward to being twenty. It may even be regarded as inevitable by a seventy-five-year-old contemplating eighty.
Younger people should receive a critical education that allows them to appreciate the value technology brings, but to be wary of its detrimental aspects. Technology is not benign. Gaming is a particularly difficult challenge, because many youth become addicted to it. Thus, it may be necessary to restrict computer access to ensure that people get enough sleep, perhaps by disconnecting WiFi and/ or wired internet access from, say, 22:00 or 23:00 until 06:00 or 07:00.
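For what it is worth, here is a minimal sketch of such a curfew, assuming the access point is a small Linux machine (a Raspberry Pi, for example) that the household controls; the interface name wlan0, the hours, and the idea of running it every few minutes from cron or a systemd timer are all assumptions, and a commercial router would offer the same feature in its own settings.

```python
import datetime
import subprocess

# Assumed curfew window and wireless interface name; adjust as needed.
CURFEW_START = datetime.time(22, 0)   # 22:00
CURFEW_END   = datetime.time(6, 0)    # 06:00
INTERFACE    = "wlan0"

def in_curfew(now: datetime.time) -> bool:
    # The window crosses midnight, so it is "after start OR before end".
    return now >= CURFEW_START or now < CURFEW_END

def main() -> None:
    state = "down" if in_curfew(datetime.datetime.now().time()) else "up"
    # Requires root privileges; run periodically (e.g. every 15 minutes).
    subprocess.run(["ip", "link", "set", INTERFACE, state], check=True)

if __name__ == "__main__":
    main()
```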
Older youth could be encouraged to use computers productively, for the benefit of themselves and their families. On 2020-11-02, the Raspberry Pi Foundation launched the Raspberry Pi 400 Personal Computer kit. The purchase of such a system, at £/ €/ $ 100, matched to an existing display, would provide an ideal development machine for a young person. Many home automation tasks could be implemented by people in this category.
For those approaching midlife, there is a continual need to adapt, and to learn new technological skills. Society should be concerned when thirty/ forty/ fifty/ sixty-five year olds give up on acquiring/ developing new computing skills, while the world/ computer hardware/ computer software moves onwards. It is important to keep abreast of rising trends, but not to be a slave to them. One particularly damaging trend is for employers to make sideways investments in software. The expectation is that these new programs will add capabilities. However, they often end up doing the same thing, just in a slightly different way, that requires old skills to be relearned. This can be very discouraging.
Adaptability also applies to older people, but in a slightly different way. They have to think about impairments (current and potential). They also have to think long term! They may want to keep equipment longer than younger people, who are more adept at handling change. Older people may prefer to make an evolutionary transition to something a little different, rather than a radical change to something totally new.
Third, prices change erratically, so that what seems inaccessible one day, becomes affordable the next – and vice versa. Price is one of the major determinants of what people buy. This topic will be amplified later in this post, with specific examples.
Erratic pricing
Almost every computer equipment purchaser wants to be portrayed as astute. Everywhere, there are hypothetical bargains that save money! The truth of the matter is that many purchasers are undisciplined, and exceed their budgets. This writer is no exception. At the beginning of 2020 the equipment budget for the reserve/ lab/ electronics/ podcasting computer system was NOK 10 000: computer = 4 000, screen = 2 000, other peripherals = 3 000, miscellaneous = 1 000. In contrast, an RPi 400 would have cost about NOK 1 000, and used an existing screen. However, it would not have been able to use many of the ergonomic peripherals envisioned.
Yet, a budget challenge arose almost immediately after the pandemic struck. The BenQ monitor I had contemplated, an upgraded variant of the model used at the workshop in Straumen, had increased in price from a little over NOK 2 000 to almost NOK 3 000, call it a 40% increase in less than a year. It was time to look for something different. This turned out to be an AOC office display with more than adequate specifications. The AOC display started the year at NOK 3 000, then gradually increased in price to NOK 3 500. Yet, overnight, it was suddenly NOK 1 200 cheaper, and I purchased it for NOK 2 300, NOK 600 less than the BenQ, which had inferior specifications.
Substitutes are not always available. I had always planned to buy a Logitech MX Vertical mouse and a Logitech ERGO K860 keyboard, to experience their ergonomic characteristics. At NOK 1 050 and NOK 1 200, respectively, neither was cheap. Another peripheral on my purchase list was a headset. Many sites with reviews of headsets for the hearing impaired had suggested assorted Audio-Technica products, commonly the ATH-M50x at NOK 1 100. However, these are headphones for listening, without a microphone for talking. They could be combined with an Audio-Technica ATR3350iS omnidirectional condenser lavalier microphone, which comes with an adapter allowing it to be used with handheld devices. That costs almost NOK 550, for a total price of almost NOK 1 650. Thus, I started to investigate office and gaming headsets. The Logitech G433 and the Logitech G Pro X also seemed too expensive, at NOK 1 250 and NOK 1 350 respectively. I decided that I could stretch myself to buy a Logitech G Pro at NOK 1 000, as a compromise. However, on the day I decided to buy one, the price of the G Pro X, at NOK 900, was lower than that of either the G Pro or the G433. It was purchased.
With the Norwegian krone (NOK) crashing due to the pandemic, the budget couldn’t hold. The Asus PN50 barebone cost just NOK 4 300, but needed a solid-state drive (Samsung 970 EVO Plus M.2 500 GB = NOK 1 200) and RAM (G.Skill Ripjaws4 16 GB = NOK 800). This put the price at NOK 6 300, which is more than 50% over budget. Yet, it was purchased because it seemed inexpensive, relative to performance. A month after the purchase, the PN50’s barebone price had increased to NOK 5 900. However, the Samsung SSD is now only NOK 1 000, while the G.Skill RAM is the same price, NOK 800, for a total of NOK 7 700, over 90% above the initial budget. Given these prices, a less powerful machine would have been chosen.
Today’s prices: the Logitech MX Vertical is NOK 850, the ERGO K860 is NOK 1 370, and the G Pro X headset is NOK 1 300. The AOC screen has also wavered in price. Soon after my purchase it increased to about NOK 3 200, then it fell once again, to NOK 2 400.
The used Asus A-i-O Pro 500 from 2015 cost NOK 2 500, plus NOK 150 in delivery charges. The new price for a similar machine, but with a more modern and capable processor, is over NOK 10 000.
Local Sources
In general, I try to buy products made and/ or sold by local companies. There are different rings of local. It can mean Inderøy – our municipality; Innherred – our region; Trøndelag – our county; Norway – our country; Norden – Sweden, Denmark, Iceland and Finland officially (and Estonia, for me personally); or Europe – our continent. Beyond this, much of the computer equipment purchased is made by Taiwanese or South Korean companies. These products would be bought from local stores, if they bothered to stock them (which they don’t), which means an increased reliance on online suppliers. The two preferred ones are located in small villages: Multicom, located in Åmli (population 1 836), in the extreme south of Norway, and the municipality’s second largest company; and Proshop, actually a Danish company, but located in Bø i Telemark (population 6 101), about 122 km/ 2 hours’ drive north-east of Åmli.
The used machines that have been purchased have mainly been sourced locally. That is, in close proximity to where a family member lives, which for the past few years has also included Bergen.
Hobby Budgets
For the past 70+ years, I have tried to perfect an incredulous look. When my name, computers and budget are mashed together in a sentence, this is my cue to display this look. Unfortunately, the person I most often try to impress with it, has become totally unfazed by it.
The term hobby refers to a sluice-gate that allows unknown quantities of cash, and other forms of money, to escape a household. In return, assorted pieces of equipment, usually termed junk by non-believers, miraculously appear.
To see an example of how hobbies can get out-of-hand, one is encouraged to watch one or more episodes of Rust Valley Restorers, on Netflix. Mike Hall, at Tappen, British Columbia, near Shuswap Lake, has 400+ rusting vehicles awaiting restoration.
At some point it is necessary to separate what is part of a household infrastructure, from hobby activity. Superficially, items may look very much the same, and there could be a tendency to disguise a hobby purchase as an infrastructure purchase. People are advised to avoid this and other forms of self-deception.
Thus, some computer related purchases are now being budgeted not under the computing infrastructure budget, but in the hobby electronics category.
Needs/ Wants
Because I have the opportunity to do so, I prioritize the purchase of computer equipment beyond minimal household needs. While these could be considered (and budgeted) as part of the computing infrastructure, a more honest appropriation is to consider them as hobby electronics expenditures.
There were four areas that I wanted to improve in 2020:
A reserve machine (in case of a breakdown)
A dedicated electronics hobby machine
An audio/ video editor
A soft-synth (computer based synthesizer)
Not all of these were to be used immediately for these purposes, and not all of them required a dedicated machine.
Reserve machine
Normally, a retired computer acts as a reserve, if something should go wrong with an active computer. Towards the end of 2019, the only potential reserve machine had been given away. Thus, throughout most of 2020, I contemplated the purchase of a reserve system, one that could be used by anyone living at or visiting Cliff Cottage.
One thought was to buy a used Asus Zenbook UX305C, identical to one in active use at Cliff Cottage. However, these machines date from 2016, so they are approaching five years old.
If one had waited until after its launch, an RPi 400 (mentioned above) would have made an ideal reserve machine. Admittedly, it would have been an inferior system to the one that was finally purchased. It also requires a slightly different mind-set to use, since not all programs in daily use (such as Mozilla Firefox) are easily available on the RPi.
Eerie
Happenstance dictated that Eerie, a computer purchased in September/ October, is completely different from the one envisioned earlier in the year. The basic machine is a barebone computer. Wikipedia defines barebone as, “a partially assembled platform or an unassembled kit of computer parts allowing more customization and lower costs than a retail computer system.” It is not an ASRock Beebox (used at the Techno Workshop in Straumen) with an Intel processor, or a Gigabyte Brix with an AMD Richland processor, but an Asus PN50 with a Ryzen 7. The reasons are simple. First, as I approached the age of 72, I decided that I did not want to learn the quirks of a Beebox or a Brix. It is hard enough keeping up with those in the Asus family. Second, the machine has a powerful processor. This makes it useful and durable. Third, the machine is fanless. This makes it silent, which is useful when recording audio. Fourth, the machine was relatively cheap.
In 2020-09, some of the equipment was ordered, and it was turned into a functioning system by mid 2020-10. Eerie is not just a reserve machine; it is also being used as a lab guinea pig, and for podcast recording and editing. In the future, it will also be programmed as a soft-synth. Currently, it is being used to test ergonomic hardware and software. The name Eerie comes from the children’s science fiction series in 19 episodes shown in 1992-3.
Eureka
On 2020-12-07, I purchased a used Asus All-in-One Pro computer. It is a computer inside a screen. This will make a better reserve machine than Eerie. It will be used as a tool for practical electronic hobby activities. One specific need is to construct room controllers. These will probably involve Raspberry Pi units, Power over Ethernet, sensors and touch screens.
Eureka is named after the family science fiction series in 77 episodes shown between 2006 and 2012, made in Burnaby, Chilliwack and Ladysmith, British Columbia.
In the future, a control unit for a CNC milling machine in the workshop will be needed. My last day as a construction worker is scheduled for Monday, 2023-10-30. Even though there is still considerable time before a milling machine controller is needed, it is useful to evaluate the Asus All-in-One unit in this role.
YouTube
If a picture is worth a thousand words, a YouTube video can be worth a hundred pictures. The main problem is that some people produce excessively long videos. Fifteen minutes is about all I can take, unless the producer is extremely pedagogical. Here are the top channels that I watch regularly:
Know the characteristics of the equipment you want and – perhaps more importantly – your reasons for wanting it. Then determine an acceptable price you are willing to pay. That way, if a bargain appears at a price below the target, you can purchase it without hesitation. Regardless of whether the initial price seems high or low, it is the lifespan of the product that is important. An inexpensive device that lasts less than a year can be a much worse investment than something twice as expensive that lasts four years or more.
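That last point can be reduced to a comparison of cost per year of service rather than sticker price; the prices in this little Python sketch are only illustrative.

```python
# Compare devices by cost per year of service, not by sticker price.
def cost_per_year(price_nok: float, lifespan_years: float) -> float:
    return price_nok / lifespan_years

cheap   = cost_per_year(1_000, 1)   # inexpensive device, lasts about a year
durable = cost_per_year(2_000, 4)   # twice the price, lasts four years

print(f"cheap:   NOK {cheap:.0f} per year of service")    # 1000
print(f"durable: NOK {durable:.0f} per year of service")  # 500
```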
On 2022-11-18 at 17:00 a correction was made to the size of the hard-drives used on our server. They have always been 10 TB, not 8 TB, as previously written.