A 10 MB HDD for USD 3 500 in 1980 was not an excessive price. The 1 MB of RAM on the Digital Equipment Corporation (DEC) VAX-11/750 minicomputers I used cost over NOK 1 000 000 each in 1980. That is about USD 200 000 in 1980, or about USD 620 000 today (2019). The HDD pictured would cost over USD 10 000 today (2019), taking the value of money into account, which would make the cost of 1 TB of storage equal to USD 1 000 000 000 today (2019). Yup, that’s one billion dollars!

SSD = Solid State Drive; HDD = Hard Disk Drive.

The Summary:

For daily operations on a desktop or laptop computer, SSDs are better (read: faster, quieter, more energy efficient, potentially more reliable) than HDDs. However, HDDs cost considerably (6.5 times) less than SSDs. Thus, HDDs are still viable for backup storage, and should last at least five years. At the end of that time, it may be appropriate to switch to SSDs, if prices continue to fall.

The Details:

This weblog post is being written as I contemplate buying two more external hard disk drives (HDDs), one white and one blue. These will be yet more supplementary backup disks to duplicate storage on our Network Attached Storage (NAS) server, Mothership, which features 4 x 10 TB Toshiba N300 internal 3.5″ hard drives rotating at 7200 RPM. These were purchased 2018-12-27. While the NAS has its own redundancy, allowing up to two HDDs to fail simultaneously, a fire or other catastrophe would void this protection. Thus, external HDDs are used to store data at a secret, yet secure location away from our residence.

The last time external hard disks were purchased was 2018-09-04. These were Western Digital (WD) My Passport 4TB units, 2.5″ form factor, rotating at 5 400 RPM, with a USB 3.0 connector. One was red (costing NOK 1 228) and the other was yellow (at NOK 1 205). However, we have nine other 2 – 4TB units, some dating from 2012-11-15. Before this, we had at least 4 units with storage of 230 GB – 1 TB, dating to 2007-09-01. (We are missing emails from before 2006, so this is uncertain territory, although, if this information were required, we have paper copies of receipts dating back to 1980.)

The price of new WD My Passport HDD 4TB units has fallen to NOK 1 143. New WD My Passport Solid State Drive (SSD) units cost NOK 2 152 for 1TB, or NOK 3 711 for 2TB. That is a TB price of about NOK 1 855, in contrast to about NOK 286 for a HDD. This makes SSDs about 6.5 times more expensive than HDDs.
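The per-TB arithmetic can be sketched in a few lines (prices are the NOK figures quoted above):

```python
# Price per TB for HDD vs SSD, using the NOK prices quoted above.
hdd_price, hdd_tb = 1143, 4   # WD My Passport 4TB HDD
ssd_price, ssd_tb = 3711, 2   # WD My Passport 2TB SSD

hdd_per_tb = hdd_price / hdd_tb   # about NOK 286 per TB
ssd_per_tb = ssd_price / ssd_tb   # about NOK 1 855 per TB

print(round(hdd_per_tb), round(ssd_per_tb, 1))
print(round(ssd_per_tb / hdd_per_tb, 1))   # SSDs about 6.5x the HDD price
```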

I am expecting to replace the disks in the NAS, as well as on the external drives, about once every five years. Depending on how fast the price of SSDs falls in relation to HDDs, these proposed external HDDs could be the last ones purchased.

As the price differential narrows, other disk characteristics become more important. Read/write speed is especially important for operational (as distinct from backup) drives. Typically, a 7200 RPM HDD delivers an effective read/write speed of 80 – 160 MB/s, while an SSD will deliver from 200 MB/s to 550 MB/s. Here the SSD is the clear winner, by a factor of about three.
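To put those speeds in perspective, here is a rough estimate of how long a full 4TB transfer would take at each speed, assuming sustained sequential throughput (optimistic for both drive types):

```python
# Rough transfer time for 4 TB at sustained sequential speeds (optimistic).
size_mb = 4 * 1_000_000   # 4 TB expressed in MB (decimal units)

def hours(speed_mb_s: float) -> float:
    """Hours to move size_mb at the given MB/s."""
    return size_mb / speed_mb_s / 3600

print(round(hours(120), 1))   # mid-range HDD at ~120 MB/s
print(round(hours(400), 1))   # mid-range SATA SSD at ~400 MB/s
```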

Both SSDs and HDDs have their advantages and disadvantages when it comes to life span.

While SSDs have no moving parts, they don’t necessarily last longer. Most SSD manufacturers use non-volatile NAND flash memory in the construction of their SSDs. These are cheaper than comparable DRAM units, and retain data even in the absence of electrical power. However, NAND cells degrade with every write (referred to as program, in technical circles). An SSD exposed to fewer writes will last longer than one exposed to more. If a specific block is written to and erased repeatedly, that block would wear out before other blocks used less extensively, prematurely ending the SSD’s life. For this reason, SSD controllers use wear levelling to distribute writes as evenly as possible. This fact was brought home yesterday, with an attempt to install Linux Mint from a memory stick on a new laptop. It turned out that some areas of the memory stick were worn out, and the device could not be read as a boot drive. Almost our entire collection of memory sticks will be reformatted, and then recycled, a polite term for trashed!
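A toy model can illustrate why wear levelling matters. This is purely illustrative, not any vendor’s actual controller logic: writes are steered to the least-worn block, so wear spreads evenly instead of killing one hot block:

```python
# Toy wear-levelling sketch: always write to the least-worn block.
# Purely illustrative; real SSD controllers are far more sophisticated.

def write_many(num_blocks: int, writes: int, levelled: bool) -> list[int]:
    """Return per-block erase counts after `writes` writes."""
    wear = [0] * num_blocks
    for _ in range(writes):
        if levelled:
            target = wear.index(min(wear))   # steer to least-worn block
        else:
            target = 0                       # naive: hammer one hot block
        wear[target] += 1
    return wear

# With levelling, 1000 writes over 10 blocks wear each block 100 times;
# without it, one block absorbs all 1000 writes and fails first.
print(max(write_many(10, 1000, levelled=True)))    # 100
print(max(write_many(10, 1000, levelled=False)))   # 1000
```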

Flash memory was invented in 1980, and was commercialized by Toshiba in 1987. SanDisk (then SunDisk) patented a flash-memory based SSD in 1989, and started shipping products in 1991. SSDs come in several different varieties. Triple Level Cells (TLC) = 3-bit cells offering 8 states, with between 500 and 2 000 program/erase (PE) cycles, are currently the most common variety. Quad Level Cells (QLC) = 4-bit cells offering 16 states, with between 300 and 1 000 PE cycles, are starting to come onto the market. However, there are also Single Level Cells (SLC) = 1-bit cells offering 2 states, with up to 100 000 PE cycles, and Multi-Level Cells (MLC) = 2-bit cells offering 4 states, with up to 3 000 PE cycles. More bits per cell results in reduced speed and durability, but larger storage capacity.
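The states-per-cell figures follow directly from 2 raised to the number of bits per cell, which a few lines confirm:

```python
# States per cell = 2 ** bits-per-cell, matching the SLC/MLC/TLC/QLC figures.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in cell_types.items():
    print(f"{name}: {bits} bit(s) per cell -> {2 ** bits} states")
```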

QLC vs TLC Comparisons:

Samsung 860 EVO SSDs use TLCs while Samsung 860 QVO SSDs use QLCs. The 1TB price is NOK 1 645 (EVO) vs NOK 1 253 (QVO), almost a 25% price discount. The EVO offers a 5-year or 600 TB written (TBW) limited warranty, vs the QVO’s 3-year or 360 TBW warranty.

With real-world durability of the QVO at only 60% of the EVO, the EVO offers greater value for money.
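Assuming warranted TBW is a fair proxy for real-world durability, the value comparison can be made concrete with a little arithmetic:

```python
# Price per TB of warranted writes, from the figures quoted above.
evo_price, evo_tbw = 1645, 600   # Samsung 860 EVO 1TB: NOK price, warranty TBW
qvo_price, qvo_tbw = 1253, 360   # Samsung 860 QVO 1TB

print(round(evo_price / evo_tbw, 2))   # NOK per warranted TB written (EVO)
print(round(qvo_price / qvo_tbw, 2))   # NOK per warranted TB written (QVO)
print(qvo_tbw / evo_tbw)               # QVO durability as a fraction of EVO
```

On this measure the EVO is actually cheaper per warranted terabyte written, despite its higher sticker price.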

It should also be pointed out that both the EVO and QVO have a 42 GB cache that allows for exceptionally fast writes up to that limit, but they slow down considerably once that limit has been reached.

In contrast to SSDs, HDDs rely on moving parts for the drive to function. These include one or more platters, a spindle, a read/write head, an actuator arm, an actuator axis and an actuator. Because of this, an SSD is probably more reliable than an HDD. Yet, HDD data recovery is better, if it is ever needed. Several different data recovery technologies are available.

The Conclusion:

The upcoming purchases of two My Passport 4TB external HDDs may be my last, before going over to SSDs for backup purposes, both on internal as well as external drives. Much will depend on the relative cost of 10TB SSDs vs HDDs in 2023, when it will be time to replace the Toshiba N300 10TB HDDs.

For further information on EVOs and QVOs see Explaining Computers: QLC vs TLC SSDs; Samsung QVO and EVO.

Cut/Copy and Paste

The most influential computer ever made: the original Xerox Alto, featuring a bit-mapped black-and-white display sized 606×808 (the same proportions as a regular 8.5″×11″ sheet of paper, aligned vertically), a 5.8 MHz CPU, 128 kB of memory (at a cost of $4 000), a 2.5 MB removable-cartridge hard drive, a three-button mouse, a 64-key keyboard and a 5-finger key set. It was on such a machine that Bravo and Gypsy were developed, and cut/copy and paste invented. (Photo: Xerox PARC)

Larry Tesler (1945 – ) invented cut/copy and paste. Between 1973 and 1976, Tesler worked at Xerox PARC (Palo Alto Research Center), in Palo Alto, California, on the programming language Smalltalk-76, and especially the Gypsy text editor, referred to then as a document preparation system. It was on this project that he implemented a method of capturing text and inserting it elsewhere.

Xerox PARC was initiated by Xerox Chief Scientist Jacob E. “Jack” Goldman (1921 – 2011), who had previously worked at Carnegie Tech and directed the Ford Scientific Laboratory. He hired a physicist, George Pake (1924 – 2004), to create it in 1970.

Xerox PARC was largely responsible for developing laser printing, the Ethernet, the modern personal computer, the graphical user interface (GUI) and desktop paradigm, object-oriented programming, ubiquitous computing, electronic paper, amorphous silicon (a-Si) applications, and advancing very-large-scale integration (VLSI) for semiconductors.

For a more complete story, see: Larry Tesler, A Personal History of Modeless Text Editing and Cut/Copy-Paste (2012)

While most people focus on the cut/copy-paste tool, the concept of modeless software had even greater impact. A mode is a distinct setting within a computer program, in which the same user input will produce different results, because of other settings. Caps lock, when pressed, puts the user’s typing into a different mode, CAPITAL LETTERS. If it is pressed a second time, the original mode will be reactivated, resulting in lower-case letters.
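The caps lock example can be sketched as a tiny state machine, where identical key presses produce different output depending on a hidden mode (an illustration, not any real keyboard driver):

```python
# Minimal sketch of a modal interface: the same input produces different
# output depending on a hidden mode, as with caps lock.

class Keyboard:
    def __init__(self):
        self.caps_lock = False   # the mode

    def press(self, key: str) -> str:
        if key == "CAPS":
            self.caps_lock = not self.caps_lock   # toggle the mode, no output
            return ""
        return key.upper() if self.caps_lock else key.lower()

kb = Keyboard()
out = kb.press("a") + kb.press("CAPS") + kb.press("a") + kb.press("CAPS") + kb.press("a")
print(out)   # "aAa" — identical key presses, different results
```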

Interface modes are generally discouraged because of their potential to induce errors, especially when the user is expected to remember which mode the interface is in. The situation is somewhat better if there is an on-screen state/mode indicator, such as a change in the colour of an icon, when a mode change is made.

If the user is unaware of an interface mode, there may be an unexpected and undesired response. Mode errors can be disorienting as the user copes with a transgression of user expectations. Not all mode changes are initiated by users.

Mode changes can also be initiated by the system, by previous users, or by the same user, who has forgotten the state change. In such a situation, an operation performed with the old mode in mind will disrupt user focus as the user becomes aware of the mode change. This is especially important when a user cannot find how to restore the previous mode.

Prior to Gypsy, Butler Lampson (1943 – ), Charles Simonyi (1948 – ) and others developed Bravo at Xerox PARC in 1974. It was a modal editor where characters typed on the keyboard were usually commands to Bravo, except when in “insert” or “append” mode. Bravo used a mouse to mark text locations and to select text, but not for commands.

Although similar in capabilities to Bravo, the user interface of Gypsy was radically different. In both, a command operated on the current selection. But Bravo had modes and Gypsy didn’t. In Bravo, the effect of pressing a character key depended on the current mode, while in Gypsy, pressing a character key by itself always typed the character.

In the Wikipedia article on Gypsy, the difference between Bravo and Gypsy is illustrated by three examples:

  1. Insert In Bravo’s Command Mode, pressing “I” entered Insert Mode. In that mode, pressing character keys typed characters into a holding area (“buffer”) until the Escape key was pressed, at which time the buffer contents were inserted before the selection and the editor returned to Command Mode.
    In Gypsy, no command or buffer was needed to insert new text. The user simply selected an insertion point with the mouse and typed the new text. Each inserted character went directly into the document at the insertion point, which was automatically repositioned after the new character.
  2. Replace In Bravo, to replace existing text by new text, the user pressed “R” to enter Replace Mode. That mode was just like Insert Mode except that the buffer contents replaced the selection instead of inserting text before it.
    In Gypsy, to replace text, the user simply selected the old text and typed the new text. As soon as the user began to type, Gypsy deleted the old text and selected an insertion point in its stead.
  3. Copy In the then-current version of Bravo, the user selected the destination, pressed “I” or “R” to enter Insert or Replace Mode, selected the source (which highlighted differently from the destination), and pressed Escape to perform the copy and return to Command Mode. While in Insert or Replace Mode, the user could scroll and could select a source, but could not invoke another command, such as opening a different document. To copy text between documents was more complex.
    In Gypsy, the user could select the source text, press the “Copy” function key, select the destination text or insertion point, and press the “Paste” function key. Between Copy and Paste, the system was, as usual, not in a mode. The user could invoke other commands, such as opening a different document.
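The modeless copy/paste in the examples above can be sketched in a few lines. This is a simplified illustration, not the original Gypsy implementation: between copy() and paste() the editor is in no special state, so any other command could be invoked in between:

```python
# Modeless copy/paste sketch: between Copy and Paste the editor is in no
# special mode, so any other command could be invoked in between.

class Editor:
    def __init__(self, text: str):
        self.text = text
        self.clipboard = ""   # survives between commands; no mode involved

    def copy(self, start: int, end: int) -> None:
        """Select text and press the Copy key."""
        self.clipboard = self.text[start:end]

    def paste(self, at: int) -> None:
        """Select an insertion point and press the Paste key."""
        self.text = self.text[:at] + self.clipboard + self.text[at:]

ed = Editor("hello world")
ed.copy(0, 5)    # select "hello", press Copy
ed.paste(11)     # select an insertion point at the end, press Paste
print(ed.text)   # "hello worldhello"
```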

Fewer modes meant less user confusion about what mode the system was in and therefore what effect a particular key press would have. Gypsy and Bravo both used a three-button mouse, where the second and third buttons were intended for experts.

New users could learn to work with Gypsy in only a few hours. Drag-through selection, double-click and cut-copy-paste were quickly adopted elsewhere, and have become standard on most text editors.

This text was originally written in June 2009 as a draft for a weblog post. It was removed from the weblog, but subsequently revived without the original date and time stamps. New text was added at irregular intervals, including 13 May 2016, 23 April 2018, and 06 May 2019. The publication date of this weblog post celebrates the 10th anniversary of this weblog.

The Charm of the Demoscene

A Commodore Amiga 2000 with 3.5 inch floppy drive, 20 MB hard drive, keyboard and mouse. A cathode ray tube (CRT) monitor is missing. (Photo: Trafalgarcircle)

Imagine home computing in the late 1970s. Machines are weak. Software is unrefined. Popular models include Apple II and its clones, ZX Spectrum, Commodore 64 and Amstrad CPC. The IBM PC, and its clones, have not yet arrived.

I remember a friend showing off his Apple II. It would show a line of text, Name? followed by a blinking cursor. When I typed in my name, and pressed return, it would respond by writing: Hello, Brock! It was easy to be impressed by technology in the late 1970s.

Inspiration for today’s demoscene first came in 1980, when Atari used a looping demo with visual effects and music to show off the features of the Atari 400/800 computers.

The demoscene is a type of computer art that will be described in more detail, and in chronological order, later in this post. It has a darker past, but a lighter present. Many of the terms used in this weblog post will be defined. It is an artform that generally avoids mainstream exposure. According to some sources, about 10 000 people are involved in it.

Cracker = a programmer who alters video game code to remove copy protection. Cracking crew is used where more than one person is involved in the cracking process.

Cractro = (crack intro) an introductory screen used by a cracker/ cracking crew to claim credit for cracking a game. They became very complex, serving as a medium to demonstrate superior programming skills, advertise BBSes, greet friends, snub rivals and gain recognition.

More important in Europe than in other parts of the world, the cractro transmutes into the demo. A cracker community emerges, then evolves into an entity independent of gaming and software sharing.

New machines are better suited to support the scene, most specifically the Commodore Amiga and the Atari ST. Some IBM clones are acceptable, if they have sound cards. Not the Apple Macintosh.

More inspiration came in 1985 when Atari demonstrated its latest 8-bit computers with a demo that alternated between a 3D walking robot and a flying spaceship.

That same year, Commodore released a signature demo showing the hardware capability of its Amiga machine, with a large, spinning, checkered ball that cast a translucent shadow.

Demo = a self-contained, originally extremely small, computer program that produces an audio-visual presentation. Its purpose is to demonstrate the programming, visual art and musical skill of its producer.

Demoparty = a festival where demos are produced during a day- or weekend-long coding marathon, then presented, voted on by attendees, and released, originally on floppy disks and on bulletin board systems (BBSs).

Compo = a demoparty competition, traditionally divided into categories where submissions must adhere to certain restrictions: production on a specific type of computer, or a maximum data size. Submissions are almost always rendered in real time. This contrasts with animated movies, which simply record the result of a long and intensive rendering. The purpose of a compo is to push computing hardware to its limits.

Demoscene = computer art subculture focused on producing demos, international in scope.

Demoscener = a computer artist focused on technically challenging aesthetics, but with a final product that is visually and aurally pleasing.

Demogroup = a small, tightly-knit group of demosceners, centered around a coder/ programmer, a musician and a graphician. Some groups may have supporting roles and grow to tens of people, but this is the exception. Demogroups always have names. Individuals within the group have unique handles for self-expression. Demogroups use wordmarks, logos, catchphrases and slogans. They are skilled at public relations and even human resource management. The demogroup is undoubtedly the most important social unit in the demoscene.

While belonging to a group is often synonymous with being a demoscener, there are individual productions. Not infrequently, this individual will adopt a group name. There are also fake groups, involving secret identities for making humorous, political or vulgar productions without harming the reputation of the original group. Individuals invent new handles, or pseudo-pseudonyms.

There used to be an American demoscene, but it barely exists today. Who killed the American demoscene? The simple answer is the American crackdown on software piracy. European copyright law only criminalized for-profit breaches. In many European countries, including the Netherlands, Greece, Finland, Sweden and Norway, it was possible for the cracker to repent and to transform into a law-abiding demoscener.

The Amiga 2000

Our first family computer was a Commodore Amiga 1000, on loan to us while we waited for our Amiga 2000 to arrive, which it did some weeks later. In 1986/87, these were the best residential computers money could buy. If I remember correctly, the Amiga 2000 cost NOK 19 000 (a little over US$ 2 000 then, or about US$ 4 000 in 2019).

We bought the Amiga while living in Bodø, in Northern Norway. The company that sold it consisted of two young male idealists, who were among the most active Amiga enthusiasts in the country. In addition to selling machines, they developed software and also published a Norwegian language Amiga magazine. Some of my work appeared there. They had the largest collection of 3.5 inch Amiga floppy disks in Norway, which contained software and content on every conceivable topic. They made cractros.

The Amiga 2000 was an advanced machine. Some even claimed at the time that it would last into the 21st century. In contrast to the Amiga 1000, it allowed expansion cards to be added internally: SCSI host adapters, memory cards, CPU cards, network cards, graphics cards, serial port cards, and PC compatibility cards were available. We used a SCSI adapter with a hard drive, and a PC card that allowed us to run both Amiga and PC-DOS programs. The Amiga 2000 had five Zorro II card slots; the motherboard also had four PC ISA slots, two of which were inline with Zorro II slots for use with the A2088 bridgeboard, which provided IBM PC XT compatibility.

There were about 4 850 000 Amiga machines of all types sold. The machines were most popular in the United Kingdom and Germany, with about 1.5 million sold in each country. Sales in the high hundreds of thousands were made in other European nations. The machine was less popular in North America, where only about 700 000 were sold.


I needed to read no further than the first word of an announcement for FOSDEM (Free and Open Source Developers’ European Meeting) to see that it is not an event for me. It is not that I fear that my non-drinking won’t be tolerated. Rather, I don’t want to have to put up with inebriated or half-inebriated people.

Far too often in my working life, I have had to attend events populated by people who didn’t know the limits of propriety, after consuming alcohol.

Even though I share many of the aspirations of FOSDEM, this ad clearly demonstrates that this non-commercial, volunteer-based free and open-source software development community, is not sufficiently mature to warrant my attention, at least at this annual event held at the Université Libre de Bruxelles since 2000.

Alcohol use is a health concern, and with so much abuse in society it should not be promoted.

Good Enough Websites

Websites have many uses. Perhaps two of the more important involve the sharing of information, through emails and web logging, or blogging. The first email was sent by Ray Tomlinson (1941-2016) in 1971. He is quoted as saying that these first “test messages were entirely forgettable and I have, therefore, forgotten them.” The first web log post is 25 years old this week. It was published on 1994-01-27 by Justin Hall. Here is a link to it:

Statistics are hard to come by, but it seems at least half a billion people have their own web logs. On 2017-09-14, one blogger reported 440 million blogs just on Tumblr, Squarespace and WordPress. Most of these people, including myself, have neither the interest nor the skills to set up a website that follows best practices. I am not sure that they even want to; I don’t. Instead, they want something that is simple but good enough for the needs of themselves and their families.

Why stress to obtain the best, when good enough will do? Patmos, Farm vehicle near Chora, Greece. Photo: Brock McLellan/ Digitization: Patricia McLellan. 1979-07-27.

Having gained some experience through work with Moodle, a Learning Management System, and being dissatisfied with a couple of web hosting providers that were supposed to support this product, I opted for as a host, on the advice of someone I trust. The first lesson, then, is to ask for help from someone who 1) has experience with family-oriented websites, 2) is trustworthy, and 3) knows you, your family and your situation.

The proposed solution, which was implemented, may not be the world’s best hosting service, but it is certainly adequate, inexpensive and good enough for my family’s purposes. No issues have arisen during the past year that make me want to change vendor.

We purchased, or more correctly rent on an annual basis, a domain based on our family name, which provides email addresses for members of our family, but they are not in active use by everyone. We also paid for “Starter” web hosting services on a server.

Like most web hosts, our provider tries to make customers feel that they are getting a lot, or at least something, for their $3 a month in hosting fees. They try to impress with a content list that includes ten items: Unlimited bandwidth; Email on your own domain; Unlimited email accounts; Unlimited email aliases; Spam & Virus Protection; Fully featured professional webmail; Individual Calendar & Address Book per email account; IMAP/POP; Single domain; and, 25 GB SSD Storage. While I have a theoretical interest in some of these services, including spam and virus protection, the main product being purchased is storage space. This storage is being used for emails, as well as web logs.

Now, a second domain name has been purchased/ rented for a family member with a different surname. This has necessitated an upgrade to a “Professional Plus” web hosting service, offering hosting of multiple domains, eight times more storage (200 GB) as well as Backup & Restore facilities. Above this there is a “Business” level that offers 500 GB of storage, as the only significant difference.

Sometimes an upgrade is necessary. But that does not mean it has to be expensive. Honda Van, Exeter, England. Photo: Brock McLellan/ Digitization: Patricia McLellan. 1979-06-21.

An aside: Wouldn’t it be wonderful if names didn’t have to have elitist attributes? Why not name products after winds? The breeze, the gale and the hurricane. Or birds? The crake, the coot and the crane. I would even accept the apprentice, the journeyman and the master, or even a simple level 1, 2 and 3.

The needs of most families are relatively simple. They are not running businesses that need complex e-commerce solutions, with marketing and sales support, traffic management and guaranteed up-time. Everyone finds downtime detrimental, but it is something that can be lived with.

So, one of the first questions to ask is: Why not just use Gmail/ Hotmail/ Outlook/ Yahoo? Yes, some of these offer lots of storage space, spam and virus protection, and much more. Google offers 15 GB of storage for each user, Yahoo offers 1 TB. Personally, I am not using more than 10 GB, for email and web log, and other family members are using considerably less.

Similarly, one can ask: Why not just use Facebook to post information/ opinions that would otherwise end up in a web log?

The main reason to avoid multi-national corporations, is to protect families from the effects of long-term exploitation. These corporations are mining data and monetizing it. Yes, that is a big word, and it means they are making money off of your data. In the long-term, this will make you and your family poorer and less secure, while the elites grow richer. By using your own website, you will prevent these corporations from accessing the data they need to manipulate consumers and voters. These corporations, and a few others, are instruments effectively used by an elite, to consolidate their power.

Another important reason for having a family domain is for blogging. Roger McNamee, an early investor in Facebook, has written that information and disinformation look the same; the only difference is that disinformation generates more revenue, so it gets better treatment at Facebook or Google. He claims that there is no way for these giants to avoid influencing the lives of users and the future of nations. Recent history suggests that this threat to democracy is real. McNamee proposes fundamental changes to their business models to reduce the harm they cause to democracy.

The rest of humanity cannot wait for these enterprise Titanics to turn in an attempt to avoid icebergs of dictatorship and oppression. People must take control of their own lives back again, to the degree that this is possible. This means reducing our presence on Google, Facebook and Twitter, and increasing our presence on our own personal websites.

Blog is short for weblog, an online journal or informational website displaying information in posts, generally accessed in reverse chronological order. Some blogging platforms are run by the Titan(ic)s. Blogger (previously called Blogspot) is owned by Google. Tumblr is owned by Verizon. Instagram is not so much a blog, as a photo and video-sharing social networking service owned by Facebook. Two open source platforms are WordPress and Joomla.

Joomla is powerful and flexible enough to be used to build any kind of website or blog. There are enough templates to choose from, to customize any site. Extensions add more features. Yet, because Joomla has a shorter reach than WordPress, there are fewer themes and addons, and less support. Backups, updates, and security take more work.

WordPress provides sufficient control over a website, and allows one to add extra features like forums and an online store, if that is the direction of travel. Website management can have its challenges, especially things like backups and security. Despite some imperfections, WordPress is the platform used on Brock at Cliff Cottage. Personally, I do not see any advantages to throwing away my insights with this platform, just to select a different platform that will require more time to learn.

Video bloggers are called vloggers. Many choose to upload their videos to YouTube, another Google-owned site. Here, one creates a free and simple vlogging channel, with an existing audience close by. Other websites for video content include Vimeo and Veoh. However, there is nothing to prevent a vlogger from using their own site to host their own videos. This, in fact, is in the spirit of this weblog post. WordPress offers several plugins especially designed for vlogging.

Some web logs are very specialized: auto repair, cooking, fashion, music, Norse mythology and robotics come to mind. In addition to vloggers, there are podcasters, who make web logs featuring audio tracks. Some people create portfolios of their work. Others just want a place to display their photographs, or their paintings/ drawings/ etchings. It is all up to the individual. Artists and artisans may want to upgrade a website for business purposes, including the display and sale of merchandise. It is relatively easy to build out WordPress with plugins to accommodate new needs.

WordPress can be updated using plugins, into a website for e-commerce. Three-wheeled electric milk van, Exeter. Photo: Brock McLellan/ Digitization: Patricia McLellan. 1979-06-22.

There are several WordPress books for beginners. The one I prefer is: Michal Bradek 2017 WordPress Guide for Beginners. The only challenge with this book is that it is based on WordPress version 4.8. Version 5.0 was released 2018-11-19, and is becoming standard. The greatest change with this update is the Gutenberg editor, which is actually easier to use than the previous “classic” editor, but is different – so some skills have to be unlearned, and others learned.

V2: Minor corrections made 2019-01-24 19:47


As we enter 2019, Cliff Cottage is transitioning.

Mothership has been selected as the generic name for the constellation of products and services provided by the central server rack at Cliff Cottage. While cloud is a buzzword referring, especially, to somebody else’s server, we tried to find a specific cloud variety that we could use for a name. Our choice refers to one of the most beastly types of cloud found on earth.

Mothership Clouds, also referred to as Supercell Thunderstorms, bring long-lived, dangerous storms with strong updrafts and rotation. They generate violent (F2-F5) tornadoes, cause downburst damage and produce large hailstones. Warm, humid conditions promote rapid lifting of air, while quick changes of wind speed and/or direction increase rotational speed.

Mothership Cloud (Photo: Nevadanista)

A mothership is also a large vehicle/ vessel/ craft that leads, serves or carries other smaller vehicles/ vessels/ craft, including aircraft or spacecraft. For our purposes, it is a large digital device serving a number of smaller devices/ computers/ peripherals.

For the past 14 years, we have used an ADSL-based internet connection, which was a dramatic improvement over a dial-up modem. We have now gone over to fiber-optic broadband and cut out our landline. Our handheld personal devices, aka cell phones, are being updated to more advanced variants. We have replaced our inkjet printer with a laser printer. CAT 6A cables are being installed throughout the house. While our network speed is currently 50 Mb/s up and down, increasing it to 1 Gb/s is simply an email away. So this is probably the last major communications upgrade in our lifetime.

In another post, a clustered NAS (Network Attached Storage) server system has been discussed (2018-06-21). This is still the goal. While we are not there yet, we are replacing our current NAS, with one designed and built by Alasdair. While we previously maxed out at 24 TB of data, the new NAS will start off with 40 TB. It is expandable to 120 TB. While many of the components are old and used, they are more appropriate for our needs. Typically, they are commercial products, produced by Cisco, but made redundant in commercial environments.

It is not my intention to publish further details about the Mothership in this web-log, at the moment. Rather, detailed information will be made available after a period of implementation and testing, to ensure that proposed solutions work properly.

If you, your close friends or family have developed technological solutions to modern problems, please consider making them freely available, and publishing them in a weblog, or through other channels.


The Old Colossus

Colossus was the world’s first digital, electronic, programmable computer, although it was programmed by switches and plugs and not by a stored program. It was built in 1943–44 at the Post Office Research Station at Dollis Hill, and operated at Bletchley Park, England. It was designed by research telephone engineer Tommy Flowers (1905-1998), assisted by William Chandler, Sidney Broadhurst and Allen Coombs, with Erie Speight and Arnold Lynch developing the photoelectric reading mechanism. Ten machines were built, and were used for military (decryption) purposes during World War II.

Colossus, the world’s first electronic, digital, programmable computer, built 1943-44, on display at The National Museum of Computing, at Bletchley Park, England.

A Colossus machine used a massive amount of electrical energy (8.5 kW) compared with today’s devices (sometimes less than 50 W), but it performed, for its day, massive amounts of computation, the value of which far exceeded its electrical consumption. 272 women (Wrens) and 27 men were needed to operate ten machines.

Fast forward to today. My aspiration for the Internet (and computing in general) is that it will (help) transform the world by allowing everyone, including the poorest, access to vital information on numerous topics, including but not limited to: weather and climate, health, nutrition, education, appropriate technology, assorted innovations, ethics and art. We must treat all people as equal citizens with dignity, welcome in a digital world that is still in the process of being created. We must forge peace, not wage war!

The New Colossus

The world anno 2019 does not need an old colossus. Big data, and the information derived from it, fuels the world. A new colossus is needed: server farms that can provide data and information to everyone. Unfortunately, the major technological firms are less interested in supplying data than they are in collecting it, especially personal data.

The new colossus has an energy challenge. For every watt needed to run a server, half a watt is needed to cool it. Selecting a location for a server farm can be as important as selecting a processor, to achieve energy efficiency. Iceland is a preferred location, not just because of its cold climate, but also because of its cheap and carbon-neutral geothermal electricity. Fibre-optic cables connect it to North America and Europe. Other prime locations are in Canada, Finland, Sweden and Switzerland. At the best-regulated sites, waste heat from servers warms residential, commercial and even factory buildings, offsetting the energy used for computing.

A modern server farm, at Visa Data Centre, Basingstoke, England

An Aside

It is in this spirit that the words of Emma Lazarus (1849–1887) are repeated. She wrote them in 1883 to raise money for the construction of a pedestal for the Statue of Liberty. In 1903, her sonnet was cast onto a bronze plaque and mounted inside the pedestal’s lower level:

The New Colossus

Not like the brazen giant of Greek fame,
With conquering limbs astride from land to land;
Here at our sea-washed, sunset gates shall stand
A mighty woman with a torch, whose flame
Is the imprisoned lightning, and her name
MOTHER OF EXILES. From her beacon-hand
Glows world-wide welcome; her mild eyes command
The air-bridged harbor that twin cities frame.
“Keep, ancient lands, your storied pomp!” cries she
With silent lips. “Give me your tired, your poor,
Your huddled masses yearning to breathe free,
The wretched refuse of your teeming shore.
Send these, the homeless, tempest-tost to me,
I lift my lamp beside the golden door!”


Computer is an inappropriate term to describe personal devices used to access and manipulate data. These devices seldom compute! Several equipment manufacturers produce devices for the poor of this world. Often, these are referred to as phones, but larger devices, such as tablets, laptops and desktop machines, are also provided. One of the most important device categories was the netbook, which emerged in 2007 and died in 2012. The netbook did not simply appear, but was part of an evolution that had a past and has a future.


Miniaturization has always been important for computer development, and I have always been attracted to small computers. One of the first of these was the Apple eMate 300. It had a 172 mm diagonal screen (480 x 320 pixel resolution) with a backlit 16-shade grayscale display, a stylus, a keyboard about 85% the size of a standard keyboard, an infrared port and standard Macintosh serial/LocalTalk ports. Its rechargeable batteries lasted up to 28 hours on a full charge. It used a 25 MHz ARM 710a RISC processor. It was first introduced on 1997-03-07. While I waited patiently for it to come to Norway, it was discontinued less than a year later, on 1998-02-27.

The eMate was not a netbook, but an inspiration. While the Internet existed, it was nothing like it is today. Public and commercial use of the Internet began in mid-1989. By the end of 1990, Tim Berners-Lee had created WorldWideWeb, the first web browser, and had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup Language (HTML), the first HTTP server software (the first web server) and the first Web pages (which described the project). By 1995 many of the components that characterize the current concept of the Internet had been developed, including near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol, or VoIP), two-way interactive video calls, discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data were transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. About the only things missing were ordinary people, and internet speeds beyond what a dial-up modem could provide.

While ADSL was available in Norway from about 1998, we were only able to obtain it in 2004. Norway was a rich country at this time, so one can only imagine what was happening (or more correctly, not happening) in the poorer regions of the world. Even then, we lived almost at the limit of what could be provided through the copper wires of the telephone system. After using it for fourteen years, we have now progressed to fiber broadband. We have chosen 50 Mbps, but could have chosen anything up to 1 Gbps, if we had wanted to pay for it. We didn’t.

Apple Newton eMate 300. Photograph: Ryan Schultz 2005-03-19


The One Laptop Per Child (OLPC) project is known for producing a durable, cost- and power-efficient netbook for developing countries, and it is regarded as one of the major efforts that encouraged computer hardware manufacturers to create low-cost netbooks for the consumer market. Seymour Papert (1928-2016) provided the pedagogical inspiration with his version of constructionism, which encouraged the use of computers by young children to enable full digital literacy. Nicholas Negroponte (1943-) was the project’s chief promoter, starting in 2005 at the World Economic Forum in Davos, Switzerland.

OLPC XO-1 original design proposal


A netbook is a category of small, lightweight, inexpensive laptop computers. Netbooks are legacy-free, meaning that USB ports replace bulky components such as floppy drives and device-specific ports. This allows the machines to be more compact.

The first real netbook was an Asus Eee PC 700. Originally designed for emerging markets, the 225 × 165 × 35 mm device weighed 922 g. By today’s standards it was very primitive, with a 178 mm diagonal display (800 x 480 pixels), a keyboard approximately 85% the size of a normal keyboard (yes, the same as an eMate 300), a 2, 4 or 8 GB solid-state drive, and Linux with a simplified user interface.

I was immediately attracted to the Asus Eee PC 700, the world’s first netbook, launched 2007-10-16.

Between 2009 and 2011, netbooks grew in size and features, and converged with smaller, lighter laptops. At this point, the netbook’s popularity fizzled as manufacturers added features to prevent netbooks from cannibalizing their more profitable laptops. Peak Netbook, at about 20% of the portable computer market, was reached in early 2010. After that, netbooks started to lose market share. In 2011 tablet sales overtook netbook sales. The netbook era ended in 2012, when netbook sales fell by 25 percent, year-on-year. Asus, Acer and MSI announced they would stop manufacturing their most popular netbooks in September 2012. At the same time, Asus announced a focus on their Transformer line.


Chromebooks are in many ways the new netbooks. They are laptops and tablets running the Linux-based Chrome OS, used to perform a variety of tasks in a browser, with most applications and data residing in the cloud (read: servers run by major corporations) rather than on the machine. They were first introduced by Acer and Samsung in June 2011.

In 2013, Chromebooks became the fastest-growing segment of the PC market. While current Chromebooks function better offline than before, they still depend on an Internet connection to function optimally.

Netbook sized computers, including Chromebooks, offer several distinct advantages. First, their compact size and weight make them appropriate in compact work areas, such as cafes and classrooms. Second, the size makes them easy to carry and transport. Third, they are low priced. They are fully capable of accomplishing general tasks: word processing, presentations, Internet access and multimedia playback.

In North America especially, schools have limited budgets for computing resources. This has led to a rise in tablets, including iPads. Yet Chromebooks provide a more complete hardware solution, including a full-size keyboard. There has been a transition away from tablets to Chromebooks, so that almost 60% of school computers are now Chromebooks. However, the most important factor for success in education has little to do with the physical machine; it has to do with the human resources needed for large-scale deployment. Chromebooks save IT (information technology) workers time!

An Acer Chromebook 11. This is the same type of machine that was purchased 2018-11-16.

Our Computers

It is not my job to support computer manufacturing companies so that they can reward executives with excessive salaries and bonuses. Thus, I want to avoid purchasing expensive computer equipment, and stick to minimal products. For example, much of our server equipment is purchased used, and I am a proponent of single board computers, such as the Raspberry Pi.

Here is a history of our netbook related computers since 2010, with comments.

An Asus EEEbox 1501p was purchased 2012-10-28 as a media centre. It ran assorted versions of Linux Mint throughout its life. Unfortunately, it always ran hot, and developed heating issues that required repair. It was replaced by an Asus Tinkerboard, a Raspberry Pi clone, purchased 2017-03-31.

Since my employer supplied me with a laptop, I never felt that I needed a second one. Thus, I used a 2010 Acer Aspire tower for 6.5 years, until it was replaced with an Asus VivoMini VC65 desktop in 2017. Both of these were used at a height-adjustable desk.

When I retired at the end of 2016, and handed in my workplace laptop, I used Trish’s retired Asus, a U31F 13.3″ laptop originally purchased 2011-02-13, with an i3 core, 4 GB RAM and a 500 GB HDD, running Windows 7 before it was modified to run Linux Mint. Trish had retired this machine because it was running hot and the battery needed replacement. On 2017-05-15, I decided not to replace the battery, but instead bought an Asus Vivobook E402SA. This was not a good decision. While this new machine came with Windows 10, it was a direct descendant of the Asus EeeBook. Linux Mint was installed on the machine, but it never worked correctly. The screen would freeze, and the machine would have to be powered off and restarted. This could happen several times a day. It stopped working entirely in September 2018. Undoubtedly my worst computer purchase ever.

An Acer Chromebook 11 was acquired on 2018-11-16. This version allows Linux apps, such as LibreOffice (for word processing, presentations and spreadsheets), the Mozilla Firefox web browser and the Thunderbird mail client, to be installed. In addition to its role as a workhorse, it was also purchased so that I could gain hands-on experience of Chromebooks as a concept. The main problem with the machine is that the currently installed version of Firefox, ESR (Extended Support Release), will not play audio, although it will display video. It will be uninstalled and replaced with other versions, to see if one works.

Most people do not need high-end devices. By opting for machines with modest specifications, buyers help ensure that modest machines will continue to be made.

Appendix: Asus

Former Asus CEO Jerry Shen attracted my attention in 2007 when he created (what I regard as) the first netbook, the Eee PC. Shen is now off to lead a new AIoT (AI = artificial intelligence; IoT = internet of things, often referring to smart home applications) startup, iFast.

Wikipedia describes Asus as “a Taiwanese multinational computer and phone hardware and electronics company headquartered in Beitou District, Taipei, Taiwan. Its products include desktops, laptops, netbooks, mobile phones, networking equipment, monitors, WIFI routers, projectors, motherboards, graphics cards, optical storage, multimedia products, peripherals, wearables, servers, workstations, and tablet PCs. The company is also an original equipment manufacturer (OEM). Asus is the world’s 5th-largest PC vendor by 2017 unit sales.”

On 2018-12-13, Asus chairman Jonney Shih announced a comprehensive corporate transformation involving the resignation of CEO Jerry Shen, a new co-CEO structure, and a shift in mobile strategy to focus on gamers and power users. There will be more ROG Phones and fewer ZenFones. During his 11 years as CEO, Shen oversaw the launch of the PadFone, Transformer, ZenBook and ZenFone series. It may seem a worrisome development, but the place abandoned by Asus will undoubtedly be taken over by other companies who see the merits of supplying devices to people with lower incomes.

The McLellans have a history of buying Asus technology, including numerous laptops, a NAS (Asustor = Network Attached Storage), a home theatre desktop (EEEBox), Tinkerboard single-board computers, repeaters, etc. Of course, we also have family members who use Apple products exclusively, and another family member who uses Chinese-developed products such as Lenovo computers and Huawei phones. Even I am forced to admit that my latest purchase was an Acer, after a difficult year of owning an Asus Vivobook.



Ethan & Ethel 04: Computer Control Basics

In the last post, Ethan & Ethel had to do a lot of work to keep track of their heating costs.

Time used = Time turned off – Time turned on. Example: 17h05m – 15h31m = 94m

They wrote down the time they turned on their heater, and then the time when they turned it off. They then subtracted the “on” time from the “off” time to find the number of minutes the heater was on. This had to be repeated for every visit to the workshop with the heat on. At the end of the month, they had to add all of these minutes together to find their monthly usage. What a boring job, and so unnecessary when a computer can do it automatically! All that is needed is a few lines of code. Code that has already been written, and is waiting to be reused.
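The subtraction and the month-end sum are the whole computation. Here is a minimal sketch in Python, using the 15:31 to 17:05 example above plus two invented visits:

```python
from datetime import datetime

def minutes_used(turned_on: str, turned_off: str) -> int:
    """Minutes between switch-on and switch-off, for same-day clock times."""
    fmt = "%H:%M"
    delta = datetime.strptime(turned_off, fmt) - datetime.strptime(turned_on, fmt)
    return int(delta.total_seconds() // 60)

# One workshop visit: heater on at 15:31, off at 17:05.
print(minutes_used("15:31", "17:05"))  # 94

# A month of visits is just a sum over the on/off pairs.
visits = [("15:31", "17:05"), ("09:00", "10:30"), ("13:15", "13:45")]
total = sum(minutes_used(on, off) for on, off in visits)
print(total)  # 214
```

In a real installation the computer would record the timestamps itself, which is exactly the point.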

Workshop computer control means that computing equipment runs hardware and software that sense and activate workshop activities.

Stop the Press!

This post was originally written 2018-03-02. It is now 2018-08-11, more than five months later. Reviewing it at the time, I was dissatisfied with the previous paragraph, which continued with: “a Raspberry Pi single-board computer will be used to run Home-Assistant software. The raspberry pi will be connected to two different Arduino micro-controllers using USB cables.”

The problem, both then and now, is that while the above solution would work, it is not optimal. Why should one use three components, when one should do? Ideally, a single microprocessor should be able to: 1) run home automation software, in this case Home-Assistant; 2) connect to analogue sensors and convert analogue input data to digital data; 3) connect digitally to relays that trigger actuators; 4) communicate with other components on the local area network using wires (Ethernet); and 5) receive electrical power over those same wires.

The best way forward to gain an understanding of workshop problems is to pretend that the ideal solution exists: a fantasy Unicorn IoT (Internet of Things) microcontroller.


If Ethan and/or Ethel are to work in a computer-controlled workshop, one of the first things they need to control is the workshop computer. It should be designed in such a way that it can respond to their needs, turning lights, heat, tools, etc. on and off.

While a Raspberry Pi (and its clones and near relatives) is capable of running this software, an Arduino microcontroller is not.


In a workshop there can be a large number of different sensors measuring all sorts of things. There can also be a large number of actuators using the same data. For example, both a heater and a vent may use data from a room temperature sensor, but in different ways. The heater may be activated if the work space is too cold. Once it gets warm enough, it will shut off. If the temperature continues to rise, then a different actuator, the vent, will be activated, but only if the outside temperature is lower than the inside temperature. To determine that, there needs to be a second temperature sensor, this one measuring the outside air.
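The heater-and-vent rules described above are simple comparisons. A sketch, with hypothetical setpoints (a 20 °C target for heating, venting above 24 °C):

```python
def heater_on(inside_c: float, setpoint_c: float = 20.0, hysteresis: float = 1.0) -> bool:
    """The heater runs when the workshop is below the setpoint, minus a little hysteresis."""
    return inside_c < setpoint_c - hysteresis

def vent_open(inside_c: float, outside_c: float, max_c: float = 24.0) -> bool:
    """The vent opens only if it is too warm inside AND the outside air is cooler."""
    return inside_c > max_c and outside_c < inside_c

print(heater_on(17.0))        # True: too cold, heat
print(vent_open(26.0, 18.0))  # True: warm inside, cooler outside
print(vent_open(26.0, 30.0))  # False: venting would only warm the room
```

This is why the second, outside, temperature sensor is needed: the vent rule compares two readings.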

A sensor is any device that measures a physical quantity. Temperature sensors can be found not only in the workshop space, but also inside machines.  This Wikipedia article lists sensors by sensor type:

Some of the other sensors in a workshop include: humidity, measuring water vapour in the air; infrared, detecting body heat; light, measuring light levels; and smoke, detecting fires. Those are sensors that can be used anywhere in a house. Some sensors are specific to a workshop, measuring, for example, wood moisture content and dust particles in the air.

Having so many sensors can be a major distraction, so from now on the focus will be on just one, an LM35 temperature sensor.

LM35 Temperature sensor

Several companies make temperature sensors, but Texas Instruments makes one that is world famous, the LM35. It costs about USD 1.50.

An LM35 temperature sensor, inexpensive and accurate. Pin 1 accepts any supply voltage from 4 to 20 V. Pin 2 provides the output, which will be connected to an analog pin; its voltage is proportional to the temperature, and can vary from -0.5 V to +1.5 V. Pin 3 is the ground (or negative terminal); it completes the electrical circuit.

While the LM35’s data sheet contains more than enough information about every aspect of the sensor, most people don’t need to read it. Why? Because all of the engineering work has been done before. Since Ethan and Ethel will be using an Arduino, they just need to know how to connect an LM35 to an Arduino. Then they have to find a (free) program that uses the LM35, and upload it to the Arduino. With a little practice, anyone can get a sensor working on an Arduino in about five minutes.

The LM35 is cool. The main reason is shown in this graph. Most sensors express themselves as a voltage that varies smoothly with the quantity being measured; on a graph this makes a straight line. The LM35 is exceptional because at 0 °C its output voltage is 0 V. Every 1 °C adds (at positive temperatures) or subtracts (at negative temperatures) precisely 10 mV. At 100 °C, the output voltage is exactly 1 V. The LM35 is also very flexible regarding input voltage: it can use anything from 4 V to 20 V.

LM35 graph
This may be the first time in your life where you see a graph that is actually useful! This shows the output voltage of an LM35 temperature sensor for temperatures that range from -50 °C to +150 °C.
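Because the line passes through 0 V at 0 °C with a slope of exactly 10 mV per °C, converting the sensor’s output to a temperature is a single division. A sketch:

```python
def lm35_temp_c(output_mv: float) -> float:
    """LM35: 0 mV at 0 degrees C, plus or minus 10 mV per degree."""
    return output_mv / 10.0

print(lm35_temp_c(0))     # 0.0
print(lm35_temp_c(250))   # 25.0, a comfortable room
print(lm35_temp_c(1000))  # 100.0, boiling water
print(lm35_temp_c(-100))  # -10.0, a cold winter day
```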



Computers use digital data, and can’t normally read voltages directly. Micro-controllers have Analog to Digital Converters (ADCs) that automatically change an input voltage into a digital value. On the Arduino Uno, there are six analog pins that can read voltages from 0 V to 5 V (or 0 mV to 5 000 mV). This means that up to six different sensors can be connected to an Arduino board. There are ways to add more, if needed. Each sensor then has its voltage converted into a digital value between 0 and 1023. These analog pins have a sensitivity of about 4.9 mV. So a voltage from 0 to 4.8 mV will be given a value of 0. From 4.9 mV to 9.8 mV it will be given a value of 1. This continues right up to the range 4 995.1 mV to 5 000 mV, which is given a value of 1023.
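The mapping from voltage to a 0-1023 value is just division into roughly 4.9 mV steps. A sketch of an ideal converter (real ADCs add noise and offset):

```python
ADC_STEPS = 1024   # Arduino Uno: 10-bit converter, readings 0..1023
V_REF_MV = 5000.0  # 5 V reference, in millivolts

def adc_reading(input_mv: float) -> int:
    """The reading an ideal 10-bit, 5 V ADC reports for a given input voltage."""
    return min(int(input_mv * ADC_STEPS / V_REF_MV), ADC_STEPS - 1)

def adc_to_mv(reading: int) -> float:
    """Approximate millivolts back from a reading (each step is ~4.9 mV wide)."""
    return reading * V_REF_MV / ADC_STEPS

print(adc_reading(0))     # 0
print(adc_reading(250))   # 51: an LM35 at 25 degrees C
print(adc_reading(5000))  # 1023: clamped to the top step
print(round(adc_to_mv(51) / 10, 1))  # 24.9: the temperature recovered from the reading
```

Note the last line: the recovered temperature is 24.9 °C, not 25.0 °C, because each ~4.9 mV step swallows about half a degree of precision.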

It takes about 100 µs (100 microseconds, or 0.0001 s) to read an analog input, so the maximum reading rate is about 10 000 times a second. Usually, reading a temperature once a second is good enough. In fact, in some circumstances reading it every five minutes or every hour would make better sense, especially if all this data has to be stored.

Arduinos have ADC units, Raspberry Pis do not.


Microcontrollers do not respond well to large currents, and will be permanently damaged if exposed to too many volts, amps or watts. If one wants to turn on an electric heater to warm up a space, this is typically done with a relay. A relay is an electrically operated switch. When its electromagnet is activated with a low voltage, typically 5 V, it makes or breaks a high-voltage circuit.

Many microcontrollers have supplementary boards that attach directly to pins on their main boards. Both the Raspberry Pi and the Arduino have them. On a Raspberry Pi they are called HATs (Hardware Attached on Top). On the Arduino they are called shields. Raspberry Pi HATs allow the main board to identify a connected HAT and automatically configure its pins.

A Seeed Relay Shield V 2.0. It allows a single Arduino to control up to 4 relays. (Photo: Seeed)


For automation systems, wired communication is preferred. The most common form of wired communication is Ethernet, developed at Xerox PARC (Palo Alto Research Center) in 1973-74 and used ever since. Most people would be advised to use CAT 6A cable for workshop automation.

In the future, almost every significant power tool in a workshop will be connected to the local area network, including dust collection and air filtering equipment. Even in home workshops, various variants of CNC (computer numerical control) equipment will come into use, including 3D printers and laser cutters.

Microprocessors in the 1970s would process data in a single program that ran continuously. In the 21st century, not so much. The reason for this is that each sensor (and each actuator) is treated as a separate object. Sensors publish data about a specific state, and actuators subscribe to the data they need to make decisions about how they will operate. To do this they use a publish-subscribe protocol called MQTT. It has been around since 1999.

Sensor actuator
Home Assistant uses an MQTT broker that allows sensors to publish and actuators to subscribe. With this information, a heater can be turned on and off as required.
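The broker pattern itself is small enough to sketch. The toy class below is not MQTT or Home-Assistant, just an in-memory illustration of publish and subscribe, with a hypothetical topic name:

```python
class TinyBroker:
    """A toy stand-in for an MQTT broker: topics map to subscriber callbacks."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(payload)

broker = TinyBroker()
heater_state = {"on": False}

# The heater (an actuator) subscribes to the temperature topic...
def heater(temp_c):
    heater_state["on"] = temp_c < 19.0

broker.subscribe("workshop/temperature", heater)

# ...and the sensor publishes readings; the broker relays them.
broker.publish("workshop/temperature", 17.5)
print(heater_state["on"])  # True: too cold, heater turns on
broker.publish("workshop/temperature", 21.0)
print(heater_state["on"])  # False: warm enough, heater turns off
```

A real setup would use an MQTT client library talking to a real broker, but the subscribe/publish flow is the same: the sensor and the heater never talk to each other directly.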

PoE (Power over Ethernet)

Power over Ethernet allows electrical power to be sent over the same Ethernet cable used to send and receive data to a device. This simplifies life considerably. There are no batteries to change or high-voltage power cables to install. The main challenge is that only a few microcontrollers are capable of utilizing this feature. Using a HAT or shield with PoE connectivity is another possibility.



The myth of Kanban is set in the late 1940s, when Toyota began to optimize inventory levels. Several authors describe this as an engineering process. It isn’t, but myths are not invented to tell truths; they elevate the mundane into eloquent and compelling stories. In this myth, Toyota’s goal was to align inventory levels with the actual consumption of materials. To communicate capacity levels in real time on the factory floor (and to suppliers), workers would pass a card, or kanban, between teams. When a bin of materials used on the production line was emptied, a kanban was passed to the warehouse describing the material and its quantity. The warehouse would then send a replacement bin to the factory floor, and its own kanban to its supplier. The supplier would then ship a bin of material to the warehouse. While the physical cards may have disappeared, Just In Time (JIT) remains at the heart of modern manufacturing.

As stated in the image itself, this is an example of a Kanban Board. (Illustration: Ian Mitchell, 2012)

Kanban moved from esoteric knowledge about Japanese business practices to myth in 2010, when David Anderson wrote Kanban: Successful Evolutionary Change for Your Technology Business. The book is only incidentally about Kanban. It is more about the evolution of a software development approach at Microsoft in 2004 into a better approach, designated Kanban, at the Bill Gates-owned digital image business Corbis in 2006-7. About the same time, several others were name-dropping Kanban, and suggesting variations of it as a form of lean production, especially for software.

The key point with Kanban is that it works in organizations engaged in processing, either in terms of products or services. It is less applicable in organizations working on projects.

Service organizations can establish teams to supply these services using JIT principles. This requires them to match the quantity of Work In Progress (WIP) to the team’s capacity. This provides greater flexibility, faster output, clearer focus and increased transparency. Teams can implement a virtual kanban methodology.

In Japanese, kanban translates as visual signal. A kanban board is a tool used to visualize work and to optimize workflow across a team; every work item is represented as a separate card on the board. Virtual boards are preferred to physical boards because they are trackable and accessible at every workstation. They visualize work. Workflow is standardized. Blockers and dependencies are depicted, allowing them to be resolved. Three work phases are typically shown on a kanban board: To Do, In Progress and Done. Since the cards and boards are virtual, they can be mapped to the unique needs of any team, reflecting team size, structure and objectives.

Truthing is at the heart of kanban. Full transparency about work and capacity issues is required. Work is represented as a card on a kanban board to allow team members to track work progress visually. Cards provide critical information about a particular work item, including the name of the person responsible for that item, a brief description and a time estimate to completion. Virtual cards can have documents attached to them, including text, spreadsheets and images. All team members have equal access to every work item, including – but not restricted to – blockers and dependencies.

An important management task is to (re)prioritize work in the backlog. This does not disrupt team efforts, because changes outside current work items don’t impact the team. Keeping the most important work items at the top of the backlog ensures a team is delivering maximum value. A kanban team focuses on active work in progress. Once a team completes a work item, it selects the next work item off the top of the backlog.

Cycle time is the time it takes for a unit of work to travel through the team’s workflow, from the moment work starts to the moment it ships. This metric can provide insights into delivery times for future work.

When a single person holds a skill set, that skill set can become a workflow bottleneck. Overlapping skill sets eliminate that potential bottleneck and may reduce cycle times. Best practices encourage team members to share skills and to spread knowledge. Shared skill sets mean that team members can enrich their work, which may further reduce cycle time. In kanban, work completion is a team responsibility.

A kanban feature is to set a limit on the number of work items in progress. Control charts and cumulative flow diagrams provide a visual mechanism for teams to see change. A control chart shows the cycle time for each work item, as well as a rolling average. A cumulative flow diagram shows the number of work items in each state. Combined, these allow a team to spot bottlenecks and blockages.
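Both cycle time and its rolling average are simple arithmetic over the cards’ start and ship dates. A sketch with invented day numbers:

```python
def cycle_times(cards):
    """Cycle time per card: days from the moment work starts to the moment it ships."""
    return [card["shipped"] - card["started"] for card in cards]

def rolling_average(values, window=3):
    """The rolling mean plotted on a control chart, over the last `window` items."""
    return [sum(values[max(0, i - window + 1):i + 1]) /
            len(values[max(0, i - window + 1):i + 1])
            for i in range(len(values))]

cards = [  # start and ship expressed as day numbers (hypothetical data)
    {"started": 1, "shipped": 4},
    {"started": 2, "shipped": 8},
    {"started": 5, "shipped": 8},
    {"started": 9, "shipped": 12},
]
times = cycle_times(cards)
print(times)                   # [3, 6, 3, 3]
print(rolling_average(times))  # [3.0, 4.5, 4.0, 4.0]
```

A spike in the rolling average is the signal to go looking for a bottleneck or a blocked card.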

My interest in Kanban is tied to my son Alasdair’s use of it for process management with the Council for Religious and Life Stance Communities Bergen (Samarbeidsrådet for tros- og livssynssamfunn Bergen, STLB). See: . It is available as an app for Nextcloud, Deck. See: . This will be installed on our upcoming server, tentatively named Qayqayt.

Arduino Revisited

I started to write this post on 2017-11-22. The original title was MKR: Arduino Revisited. Two hundred and fifty days should be long enough to write any post, but this one has defied me.

An Arduino MKR 1000, the most basic board in this new Arduino family. In November, I expected to be buying several of them for a home automation project. (Photo:

Soon, it will be ten years since I started using and teaching Arduino. In November, I was looking forward to the new series of Arduinos, the MKR (Maker) series: small form factor microcontroller boards with a number of outputs, headers (electrical pin connectors), a battery management system and USB, that could be useful for building home automation room control units small enough to fit inside almost anything, such as a light-switch box inside a wall.

Each room in our house would have its own microprocessor. Some rooms, such as the workshop, might have several. Each microprocessor would then connect to multiple sensors that measure/detect things like temperature, humidity, light and motion inside that room. Analogue data would be converted inside the microprocessor to digital data, then sent onwards to a central controller, located somewhere in the house. If specific conditions were met, the controller would initiate an action, sending a message to either the same or another microprocessor, ordering it to activate an actuator (or activator, as they are sometimes called), such as opening a vent or turning on a light. These microprocessors would be fried if they received power exceeding a few milliwatts, so they use relays to indirectly switch on components that can consume up to 2 500 watts.

There are just two problems with microprocessors like this. First, they should not use wireless communication (including but not limited to WiFi, Bluetooth or radio), but should be wired. Second, they should not use batteries or mains current as a power source.

Both of these problems can be resolved using Ethernet, a network cabling technology initially developed at Xerox PARC (Palo Alto Research Center) between 1973 and 1974. For most home automation circuits there is no great urgency to turn an actuator on or off. A millisecond or two will not make much of a difference, so there is no need for gigabit-per-second Ethernet. Megabit-per-second speeds are good enough. Batteries and mains current can be eliminated by sending electrical power through the Ethernet cable.

At the moment, I have lost my enthusiasm for Arduino MKR boards, and am looking for a replacement. Why? It all has to do with open source, or the lack thereof: if not in practice, at least in spirit.

Arduino, despite its open-source claims, has not always been transparent.  The following is a summary of some of the Arduino disputes, taken from:

In 2008, the five cofounders of the Arduino project created a company, Arduino LLC, to hold the trademarks associated with Arduino. They transferred ownership of the Arduino brand to the newly formed company. The same year, one of the five cofounders, Gianluca Martino, through his company Smart Projects, registered the Arduino trademark in Italy and kept this secret from the other cofounders for about two years. Later, negotiations with Gianluca to bring the trademark under the control of the original Arduino company failed. In 2014, Smart Projects began refusing to pay royalties. It then renamed itself Arduino SRL and created a website copying the graphics and layout of the original. In January 2015, Arduino LLC filed a lawsuit against Arduino SRL. In May 2015, Arduino LLC created the worldwide trademark Genuino, used as a brand name outside the United States. In 2016, Arduino LLC and Arduino SRL merged into Arduino AG. In 2017, BCMI, founded by four of the five original founders (with the initials representing those of their respective last names), acquired Arduino AG and all the Arduino trademarks.

In July 2017, Massimo Banzi announced that the Arduino Foundation would be “a new beginning for Arduino.” That same month, former CEO Federico Musto allegedly removed many open-source licenses, schematics and code from the Arduino website. See:

Currently, I have been unable to find any further information about an Arduino Foundation, or any new start. Instead, I find that in May 2018 Arduino announced the sale of engineering kits to encourage the use of Arduino at the university level. Unfortunately, the kits require the use of MATLAB and Simulink, closed-source and expensive software packages from MathWorks, despite the existence of open-source alternatives such as GNU Octave. Admittedly, the kits contain a one-year licence for the software. See:

Back in 2013, Massimo Banzi was more enthusiastic about open-source hardware, as he explained in an Ars Technica interview. See:

Here are some quotes from the article:

“As an open source electronic prototyping platform, Arduino releases all of its hardware design files under a Creative Commons license, and the software needed to run Arduino systems is released under an open source software license.”

“Why is openness important in hardware? Because open hardware platforms become the platform where people start to develop their own products. For us, it’s important that people can prototype on the BeagleBone [a similar product] or the Arduino, and if they decide to make a product out of it, they can go and buy the processors and use our design as a starting point and make their own product out of it. … With the Raspberry Pi you cannot even buy the processor.”

“Raspberry Pi is a PC designed for people to learn how to program. But we [Arduino] are a completely different philosophy. We believe in a full platform, so when we produce a piece of hardware, we also produce documentation and a development environment that fits all together with hardware.”

Even the Arduino website has changed its language. Before, it distinguished between an original board and a clone, emphasizing the open-source nature of the boards. Now this nuance is missing entirely; it states: “If you are wondering if your Arduino board is authentic you can learn how to spot a counterfeit board here [with link].” See:

While I live in hope that Arduino will reform, I don’t want to support it financially any more. What I want is an open-source board with built-in Ethernet connectivity. They did have one, but retired it without a direct replacement. Currently, this feature seems to be unavailable from Arduino, which is undoubtedly too busy developing boards for an assortment of proprietary communications technologies.

The Arduino Ethernet with PoE (Power over Ethernet), an almost ideal solution, except that it has been retired. (Photo:

I have considered using other boards, even Italian ones, such as the Fishino, but they also lack Ethernet connectivity:

The board that seems to offer the most promise comes from DF Robot:

DF Robot W5500 Board with PoE, available at a price of USD 45 or less. (Photo: DF Robot)

Their W5500 Ethernet with PoE board is based on the ATmega32U4 in an Arduino Leonardo package with a W5500 Ethernet chip. The latter is a hardwired TCP/IP chip for embedded systems, although it is not gigabit capable. The board is compatible with most Arduino shields and sensors. Version 2.0 has an upgraded PoE power regulation circuit, which makes PoE power delivery more reliable.

The main difference between a Uno and a Leonardo package is that the latter uses a more sophisticated chip, the ATmega32U4, which has built-in USB support. It shares the same form factor and I/O placement (analog, PWM and I2C pins in the same places) as the Uno, which means it can use the same shields: additional boards that sit on top of the Arduino and connect directly to the pins below.

A Seeed relay shield, showing the pins that allow it to be directly connected to most Arduino boards. Yes, Seeed is spelled with 3 e’s. (Photo:

A relay shield provides several, typically four, switches that can control high-current loads. They can be wired as NO (Normally Open) or NC (Normally Closed) circuits. Relays are important because most electrical equipment cannot be controlled directly from a microprocessor’s pins. They are useful for switching AC appliances such as fans, lights and motors, as well as high-current DC solenoids.


DF Robot, an alternative to Arduino. (Illustration: DF Robot)

The fact that the DF Robot W5500 board is open source means that anyone is permitted to make their own boards from scratch, should they want to. Most of the time, however, this is not an economical choice.