Devices Future

Volkswagen and D-Wave Systems have used quantum computing to find optimal routes, as illustrated here in Lisbon, Portugal, and available as an app near you. (Photo: Volkswagen)

… and the answer is, everywhere.

Now for the question, where do people want to use computing devices?

Guesstimations

After trying to collect and interpret validated statistics, I have given up, and instead present some numbers that might approach something meaningful and coherent. Some are based on information collected by Simon Kemp, dated 2019-01-31. Other bits come from Wikipedia, such as this article, along with a variety of other places with assorted dates.

With a world population of 7.7 billion people, there are over 5 billion handheld devices. The vast majority are referred to as mobile phones, increasingly smartphones, although they do much more than connect people by voice. It would be more honest to eliminate any reference to phone in the description; the German Handy or the French portable are both better. Other devices in this category include tablets and similar devices lacking keyboards. Regardless, Android operating system variants clearly and increasingly dominate, with at least 75% of market share, with Apple’s iOS taking most of the remainder, albeit with declining share. It remains to be seen if Huawei will be able to introduce a viable alternative to Android.

Two important characteristics distinguish larger personal computers from handheld devices: a large screen and a keyboard for input. Minor differences include the use of a mouse or some other pointing device. These machines are often referred to as laptops and desktops. Worldwide, this is a small segment compared to mobile devices, and its importance is decreasing. Part of the reason for this decline is their inability to be used everywhere.

There is general agreement that the billionth personal computer shipped in 2002, and that there were one billion such computers in operation in 2008. The dispute is how many are in use now. Some are looking for a magic number of 2 billion, but 1.5 billion units is far more likely. Windows will be installed on at least 75% of machines, macOS on, say, 13% (which to me seems high), ChromeOS on 6% (at least in the US, and higher than I experience in Norway) and Linux on 2%. The 2019 Stack Overflow developer survey gives very different figures on what is found on machines used by computing professionals. In round numbers: Windows on 45%, macOS on 30%, and Linux on 25%.

Another category of computer is the embedded device, built around an electronic control unit (ECU). Domotics refers to home robotics: all aspects of smart home technology, including sensors that monitor the environment and actuators that activate controls, for temperature, lighting and security. However, embedded computing is pervasive, found everywhere from electric toothbrushes to toasters and every other form of kitchen machine. Today, even a lightbulb can contain an ECU. A typical smart house may contain hundreds of these devices.

The vast number of ECUs expected, plus their vulnerability in terms of security, means that WiFi can only be a temporary solution. While communication can be built on top of 120/240 V AC circuits, most devices, including LED lights, actually use low-voltage DC power. Anyone building something new should be installing Cat 6A Ethernet cable at a minimum, with connections to every room. Power over Ethernet (PoE) can then provide DC power to almost everything needed.

I expect clothing will soon include embedded devices, so that personal data can be continuously collected and monitored. In Sweden, I note that several individuals have voluntarily inserted RFID devices into their bodies, so that they can use these to identify themselves, rather than relying on PIN codes. Unfortunately, it is probably only a matter of time before these devices become mandatory.

Embedded devices are also found in cars, where even the most primitive contain 25 – 35 ECUs. More luxurious models may have 70 or more. Hopefully, autonomous vehicles will soon be on streets near you. The last thing this world needs is a nut behind the wheel, especially one that feels provoked into road rage at the slightest offence. Electric vehicles are already here, with Tesla’s innovations leading the way. In Norway, there will be no opportunity for people to buy fossil-fueled vehicles (including hybrids) after 2024. Everything will probably be battery electric, as an explosion at a hydrogen fueling station has dimmed everyone’s interest.

Command and control (C2) is defined by Marius Vassiliou, David S. Alberts and Jonathan R. Agre in C2 Re-Envisioned: the Future of the Enterprise (2015) as a “set of organizational and technical attributes and processes … [that] employs human, physical, and information resources to solve problems and accomplish missions.” (p. 1) This definition can apply to individuals, households, organizations, small businesses, large enterprises or even the military. One major challenge has been the tendency of large manufacturers of ECUs to consider just their own product range, and to make controllers for these and only these. This is not a viable solution. Our household has opted for the most inclusive solution, found in Home Assistant.

Miniaturization will continue into the future. I am uncertain about the future form factor of personal devices/phones. Will they shrink to wristwatch size or remain about the size they are today? My bet is that today’s form factor wins. Yes, one can imagine screen technology being built into glasses or wristwatches, but will it happen? It will be interesting to see what has happened by 2040 and beyond.

In terms of PCs, they could be doomed to extinction. Physically smaller personal devices will be capable of doing everything PCs do. However, there may be situations where a person wants a larger screen, a keyboard and a pointing device, so the personal device will have to interact with these. I am not certain when voice control will replace the keyboard. When I first studied computing, in the mid-1970s, 1980 was even suggested as a target date for its replacement. However, that prediction was based on people moving from card punches to something else.

In terms of servers, one can also envisage a household having something the size of a small media centre, perhaps 100 x 100 x 50 mm (4″ x 4″ x 2″) which is about the size of our Asus PN 40 media player. At the current rate of miniaturization, it should be able to hold at least 100 TB by 2040. One could ask why anyone would need so much storage capacity, but today everyone seems capable of using every last byte of storage they have, and I see no reason for attitudes to change. Computers will be used in new areas because people have the processing power and data storage capacity to do it.

Perhaps the greatest change will come as quantum computing matures. Quantum computing is real: it allows computations to be made in seconds that would take a conventional supercomputer considerably longer. Google claims that its Sycamore processor, with 54 qubits, has achieved quantum supremacy and is the most advanced quantum computing processor in the world, performing in 200 s what the Summit supercomputer would need 10 000 years to accomplish, making quantum computing 1 577 880 000 times faster. IBM has countered this, stating that it would only take 2.5 days, making quantum computing about 1 000 times faster. Regardless, quantum computing will provide faster calculations.
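For the skeptical, the speed-up ratios quoted above can be checked with a few lines of Python. All figures come from the paragraph itself; a Julian year of 365.25 days is the only added assumption.

```python
# Sanity check on the quantum supremacy arithmetic above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600            # Julian year, in seconds

sycamore_seconds = 200                           # Google's Sycamore run time
summit_google = 10_000 * SECONDS_PER_YEAR        # Google's estimate for Summit
summit_ibm = 2.5 * 24 * 3600                     # IBM's counter-estimate

print(round(summit_google / sycamore_seconds))   # 1577880000
print(round(summit_ibm / sycamore_seconds))      # 1080, i.e. about 1 000 times
```

Both headline numbers check out, which shows the entire disagreement between Google and IBM lies in how long Summit would actually need, not in the arithmetic.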

With my origins in Vancouver/New Westminster, and with some of my most positive learning experiences at the British Columbia Institute of Technology, I will end this post by mentioning its Burnaby neighbour, D-Wave Systems. In 2019 they announced their next-generation Pegasus quantum processor chip, the world’s most connected commercial quantum system, with 15 connections per qubit and more than 5 000 qubits, to be available in mid-2020.

Devices Past

3D Rendering of computer center with IBM System/370-145 and IBM 2401 tape drives (Illustration: Oliver Obi)

In ancient times, computing meant batch systems that required users to drive across town to a computing centre, punch their programs onto cards, then submit those cards to be read by a card reader. An IBM 3505 Model B1 card reader from 1971 could read 80-column cards at 1 200 CPM (cards per minute). The cards were based on the Hollerith keyboard punch from 1890. The programs were then run on a mainframe computer, such as an IBM System/370 dating from 1970. A machine consisted of several units housed in a large air-conditioned machine room with a raised floor to improve cooling and conceal wiring. Processing took time, and results were provided an hour or two later on high-speed printers, such as an IBM 3211, printing at about 2 000 lines per minute, more than enough to keep up with the punched card input. This was the basic situation, with variations, from the mid-1950s until at least the mid-1970s.

The IBM System/370 Model 145 had 500 kB of RAM, 233 MB of hard disk space, and ran at 2.5 MHz. It cost from US$ 705 775 to US$ 1 783 000. The US Bureau of Labor Statistics consumer price index states that US$ 1 in 1970 is worth US$ 6.63 in 2020, so the IBM System/370 Model 145 would cost from about US$ 4.7 million to almost US$ 12 million in 2020 dollars.
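The conversion is a straight multiplication by the CPI factor, and is easy to verify:

```python
# Checking the 1970 -> 2020 price conversion using the CPI factor quoted above.
CPI_FACTOR = 6.63                     # US$ 1 (1970) is worth US$ 6.63 (2020)

low_1970, high_1970 = 705_775, 1_783_000   # 1970 price range in US$
print(round(low_1970 * CPI_FACTOR))   # 4679288, about US$ 4.7 million
print(round(high_1970 * CPI_FACTOR))  # 11821290, almost US$ 12 million
```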

Computers are a mix of hardware and software. Writing system software was a craft at which a select few excelled. They wrote elegant but lean code that executed fast. In ancient times, when the hardware was primitive, craftsmanship mattered. Compilers and operating systems had to be written in assembly language for efficiency and space savings. A programmer had to think like a processor, moving data into and out of registers. As computer hardware improved, the need to write parsimonious code gradually disappeared. Programmers became verbose, and programming as a profession expanded far beyond the few.

To gain an understanding of the situation facing professional programmers at this time, one of the best books to read is The Mythical Man-Month (1975) by Frederick Brooks (1931 – ). A seed for the book was planted during Brooks’ exit interview with IBM’s legendary CEO Thomas Watson Jr. (1914 – 1993), who asked why it was harder to manage software projects than hardware projects. The book’s answer is now known as Brooks’ law: “Adding manpower to a late software project makes it later.”

A 2020 Raspberry Pi 4 Model B is available with 1, 2 or 4 GB of RAM. That is anywhere from 2 000 to 8 000 times more than that found on the IBM machine described above. A 16 GB (or larger) SD card contrasts with 233 MB of hard disk space: about 68 times more. The speed of 1.5 GHz with 4 cores competes with 2.5 MHz with a single core, potentially a 2 400 times speed increase. More than anything else, with a RPi costing between US$ 35 and US$ 55, the IBM machine cost, in 2020 dollars, about 100 000 times more.
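These ratios can also be worked out directly from the specifications of the two machines:

```python
# The IBM System/370 Model 145 versus a Raspberry Pi 4 Model B.
ibm_ram = 500e3            # 500 kB of RAM
ibm_disk = 233e6           # 233 MB of hard disk space
ibm_ops = 2.5e6            # 2.5 MHz, single core

pi_ram_min, pi_ram_max = 1e9, 4e9   # the 1 GB and 4 GB models
pi_sd = 16e9                        # a 16 GB SD card
pi_ops = 1.5e9 * 4                  # 1.5 GHz across 4 cores

print(pi_ram_min / ibm_ram)   # 2000.0 times more RAM
print(pi_ram_max / ibm_ram)   # 8000.0
print(pi_sd / ibm_disk)       # roughly 68.7 times more storage
print(pi_ops / ibm_ops)       # 2400.0, the potential speed increase
```

Treating clock speed times core count as a proxy for performance is crude, of course, which is why the text says "potentially".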

By the 1980s, card punches had given way to terminals, consisting of a screen (frequently offering green text on a black background) and a keyboard. These were connected indirectly to a mini-computer that replaced the mainframe. Digital Equipment Corporation was especially fond of using Ethernet cable to connect terminals to its VAX mini-computers. Offices were starting to be interconnected. These machines still required their own machine rooms with adequate cooling, and users still had to drive to the office.

To understand this new mini-machine period of computing, there is yet another book to read, The Soul of a New Machine (1981) by Tracy Kidder (1945 – ). Data General needs a machine to compete with Digital Equipment’s 32-bit VAX. In North Carolina, they start project “Fountainhead”, diverting almost all of their senior design personnel to it. The few senior designers remaining in Massachusetts are allegedly engaged in improving Data General’s existing products. However, Tom West (1939 – 2011) starts a skunkworks project, “Eagle”, that becomes a backup in case Fountainhead fails (which it does). It is a high-risk project using new technology and, some would say, misusing newly graduated engineers.

There are lots of candidates for the first PC, as in personal computer. Personally, I opt for the 1973 Xerox Alto, since it offered both hardware and software that worked. Others may refer to the 1977 Apple II, 1977 Commodore PET 2001, 1977 Radio Shack TRS-80 or even the 1981 IBM PC.

Most people were still using a terminal, rather than a PC, until about 1990. Terminals didn’t die when PCs arrived, because there was initially no easy way to connect a PC to the mini-computer. The two machine types had incompatible operating systems: MS-DOS on PCs, and a host of proprietary operating systems on the assorted mini-machines. Novell NetWare and Banyan Vines offered solutions, but these were weak and difficult to implement. Important data was stored and backed up on tapes that required special readers located in a machine room. When PCs did finally connect to larger computers, the PC usually required an Ethernet card, the entire building had to be wired with Ethernet cables, and the mini-computer was renamed the server, living inside 19-inch racks with 1.75-inch rack units, a system standardized by AT&T around 1922.

The other first PC, as in portable computer, today better known as a laptop, is a matter of debate. The Xerox Dynabook from 1972 was a fantastic machine, except for one fatal flaw: it was never actually built, existing only as a conceptual model. Most other early machines were either too heavy or equipped with screens that were too small. This situation continued until 1985, when Toshiba finally produced the T1100, fairly accurately described as “the world’s first mass-market laptop computer”.

Both LANs (Local Area Networks) and WANs (Wide Area Networks) started interconnecting users in the early 1990s. The need for servers brought about a need for a standardized operating system. The first steps involved different flavours of Unix, first developed in the 1970s at Bell Labs along with the C programming language. The modular Unix design provides a set of simple tools, each performing a limited, well-defined task, communicating primarily through a unified filesystem and shell scripting.

A number of unfortunate issues related to the proprietary origins of Unix led many to seek an open-source solution. It was found in BSD (Berkeley Software Distribution) and in Linux-kernel-based operating system distributions, along with other related products that could be used freely. Linux was able to address a variety of different segments: servers, consumer desktops, laptops, tablets, phones and embedded devices. This was assisted by the modular design of the Unix model, which allowed the sharing of components.

Initially, home users had the choice of Windows or Apple operating systems. In the mid- to late 1990s, low-speed, dial-up modems allowed Internet access. People even started wiring their houses for the Internet, with Ethernet cables. However, most office and home computers were still beige boxes.

Fictional tablets first appeared in Stanley Kubrick’s (1928 – 1999) 2001: A Space Odyssey (1968). Real tablets first appeared in the last two decades of the 20th century. However, it wasn’t until 2010, when Apple released the iPad, that the tablet achieved widespread popularity.

Cell phones are often incorrectly referred to as mobile devices. They are more correctly handheld devices, even if they spend most of their time in assorted pockets and bags; it is the human with the bag or pocket that is mobile. On 1966-12-01, the first commercial cellular network (OLT) was launched in Norway. This was replaced, in 1981, by the Nordic Mobile Telephone (NMT) system, in operation in Denmark, Finland, Norway and Sweden. These used what would be regarded today as massive phones. The first personal digital assistant (PDA) that could be accepted today as a handheld device was the 1984 Psion Organizer, although PDA was not used as a term until 1992.

The 1996 Nokia 9000 Communicator can be regarded as the first primitive smartphone. It was actually a hybrid, combining PDA and conventional cell phone features. Canadians, especially, will want to date the smartphone to Research in Motion’s 2002 BlackBerry. The company founder, Mihal “Mike” Lazaridis (1961 – ), is in some ways the Canadian equivalent of Steve Jobs (1955 – 2011).

Corrections: Alasdair McLellan has corrected two errors. I had exaggerated the difference in RAM between the IBM System/370 and a Raspberry Pi by a factor of 1 000. It is not 2 – 8 million times larger, but only 2 – 8 thousand times larger. The first commercial cellular network was not Japanese, but Norwegian. Documentation for it can be found in this Norwegian language source.

The Norwegian Wikipedia also writes about it, stating (in translation): Offentlig Landmobil Telefoni (OLT) was the first mobile telephone network in Norway. It was established on 1 December 1966 and closed in 1990 (the stop for new licenses was 1 November 1989). At the introduction of NMT in 1981, there were approximately 22 000 OLT subscribers. OLT was manual, not an automatic mobile system, and is therefore regarded as pre-“1G”.

Computing: The Series

Red lighted Keyboard (Photo: Taskin Ashiq, 2017)

In 2020, a series of weblog posts about computing devices will be written and published. The first in this series, about the end of support for Windows 7, was already published one week ago, on 2020-01-07.

Many people do not know what types of devices would be advantageous for them to acquire. Even when they know the type of device, they do not understand how to evaluate that category in order to make appropriate purchases. Brand names and price become proxies for device quality. Unfortunately, this can result in inappropriate devices being selected. Not all devices need to be purchased new. Many older, even discarded, devices are suitable for continued use, but may require the installation of different, more appropriate software.

This series consists of:

  1. Windows 7 (2020-01-07)
  2. Computing: The Series (2020-01-14)
  3. Devices Past (2020-01-21)
  4. Devices Future (2020-01-28)
  5. Clouds & Puddles (2020-02-04)
  6. Universal Serial Bus (2020-02-11)
  7. Video connectors (2020-02-18)
  8. Power supply/ charging (2020-02-25)
  9. Input & Output Peripherals (2020-03-03)
  10. Computer Access & Assistance (2020-03-10)
  11. External Drives (2020-03-17)
  12. Printers (2020-03-24)

Starting 2020-04-01, the focus at Cliff Cottage will be on outdoor building construction. There will be limited time for blogging, with the exception of a single monthly update. Blogging will resume on 2020-10-06. There are several different categories of computing devices that most people may use or acquire for work and leisure:

  1. Handheld devices (2020-10-06)
  2. Laptop & desktop devices (2020-10-13)
  3. Media players (2020-10-20)
  4. A Practical Server (2020-10-27)
  5. Vehicle devices (2020-11-03)
  6. Smart home devices (2020-11-10)
  7. Other embedded systems (2020-11-17)
  8. Visual impairment (2020-11-24)
  9. Hearing impairment (2020-12-01)
  10. Dexterity impairment (2020-12-08)
  11. Mobility impairment (2020-12-15)
  12. Computing: A Summary (2020-12-22)

Many of these will focus on the needs and limitations of older users, and how to mitigate the impact of various impairments.

Each topic, including publication dates, is subject to revision. People who want other topics covered can contact me via email: brock@mclellan.no

Update: On 2020-02-04 kl. 13:40, two of the topics on this post were changed. 09. Input Devices (2020-03-03) and 10. Output Devices (2020-03-10) were merged into 09. Input & Output Devices (2020-03-03), and a new topic, 10. Computer Access & Assistance (2020-03-10), was created. On 2020-02-16 kl. 07:00, Input & Output Devices was changed to Input & Output Peripherals.