Power supply/ charging

Deutsche Post electric vans in Berlin, 1953-09-29. Electric vehicle batteries can be used to provide emergency electric power during power outages, ensuring that necessary computer-controlled functions in a house continue to operate.

Computer devices depend on electricity to operate. Increasingly, devices use battery storage/ power to gain temporary independence from the electrical power network. Various forms of small scale, local energy production (solar, wind) can even lead to a more permanent independence. However, not everyone is in a position to become permanently independent from the grid.

An electrical power blackout/ cut/ failure/ outage is the loss of electrical power somewhere in the network that affects the supply to an end user. These may be caused by faults or damage at power stations, substations or transmission lines, by short circuits, or by circuit breaker operation. A transient fault is a temporary loss of power that is automatically restored once the fault is cleared. A brownout is a drop in voltage in an electrical power supply, which can cause operational problems. A blackout is the total loss of power to an area, and may last from minutes to weeks depending on circumstances. Rolling blackouts are planned blackouts that occur when demand exceeds supply. Customers are rotated, so that at any given time some receive power at the required voltage, while others receive no power at all. Preventative blackouts are also used as a public safety measure, for example, to prevent wildfires around poorly maintained transmission lines.

Batteries in electric vehicles, along with solar panels, can ensure that a minimal amount of electric power is always available, even during a grid-related blackout. Circuits have to be designed so that electricity is not fed into the grid at these times, because that power could be a hazard to people working on the lines to restore power.

Desktop and Tower computers

There are many different form factors used to make desktop and tower computers. These motherboard specifications determine dimensions, power supply type, and the location of mounting holes and ports. This ensures that parts are interchangeable over time. Two are especially important. The ATX specification was created by Intel in 1995, and is still the most popular form factor. Here the power supply offers several different voltages to meet specific needs: +3.3 V, +5 V, -5 V, +12 V and -12 V. There were and are attempts to offer just 12 V, but this reduces the selection of components available.

This type of computer has a power supply built into the computer case. It is suitable for power intensive assignments such as gaming or video rendering. Some can provide 500 W of power. The tower format is especially attractive if a large number of expansion cards are needed. One challenge with these is the need for active cooling: fans can be noisy. The open nature of the case means that it will attract and accumulate dust, which will have to be cleaned out at regular intervals.

The Mini-ATX family of motherboard specifications was designed by AOpen in 2005 around Mobile on Desktop Technology (MoDT), which adapts mobile CPUs for lower power consumption. With a simple but effective thermal design, it is suited to passive cooling, making it virtually silent. Manufacturing costs and overall operating power requirements are lower than for active cooling designs. A DC-to-DC converter moves the power supply out of the case, reducing system size. This type of computer is suitable for general use. The number of expansion cards is limited.

Laptops

Almost all modern laptops use Lithium Ion (LiIon) batteries and 19 V chargers. Each battery arranges LiIon cells in series. A LiIon cell has a maximum charging voltage of 4.2 V, although slightly more voltage is applied in practice. Voltage needs: one cell = 4.2 V; two cells = 8.4 V; three cells = 12.6 V; four cells = 16.8 V; five cells = 21 V. A charger uses a switched mode power supply (SMPS) to convert the supply voltage to the required voltage, sometimes using a boost converter (which steps voltage up), but usually a buck converter (which steps voltage down). A 19 V buck converter could charge up to four cells in series. When a LiIon cell is close to fully discharged, its terminal voltage is about 3 V. A buck converter can accommodate this reduced voltage while maintaining its charging efficiency, which can exceed 95 %.
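To make that arithmetic concrete, here is a minimal sketch (not from any charger datasheet) that lists the charging voltage range for each pack size and whether a 19 V charger can use a buck converter or must step the voltage up.

```python
# A minimal sketch of the cell arithmetic above. The constants mirror the
# values quoted in the paragraph; real chargers add safety margins.

CELL_MAX_V = 4.2   # maximum charging voltage of one LiIon cell
CELL_MIN_V = 3.0   # terminal voltage when a cell is close to fully discharged
CHARGER_V = 19.0   # typical laptop charger output

for cells in range(1, 6):
    pack_min = cells * CELL_MIN_V
    pack_max = cells * CELL_MAX_V
    converter = "buck (step down)" if pack_max < CHARGER_V else "boost (step up)"
    print(f"{cells} cell(s): {pack_min:.1f}-{pack_max:.1f} V -> {converter}")

# 1-4 cells stay below 19 V, so a buck converter suffices; 5 cells need 21 V.
```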

Handheld Devices

There are three charger plugs in common use on handheld devices: micro-USB-B, USB-C and Apple Lightning. A cable will typically have a connector for one of these standards on one end, and a standard USB-A 2.0 connector on the other, that can be plugged into a charger, which – in turn – is plugged into a wall socket.

The micro-USB and Lightning connectors will likely disappear in Europe, as the European Union has mandated use of a common charging connector, which will probably end up being USB-C.

USB4 & USB-C

As stated in an earlier post, USB originally allowed power to flow downstream from a Type-A connected device to a Type-B connected device. This situation has changed with the introduction of USB-C connectors, which combine A and B characteristics. It is not easy to see where power is flowing.

Type of Delivery               Current   Voltage   Power
Low Power USB 3.0              150 mA    5 V       0.75 W
High Power USB 3.0             900 mA    5 V       4.5 W
Battery Charging 1.2           5 A       5 V       25 W
Standard USB-C                 3 A       5 V       15 W
Power Delivery 1.0 Micro-USB   3 A       20 V      60 W
Power Delivery 2.0/3.0 USB-C   5 A       20 V      100 W
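The power column is simply current times voltage. A quick sketch, using the values from the table above:

```python
# P = V * I for each delivery type listed in the table above.

deliveries = {
    "Low Power USB 3.0":            (0.150, 5),
    "High Power USB 3.0":           (0.900, 5),
    "Battery Charging 1.2":         (5.0,   5),
    "Standard USB-C":               (3.0,   5),
    "Power Delivery 1.0 Micro-USB": (3.0,  20),
    "Power Delivery 2.0/3.0 USB-C": (5.0,  20),
}

for name, (amps, volts) in deliveries.items():
    print(f"{name}: {amps} A x {volts} V = {amps * volts:.2f} W")
```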

One of the most common devices needing power to operate is the external drive. For example, Western Digital My Passport external hard drives, which feature 2.5 inch drives, are powered by the computer, using a USB Type-A connector for data and power. In contrast, Western Digital My Book external hard drives feature 3.5 inch drives; they also come with a USB Type-A connector for data, but with an AC adapter with a wall socket plug for power.

USB Battery Charging defines a charging port, which may be a charging downstream port (CDP), with data, or a dedicated charging port (DCP), without data. Dedicated charging ports on USB power adapters can run attached devices and charge battery packs. Charging ports on a host are labelled as such.

In the computer world, one is perpetually in a transition period. There is always something newer and potentially better, and something older but tested and, hopefully, more reliable. Thus, a Western Digital G-Drive comes with three cables: USB-C to USB-C, USB-C to USB-A and a power cable that attaches to an AC power adapter. For computers that can provide power over a USB-C connector, only the first cable is needed. For computers without USB-C power, this first cable connects the AC adapter to the G-Drive, while the second cable connects the G-Drive to the computer.

Power over Ethernet (PoE)

With domotics (smart houses) becoming increasingly popular, it is ever more advantageous to install and use Ethernet cables to ensure devices are able to communicate effectively. While some switches only provide data transfer, many offer power as well, using one of the many Power over Ethernet standards.

There are two different approaches to providing power. Power sourcing equipment (PSE) that provides power on the Ethernet cable, typically a network switch, is called an endspan or endpoint. The alternative is an intermediary device, a PoE injector, also referred to as a midspan device.

A powered device (PD) is any device powered by PoE. One common device is a room controller which has sensors collecting data about a room, and actuators such as a solenoid capable of opening and closing a heating vent. In the future many computing devices should be able to receive power directly from an Ethernet cable.

PoE is expected to become a DC power cabling standard, replacing individual AC adapters in a building. While some are concerned that PoE is less efficient than AC power, others argue that it is a better solution because a central PoE supply replaces several/ many AC circuits with inefficient transformers and inverters, compensating for any power loss from cabling.

Type             PoE              PoE+             4PPoE            Type-4
Standard         802.3af Type 1   802.3at Type 2   802.3bt Type 3   802.3bt Type 4
Power at PSE     15.4 W           30 W             60 W             100 W
Power at PD      12.95 W          25.5 W           51 W             71 W
Voltage at PSE   44.0–57.0 V      50.0–57.0 V      50.0–57.0 V      52.0–57.0 V
Voltage at PD    37.0–57.0 V      42.5–57.0 V      42.5–57.0 V      41.1–57.0 V
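The gap between the two power rows is the loss the standard budgets for the cable run. A rough sketch, with the figures copied from the table above:

```python
# Power budgeted for cable loss = power at the PSE minus power guaranteed
# at the PD, using the figures from the table above.

poe_classes = {
    "PoE (802.3af Type 1)":    (15.4,  12.95),
    "PoE+ (802.3at Type 2)":   (30.0,  25.5),
    "4PPoE (802.3bt Type 3)":  (60.0,  51.0),
    "802.3bt Type 4":          (100.0, 71.0),
}

for name, (pse_w, pd_w) in poe_classes.items():
    loss = pse_w - pd_w
    print(f"{name}: {loss:.2f} W for the cable ({loss / pse_w:.0%} of PSE power)")
```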

Inductive Charging

Inductive charging involves wireless power transfer, using electromagnetic induction to provide electricity to portable devices. The most common application is the Qi wireless charging standard. Devices are placed near an inductive pad. There is no need for electrical contact with a plug, or for devices to be precisely aligned, for energy transfer. Advantages of wireless charging include: corrosion protection and reduced risk of electrical faults such as short circuits due to insulation failure; increased durability, because there is significantly less wear on the device plug and the attaching cable; no cables; and automatic operation.

There are some disadvantages: slower charging, as devices can take 15 percent longer to charge; higher cost, because of the drive electronics and coils in both device and charger; a certain inconvenience caused by the inability to move a device away from its pad while it is being charged; an assortment of incompatible standards that not all devices support; and heat, since devices get warm when charging, and continued exposure to heat can result in battery damage.

The Qi standard is supported on devices from Apple, Asus, BlackBerry, Google, HTC, Huawei, LG Electronics, Motorola Mobility, Nokia, Samsung, Sony and Xiaomi. Released in 2008, the Qi standard had by 2019 been incorporated into more than 160 handheld devices.

Uninterruptible Power Supply

An uninterruptible power supply (UPS) provides electricity to an attached device if the main supply becomes unavailable. The duration of this protection varies. This is particularly important for desktop and server devices that do not have built-in batteries, in contrast to handheld devices and laptops, which do. Even if a UPS is designed for mission critical equipment, it can be nice to have in a residence. Perhaps the most important device to connect to a UPS is the router, to allow communication outside the residence. The local internet service provider (ISP) should have the rest of the network protected with their own UPS. What they actually provide varies.

Data or voice communication with a handheld device on a cellular network can be the most effective way of communicating in an emergency situation during a blackout. This is because the cellular base stations should have their own backup power sources, allowing them to operate normally.

Surge Protection

All electronic devices should be protected against surges = situations where voltages increase above specified levels, even if just for a second or more. Surge protectors can prevent the damage such surges cause. This is probably best provided by having surge protection built into the circuit-breaker box.

Video Connectors

Some technical journalists are more dramatic than others. One described a display/ monitor/ screen as a window opening onto the soul of a computer, and the purchase of an appropriate one as a make or break experience. Both statements are exaggerations. Within a given budget, one must attempt to optimize the value of each component, and find a compromise.

There are several terms that will have to be understood to appreciate video connectors. Source, here, will be used generally to describe a machine producing a video signal, possibly through the internet, or stored as a file, or produced by a program. A display is a video screen capable of showing video images. A source is usually connected to a display using a cable. The ends of the cable, as well as the physical ports on the machines are connectors.

Most of the time graphic content on a display is something relatively static, including a desktop image, a text being written, or a list of files. At other times, the content may be more dynamic, such as a video film or a game.

A video connector is often overlooked because it is smaller than a display. Yet, it determines which displays can be used. This applies to laptop as well as desktop machines, home media centres, and handheld devices. There are a lot of standards associated with computer displays. Wikipedia lists 69 display standards, 22 signal and 32 connector standards. Fortunately, most of them can be forgotten as relics of the past.

One of the challenges with writing about video connectors is that there are several groups of intensive video users, who have specific demands. Some of these people are video editors, others are film fanatics, but most are gamers. They probably know very precisely what they want, and have other sources for advice. Intensive users will probably want to use a DisplayPort, version 2.0 on both source and display, connected using an ultra-high bit rate 10 (UHBR 10) cable. Mac users with USB-C connectors will want to connect to USB-C displays. With these extremists out of the way, the rest of this weblog will consider the needs of more ordinary users.

Some comments about specific video connectors

In computing’s Middle Ages, a Video Graphics Array (VGA) connector emerged in 1987. It was used on video cards, computer displays, laptop computers, projectors and even high definition television sets. Later, a smaller mini-VGA port was sometimes provided on laptop computers. This connector is still found on large numbers of sources and displays.

The Digital Visual Interface (DVI) was designed to replace VGA, appearing in 1999-04. It can transmit video in three modes: DVI-A (analog only), DVI-D (digital only) or DVI-I (digital and analog). The analog-capable modes make it backward compatible with VGA displays.

High-Definition Multimedia Interface (HDMI) dates from 2002-12. It is backward compatible with DVI-D and DVI-I, but not DVI-A. There are five types of connectors in use: Type A = standard-HDMI; Type B = dual-link-HDMI (never used); Type C = mini-HDMI; Type D = micro-HDMI; and Type E = Automotive Connection System (ACS).

DisplayPort dates from 2006-05. It is backward compatible directly or indirectly, with all of the previously mentioned connectors. All DisplayPort cables are compatible with all DisplayPort devices, but may have different bandwidth certification levels (RBR, HBR, HBR2, HBR3). The major difference between DisplayPort and HDMI, is that while HDMI originated with consumer electronics, DisplayPort was oriented towards computer standards, especially at the more expensive end of the market. While there are several other DisplayPort connectors, only the Full-size DisplayPort and Mini-DisplayPort connectors will be discussed further.

What types of connectors are being used on each machine? Machines can be divided into sources and displays. Sources include different types of handheld devices (aka mobile/ cell phones, tablets), laptops, desktops and media players. Displays include monitors, televisions and projectors.

If any sources or displays support only VGA or DVI connections, these units should be considered for replacement. Modern TVs and displays have HDMI ports, DisplayPorts and/ or USB-C ports. Laptops typically have HDMI, micro-HDMI, standard DisplayPort or Mini-DisplayPort or, increasingly, USB-C ports.

The easiest connections are between machines that use the same family of connectors. It doesn’t matter what type of DisplayPort plug one is using, as long as the connector ends match those on the devices. One should buy cables with the certification level of the most advanced device (source or display).

A similar situation occurs with HDMI. The ends have to match, and the cable quality should mirror that of the best machine.

DisplayPort Dual-Mode (DP++) is a standard that allows DisplayPort sources to use simple passive adapters to connect to HDMI or DVI displays. If any source equipment features DP++, this is a relatively simple solution.

If both source and display have HDMI ports, an HDMI cable can be used. If different technologies are used, an adapter, such as HDMI-to-DisplayPort, or vice versa, can be used. Similarly, there are cables with VGA, DVI, Mini-DisplayPort, DisplayPort or USB-C connectors at one end and HDMI at the other end.
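The choices described in the last few paragraphs boil down to a small decision procedure. A toy sketch, covering only the cases mentioned above (the port names are illustrative, not a complete compatibility matrix):

```python
# A toy cable/adapter chooser for the cases discussed above.

def suggest(source_port: str, display_port: str) -> str:
    if source_port == display_port:
        return f"a plain {source_port} cable"
    if source_port == "DisplayPort (DP++)" and display_port in ("HDMI", "DVI"):
        return "a simple passive DP++ adapter plus cable"
    return f"a {source_port} to {display_port} adapter cable or multiport adapter"

print(suggest("HDMI", "HDMI"))
print(suggest("DisplayPort (DP++)", "HDMI"))
print(suggest("USB-C", "HDMI"))
```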

A USB-C to HDMI/ USB-C/ USB 3.0 multiport adapter is practical for computers using USB-C for AV output to displays, televisions and projectors, with video resolutions up to 4K (3840 × 2160 at 30 Hz). Source devices include MacBooks, Chromebooks, as well as several Windows and Linux devices.

Using this approach with laptops gives all of the disadvantages of a desktop without any of its advantages. The laptop may not have enough ports for everything. It can be time-consuming and frustrating to unplug and reattach peripherals after every move.

Previously, docking stations solved these problems. Peripherals, such as keyboard, mouse, display were plugged into the docking station, along with the laptop. Unfortunately, these were often brand and sometimes even model dependent. General-purpose docking towers are now available.

Handheld Devices

A Mobile High-Definition Link (MHL) adapter and a standard-HDMI to standard-HDMI cable are used to connect Android devices to HDMI displays. Here are the steps to follow.

  1. Use an MHL (micro-USB to standard-HDMI) adapter.
  2. Plug the male micro-USB (small) end of the adapter into the Android device.
  3. Plug one end of an HDMI cable into the MHL adapter.
  4. Plug the other end of the HDMI cable into a TV.
  5. Turn on the TV.
  6. Change the TV’s input to the relevant HDMI port.
  7. Wait for the Android screen to display on the TV.
  8. To work, the depicted adapter needs a 5 V 1 A external power supply, which powers the adapter and charges the Android device. Plug the charger’s male micro-USB end into the female micro-USB port on the adapter.
  9. Note: When using such an adapter for the first time, the phone must be rebooted after the adapter is connected to the phone, otherwise there will be no HDMI output.

Similar procedures need to be followed to connect an iPhone Lightning connector to an HDMI connector on a television, using an adapter. Once again, depending on the adapter, it may be necessary to have the charger plugged into the adapter for it to work. Other adapters only need to be plugged in if the iPhone needs to be charged.

USB4 & USB-C

As stated in the previous weblog post, the future of video connectivity is the USB4 protocol and the USB-C connector. While every USB-C port looks the same, they do not all provide the same functions. This is especially true for both power and video. Its main advantages include high data/ video throughput, and the ability to transfer electrical power. This means that USB-C monitors can get power as well as data from the source device they are connected to.

Randy Suess (1945 – 2019)

Randy Suess in 2004 as he appeared in BBS: The Documentary. Photo: Jason Scott

This weblog post is published on the 42nd anniversary (2020-02-16) of the opening of CBBS (1978-02-16), the world’s first bulletin board system. It also commemorates the life of Randy John Suess (1945-01-27 – 2019-12-10). Born in Skokie, Illinois, Suess served for two years in the U.S. Navy, before attending the University of Illinois at Chicago Circle. He worked for IBM and Zenith.

In the 1970s, Suess was an amateur radio operator, with call sign WB9GPM. He was an active member of the Chicago FM Club, where he helped with maintenance on their extensive radio repeater systems.

However, Suess is most famously remembered as the co-founder and hardware developer of the first bulletin board system (BBS), along with partner and software developer Ward Christensen (1945 – ). They met as members of the Chicago Area Computer Hobbyists’ Exchange, or CACHE about 1975. Development of the BBS started during the Great Blizzard in Chicago, and was officially online (an expression not used at the time) as CBBS = Computerized Bulletin Board System, on 1978-02-16.

The early development of this and other bulletin board systems is documented in a previous weblog post, and more extensively in BBS: The Documentary, an 8-episode documentary series created by computer historian Jason Scott, made from 2001-07 to 2004-12.

The CBBS consisted of an S-100 computer with an 8-bit, ca. 1 MHz processor, 64 kB RAM and two single-sided 8″ diskettes each holding 173 kB, along with a Hayes MicroModem 100 running at 300 baud. The operating system was CP/M, but the BBS software itself was written in 8080 assembler, and loaded automatically whenever someone dialed in at 312-545-8086.

Attention to detail was important for the survival of the system. The floppy disk drive motors ran from mains electricity, and quickly burned out if left on. So the system was modified by Suess to turn itself on when the phone rang, to keep going for a few seconds after the caller had finished so the computer could save its data, and then to quietly go back to sleep. A unique feature of CBBS was that if callers typed inappropriate words, these would be recognized and the system would log the caller out. Entering too many unproductive keystrokes would have the same effect.

Suess hosted CBBS, because his house in the Wrigleyville section of Chicago could be called without paying long-distance charges by anyone in Chicago. By the time the system was retired in the 1980s, its single phone line had logged over 11 000 unique users, and received more than 500 000 calls. A version of CBBS has run periodically, more than forty years after its inception, demonstrating the state of technology at the end of the 1970s.

Because of his interest in Unix systems, in 1982 he created the world’s first public-access Unix system, then called “wlcrjs”. In 1984 this became Chinet (Chicago Network), which connected to the internet through a satellite radio link. It ran on one of the earliest Compaq portable machines, with a 4 MHz 8088 processor, 640 kB of memory, and a 10 MB hard drive.

In the early days of Chinet, the Internet was still a research tool, unavailable to the general public. E-mail and newsgroup accounts were obtained from a university computer or a BBS-like system. There were no ISPs (Internet Service Providers). E-mail and newsgroup postings were relayed from one computer to the next using UUCP (Unix-to-Unix Copy), a suite of computer programs/ protocols allowing remote execution of commands and transfer of files, email and netnews between computers, mostly at night, mostly over regular telephone lines at 1200 or 2400 Baud. The entire content on the internet was so small that it could be downloaded in a single evening. This meant that Chicago area users could browse a global collection of data without paying long-distance telephone rates.

In the late 1980s, Chinet had 12 dialup ports for between 300 and 600 active users. Later these were replaced with 22 phone lines that connected to a bank of modems, operating at considerably higher speeds. Eventually, the UUCP model became obsolete, with more companies getting direct Internet access, and ISPs offering inexpensive access to consumers. Chinet’s dial-up port usage started to decline.

Chinet then started using PicoSpan to replace its BBS software. Eventually, yapp (Yet Another Picospan Program) replaced PicoSpan and remained in use until Chinet migrated from Unix shell-based access to web-based interfaces in the late 1990s.

Despite a fire in 1996-05, Chinet still exists today, entirely web-based, running Simple Machines Forum software on a Debian GNU/Linux system.

Suess was married twice, first to Agnes Kluck and then to Dawn Hendricks; both marriages ended in divorce. He had two daughters, Karrie and Christine, and one son, Ryan.

Randy Suess died 2019-12-10 in Chicago, Illinois. Currently, CBBS is operative, with information available about Randy Suess and his death.

Full disclosure: I am a registered user of CBBS/ Chinet, Chicago.

Universal Serial Bus

Older USB connectors are being replaced with USB-C (bottom right). Photo: Stéphane Mottin.

Conclusion: If you are considering investing in new computing devices, carefully consider ones equipped with USB-C ports. This is the technology of the present and future. USB-A, USB-B, Mini-B and Micro-B connectors are relics of the past. However, devices with them should not be discarded. Cables with a different USB connector type on each end enable interconnectivity.

The Universal Serial Bus (USB) standard simplifies and improves the interface between a host computer and peripheral devices, with minimal operator action. Its interface is self-configuring. Connectors are standardized, so any peripheral can use any available receptacle. Usually there are no user-adjustable interface settings. The interface is “hot pluggable”, so peripherals can be used without rebooting the host. Small devices can be powered directly from the interface. Protocols for recovery from errors are defined, increasing reliability.

The USB standard specifies cables, connectors and protocols for connection, communication and supplying power to and between computers (Host or A-side) and peripheral devices (B-side). Peripherals include external disk drives, keyboards, mice and printers. However, they are not the only connectors. Screens typically use HDMI and other video connectors, the subject of a later weblog post. Similarly, Ethernet cables are preferred for connecting desktop computers and other devices to computer networks.

Today, most new Android handheld devices aka smart phones use USB-C (type C) connectors for charging and data transfer. Older Android phones have a Micro-B port. Apple iPhones (since iPhone 5) and most iPads use a Lightning connector. While the USB-C and Lightning look very similar, they are not interchangeable.

Fourth-generation MacBook Pro laptops, first released in 2016-10, use USB-C for all data ports, and for charging. Windows laptops using USB-C ports for charging include some Acer Aspire, Asus Zenbook, Dell XPS, HP Spectre and Lenovo ThinkPad models. Many other laptop models use an assortment of chargers, usually incompatible with everything else.

While the European Union has relied on consensus to standardize handheld device connections, this has not worked. While most manufacturers use USB-C connectors, Apple uses a Lightning connector. Now, the EU has said it will legislate compliance that will force all providers of handheld devices, including Apple, to use USB-C connectors. When implemented, this will probably have implications for the entire world.

USB connectors are at the heart of legacy-free computers, a term devised by Microsoft to describe a PC without much of the equipment previously found on beige boxes, much of it large and slow. Most users appreciate this redesign, and especially the fact that a legacy-free PC must be able to boot (start up) from a USB device. The exception is gamers who, because of latency (time delay) issues, want to retain their PS/2 keyboard connectors.

Work on USB began in 1994, with a mandate from seven companies: Compaq, DEC, IBM, Intel, Microsoft, NEC, and Nortel. The goal was to replace a multitude of connectors with a single interface, and to increase data rates to and from external devices. A development team was established by Intel at Folsom, California led by Ajay Bhatt (1957 – ). The first integrated circuits supporting USB were produced by Intel in 1995.

The USB standard is currently maintained by the USB Implementers Forum (USB-IF), with four generations of specifications: USB 1.x (1996), USB 2.0 (2000), USB 3.x (2008, 2013, 2014, 2017) and USB4 (2019).

USB 1.0 ports are no longer relevant. However, efforts have been made to make all interfaces made after that backwards compatible. Thus, USB4 is backwards compatible with everything between USB 3.2 and USB 2.0.

Protocol    USB 2.0      USB 3.0    USB 3.1     USB 3.2     USB4
Released    2000-04      2008-11    2013-07     2017-08     2019-08
Data rate   480 Mbit/s   5 Gbit/s   10 Gbit/s   20 Gbit/s   40 Gbit/s
Type A      Original     Blue       Blue        No          No
Type B      Original     Blue       Blue        No          No
Type C      Yes          Yes        Yes         Yes         Yes

USB 1.0 from 1996-01 provided 12 Mbit/s of data transfer. When USB 2.0 was introduced, an unshielded cable version allowed for the attachment of inexpensive peripherals at a data rate of 1.5 Mbit/s.
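To get a feel for these data rates, here is a back-of-the-envelope sketch of how long a 1 GB file transfer would take at the theoretical rate of each generation. Real-world throughput is lower because of protocol overhead.

```python
# Theoretical transfer time for a 1 GB file at the data rates in the table above.

rates_mbit = {
    "USB 1.0": 12,
    "USB 2.0": 480,
    "USB 3.0": 5_000,
    "USB 3.1": 10_000,
    "USB 3.2": 20_000,
    "USB4":    40_000,
}

file_bits = 8_000_000_000  # 1 GB (decimal) expressed in bits

for name, mbit in rates_mbit.items():
    seconds = file_bits / (mbit * 1_000_000)
    print(f"{name}: about {seconds:.1f} s")
```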

USB Connectors

There are two versions of USB connectors: one on a device, and the other, which fits into it, on a cable. The device or female connector is referred to as a receptacle or port. The cable or male connector is referred to as a plug. Originally, a USB connection was always between a host or hub at the A connector end, and a device or hub’s upstream port at the B connector end. This is why peripherals, such as printers, use Type-B connectors. With handheld devices, the charger is regarded as the A end, while the device is regarded as the B end. Things are no longer so simple.

Port                       Type-A   Type-B   Mini Type-B   Micro Type-B   USB-C
Pins                       4 to 9   4 to 9   4 to 9        4 to 9         24
Insertion/removal cycles   1 500    1 500    5 000         10 000         10 000

The Future: USB4 & USB-C

The future of USB connectivity is the USB4 protocol and the USB-C connector. Yes, both of these could be replaced, and probably will be sometime in the future, but both represent reality now. While every USB-C port looks the same, they do not all provide the same functions. The main functions are data, video and power transfer. It is not possible to tell by looking at the port what is incorporated. If there is no documentation stating otherwise, consumers have to assume that they are simply data ports.

The USB-C connector came into use with the USB 3.1 protocol, which allows data transfers at 10 Gbit/s, theoretically twice as fast as USB 3.0. The USB 3.1 protocol can also be applied to USB 3.1 Type-A ports. Note: USB-IF has created unnecessary name confusion, in that USB 3.0 connections are also known as USB 3.1 Gen 1 and USB 3.2 Gen 1x1, while USB 3.1 connections are also known as USB 3.1 Gen 2 and USB 3.2 Gen 2x1. The newer USB 3.2 port is referred to as USB 3.2 Gen 2x2. The x1 or x2 refers to the number of lanes.
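To keep the aliases straight, here is a small lookup sketch; the names and speeds follow the paragraph above, and nothing here is an official USB-IF API.

```python
# The same physical connection under its various marketing names, with its speed.

usb_aliases = [
    ("USB 3.0", "USB 3.1 Gen 1", "USB 3.2 Gen 1x1", "5 Gbit/s"),
    ("USB 3.1", "USB 3.1 Gen 2", "USB 3.2 Gen 2x1", "10 Gbit/s"),
    ("USB 3.2", None,            "USB 3.2 Gen 2x2", "20 Gbit/s"),
]

for original, alias_31, alias_32, speed in usb_aliases:
    names = " = ".join(n for n in (original, alias_31, alias_32) if n)
    print(f"{names}: {speed}")
```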

USB4 incorporates the Thunderbolt 3 protocol into the USB mainstream. The Thunderbolt interface was developed by Intel and Apple. It combines PCI Express and DisplayPort into two serial signals and provides DC power. Thunderbolt 1 and 2 use a Mini DisplayPort connector, whereas Thunderbolt 3 uses the USB-C connector.

USB Cables

Depending on phone, computer and vehicle configuration several different cable types may be required. In Scandinavia, Clas Ohlson stores offer a large selection, as do many specialist stores and online stores.

  • Android devices
    • USB-Micro B to USB-A: Connect older Android phones to older chargers, older computers, and older automobile data ports.
    • USB-C to USB-A: Connect newer Android phones & newer iPhones to older chargers, older computers and older automobile data ports.
    • USB-C to USB-C: Connect newer Android phones to USB-PD (power delivery) chargers, USB PD compatible batteries, some computers (with USB-C ports), 12V car chargers, newer automobile data ports.
  • Apple devices
    • Lightning to USB Type-A: Connect most iOS devices to legacy automobile data ports for CarPlay
    • Lightning to USB Type-C: Connect most iOS devices to current generation macOS devices using USB PD compatible batteries, wall chargers, and 12V car chargers.
  • Other
    • USB Type-A to proprietary cable/magnetic connector charger for Apple Watch or Samsung Galaxy Gear/Android Wear or older Apple Mac equipment.

Additional information on USB-C ports will be presented in the next two weblog posts in this series: 7. Video connectors (2020-02-18) and 8. Power supply/ charging (2020-02-25).

An Aside: PS/2 keyboards

First, a PS2 keyboard is not a PS/2 keyboard. The former refers to Sony’s PlayStation 2, launched in 2000. The latter, to ports on IBM’s third-generation personal computer, the Personal System/2, launched in 1987. It is an ancient and outdated system. Yet, gamers often prefer to use PS/2 keyboards (and sometimes even mice) for several reasons. First, PS/2 is interrupt driven. Whenever a button is pressed, it sends a command to the computer immediately. This contrasts with USB, where the computer polls USB ports, and through them attached devices, at a rate dependent on the frequency of the port. Previously, this was about 125 Hz or so, which could introduce a latency (delay) of up to about 8 ms. Polling frequency on modern computers is about 1000 Hz, which reduces this latency to a maximum of 1 ms.
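The latency figures follow directly from the polling rate: the worst case is one full polling interval. A minimal sketch:

```python
# Worst-case delay before a polled keypress is seen = one polling interval.

def worst_case_latency_ms(polling_hz: float) -> float:
    return 1000.0 / polling_hz

for hz in (125, 500, 1000):
    print(f"{hz} Hz polling: up to {worst_case_latency_ms(hz):.0f} ms of latency")

# 125 Hz gives up to 8 ms, 1000 Hz gives up to 1 ms, as quoted above.
```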

PS/2 keyboards also support N-key rollover, which allows users to press as many keys as they want simultaneously. This was not possible with early USB keyboards; however, many newer USB gaming keyboards support it now.

PS/2 peripherals work immediately, without drivers. This is especially useful when diagnosing motherboard and related problems where USB devices are not detected.

PS/2 devices are not hot-swappable. If a device is plugged into a PS/2 port when a computer is operating, the machine will have to be restarted for the device to function.

Unlike keyboards, USB mice have long had an adjustable polling rate of up to 1000 Hz. Thus, they have had far fewer issues than USB keyboards.

PS/2 hardware is being phased out, and is unavailable on many modern gaming motherboards = a printed circuit board containing the main computer components along with various connectors. Unless that hardware is built into the motherboard, there is no point in using PS/2 equipment, and no point in using a USB adapter to correct any of USB’s deficiencies. At that point it is “game over” for PS/2, and the user might as well use USB peripherals.

Clouds & Puddles

Clouds are not always cute. Sometimes they are threatening, with the potential to damage, injure and even kill. Inderøy, Norway 2015-08-11. Photo: Patricia McLellan.

This weblog post is written to help people gain a better understanding of a house/ home/ residential computer network.

The Xample family with mother Ada, father Bob, cat Cat and daughter Deb are going to serve as an example. Currently, the family has the following equipment. Both Ada and Deb have iPhones, but Bob has a Samsung phone. In addition, Ada uses a Mac laptop, Bob has an old Acer desktop machine, while Deb uses a Chromebook laptop, that belongs to her school. The family is connected to the internet with a router, and they have an old Canon printer.

Some basic vocabulary

Sometimes, language can be confusing. To communicate about computers, people have to use words the same way, or misunderstandings will occur.

Users are people. Admittedly, there could be situations where something other than a human is a user. For example, at some point in the future Cat might be able to activate a feeding station. That would make Cat a user.

A computer, frequently known as a device, consists of hardware and software. Hardware is made up of physical/ mechanical components. Yet, these components will not work without software, a collection of instructions and associated data, that provide services.

Software capable of doing something is called a program. A computer program that is running/ working is usually referred to as a process. Software is written in a computing language that humans can read, and programmers can understand. This source code is then translated into machine code that the specific hardware on a machine can understand.

Operating systems are an important category of software. An operating system is the most fundamental program. It controls the functioning of the hardware and directs its operations. It manages the hardware, including component parts, but hides the details from users. The Xample family, like most families, uses several different operating systems. The iPhones use iOS, the Samsung phone uses Android, Ada’s Mac laptop uses macOS, Bob’s desktop uses Windows 7, and Deb’s Chromebook laptop uses ChromeOS. Both the router and the printer also have their own special operating systems.

Networks

The term network can be used as either a noun or a verb. Begin with the verb, to network. This describes a situation where two or more devices are able to exchange information with each other. The noun network refers to the sum of devices that are connected together. When networking, each connected device is referred to as a node. In a home, an end node (such as an iPhone) is typically connected to a network node (the router).

Nodes are connected to each other using data links. These links may be wired or wireless. The most common type of wired link is the Ethernet cable. The most common type of wireless link is WiFi. While some houses have only WiFi links, there can be good reasons for using cabled links if there is an opportunity for it.

Nodes don’t need to have a direct connection to each other, to be part of a network. Thus, one node could be located in Bergen, another in Detroit, and a third in New Westminster. This is an example of a wide area network (WAN). If the nodes are all located in the same cluster of buildings it is a local area network (LAN). For example, there could be several nodes inside a house, but one in a garage and another in a shed.

Computer professionals often use the terms client and server to describe a computing model. These are confusing terms. Fortunately, most people do not need to use them. The important thing to know is that a client either does local work or requests an external service. A server either provides that service directly to a client, or supervises an external supplier working on behalf of a client. Both terms can refer to either hardware or software. Focus on the software to understand what is happening.

To help the Xample family transition to a network suitable for them, we are going to look at four challenges facing the family.

Challenge #1 Sending an email with an attachment

Ada has struggled for the past year to knit herself a sweater. It is finally finished, and she has taken a photo of it with the camera on her iPhone. She wants to send a copy of the photo to her brother Ely.

When Ada uses her phone to write an email, she is using an email client that is built into the iPhone. This client also allows her to attach the photo. When it comes time to send the email and its attachment, Ada uses an email server built into the iPhone that allows it to use the router to send out and receive emails.

In this example, there are two communicating devices, Ada’s iPhone and the family router. There is a network, even if it is small, simple and temporary. The two devices are connected using WiFi.

The router breaks both the email message and the photo into data packets. Each packet is equipped with a coded form of Ely’s email address. To find out where to send information, the router looks up an address using a routing table. If Ada receives an email from someone, the router will reassemble incoming data packets so that these can be understood. At home, most people use a digital subscriber line (DSL) or cable router connected to the Internet through an Internet service provider (ISP).
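A routing table lookup simply picks the most specific route that matches the destination address. Here is a toy illustration (not the router's actual firmware, and the addresses are made up):

```python
# Longest-prefix match: choose the most specific route that contains the
# destination address, falling back to the default route towards the ISP.

import ipaddress

routing_table = {
    "192.168.1.0/24": "deliver on the local home network",
    "0.0.0.0/0":      "send upstream to the ISP (default route)",
}

def route(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [ipaddress.ip_network(prefix) for prefix in routing_table
               if dest in ipaddress.ip_network(prefix)]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[str(best)]

print(route("192.168.1.17"))   # a device inside the house
print(route("203.0.113.45"))   # a mail server somewhere on the internet
```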

A DSL router typically integrates a modem. A modem sends packets of data across telephone lines, TV cables or optical fibers. A modem, by itself, does not provide the functions of a router.

Challenge #2 Printing

Ada wants to take a copy of the photo on her next visit to her grandmother Fay (1919 – ). Fay is not computer literate, but likes to decorate the walls of her room with photos. Before looking at what is happening in detail, we are going to learn a few more terms. When something is printed on paper, a print server coordinates the printing process.

The main challenge with the family Canon printer is that it is so old that it doesn’t have any WiFi connection and won’t connect directly to the iPhone.

Ada connects her iPhone to her MacBook using the iPhone’s charger cable. She plugs the charging end into the iPhone, and the USB end into the MacBook. She then opens her Mac Photos app on her Mac laptop, clicks on Import, selects the photo she wants to transfer, clicks on Import (#) Selected, then finally clicks on Albums. Now the photo is on her laptop, and Ada can disconnect the charger cable.

To print the photo, Ada takes a USB cable, permanently attached to the printer, and plugs the other end into a USB port on her computer. Using a printer program and drivers, previously installed on her laptop, she can now print the photo. By plugging in the cable, Ada has once again set up a small, simple and temporary computer network. This time, it consists of the MacBook laptop and the Canon printer.

Challenge #3 A Permanent Network

Deb has her bedroom upstairs. If she wants to use the printer, she has to take her Chromebook downstairs to attach it to the printer using a USB cable. This is inconvenient. Most individuals/ couples/ families need system resources that can be shared by all/ many/ some users effortlessly. The basis for a permanent network may already be in place with the WiFi capabilities that are built into most domestic routers.

WiFi is a set of standards that allow devices to communicate with each other wirelessly. Devices that can use WiFi include desktops, laptops, smartphones, tablets, smart TVs, printers, digital audio players, digital cameras, cars and even drones.

Some families may consider replacing their printer with one that has WiFi capabilities. An alternative approach is to keep the printer, but to invest in a Network Attached Storage (NAS) server. A NAS can act as a print server, letting approved users print on a common printer.

Equally important, it can also act as a file server, so that common files can be stored in a central place and used by everyone. Such files include media files: video files, audio files, e-books; family photographs, and documents used by the entire family.

Everyone in the family will have to become a user of the NAS, with their own log-in. Be sure to add an additional user, called Guest. Not all users are treated equally. Users have to log their client devices onto the system with a user name and a password, or some other approved form of identification. In this way, random visitors are prevented from accessing the server and its resources without permission. Guest will typically be able to access the internet, but not to access files on the NAS or use the printer.
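Conceptually, the NAS is just keeping a table of users and what each is allowed to do. A simplified sketch (the names and permission labels are hypothetical, not QNAP's actual configuration format):

```python
# A toy permission table: every family member gets a login, while Guest
# only gets internet access, mirroring the policy described above.

permissions = {
    "Ada":   {"files", "printer", "internet"},
    "Bob":   {"files", "printer", "internet"},
    "Deb":   {"files", "printer", "internet"},
    "Guest": {"internet"},
}

def may(user: str, action: str) -> bool:
    return action in permissions.get(user, set())

print(may("Deb", "printer"))    # True
print(may("Guest", "files"))    # False
```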

A NAS can also back up personal files. These backups can be encrypted and password protected, so that they are inaccessible to others.

The Xample family decide that a QNAP TS-251A best suits their needs. They equip it with 4 TB of storage, which they regard as adequate. The printer can now be permanently connected to the NAS using a USB port, and made available to users. Many printer drivers are instantly available, although older printers may require some effort to download appropriate drivers. If the printer is compatible with the NAS, it will display a message to confirm that the printer is connected.

Challenge #4 Clouds & Puddles

The Xample family now have 3 smartphones, 2 laptops, 1 old desktop, 1 printer, 1 router and 1 NAS. The NAS can function as a print server and a file server. It is also a media centre, serving videos and audio.

Computer hardware manufacturing companies are always keen to describe old products with new names. They are always looking at ways to make their rather dull equipment seem more important than it actually is. Edge and cloud computing are two such names.

An edge computer, in dataspeak, is a local server device at the edge of the internet, hence the term. Many will find it difficult to distinguish an edge computer from any other server, because almost everything today is connected to the Internet. However, in years past, many local servers were only connected to local devices using a local area network (LAN) typically wired with Ethernet cable. There was no connection to the outside world.

The cloud, in dataspeak, refers to someone else’s server. Companies that offer cloud services to the public often claim that they do the heavy lifting, storing and safeguarding data. This is not always the case. Sometimes they lose data. Sometimes they lend data to others. They might even keep a copy of it, when you ask to have it back. The misuse of data held in trust, may have economic as well as other consequences. Adobe, Amazon and many other companies are very keen for consumers to visit a nearby cloud, and use software as a service. This is the most profitable for them. Using a cloud can be expensive.

For a short period, just after cloud computing came into vogue, it became fashionable to name non-cloud servers after bodies of water. Large businesses might refer to their in-house servers as lakes. Referring to a server at home as a lake verges on the pretentious. Modesty dictates referring to smaller bodies of water: a personal (puddle) server, a nuclear family (pool) server or an extended family (pond) server.

Fun Assignment: The reader is asked to distinguish a carbon based error from a silicon based error. Assistance with this problem can be found here.

Devices Future

Volkswagen and D-Wave Systems have used quantum computing to find optimal routes, as illustrated here in Lisbon, Portugal, and available as an app near you. (Photo: Volkswagen)

… and the answer is, everywhere.

Now for the question, where do people want to use computing devices?

Guestimations

After trying to collect and interpret validated statistics, I have given up, and present some numbers that might approach something meaningful and coherent. Some are based on information collected by Simon Kemp, dated 2019-01-31. Other bits come from Wikipedia, such as this article, along with a variety of other places with assorted dates.

With a world population of 7.7 billion people, there are over 5 billion handheld devices, the vast majority also referred to as mobile phones, increasingly smartphones, although they do much more than connect people using voice communication. It would be more honest to eliminate any reference to phone in the description; the German Handy or the French Portable are both better. Other devices in this category include tablets, and similar devices lacking keyboards. Regardless, Android operating system variants clearly and increasingly dominate, with at least 75% of market share, with Apple’s iOS, its market share declining, taking most of the remainder. It remains to be seen if Huawei will be able to introduce a viable alternative to Android.

Two important characteristics distinguish larger personal computers from handheld devices: a large screen and a keyboard as an input device. Minor differences include the use of a mouse or some other pointing device. These machines are often referred to as laptops and desktops. Globally, this is a small segment compared to mobile devices, and its importance is decreasing. Part of the reason for this decline is their inability to be used everywhere.

There is general agreement that the billionth personal computer shipped in 2002, and that there were one billion such computers in operation in 2008. The dispute is how many are in use now. Some are looking for a magic number of 2 billion, but 1.5 billion units is far more likely. Windows will be installed on at least 75% of machines, MacOS on, say, 13% (which to me seems high), ChromeOS on 6% (at least in the US, and higher than I experience in Norway) and Linux on 2%. The 2019 Stack Overflow developer survey gives very different figures on what is found on machines used by computing professionals. In round numbers: Windows on 45%, MacOS on 30%, and Linux on 25%.

Another category of computer is the embedded device. One essential aspect of these is the electronic control unit (ECU). Domotics refers to home robotics. It includes all aspects of smart home technology, including sensors that monitor the environment and actuators that activate controls. These include temperature, lighting and security. However, it is pervasive, found everywhere from electric toothbrushes, to toasters and every other form of kitchen machine. Today, even a lightbulb can be considered an ECU. A typical smarthouse may contain hundreds of these devices.

The vast number of ECUs expected, plus their vulnerability in terms of security, means that WiFi can only be a temporary solution. While communication can be built on top of 120/240 V AC circuits, most devices, including LED lights, actually use low voltage DC power. Anyone building something new should be installing Cat 6A Ethernet cable at a minimum, with connections to every room. Power over Ethernet (PoE) can then provide DC power to almost everything needed.

I expect clothing will soon include embedded devices, so that personal data can be continuously collected and monitored. In Sweden, I note that several individuals have voluntarily inserted RFID devices into their bodies, so that they can use these to identify themselves, rather than relying on PIN codes. Unfortunately, it is probably only a matter of time before these devices become mandatory.

Embedded devices are also found in cars where even the most primitive contain 25 – 35 ECUs. More luxurious models may have 70 or more ECUs. Hopefully, autonomous vehicles will soon be on streets near you. The last thing this world needs is a nut behind the wheel, especially one that feels provoked into road rage at the slightest offence. Electric vehicles are already here, with Tesla’s innovations leading the way. In Norway, there will be no opportunity for people to buy fossil fueled vehicles (including hybrids) after 2024. Everything will probably be battery electric, as an explosion at a hydrogen fueling station has dimmed everyone’s interest.

Command and control (C2) is defined by Marius Vassiliou, David S. Alberts and Jonathan R. Agre in C2 Re-Envisioned: the Future of the Enterprise (2015) as a “set of organizational and technical attributes and processes … [that] employs human, physical, and information resources to solve problems and accomplish missions.” (p. 1) This definition can apply to individuals, households, organizations, small businesses, large enterprises or even the military. One major challenge has been the tendency of large manufacturers of ECUs to consider just their own product range, and to make controllers for these and only these. This is not a viable solution. Our household has opted for the most inclusive solution, found in Home Assistant.

Miniaturization will continue into the future. I am uncertain about the future form factor of personal devices/ phones. Asked whether they will shrink to wristwatch size or remain about the size they are today, I would say today’s form factor wins. Yes, one can imagine screen technology being built into glasses, or wrist watches, but will it happen? It will be interesting to see what has happened by 2040 and beyond.

In terms of PCs, they could be doomed to extinction. Physically smaller personal devices will be capable of doing everything PCs do. However, there may be situations where a person may want a larger screen, a keyboard and a pointing device. So the personal device will have to interact with these. I am not certain when voice control will replace the keyboard. When I first studied computing, in the mid-1970s, 1980 was even considered a target date for its replacement. However, that was based on people going from card punches to something else.

In terms of servers, one can also envisage a household having something the size of a small media centre, perhaps 100 x 100 x 50 mm (4″ x 4″ x 2″) which is about the size of our Asus PN 40 media player. At the current rate of miniaturization, it should be able to hold at least 100 TB by 2040. One could ask why anyone would need so much storage capacity, but today everyone seems capable of using every last byte of storage they have, and I see no reason for attitudes to change. Computers will be used in new areas because people have the processing power and data storage capacity to do it.

Perhaps the greatest change will come as quantum computing matures. Quantum computing is real. It allows computations to be made in seconds that would take a conventional supercomputer considerably longer. Google claims that its Sycamore processor, with 54 qubits, has achieved quantum supremacy and is the most advanced quantum computing processor in the world, capable of processing in 200 s what a Summit supercomputer would need 10 000 years to accomplish, making quantum computing 1 577 880 000 times faster. IBM has countered this, stating that it would only take 2.5 days, making quantum computing about 1 000 times faster. Regardless, quantum computing will provide faster calculations.
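The quoted speed-up factors can be checked with a line or two of arithmetic (assuming a 365.25-day year):

```python
# Checking the speed-up claims above: 10 000 years versus 200 s (Google),
# and 2.5 days versus 200 s (IBM's counter-claim).

seconds_per_year = 365.25 * 24 * 3600
sycamore_s = 200

google_claim = 10_000 * seconds_per_year / sycamore_s
ibm_counter = 2.5 * 24 * 3600 / sycamore_s

print(f"Google's claim: {google_claim:,.0f} times faster")  # 1,577,880,000
print(f"IBM's counter:  {ibm_counter:,.0f} times faster")   # about 1 000
```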

With my origins in Vancouver/ New Westminster, and with some of my most positive learning experiences at the British Columbia Institute of Technology, I will end this post by mentioning its Burnaby neighbour, D-Wave Systems. In 2019 they announced their next-generation Pegasus quantum processor chip, the world’s most connected commercial quantum system, with 15 connections per qubit and more than 5 000 qubits, to be available in mid-2020.

Devices Past

3D Rendering of computer center with IBM System/370-145 and IBM 2401 tape drives (Illustration: Oliver Obi)

In ancient times, computing meant batch systems that required users to drive across town to a computing centre, to punch their programs onto cards, then to submit those cards so they could be read by a card reader. An IBM 3505 Model B1 card reader from 1971 could read 80 column cards at a rate of 1200 CPM (cards per minute). It was based on the Hollerith keyboard punch from 1890. The programs were then run on a mainframe computer, such as an IBM System/370 dating from 1970. A machine consisted of several units housed in a large air-conditioned machine room with a raised floor to improve cooling and conceal wiring. Processing took time, and results were provided an hour or two later, from high-speed printers, such as an IBM 3211, printing at about 2 000 lines per minute, more than enough to keep up with the punched card input. This was the basic situation from the mid-1950s until at least the mid-1970s, with variations.

The IBM System/370 Model 145 had 500 kB of RAM, 233 MB of hard disk space, and ran at 2.5 MHz. It cost from US$ 705 775 to US$ 1 783 000. The US Bureau of Labor Statistics consumer price index states that US$ 1 in 1970 is worth US$ 6.63 in 2020, so the IBM System/370 Model 145 would cost from about US$ 4.7 million to almost US$ 12 million in 2020 dollars.

Computers are a mix of hardware and software. Writing system software was a craft where a select few excelled. They wrote elegant but lean code that executed fast. In ancient times, when the hardware was primitive, craftsmanship mattered. Compilers and operating systems had to be written in assembly/ assembler language for increased efficiency and space savings. A programmer had to think like a processor, moving code into and out of registers. As computer hardware improved, the need to write parsimonious code gradually disappeared. Programmers started becoming verbose. Programming as a profession expanded far beyond the few.

To gain an understanding of the situation facing professional programmers, at this time, one of the best books to read is The Mythical Man-Month (1975) by Frederick Brooks (1931 – ). During Brooks’ exit interview with IBM’s legendary CEO Thomas Watson Jr. (1914 – 1993), a seed for the book was planted. Watson asked why it was harder to manage software projects than hardware projects. In this book the answer is stated, now known as Brooks’ law: “Adding manpower to a late software project makes it later.”

A 2020 Raspberry Pi 4 Model B is available with 1, 2 or 4 GB of RAM. That is anywhere from 2 000 to 8 000 times more than that found on the IBM machine in the previous paragraph. A 16 GB (or larger) SD card contrasts with 233 MB of hard disk space. That is about 68 times more. The speed of 1.5 GHz with 4 cores competes with 2.5 MHz with a single core. Potentially there is a 2 400 times speed increase. More than anything else, with an RPi costing between US$ 35 and US$ 55, the IBM machine cost about 100 000 times more.
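The ratios above can be reproduced with a short sketch, using the IBM figures from the earlier paragraph and the Raspberry Pi figures as quoted:

```python
# Rough ratios between a 2020 Raspberry Pi 4 and the IBM System/370 Model 145.

ibm_ram_kb, ibm_disk_mb, ibm_mhz = 500, 233, 2.5
ibm_cost_2020 = (4.7e6, 12e6)           # inflation-adjusted price range

rpi_ram_kb = (1e6, 4e6)                 # 1 GB and 4 GB models
rpi_sd_mb, rpi_mhz = 16_000, 1_500 * 4  # 16 GB SD card; 1.5 GHz x 4 cores
rpi_cost = (35, 55)

print(f"RAM:     {rpi_ram_kb[0] / ibm_ram_kb:,.0f} to {rpi_ram_kb[1] / ibm_ram_kb:,.0f} times more")
print(f"Storage: {rpi_sd_mb / ibm_disk_mb:,.1f} times more")
print(f"Speed:   {rpi_mhz / ibm_mhz:,.0f} times faster")
print(f"Cost:    {ibm_cost_2020[0] / rpi_cost[1]:,.0f} to {ibm_cost_2020[1] / rpi_cost[0]:,.0f} times more expensive")
```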

By the 1980s, card punches had given way to terminals, consisting of a screen (that frequently offered green text on a black background) and a keyboard. These were connected indirectly to a mini-computer, that replaced the mainframe. Digital Equipment Corporation were especially fond of using Ethernet cable to connect terminals to their VAX Mini-computers. Offices were starting to be interconnected. These machines still required their own machine rooms with adequate cooling, as well as the drive to the office.

To understand this new mini-machine period of computing, there is yet another book to read, The Soul of a New Machine (1981) by Tracy Kidder (1945 – ). Data General needs a machine to compete with Digital Equipment’s 32-bit VAX. In North Carolina, they start project “Fountainhead”, to which they divert almost all of their senior design personnel. The few remaining senior designers in Massachusetts are allegedly engaged in improving Data General’s existing products. However, Tom West (1939 – 2011) starts a skunkworks project, “Eagle”, that becomes a backup in case Fountainhead fails (which it does). It is a high-risk project using new technology and misusing newly graduated engineers.

There are lots of candidates for the title of first PC, as in personal computer. Personally, I opt for the 1973 Xerox Alto, since it offered both hardware and software that worked. Others may point to the 1977 Apple II, the 1977 Commodore PET 2001, the 1977 Radio Shack TRS-80, or even the 1981 IBM PC.

Most people were still using a terminal, rather than a PC, until about 1990. Terminals didn’t die when PCs arrived, because there was initially no easy way to connect a PC to the mini-computer. The two machine types had incompatible operating systems: MS-DOS on PCs, and a host of proprietary operating systems on the assorted mini-machines. Novell NetWare and Banyan Vines offered solutions, but these were weak and difficult to implement. Important data was stored and backed up on tapes, which required special readers located in a machine room. When PCs did finally connect to larger computers, the PC usually required an Ethernet card, the entire building had to be wired with Ethernet cables, and the mini-computer was renamed a server, living inside 19-inch racks with 1.75 inch rack-units, a system standardized by AT&T around 1922.

The other first PC, as in portable computer, today better known as a laptop, is a matter of debate. The Xerox Dynabook from 1972 was a fantastic machine, except for one fatal flaw – it was never actually built in hardware, only as a conceptual model. Most other early machines were either too heavy or were equipped with screens that were too small. This situation continued until 1985, when Toshiba finally produced the T1100, fairly accurately described as “the world’s first mass-market laptop computer”.

Both LANs (Local Area Networks) and WANs (Wide Area Networks) started interconnecting users in the early 1990s. The need for servers brought about a need for a standardized operating system. The first steps involved different flavours of Unix, first developed in the 1970s at Bell Labs along with the C programming language. The Unix modular design provides a set of simple tools, each performing a limited, well-defined task; the unified filesystem serves as the primary means of communication between them, with shell scripting tying them together.

A number of unfortunate issues related to the proprietary origins of Unix led many to seek an open-source solution. It was found in BSD (Berkeley Software Distribution) and in Linux-kernel-based operating system distributions, along with other related products that could be used freely. Linux was able to address a variety of different segments: servers, consumer desktops, laptops, tablets, phones and embedded devices. This was assisted by the modular design of the Unix model, which allowed the sharing of components.

Initially, home users had the choice of Windows or Apple operating systems. In the mid- to late 1990s, low-speed, dial-up modems allowed Internet access. People even started wiring their houses for the Internet, with Ethernet cables. However, most office and home computers were still beige boxes.

Fictional tablets first appeared in Stanley Kubrick’s (1928 – 1999) 2001: A Space Odyssey (1968). Real tablets first appeared in the last two decades of the 20th century. However, it wasn’t until 2010, when Apple released the iPad, that the tablet achieved widespread popularity.

Cell phones are often incorrectly referred to as mobile devices. They are more correctly handheld devices, even if they spend most of their time in assorted pockets and bags. It is the human with the bag or pocket that is mobile. On 1966-12-01, the first commercial mobile telephone network (OLT) was launched in Norway. This was subsequently replaced, in 1981, with the Nordic Mobile Telephone (NMT) system, in operation in Denmark, Finland, Norway and Sweden. These used what would be regarded today as massive phones. The first personal digital assistant (PDA) that would be accepted today as a handheld device was the 1984 Psion Organiser, although PDA was not used as a term until 1992.

The 1996 Nokia 9000 Communicator can be regarded as the first primitive smartphone. It was actually a hybrid, combining PDA and conventional cell phone features. Canadians, especially, will want to date the smartphone to Research in Motion’s 2002 BlackBerry. The company founder, Mihal “Mike” Lazaridis (1961 – ), is in some ways the Canadian equivalent of Steve Jobs (1955 – 2011).

Corrections: Alasdair McLellan has corrected two errors. I had exaggerated the difference in RAM between the IBM System/370 and a Raspberry Pi by a factor of 1 000: it is not 2 – 8 million times larger, but only 2 – 8 thousand times larger. The first commercial mobile telephone network was not Japanese, but Norwegian. Documentation for it can be found in this Norwegian-language source.

The Norwegian Wikipedia also writes about it, stating: Offentlig Landmobil Telefoni (OLT) var det første mobiltelefonnettet i Norge. Det ble opprettet 1. desember 1966 og ble nedlagt i 1990 (stopp for nye konsesjoner var 1. november 1989). Ved introduksjonen av NMT i 1981 var det ca. 22 000 abonnenter på OLT. OLT var manuelt, og ikke et automatisk mobilsystem, og regnes derfor som før “1G”.

Translation into English: Public Land Mobile Telephone (OLT) was the first mobile telephone network in Norway. It was established on December 1, 1966 and closed in 1990 (new licences stopped being issued on November 1, 1989). At the introduction of NMT in 1981, there were approximately 22 000 OLT subscribers. OLT was manual, not an automatic mobile system, and is therefore considered pre-“1G”.

Computing: The Series

Red lighted Keyboard (Photo: Taskin Ashiq, 2017)

In 2020, a series of weblog posts about computing devices will be written and published. The first in this series, about the end of support for Windows 7, was published one week ago, on 2020-01-07.

Many people do not know what types of devices would be advantageous for them to acquire. Even when they know the type of device, they do not understand how to evaluate that category in order to make appropriate purchases. Brand names and price become proxies for device quality. Unfortunately, this can result in inappropriate devices being selected. Not all devices need to be purchased new. Many older, even discarded, devices are suitable for continued use, but may require the installation of different, more appropriate software.

This series consists of:

  1. Windows 7 (2020-01-07)
  2. Computing: The Series (2020-01-14)
  3. Devices Past (2020-01-21)
  4. Devices Future (2020-01-28)
  5. Clouds & Puddles (2020-02-04)
  6. Universal Serial Bus (2020-02-11)
  7. Video connectors (2020-02-18)
  8. Power supply/ charging (2020-02-25)
  9. Input & Output Peripherals (2020-03-03)
  10. Computer Access & Assistance (2020-03-10)
  11. External Drives (2020-03-17)
  12. Printers (2020-03-24)

Starting 2020-04-01, the focus at Cliff Cottage will be on outdoor building construction. There will be limited time for blogging, with the exception of a single monthly update. Blogging will resume on 2020-10-06. There are several different categories of computing devices that most people may use/ acquire for work and leisure:

  1. Handheld devices (2020-10-06)
  2. Laptop & desktop devices (2020-10-13)
  3. Media players (2020-10-20)
  4. A Practical Server (2020-10-27)
  5. Vehicle devices (2020-11-03)
  6. Smart home devices (2020-11-10)
  7. Other embedded systems (2020-11-17)
  8. Visual impairment (2020-11-24)
  9. Hearing impairment (2020-12-01)
  10. Dexterity impairment (2020-12-08)
  11. Mobility impairment (2020-12-15)
  12. Computing: A Summary (2020-12-22)

Many of these will focus on the needs and limitations of older users, and how to mitigate the impact of various impairments.

Each topic, including publication dates, is subject to revision. People who want other topics covered can contact me via email: brock@mclellan.no

Update: On 2020-02-04 at 13:40, two of the topics in this post were changed: 09. Input Devices (2020-03-03) and 10. Output Devices (2020-03-10) were merged into 09. Input & Output Devices (2020-03-03), and a new topic, 10. Computer Access & Assistance (2020-03-10), was created. On 2020-02-16 at 07:00, Input & Output Devices was changed to Input & Output Peripherals.

Windows 7

Windows 7 (2009-07-22 – 2020-01-14) R.I.P.

This weblog post was written to discuss the situation facing people currently using Windows 7, who will find themselves without support after 2020-01-14.

Summary: If you currently run Windows 7, your machine will soon be insecure. If you intend to keep older hardware, you will probably have to transition to a Linux distro, because that hardware won’t be able to handle Windows 10. If you intend to upgrade the hardware, you may still prefer to transition to something like Linux Mint, because the learning curve will be gentler. Windows 7 is closer to Linux Mint than it is to Windows 10.

If you are still running Windows 7, you are not alone. Many are in the same end-of-life situation, which is forcing people to rethink their operating system strategy. Should they keep on using Windows 7, upgrade to Windows 10, which may require a hardware investment, or keep the same hardware and go over to something like Linux Mint? Chris Barnatt has three videos where he discusses the challenges facing Windows 7 users. The first, from 2018-08-26, deals with a transition to Linux Mint for Windows users. The second, from 2019-11-17, is about hardware that allows for a quick change of drives between the two operating systems. The third, from 2019-12-22, shows what Chris is planning to do. Yes, it includes keeping Windows 7, but disconnecting it from the internet. Don at Novaspirit Tech also has some insights. Heal My Tech provides an alternative view.

There are also a large number of other Linux distros that could be selected instead of Linux Mint, but there is only a short window of opportunity left to experiment and make a selection. Distros can be tested by running them as a virtual machine, or running them from a memory stick.

VirtualBox is open-source and can run on a Windows 7 host machine. Multiple guest operating systems can be loaded, started, paused and stopped independently, each within its own virtual machine.
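For those comfortable with a command line, VirtualBox can also be driven by its VBoxManage tool. The sketch below shows one way to script a throw-away test VM from Python; the VM name, memory size, disk size and ISO filename are placeholders, and exact options can vary between VirtualBox versions.

```python
# A minimal sketch: create and boot a throw-away VirtualBox VM for testing
# a Linux distro. Assumes VirtualBox is installed with VBoxManage on the
# PATH; "MintTest" and "linuxmint.iso" are placeholder names.
import subprocess

VM = "MintTest"
ISO = "linuxmint.iso"

def vbox(*args):
    # Run one VBoxManage command, stopping if it fails.
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", VM, "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", VM, "--memory", "4096", "--cpus", "2")
vbox("createmedium", "disk", "--filename", VM + ".vdi", "--size", "20480")
vbox("storagectl", VM, "--name", "SATA", "--add", "sata")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "0",
     "--device", "0", "--type", "hdd", "--medium", VM + ".vdi")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "1",
     "--device", "0", "--type", "dvddrive", "--medium", ISO)
vbox("startvm", VM)   # boots the live ISO in a window; delete the VM afterwards
```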

I used Windows regularly because my employer insisted on it. This situation ended upon my retirement at the end of 2016. Looking back, Windows XP was my favourite Windows version, even though it only reaches warm on a scale ranging from cold to hot. My opinion of Windows 7 is that it ranks lukewarm, but definitely much warmer than the cold assigned to Windows 8 and 8.1 (the last version I was compelled to use).

Here is part of the Windows timeline. Certain versions are deliberately missing: Windows XP, codenamed Whistler – after the British Columbia mountain and resort – was released in 2001, supported until 2009, with extended support lasting until 2014. Windows 7 (Blackcomb – another BC mountain) was released in 2009, supported until 2015, with extended support ending this month, on 2020-01-14. Windows 8 (no codename) was released in 2012, and version 8.1 (Blue) in 2013. Regular support ended in 2018, and extended support will end in 2023. Windows 10 (initially codenamed Threshold, with later Redstone updates) was released in 2015, and is currently supported.

These days, irregular use of Windows 10 is required because another resident has a computer with Windows 10 installed, an Asus ZenBook laptop, and I seem to have some maintenance responsibilities. I have tried to find Linux equivalents for all of the software she uses regularly, but have not been able to port her library management system (BookCAT), with its catalogue of almost 4 000 paper books, to anything open-source. I find Windows 10 not just complex, but also confusing, and grade it frosty.

I have also used MacOS: in the 1990s, when we owned several different Apple Macintoshes; then at school in the early 2000s, when I was teaching Media and Communication. Most recently, I used it when I inherited a MacBook Pro from my kind daughter, along with an iPhone 5S, in 2015. I liked the operating system, and ranked it balmy, slightly above Windows XP. Yet, vendor lock-in prevents me from ranking it any hotter, and from going out and buying anything with an Apple label. Walt Mossberg provides an overview of developments at Apple over the past forty years.

Regrettably, I am an experienced Acer ChromeBook 11 user. This system receives the grade of frozen, if only because it refused to play sound with Firefox. It also had several other faults, so one year and one day after purchasing it, I gave it away. Chrome OS essentially functions as a host for Google’s other products, where all files and folders are stored in Google’s cloud. It exists solely so that Google can offer software-as-a-service (SaaS). Its vendor lock-in is far worse than Apple’s. This insight partially rehabilitated Apple in my eyes.

Microsoft, with Windows 10, is in this respect also imitating Chrome OS. Microsoft Office 365 provides text processing/ spreadsheet/ presentation and other programs, with files stored in Microsoft’s cloud. This is one reason I won’t allow it on my computers: I don’t want my personal work spread indiscriminately throughout the world, I would have no idea what Microsoft is doing with it, and I have no guarantee that it is behaving properly.

SSD vs HDD

A 10 MB HDD for USD 3 500 in 1980 was not an excessive price. The 1 MB of RAM on each of the Digital Equipment Corporation (DEC) VAX-11/750 mini-computers I used cost over NOK 1 000 000 in 1980. That is about USD 200 000 in 1980, or about USD 620 000 today (2019). The HDD pictured would cost over USD 10 000 today (2019), taking the value of money into account, which would make the cost of 1 TB of storage equal to USD 1 000 000 000 today (2019). Yup, that’s one billion dollars!
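The billion-dollar figure follows from simple scaling. A sketch of the arithmetic, using the rounded numbers above:

```python
# How a 1980-vintage 10 MB HDD scales to a notional 1 TB of storage,
# using the rounded, inflation-adjusted price quoted above.
drive_mb = 10
price_2019 = 10_000                     # ~USD for one 10 MB drive in today's money
drives_per_tb = 1_000_000 / drive_mb    # 1 TB = 1 000 000 MB -> 100 000 drives
cost_per_tb = drives_per_tb * price_2019

print(f"{drives_per_tb:,.0f} drives, about USD {cost_per_tb:,.0f} per TB")
# 100,000 drives, about USD 1,000,000,000 per TB
```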

SSD = Solid State Drive; HDD = Hard Disk Drive.

The Summary:

For daily operations on a desktop or laptop computer, SSDs are better (read: faster, quieter, more energy efficient, potentially more reliable) than HDDs. However, HDDs cost considerably less (about 6.5 times less per terabyte) than SSDs. Thus, HDDs are still viable for backup storage, and should be able to last at least five years. At the end of that time, it may be appropriate to go over to SSDs, if prices continue to fall.

The Details:

This weblog post is being written as I contemplate buying two more external hard disk drives (HDDs), one white and one blue. These will be yet more supplementary backup disks to duplicate storage on our Network Attached Storage (NAS) server, Mothership, which features 4 x 10 TB Toshiba N300 internal 3.5″ hard drives rotating at 7 200 RPM. These were purchased 2018-12-27. While the NAS has its own redundancy, allowing up to two HDDs to fail simultaneously, a fire or other catastrophe would void this protection. Thus, external HDDs are used to store data at a secret, yet secure, location away from our residence.

The last time external hard disks were purchased was 2018-09-04. These were Western Digital (WD) My Passport 4TB units, 2.5″ form factor, rotating at 5 400 RPM, with a USB 3.0 connector. One was red (costing NOK 1 228) and the other was yellow (at NOK 1 205). However, we have nine other 2 – 4TB units, some dating from 2012-11-15. Before this we had at least 4 units with storage of 230 GB – 1 TB, dating to 2007-09-01. (We are missing emails from before 2006, so this is uncertain territory, although if this information were required, we have paper copies of receipts that date back to 1980.)

The price of new WD My Passport 4TB HDD units has fallen to NOK 1 143. New WD My Passport Solid State Drive (SSD) units cost NOK 2 152 for 1TB, or NOK 3 711 for 2TB. That is a price per TB of about NOK 1 855 for the SSD, in contrast to about NOK 286 for the HDD, making SSDs about 6.5 times more expensive than HDDs.
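The per-terabyte arithmetic behind that 6.5 figure, as a small sketch using the prices quoted above:

```python
# Price per TB for the drives quoted above, and the resulting SSD/HDD ratio.
hdd_per_tb = 1_143 / 4          # WD My Passport 4TB HDD  -> ~NOK 286/TB
ssd_per_tb = 3_711 / 2          # WD My Passport 2TB SSD  -> ~NOK 1 856/TB

print(f"HDD: NOK {hdd_per_tb:.0f}/TB, SSD: NOK {ssd_per_tb:.0f}/TB, "
      f"ratio: {ssd_per_tb / hdd_per_tb:.1f}x")
# HDD: NOK 286/TB, SSD: NOK 1856/TB, ratio: 6.5x
```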

I am expecting to replace the disks in the NAS, as well as the external drives, about once every five years. Depending on how fast the price of SSDs sinks in relation to HDDs, these proposed external HDDs could be the last ones purchased.

As the price differential narrows, other disk characteristics become more important. Read/write speed is especially important for operational (as distinct from backup) drives. Typically, a 7 200 RPM HDD delivers an effective read/write speed of 80 – 160 MB/s, while an SSD will deliver from 200 MB/s to 550 MB/s. Here the SSD is the clear winner, by a factor of about three.

Both SSD drives and HDD’s have their advantages and disadvantages when it comes to life span.

While SSDs have no moving parts, they don’t necessarily last longer. Most SSD manufacturers use non-volatile NAND flash memory in the construction of their SSDs. These are cheaper than comparable DRAM units, and retain data even in the absence of electrical power. However, NAND cells degrade with every write (referred to as a program, in technical circles). An SSD exposed to fewer writes will last longer than an SSD exposed to more. If a specific block is written to and erased repeatedly, that block would wear out before other, less extensively used blocks, prematurely ending the SSD’s life. For this reason, SSD controllers use wear levelling to distribute writes as evenly as possible. This fact was brought home yesterday, with an attempt to install Linux Mint from a memory stick onto a new laptop. It turned out that some areas of the memory stick were worn out, and the device could not be read as a boot drive. Almost our entire collection of memory sticks will be reformatted and then recycled, a polite term for trashed!
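The idea behind wear levelling can be shown with a toy model. This is not how a real SSD controller works in detail (real controllers also maintain a logical-to-physical block map and perform garbage collection); it only illustrates why spreading writes keeps any one block from wearing out early. The block count and PE limit are invented figures.

```python
# Toy wear-levelling model: always write to the least-worn block, so erase
# counts stay roughly equal across the whole device. Figures are invented.
BLOCKS = 8
PE_LIMIT = 1_000      # hypothetical program/erase cycles per block

erase_counts = [0] * BLOCKS

def write_block(data):
    # Pick the block with the lowest erase count instead of reusing one block.
    target = min(range(BLOCKS), key=lambda b: erase_counts[b])
    if erase_counts[target] >= PE_LIMIT:
        raise RuntimeError("device worn out")
    erase_counts[target] += 1     # each write/erase wears the chosen block
    return target

for i in range(20):
    write_block(f"payload {i}")

print(erase_counts)   # e.g. [3, 3, 3, 3, 2, 2, 2, 2] -- wear is spread evenly
```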

Flash memory was invented in 1980, and was commercialized by Toshiba in 1987. SanDisk (then SunDisk) patented a flash-memory based SSD in 1989, and started shipping products in 1991. SSDs come in several different varieties. Triple-Level Cells (TLC) = 3-bit cells offering 8 states, with between 500 and 2 000 program/ erase (PE) cycles, are currently the most common variety. Quad-Level Cells (QLC) = 4-bit cells offering 16 states, with between 300 and 1 000 PE cycles, are starting to come onto the market. However, there are also Single-Level Cells (SLC) = 1-bit cells offering 2 states, with up to 100 000 PE cycles, and Multi-Level Cells (MLC) = 2-bit cells offering 4 states, with up to 3 000 PE cycles. More bits per cell results in reduced speed and durability, but larger storage capacity.
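A rough rule of thumb relates bits per cell to endurance: TBW ≈ capacity × PE cycles ÷ write amplification. The sketch below uses the upper PE-cycle figures from the paragraph above and an assumed write-amplification factor of 3; real drive ratings also depend on over-provisioning and the controller.

```python
# Bits per cell -> number of states, plus a rough terabytes-written (TBW)
# estimate per TB of capacity. PE cycles are the upper figures quoted above;
# the write-amplification factor of 3 is an assumption.
CELL_TYPES = {          # name: (bits per cell, upper PE-cycle figure)
    "SLC": (1, 100_000),
    "MLC": (2, 3_000),
    "TLC": (3, 2_000),
    "QLC": (4, 1_000),
}
WRITE_AMPLIFICATION = 3.0

for name, (bits, pe_cycles) in CELL_TYPES.items():
    states = 2 ** bits
    tbw_per_tb = pe_cycles / WRITE_AMPLIFICATION   # TBW per TB of capacity
    print(f"{name}: {bits} bit(s)/cell, {states} states, ~{tbw_per_tb:.0f} TBW/TB")
```

For a 1TB TLC drive this rule of thumb gives roughly 670 TBW, which is in the same ballpark as the 600 TBW warranty figure quoted for the 860 EVO below.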

QLC vs TLC Comparisons:

Samsung 860 EVO SSDs use TLC cells, while Samsung 860 QVO SSDs use QLC cells. The 1TB price is NOK 1 645 (EVO) vs NOK 1 253 (QVO), almost a 25% price discount. The EVO offers a 5-year or 600 TB written (TBW) limited warranty, vs the QVO’s 3-year or 360 TBW.

With the real-world durability of the QVO at only 60% of the EVO’s, the EVO offers greater value for money.
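Another way to express this is price per terabyte of warranted writes, a small sketch using the 1TB prices and TBW figures above:

```python
# Cost per warranted terabyte written (TBW) for the two 1TB drives above.
evo_per_tbw = 1_645 / 600    # ~NOK 2.7 per TBW
qvo_per_tbw = 1_253 / 360    # ~NOK 3.5 per TBW

print(f"EVO: NOK {evo_per_tbw:.2f}/TBW, QVO: NOK {qvo_per_tbw:.2f}/TBW")
# The EVO is cheaper per warranted terabyte written, despite the higher sticker price.
```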

It should also be pointed out that both the EVO and the QVO have a 42 GB cache that allows for exceptionally fast writes up to that limit; once the limit has been reached, write speeds drop considerably.

In contrast to SSDs, HDDs rely on moving parts for the drive to function. Moving parts include one or more platters, a spindle, a read/ write head, an actuator arm, an actuator axis and an actuator. Because of this, an SSD is probably more reliable than an HDD. Yet, HDD data recovery is better, if it is ever needed. Several different data recovery technologies are available.

The Conclusion:

The upcoming purchase of two My Passport 4TB external HDDs may be my last, before going over to SSDs for backup purposes, on both internal and external drives. Much will depend on the relative cost of 10TB SSDs vs HDDs in 2023, when it will be time to replace the Toshiba N300 10TB HDDs.

For further information on EVOs and QVOs see Explaining Computers: QLC vs TLC SSDs; Samsung QVO and EVO.