It has now been over a month since I published a weblog post. The reason is best illustrated by the photograph below.
Yes, construction work is time-consuming, but it provides healthy exercise. The goal is to rebuild the house so that it is suitable for a couple of old people to live in. In March 2019, we will have lived in the house for 30 years.
The outer wall of the main level of the house has been replaced. From the outside inwards, the new wall consists of 25 x 340 mm horizontal siding (cladding), 23 x 48 mm vertical nailing strips, a wind barrier, 36 x 198 mm (vertical) studs with 200 mm of insulation, a vapour barrier, 48 x 48 mm horizontal nailing strips with 50 mm of insulation, and 12 mm inside paneling in panels measuring 600 x 2390 mm.
The horizontal nailing strips allow space for services for power and communication. Potentially, other services such as water and waste water could also use this space, but they are not needed along this particular wall. Services are not permitted to penetrate the space provided by the 36 x 198 studs.
On blustery days, work is being done in the attic. Previously, ceiling/roof insulation consisted of 150 mm of insulation between the floor joists of the attic. In the extension (added in about 1984, five years before we moved into the house in March 1989), to the right in the photograph, it was impractical to add any extra insulation. For the original house, two solutions were needed to upgrade the insulation. In the center third of the attic, insulation is being added between the rafters, with a 50 mm airspace left to prevent moisture buildup (and wood rot). A 48 x 48 mm strip is then being added to the bottom of each rafter so that 200 mm of insulation can be fitted. In both of the two remaining outer thirds of the attic, 36 x 198 mm joists are being added at 90° to the original joists. These measures will provide a total of about 350 mm of insulation.
During the summer, Alasdair was of great help during the construction process. Since he returned to Bergen, I have usually worked alone, with help being provided by Trish as needed. Since the photograph was taken, scaffolding has been added to the wall to facilitate the replacement of studs and the window in the attic. A chain pulley block, fastened to the peak of the roof, will allow heavy objects to be freighted up and down.
Today, Monday, 2018-08-13, the real-life Ethan is 16 years old. Happy Birthday Ethan! This date also marks the day when I have spent precisely half my life as a father.
Ethan & Ethel want to improve their woodworking workshop by buying a stationary machine. They estimate that this type of machine can increase their production capacity. Because these machines are expensive, they will have to plan which one to buy first. They are thinking that if they make the right investments, they will be able to build things that others want, and make some money. For example, they have an aunt who wants a garden shed, other relatives who need new kitchen cabinets, and family friends who want hardwood furniture. However, they can’t build all these things at once, and decide to concentrate on building garden sheds.
There are many designs for garden sheds. Usually they are small uninsulated buildings. Ethel & Ethan are thinking of using softwood lumber for framing, then covering it with OSB. When they made up a cut list, they realized that they should build using full sheets of OSB. That means dimensions of 1 200 mm, 2 400 mm or 3 600 mm. Beyond that, the buildings would be far too big for their limited skill sets. They decided that their first building should be 2 400 mm long by 2 400 mm wide by 2 400 mm high at the eaves. At the ridge, it would be 600 mm higher, or 3 000 mm. It is a small shed, but there is less that can go wrong, and less time and equipment is needed to make it.
They started thinking about pre-cutting the pieces for one shed to save time in the short building season. Then they thought that if they could pre-cut one, they could precut more. Then they contacted other people in their network to see if they could find buyers. After these conversations, they estimated that they could build and sell five of these small sheds in the summer.
They want to buy a chop saw, and know the maximum dimensions of the material they will be working with on the sheds will be about 50 mm by 100 mm. What they really want is a flipover saw, such as a DeWalt DW743. One limitation of this model is that it cannot cut joists, which could matter if they take on bigger projects. The saw was originally made by the German power tool company ELU, which was bought by DeWalt in 1994.
At the Unit One workshop, a Ryobi compound sliding mitre saw is used. It was selected because of price, and the fact that the sliding mechanism is in front of the saw. This prevents it from being used with materials thicker than about 100 mm. Most sliding saws go back towards the wall, which means that the saw has to be set further forward, increasing the amount of space used. Perhaps the best mitre saw is a Bosch axial-glide saw. Its main disadvantage is price, costing almost four times that of a Ryobi.
Lumber dimensions: A 2 × 4, when dressed, is 1-1/2″ × 3-1/2″, or 38 mm × 89 mm. In Europe it is 48 mm × 98 mm.
A sliding compound mitre saw and a plunge saw make a versatile pair. Table saws are more dangerous than either of these saws, because the operator holds the material being cut, instead of the saw, making it easier to accidentally move hands into the spinning blade.
In the last post, Ethan & Ethel had to do a lot of work to keep track of their heating costs.
Time used = Time turned off – Time turned on. Example: 17h05m – 15h31m = 94m
They wrote down the time they turned on their heater, and then the time when they turned it off. They then subtracted the “on” time from the “off” time to find the number of minutes the heater was on. This had to be repeated for every visit to the workshop with the heat on. At the end of the month, they had to add all of these minutes together to find their monthly usage. What a boring job, and so unnecessary when a computer can do it automatically! All that is needed is a few lines of code. Code that has already been written, and is waiting to be reused.
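The bookkeeping Ethan & Ethel were doing by hand takes only a few lines of Python. This sketch is purely illustrative; apart from the 15h31–17h05 visit used as the example above, the session times are hypothetical:

```python
from datetime import datetime

# Each workshop visit with the heat on is logged as an ("on", "off") pair, "HH:MM".
sessions = [("15:31", "17:05"), ("09:10", "10:00")]

def minutes_used(on, off):
    """Minutes between switching the heater on and off."""
    fmt = "%H:%M"
    delta = datetime.strptime(off, fmt) - datetime.strptime(on, fmt)
    return int(delta.total_seconds() // 60)

# Monthly usage is just the sum over all logged sessions.
total = sum(minutes_used(on, off) for on, off in sessions)
print(total)  # 94 + 50 = 144 minutes
```

Multiplying the total by the heater's power rating and the electricity price would give the monthly heating cost directly.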
Workshop computer control means that computing equipment is running hardware and software that senses and activates workshop activities.
Stop the Press!
This post was originally written 2018-03-02. It is now 2018-08-11, more than five months later. Reviewing it at the time, I was dissatisfied with the previous paragraph, which continued: “a Raspberry Pi single-board computer will be used to run Home-Assistant software. The Raspberry Pi will be connected to two different Arduino micro-controllers using USB cables.”
The problem, both then and now, is that while the above solution would work, it is not optimal. Why use three components when one should do? Ideally, a single microcontroller should be able to: 1) run home automation software, in this case Home-Assistant; 2) connect to analogue sensors and convert their analogue input data to digital data; 3) connect digitally to relays that trigger actuators; 4) communicate with other components on the local area network using wires (Ethernet); and 5) receive electrical power over those same wires.
The best way forward to gain an understanding of workshop problems is to pretend that the ideal solution exists, a fantasy Unicorn IoT (Internet of Things) microcontroller.
Home-Assistant
If Ethan and/or Ethel are to work in a computer controlled workshop, one of the first things they need to control is the workshop computer. It should be designed in such a way that it can respond to their needs, turning lights, heat, tools and more on and off.
While a Raspberry Pi (and its clones and near relatives) is capable of running this software, an Arduino microcontroller is not.
Sensors
In a workshop there can be a large number of different sensors measuring all sorts of things. There can also be a large number of actuators using the same data. For example, both a heater and a vent may use data from a room temperature sensor, but in different ways. The heater may be activated if the work space is too cold. Once it gets hot enough it will shut off. If the temperature continues to rise, then a different actuator, the vent will be activated, but only if the outside temperature is lower than the inside temperature. To determine that, there needs to be a second temperature sensor, this one measuring the outside air.
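The heater/vent logic described above can be sketched in a few lines of Python. The threshold temperatures here are hypothetical, chosen only to illustrate the decision rules:

```python
def control(inside_c, outside_c, heat_below=15.0, vent_above=25.0):
    """Decide heater and vent state from two temperature sensors.

    Thresholds are illustrative: heat when the workshop is too cold,
    vent when it is too hot AND the outside air is cooler than the inside air.
    """
    heater = inside_c < heat_below
    # Venting only helps if the outside air can actually cool the room.
    vent = inside_c > vent_above and outside_c < inside_c
    return heater, vent

print(control(12.0, 5.0))   # (True, False)  - too cold: heat
print(control(28.0, 20.0))  # (False, True)  - too hot, cooler outside: vent
print(control(28.0, 30.0))  # (False, False) - hotter outside: venting won't help
```

Note how the vent decision needs both sensors, exactly as described above, while the heater needs only one.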
A sensor is any device that measures a physical quantity. Temperature sensors can be found not only in the workshop space, but also inside machines. This Wikipedia article lists sensors by sensor type: https://en.wikipedia.org/wiki/List_of_sensors
Some of the other sensors in a workshop include: humidity, measuring water vapour in the air; infrared, detecting body heat; light, measuring light levels; and smoke, detecting fires. Those are sensors that can be used anywhere in a house. Some sensors are specific to a workshop: wood moisture content and dust particles in the air.
Having so many sensors can be a major distraction, so from now on the focus will be on just one, an LM35 temperature sensor.
LM35 Temperature sensor
Several companies make temperature sensors, but Texas Instruments makes one that is world famous, the LM35. It costs about $1.50.
While information about the LM35 is available in a data sheet that contains more than enough information about every aspect of the sensor, most people don’t need to read it. Why? Because all of the engineering work has been done before. Since Ethan and Ethel will be using an Arduino, they just need to know how to connect an LM35 to an Arduino. Then they have to find a (free) program that uses the LM35, and upload it onto the Arduino. With a little practice, anyone can get a sensor working on an Arduino in about five minutes.
The LM35 is cool. The main reason is shown in this graph. Many sensors express themselves as a voltage that varies smoothly with the quantity being measured; on a graph this makes a straight line. The LM35 is exceptional because its line passes through zero: at 0°C the output voltage is 0V, and every 1°C adds (with positive temperatures) or subtracts (with negative temperatures) precisely 10 mV. At 100°C, the output voltage is exactly 1V. The LM35 is also very flexible regarding supply voltage: it can use anything from 4V to 20V.
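Because of that 10 mV/°C scale factor, converting an LM35 reading is trivial: divide the output in millivolts by ten. A minimal sketch:

```python
def lm35_celsius(millivolts):
    """LM35 scale factor: 10 mV per degree Celsius, 0 V at 0 degrees."""
    return millivolts / 10.0

print(lm35_celsius(0))     # 0.0 degrees C
print(lm35_celsius(215))   # 21.5 degrees C
print(lm35_celsius(1000))  # 100.0 degrees C
```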
ADC
Computers use digital data, and can’t normally read voltages directly. Micro-controllers have Analog to Digital Converters (ADC) that automatically change an input voltage into a digital value. On the Arduino Uno, there are six analog pins that can read voltages from 0 V to 5 V (or 0 mV to 5 000 mV). This means that up to six different sensors can be connected to an Arduino board; there are ways to add more, if needed. Each sensor then has its voltage converted into a digital value between 0 and 1023. These analog pins have a sensitivity of about 4.9 mV. So a voltage from 0 to 4.8 mV will be given a value of 0. Voltages from 4.9 mV to 9.8 mV will be given a value of 1. This continues right up to voltages from 4 995.1 mV to 5 000 mV, which will be given a value of 1023.
It takes about 100 µs (100 microseconds or 0.0001 s) to read an analog input. The maximum reading rate is 10 000 times a second. Usually, reading a temperature once a second is good enough. In fact, in some circumstances reading it every five minutes or every hour would make better sense, especially if all this data has to be stored.
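The scaling described above, from a raw 10-bit reading back to millivolts (and, for an LM35, on to degrees), can be sketched in Python. The raw reading of 44 is a hypothetical example value:

```python
def adc_to_millivolts(raw, vref_mv=5000, steps=1024):
    """Arduino Uno style 10-bit ADC: raw 0-1023 maps onto 0-5000 mV,
    so each step is 5000/1024, or about 4.9 mV."""
    return raw * vref_mv / steps

raw = 44                        # hypothetical reading from an LM35
mv = adc_to_millivolts(raw)
print(round(mv, 1))             # 214.8 mV
print(round(mv / 10, 1))        # 21.5 degrees C, using the LM35's 10 mV/degree
```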
Arduinos have ADC units, Raspberry Pis do not.
Relays
Microcontrollers do not respond well to large currents, and will be permanently damaged if connected to too many volts, amps or watts. If one wants to turn on an electric heater to warm up a space, this is typically done with a relay. A relay is an electrically operated switch: when its electromagnet is activated with a low voltage, typically 5 V, it makes or breaks a high voltage circuit.
Many microcontrollers have supplementary boards that attach directly to pins on their main boards. Both the Raspberry Pi and the Arduino have them. On a Raspberry Pi they are called Hats (Hardware Attached on Top); on the Arduino they are called shields. Raspberry Pi hats allow the main board to identify a connected hat and automatically configure the pins.
Communication
For automation systems, wired communication is preferred. The most common form of wired communication is Ethernet, developed at Xerox PARC (Palo Alto Research Center) in 1973-4 and used ever since. For workshop automation, most people would be advised to use Cat 6A cable.
In the future, almost every significant power tool in a workshop will be connected to the local area network, including dust collection and air filtering equipment. Even in home workshops, various variants of CNC (computer numeric controlled) equipment will come into use, including 3D printers and laser cutters.
Microprocessors in the 1970s would process data in a single program that ran continuously. In the 21st century, not so much. The reason for this is that each sensor (and each actuator) is treated as a separate object. Sensors publish data about a specific state, and actuators subscribe to the data they need to make decisions about how they will operate. To do this they use a publish-subscribe protocol called MQTT. It has been around since 1999.
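The publish-subscribe pattern can be illustrated with a toy in-process dispatcher. This is not MQTT itself (a real deployment would use an MQTT broker and a client library such as paho-mqtt); it only shows the shape of the interaction, with hypothetical topic names:

```python
class Broker:
    """Toy publish-subscribe dispatcher, standing in for an MQTT broker."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        # An actuator registers interest in a topic.
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # A sensor publishes; every subscriber to that topic is notified.
        for callback in self.subscribers.get(topic, []):
            callback(payload)

broker = Broker()
readings = []

# The heater's controller subscribes to the room-temperature topic...
broker.subscribe("workshop/temperature", lambda t: readings.append(t))
# ...and the temperature sensor publishes to it.
broker.publish("workshop/temperature", 21.5)
print(readings)  # [21.5]
```

The point is the decoupling: the sensor does not know (or care) whether one actuator or ten are listening.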
PoE (Power over Ethernet)
Power over Ethernet allows electrical power to be sent over the same Ethernet cable used to send and receive data to a device. This simplifies life considerably. There are no batteries to change or high-voltage power cables to install. The main challenge is that only a few microcontrollers are capable of utilizing this feature. Using a Hat or shield with PoE connectivity is another possibility.
The California water crisis is an emblematic wicked problem. My personal awareness of the problem began in the 1950s, with the North American Water and Power Alliance proposing to divert British Columbia water to California. For many, awareness came with Chinatown, Roman Polanski’s 1974 neo-noir film. Other people were much more directly affected – having to live their daily lives in a drought-ridden California, or becoming environmental refugees.
For passionate insight rather than raw emotion, the standard work is Marc Reisner’s (1948-2000) Cadillac Desert, 1986. With the book being over 30 years old, B. Lynn Ingram and Frances Malamud-Roam have written a worthy follow-up, The West Without Water, 2013.
Today’s weblog post is not about the California water crisis, as gruesome as it is for some, and could be for many more. It is about wicked problems. The essence of a wicked problem is that it is so complex, that it is impossible to understand all its implications. Any resolution will require a bespoke solution, which will only partially resolve disputes.
Wicked is a term used in operations research. Some practitioners apply it infrequently or never, while others use it more extensively. Regardless, many regard it with reverence. Working with these ultimate problems has the potential to elevate or destroy one’s professional reputation. More importantly, resolution of a wicked problem may positively affect the lives of millions, and in some cases – such as world poverty – billions of people.
Operations research as a subject area is, itself, often misunderstood. Part of the problem is that practitioners value precision to such a degree that they find it difficult to define words. Sometimes, one suspects, their motivation is to discourage or to impress readers, rather than to clarify. In one common definition, the words advanced analytical methods appear. While most people may have a basic understanding of what method means, their understanding may be fuzzier when it comes to understanding the term analytical. Adding advanced onto that, just leaves people dumbfounded. A simpler approach is to define operations research as: the process of designing solutions to complex problems.
Wicked problems arise when operations researchers are forced out of their comfort zone, which is a very numerical place. Wicked problems usually involve several groups of people, stakeholders, who see a problem from many different, and sometimes opposing, perspectives. Challenges with wicked problems often begin with finding a suitable definition of a problem, and end with finding a suitable stopping point for proposed solutions. By then, other related problems have been revealed or created because of complex interdependencies.
The term, wicked problem, originated with Horst Rittel (1930-1990) but was popularized by C. West Churchman (1913-2004), while both of them, along with Melvin Webber (1920-2006), worked at the University of California, Berkeley. Churchman wanted operations research to take moral responsibility “to inform the manager in what respect our ‘solutions’ have failed to tame his wicked problems” (Churchman, C. West (December 1967). “Wicked Problems”. Management Science 14 (4)). Tame problems are so simple that they can be resolved using basic mathematical and other computational tools.
Rittel and Webber specified ten characteristics of wicked problems (Rittel, Horst W. J.; Webber, Melvin M. (1973). “Dilemmas in a General Theory of Planning”. Policy Sciences. 4: 155–169):
There is no definitive formulation of a wicked problem.
Wicked problems have no stopping rule.
Solutions to wicked problems are not true-or-false, but better or worse.
There is no immediate and no ultimate test of a solution to a wicked problem.
Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial and error, every attempt counts significantly.
Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
Every wicked problem is essentially unique.
Every wicked problem can be considered to be a symptom of another problem.
The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.
The social planner has no right to be wrong (i.e., planners are liable for the consequences of the actions they generate).
Over thirty years later, Jeffrey Conklin (?-) generalized the concept (Conklin, Jeffrey (2006). Dialogue Mapping: Building Shared Understanding of Wicked Problems. Chichester, England: Wiley):
The problem is not understood until after the formulation of a solution.
Wicked problems have no stopping rule.
Solutions to wicked problems are not right or wrong.
Every wicked problem is essentially novel and unique.
Every solution to a wicked problem is a ‘one shot operation.’
Wicked problems have no given alternative solutions.
A wicked problem is so interconnected with other problems that one can’t intervene somewhere without impacting something else. It involves incomplete or contradictory knowledge, a large number of people and opinions, a large economic burden either to live with it or to resolve it.
Strategies
Nancy Roberts (1943?-) identified three strategies to cope with wicked problems. See: Roberts, N. C. (2000). “Wicked Problems and Network Approaches to Resolution”. International Public Management Review. 1 (1)
Authoritative. These strategies limit problem-solving to an elite group of stakeholders, typically including experts and those with financial or political weight. This reduces problem complexity, as many competing points of view are eliminated at the start. The disadvantage is that authorities and experts charged with solving the problem may not have an appreciation of all the perspectives needed to tackle the problem.
Competitive. These strategies attempt to solve wicked problems by pitting opposing points of view against each other, requiring parties that hold these views to come up with their preferred solutions. The advantage of this approach is that different solutions can be weighed up against each other and the best one chosen. The disadvantage is that this adversarial approach creates a confrontational environment in which knowledge sharing is discouraged. Consequently, the parties involved may not have an incentive to come up with their best possible solution.
Collaborative. These strategies aim to engage all stakeholders in order to find the best possible solution for all stakeholders. Typically these approaches involve meetings in which issues and ideas are discussed and a common, agreed approach is formulated.
Before Roberts, the collaborative approach was the only one acknowledged, at least in public.
IBIS
On the surface, wicked problems have a simple answer, and its name is IBIS, Issue-Based Information Systems. What distinguishes IBIS from other solutions is that it views design as argumentation. That is, the design process requires people to reflect on the problem, deliberate, and argue for and against different perspectives. It is also instrumental. Yes, another big word, which in this case refers to something being goal oriented. (Hulme, Mike (2009). Why We Disagree about Climate Change: Understanding Controversy, Inaction and Opportunity. Cambridge University Press.)
Computer-based versions of IBIS are available in Windows (up to version 8), Mac and Linux variants, at: http://compendiumld.open.ac.uk/download.html . While IBIS was conceived in 1968, it had to await appropriate technology to become an effective tool. Using hypertext data-structures, the latest incarnation was implemented by Douglas E. Noble (?-).
Social Media
Much social media discussion involves wicked problems, but without the poster understanding that they are dealing with such a comprehensive issue. Instead, much of the discussion may involve a very specific personal challenge, deliberately isolated from its context. From there, responses are solicited, ranging from a like to a supportive comment. Yet the response may be anything but positive. While the first poster’s position may be attacked, not infrequently there will be personal attacks as well.
It is here that social media fails. It is very effective at allowing people to trumpet out problems, but does nothing to help people manage or resolve them. Where is the social media IBIS that will allow social media users to put their problems into perspective?
Social media users facing wicked problems need help to argue for their perspectives. This is very different from a vitriolic attack. They need help to structure a design problem, and to participate with others in a design solution, a process where they can reflect on that problem, deliberate and to argue for and against different perspectives, and come up with a solution that is better than the current situation.
Coming sooner or later: Russell L. Ackoff on Social Messes.
The myth of Kanban is set in the late 1940s, when Toyota began to optimize inventory levels. Several authors describe this as an engineering process. It isn’t, but myths are not invented to tell truths, but to elevate the mundane into eloquent and compelling stories. In this myth, Toyota’s goal was to align inventory levels with the actual consumption of materials. To communicate capacity levels in real-time on the factory floor (and to suppliers), workers would pass a card, or kanban, between teams. When a bin of materials used on the production line was emptied, a kanban was passed to the warehouse describing the material and its quantity. The warehouse would then send a replacement to the factory floor, and their own kanban to their supplier. The supplier would then ship a bin of material to the warehouse. While the physical cards may have disappeared, Just In Time (JIT) remains at the heart of modern manufacturing.
Kanban moved from esoteric knowledge about Japanese business practices to myth in 2010, when David Anderson wrote Kanban: Successful Evolutionary Change for Your Technology Business. The book is only incidentally about Kanban. It is more about the evolution of a software development approach at Microsoft in 2004 into a better approach, designated Kanban, at the Bill Gates-owned digital image business, Corbis, in 2006-7. About the same time, several others were name-dropping Kanban, and suggesting variations of it as a form of lean production, especially for software.
The key point with Kanban is that it works in organizations engaged in processing, either in terms of products or services. It is less applicable in organizations working on projects.
Service organizations can establish teams to supply these services using JIT principles. This requires them to match the quantity of Work In Progress (WIP) to the team’s capacity. This provides greater flexibility, faster output, clearer focus and increased transparency. Teams can implement a virtual kanban methodology.
In Japanese, kanban translates as visual signal. A kanban board is a tool used to visualize work and to optimize work flow across a team; for kanban teams, every work item is represented as a separate card on the board. Virtual boards are preferred to physical boards because they are trackable and accessible at every workstation. They visualize work. Workflow is standardized. Blockers and dependencies are depicted, allowing them to be resolved. Three work phases are typically shown on a kanban board: To Do, In Progress and Done. Since the cards and boards are virtual, they can be mapped to reflect the unique needs of any team: its size, structure and objectives.
Truthing is at the heart of kanban. Full transparency about work and capacity issues is required. Work is represented as a card on a kanban board to allow team members to track work progress visually. Cards provide critical information about a particular work item, including the name of the person responsible for that item, a brief description and a time estimate to completion. Virtual cards can have documents attached to them, including text, spreadsheets and images. All team members have equal access to every work item, including – but not restricted to – blockers and dependencies.
An important management task is to (re)prioritize work in the backlog. This does not disrupt team efforts because changes outside current work items don’t impact the team. Keeping the most important work items on top of the backlog, ensures a team is delivering maximum value. A kanban team focuses on active works in progress. Once a team completes a work item, they select the next work item off the top of their backlog.
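The pull discipline described above, taking the next item off the top of the backlog only when capacity allows, can be sketched as a toy board. The item names and the WIP limit are hypothetical:

```python
class KanbanBoard:
    """Toy kanban board with a work-in-progress limit (illustrative only)."""

    def __init__(self, wip_limit):
        self.backlog = []       # prioritized: most important item first
        self.in_progress = []
        self.done = []
        self.wip_limit = wip_limit

    def pull(self):
        # Pull the top backlog item only if the WIP limit allows it.
        if self.backlog and len(self.in_progress) < self.wip_limit:
            self.in_progress.append(self.backlog.pop(0))

    def complete(self, item):
        self.in_progress.remove(item)
        self.done.append(item)

board = KanbanBoard(wip_limit=2)
board.backlog = ["shed walls", "shed roof", "shed door"]
board.pull()
board.pull()
board.pull()                    # third pull is blocked by the WIP limit
board.complete("shed walls")    # finishing an item frees capacity...
board.pull()                    # ...so the next item can be pulled
print(board.in_progress)  # ['shed roof', 'shed door']
print(board.done)         # ['shed walls']
```

Reprioritizing simply means reordering the backlog list; nothing in progress is touched, which is exactly why it does not disrupt the team.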
Cycle time is the time it takes for a unit of work to travel through the team’s workflow–from the moment work starts to the moment it ships. This metric can provide insights into delivery times for future work.
When a single person holds a skill set, that skill set can potentially become a workflow bottleneck. Overlapping skill sets eliminate that potential bottleneck and may reduce cycle times. Best practices encourage team members to share skills and to spread knowledge. Shared skill sets mean that team members can enrich their work, which may further reduce cycle time. In kanban, work completion is a team responsibility.
A kanban feature is to set a limit on the number of works in progress. Control charts and cumulative flow diagrams provide a visual mechanism for teams to see change. A control chart shows the cycle time for each work item, as well as a rolling average. A cumulative flow diagram shows the number of work items in each state. Combined, these allow a team to spot bottlenecks and blockages.
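The rolling average shown on a control chart is simple to compute. This sketch uses hypothetical cycle times measured in days:

```python
def rolling_average(cycle_times, window=3):
    """Rolling mean of the last `window` cycle times, as plotted on a
    control chart alongside the individual values."""
    out = []
    for i in range(len(cycle_times)):
        recent = cycle_times[max(0, i - window + 1): i + 1]
        out.append(sum(recent) / len(recent))
    return out

times = [5, 7, 6, 10, 4]        # days per completed work item (hypothetical)
print(rolling_average(times))
```

A sudden rise in the rolling average, while individual points scatter, is the signal that a bottleneck or blocker has crept into the workflow.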
My interest in Kanban is tied to my son Alasdair’s use of it for process management with the Council for Religious and Life Stance Communities Bergen (Samarbeidsrådet for tros- og livssynssamfunn Bergen, STLB). See: https://www.stlb.no/english/ . Kanban is available as an app for Nextcloud, Deck. See: https://apps.nextcloud.com/apps/deck . This will be installed on our upcoming server, tentatively named Qayqayt.
I started to write this post on 2017 November 22. The original title was MKR: Arduino Revisited. Two hundred and fifty days should be long enough to write any post, but this one has defied me.
Soon, it will be ten years since I started using and teaching Arduino. In November, I was looking forward to the new series of Arduinos, the MKR (Maker) series: a small form-factor microcontroller board with a number of outputs and headers (electrical pin connectors), a battery management system and USB. It could be useful for building home automation room control units, small enough to fit inside almost anything, such as a light switch box inside a wall.
Each room in our house would have its own personal microprocessor. Some rooms, such as the workshop, might have several. Each microprocessor would then connect to multiple sensors that would measure/detect things like temperature, humidity, light and motion inside that room. Analogue data would be converted inside the microprocessor to digital data, then sent onwards to a central controller, located somewhere in the house. If specific conditions were met, the controller would initiate an action, sending a message to either the same or another microprocessor, ordering it to activate an actuator (as they are called), such as opening a vent or turning on a light. These microprocessors would be fried if they switched large loads directly, so they use relays to indirectly switch on components that can consume up to 2 500 watts.
There are just two problems with microprocessors like this. First, they should not use wireless communication (including but not limited to WiFi, Bluetooth or radio), but should be wired. Second, they should not use batteries or mains current as a power source.
Both of these problems can be resolved using Ethernet, a network cabling technology initially developed at Xerox PARC (Palo Alto Research Center) between 1973 and 1974. For most home automation circuits there is no great urgency to turn an actuator on or off. A millisecond or two will not make much of a difference, so there is no need for gigabit per second Ethernet; megabit per second is good enough. Batteries and mains current can be eliminated by sending electrical power through the Ethernet cable.
At the moment, I have lost my enthusiasm for Arduino MKR boards, and am looking for a replacement. Why? It all has to do with open-source, or lack thereof, if not in practice, at least in spirit.
Arduino, despite its open-source claims, has not always been transparent. The following is a summary of some of the Arduino disputes, taken from: https://en.wikipedia.org/wiki/Arduino
In 2008, the five cofounders of the Arduino project created a company, Arduino LLC, to hold the trademarks associated with Arduino. They transferred ownership of the Arduino brand to the newly formed company. The same year, one of the five cofounders, Gianluca Martino, through his company Smart Projects, registered the Arduino trademark in Italy and kept this a secret from the other cofounders for about two years. Later, negotiations with Gianluca to bring the trademark under the control of the original Arduino company failed. In 2014, Smart Projects began refusing to pay royalties. It then renamed itself Arduino SRL and created the website arduino.org, copying the graphics and layout of the original arduino.cc. In January 2015, Arduino LLC filed a lawsuit against Arduino SRL. In May 2015, Arduino LLC created the worldwide trademark Genuino, used as the brand name outside the United States. In 2016, Arduino LLC and Arduino SRL merged into Arduino AG. In 2017, BCMI, founded by four of the five original founders (with the initials representing those of their respective last names), acquired Arduino AG and all the Arduino trademarks.
In July 2017, Massimo Banzi announced that the Arduino Foundation would be “a new beginning for Arduino.” That same month, former CEO Federico Musto allegedly removed many open source licenses, schematics and code from the Arduino website. See: https://techcrunch.com/2017/07/26/ceo-controversy-mars-arduinos-open-future/
So far, I have been unable to find any further information about an Arduino Foundation, or any new beginning. Instead, I find that in May 2018 Arduino announced the sale of engineering kits to encourage the use of Arduino at university level. Unfortunately, the kits require the use of MATLAB and Simulink, closed-source and expensive software packages from MathWorks, despite the existence of open-source alternatives, such as R. Admittedly, the kits contain a one year licence for the software. See: https://blog.arduino.cc/2018/05/12/arduino-goes-to-college-with-the-new-arduino-engineering-kit/
Back in 2013, Massimo Banzi was more enthusiastic about open-source hardware, as he explained in an Arstechnica interview. See: https://arstechnica.com/information-technology/2013/10/arduino-creator-explains-why-open-source-matters-in-hardware-too/
Here are some quotes from the article: As an open source electronic prototyping platform, Arduino releases all of its hardware design files under a Creative Commons license, and the software needed to run Arduino systems is released under an open source software license. Why is openness important in hardware? Because open hardware platforms become the platform where people start to develop their own products. For us, it’s important that people can prototype on the BeagleBone [a similar product] or the Arduino, and if they decide to make a product out of it, they can go and buy the processors and use our design as a starting point and make their own product out of it. … With the Raspberry Pi you cannot even buy the processor…. Raspberry Pi is a PC designed for people to learn how to program. But we [Arduino] are a completely different philosophy. We believe in a full platform, so when we produce a piece of hardware, we also produce documentation and a development environment that fits all together with hardware.
Even the Arduino website has changed its language. Before, it might have distinguished between an original board and a clone, emphasizing the open source nature of boards. Now that nuance is missing entirely; it states: “If you are wondering if your Arduino board is authentic you can learn how to spot a counterfeit board here [with link].” See: https://www.arduino.cc/en/Main/Products
While I live in hope that Arduino will reform, I don’t want to financially support it any more. What I want is an open-source board with its own Ethernet connectivity. They did have one, but retired it without a direct replacement, so this feature is currently unavailable from Arduino. They are undoubtedly too busy developing boards that connect an assortment of proprietary communications technologies.
I have considered using other boards, even Italian ones, such as the Fishino, but they also lack Ethernet connectivity: https://www.fishino.it/home.html
The board that seems to offer the most promise comes from DF Robot: https://www.dfrobot.com/product-1286.html
Their W5500 Ethernet with PoE board is based on the ATmega32u4 in an Arduino Leonardo package with a W5500 Ethernet chip. The W5500 provides a hardwired TCP/IP stack for embedded systems, although it is not gigabit capable. The board is compatible with most Arduino shields and sensors. Version 2.0 has an upgraded PoE power regulation circuit, which makes PoE power more reliable.
The main difference between a Uno and a Leonardo package is that the latter uses a more sophisticated microcontroller, the ATmega32U4. It shares the same form factor and I/O placement (analog, PWM, I2C pins in the same place) as the Uno, which means that it can use the same shields: additional boards that sit on top of the Arduino and connect directly to the pins below.
A relay shield provides several switches, typically four, that can control high current loads. They can be wired as NO (Normally Open) or NC (Normally Closed) circuits. These are important because most electrical equipment cannot be controlled directly by a microcontroller’s pins. Relays are useful for switching AC appliances such as fans, lights and motors, as well as high current DC solenoids.
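The NO/NC distinction can be sketched in a few lines of Python. This is a purely illustrative model of relay behaviour, not code for any particular relay shield; the function name and the fan/lamp examples are my own:

```python
def load_powered(wiring: str, coil_energized: bool) -> bool:
    """Return True if the load receives power, given the relay wiring.

    A Normally Open (NO) contact is closed only while the coil is
    energized; a Normally Closed (NC) contact is open while the coil
    is energized.
    """
    if wiring == "NO":
        return coil_energized
    if wiring == "NC":
        return not coil_energized
    raise ValueError("wiring must be 'NO' or 'NC'")

# A fan wired NO runs only while the microcontroller energizes the coil:
print(load_powered("NO", coil_energized=True))   # True: fan runs
# A lamp wired NC stays on until the coil is energized:
print(load_powered("NC", coil_energized=False))  # True: lamp stays on
```

In practice this means a NC circuit fails "on" if the controller loses power, which is why the choice of wiring matters for safety-critical loads.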
Because the DFRobot W5500 board is open source, anyone is permitted to make their own board from scratch, should they want to. Most of the time, though, this is not an economical choice.
An embedded map of Trondheim Fjord, Norway, showing the location of Cliff Cottage. It was made using an OSM (OpenStreetMap) plugin for WordPress, the program used to make this weblog.
While I might have included a screenshot or image to display a map, a more flexible approach is to embed one. This requires adding a mapping plugin to the WordPress program that I currently use. Since there are so many plugins available, I have to decide which one to use. I’m using one that is open source to make a policy statement!
Background theory: A factoid from Economics 101 – perfect competition leads to Pareto optimality, which is just a fancy way of saying that competing businesses see their economic profits competed away to zero. There is no way for any of them to make any profit, and ultimately a little mistake will lead to bankruptcy. So businesses will do almost anything to avoid competition. They want monopolies or, if that isn’t possible, a large market share, so they can charge whatever consumers will pay, to make lots of money.
Background event: As explained in a Slashdot (/.) article, on 2018-07-16 the free ride of using Google Maps’ application programming interface (API) is over. Google is going to make it more difficult and expensive to use its API. The good news (for Google) is that they should be able to extract more revenue from users. The bad news (for organizations and people using these APIs) is that custom maps will be less sustainable, or even infeasible, for the organizations that made them. See: https://developers.google.com/maps/billing/important-updates
When a company makes programs that are high quality and free, people will use them. Google Maps is no exception. Thus, the most popular WordPress mapping plugins are (with the number of active installations in parentheses, from largest to smallest): WP Google Maps (400 000+); Google Maps Widget (100 000+); MapPress Easy Google Maps (100 000+); WP Google Map Plugin (100 000+); Google Maps plugin by Intergeo (60 000+); Snazzy Maps (60 000+); Google Maps Easy (40 000+); Simple Map (40 000+). All of these plugins relate to Google Maps. It is only when one gets to Leaflet Map Marker (30 000+) that an alternative to Google Maps can be found that works with OpenStreetMap and Bing Maps, as well as Google Maps.
Consequences: People who need a map, but don’t know how to program, and don’t have a budget to pay for a customized solution, have been able to make maps using Google (or equivalent) APIs. Google’s actions are part of a trend away from easy access to free mapping tools. Fewer companies are offering free accounts and there are fewer alternatives to Google.
Open source API choices to replace Google Maps APIs include Leaflet and OpenLayers.
Leaflet is an open-source JavaScript library for interactive maps on desktop and mobile platforms. The API code is small, 38 kB, but has most mapping features needed by developers, and it can be extended with plugins. Its focus is on the optimal performance of basic mapping features, rather than on an extensive, feature-rich environment. See: https://leafletjs.com
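To give a feel for how little code a Leaflet map involves, here is a short Python script that writes a self-contained HTML page loading Leaflet from a CDN and placing one marker on OpenStreetMap tiles. The coordinates are only approximate for the Trondheim Fjord area, and the file name and Leaflet version are my own illustrative choices:

```python
# Write a minimal Leaflet map page: one OpenStreetMap tile layer, one marker.
# Coordinates, popup text and version number are illustrative assumptions.
html = """<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.4/dist/leaflet.css"/>
  <script src="https://unpkg.com/leaflet@1.3.4/dist/leaflet.js"></script>
  <style>#map { height: 400px; }</style>
</head>
<body>
  <div id="map"></div>
  <script>
    // Centre on the Trondheim Fjord area (approximate coordinates).
    var map = L.map('map').setView([63.87, 11.25], 10);
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
      attribution: '&copy; OpenStreetMap contributors'
    }).addTo(map);
    L.marker([63.87, 11.25]).addTo(map).bindPopup('Cliff Cottage (approximate)');
  </script>
</body>
</html>
"""

with open("map.html", "w") as f:
    f.write(html)
```

A WordPress plugin does essentially the same thing behind the scenes, generating this sort of markup from a shortcode.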
In contrast, OpenLayers is much more extensive and larger (10 MB), and requires greater insight to use. See: http://openlayers.org
Mapzen, often cited as a third open source tool that ran on OpenStreetMap, shut down its operations at the beginning of 2018.
For me, open source matters, so I chose to add OSM (OpenStreetMap) as a plugin. It took a couple of minutes to download the plugin, and up to several seconds to activate it. Here is the entire procedure:
1. Select Generate: OSM shortcode.
2. Select the OSM control theme of your choice.
3. Adjust the map and click into the map to generate the OSM shortcode.
4. Copy (Ctrl+C) the generated shortcode and paste (Ctrl+V) it into your article or page.
Notes: The generator was located immediately below the WordPress text frame. Once the shortcode was pasted into the text, the preview button had to be pressed to show the map. The only thing left to do is to publish the post!
This post started off as a reflection on Douglas Rushkoff (1961-), media theorist, professor and author of Survival of the Richest – The wealthy are plotting to leave us behind: https://medium.com/s/futurehuman/survival-of-the-richest-9ef6cddd0cc1
Survival of the Richest
Article summary. Rushkoff wants us to be so annoyed with the richest among us, that we will rush off (pun intended) and buy his new book!
At a certain level of income, a discussion of technology changes from a discussion of its acquisition costs, to a discussion of the opportunities it offers for professional work. For affluent people, it becomes a discussion of investment opportunities. Beyond this, the opulent seldom need to understand technology, even as an investment. Hired minions understand it and deal with its practical application. Rushkoff is a minion, who sold his soul for an hour to a group of five hedge funders.
While the article is probably just the introduction of his new book, it hints that the author may have had some moral issues with his gig. Writing the book is probably some form of self-imposed quasi-penance. Real penance would have resulted in the book being published under a Creative Commons license.
The opulent already have retreats in areas of the world less impacted by crises, be they social or climatic. They have the financial means to buy them in New Zealand or Alaska or anywhere else that suits their fancy. There will always be a discussion about the level of fortification needed, where an underground bunker is the minimum. If such a retreat is too large and extensive, there will also be a need for another class of hired minion, the mercenary, to defend it. But mercenaries are fickle, and they are only loyal to money. What happens if money becomes worthless?
As we all know, the opulent have no interest in making the world a better place. Their major concern is their personal transcendence of the human condition. Preventing this are any number of potential challenges. Rushkoff lists them for us: climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. The opulent code this in one word, the event, which in turn precipitates just one response, the escape.
Transhumanism reduces reality to data, and humans to information-processing objects. Human evolution reduces to a video game, won by finding the escape hatch.
Rushkoff identifies a brief moment, in the early 1990s, when technology seemed open-ended, an opportunity to create a more inclusive, distributed, and pro-human future. This faded quickly in the dotcom crash. The future was no longer created through creative decisions, but predetermined by passive venture capital investments.
Rushkoff questions the morality of unbridled technological development turning an exploitative and extractive marketplace (think Walmart) into an even more dehumanizing successor (think Amazon). Downsides include automated jobs, the gig economy, the demise of local retail, the destruction of the natural environment and the use of global slave labour to manufacture computers and smartphones.
Fairphone
As an aside, Rushkoff mentions Fairphone, founded to make and market ethical phones. Except this was impossible. Bas van Abel, Fairphone’s founder, now sadly refers to their products as “fairer” phones. Interestingly, I had had discussions about these phones on several occasions during the days immediately before reading this article. The main question being: how much more would a person be willing to pay for a moral product? Note your guess before checking the answer at the bottom of the article.
At some point the mining of rare earth metals by slave labour ends, as reserves cease to be viable. Mines are replaced by toxic waste dumps filled with disposed digital technology, “picked over by peasant children and their families, who sell usable materials back to the manufacturers.”
Yes, Rushkoff’s prose can be vivid and moving. The more people ignore technology’s social, economic, and environmental repercussions, the greater these problems become, resulting in more withdrawal, isolationism, apocalyptic fantasy and more “desperately concocted technologies and business plans. The cycle feeds itself.”
Rushkoff notes that this world view promotes seeing people as the problem and technology as its solution. Human traits are treated as system bugs. Technology is defined as neutral. “It’s as if some innate human savagery is to blame for our troubles. Just as the inefficiency of a local taxi market can be “solved” with an app that bankrupts human drivers, the vexing inconsistencies of the human psyche can be corrected with a digital or genetic upgrade.”
Repo! The Genetic Opera.
In 1996, Darren Smith (1962-) was inspired by a friend’s bankruptcy to write of a future where not only property, but also body parts, could be repossessed. In collaboration with Terrance Zdunich (1976-) this resulted in The Necromerchant’s Debt, a 2002 preliminary theatrical version of Repo! This was then expanded and transformed into assorted incarnations through 2005. In 2008 it emerged as a science fiction musical horror comedy film, directed by Darren Lynn Bousman (1979-).
As a media theorist, Rushkoff is programmed to include film references in his works, especially those with post-apocalyptic zombies, where the future is a zero-sum game between humans. One tribe survives at the expense of another’s demise. Repo! is a transgressive film, a genre I appreciate more than most. I am awaiting a sequel, or perhaps, prequel where consciousness is uploaded to a computer. The only challenge is that the Matrix seems to have had that as its plot.
Westworld is Rushkoff’s media product of choice, depicting a world where human beings are simpler and more predictable than general artificial intelligences. Humans are feeble. They deserve nothing. In contrast robots are far superior. I am looking forward to seeing it, if only to appreciate Ingrid Bolsø Berdal (1980-) as Armistice, a host. She is a brutal and ruthless bandit and a member of Hector Escaton’s gang. In real life, she was born in Inderøy, and attended the same rural elementary school as my children.
Convivial Technology
Surviving the event seemed to be the primary goal of the hedge funders. Rushkoff’s advice was to treat everyone well. The more the world develops sustainability and the more widely wealth is distributed, the less chance there will be of an “event”. The challenge was that the hedge funders didn’t seem interested in avoiding a calamity, convinced the world had deteriorated too far. Wealth and power couldn’t affect the future; they could only buy insulation.
As one retreats from the opulent to the affluent, to the middle class, and the working poor, there are better options available for using technology. Convivial technology lets people have fun, learn and develop, while treating each other with respect. Being human is not about individual survival or escape. All individuals die. It is survival of the species that counts in the biological world. Humans thrive through co-operation.
[Answer: A Fairphone costs about 100% more than equivalent phones, about NOK 5 000 for a NOK 2 500 phone.]
When I look at construction today (2018-07-03), fifty-two years to the week after completing high school in 1966, and beginning work as a construction labourer at that very same location, Lester Pearson Senior Secondary School, the work looks surprisingly similar and the tools surprisingly familiar. Someone working in 1968 would have no problem working in 2018.
Pneumatic nailers have been in use since the 1950s, and can save a lot of time. They also give a superior join. Yet, this week, on a site some hundred meters from our residence, two builders were using conventional hammers to construct a cabin. The work was progressing slowly.
One of the main reasons I prefer to build, rather than to hire, is that too many builders are living in the past. Fortunately, I actually enjoy building construction. Yes, it can be tiring work. But it means that I never have to work out at a gym. Yes, it is necessary to take precautions to avoid physical injury, and to use personal protective clothing. Yes, at the end of the day, much of the work will be invisible, but that isn’t too different from my previous work as a teacher.
Many of my first jobs involved working with wood. While still attending junior secondary school, I built a sabot sailboat out of two sheets of 1/4″ (6mm) plywood. Later, I worked clean-up on the weekends at Brownlee Industries, in Surrey. They processed alder into lumber and made glue-laminate products from it. Other summer jobs were with Bel-Par Industries in Surrey, where I worked as a cabinet-maker’s assistant. This was undoubtedly the job in Canada that suited my personality best.
Somewhat later, I also worked for Habitat Industries on Annacis Island, Delta. It was a pre-fabricated housing factory that has had other names, both before and since. It was named after the first United Nations Conference on Human Settlements, held in Vancouver in 1976. John Reagan’s designs were anything but modular boxes. He designed octagonal, split level and mineshaft buildings. They involved post and beam as well as platform framing. Here, I worked in the factory, not just framing, but also other tasks such as electrical and plumbing installation, as well as in the office, mostly on scheduling and project planning.
Pre-fabrication saves on build time and labour costs by moving much of the work to a climate-controlled environment. Part of the challenge is that the parts have to be transported, which means that the building has to be sub-divided into transportable units, with a maximum length, height and width. Modules are not always the solution. One compromise is to use pre-cut materials for flooring and roofs, but to make and transport walls in sections. Modules can work for bathrooms, less so for kitchens.
In February 2012, I watched an inspiring TED Talk, Contour Crafting – Automated Construction, with Behrokh Khoshnevis at TEDxOjai. After this, I expected there to be a surge of interest in the 3D-printing of houses. I am still waiting, but understand progress has been made by Khoshnevis in China. Not so much on the North American continent or in Europe.
AMT-SPECAVIA of Yaroslavl, Russia started serial production of construction printers in 2015. Currently, seven models are available, ranging from a small format for the printing of small architectural forms to much larger machines that allow the printing of buildings up to 3 stories high. A construction printer was delivered to 3DPrinthuset, in Copenhagen, Denmark in 2017. This 8 m x 8 m x 6 m printer was used to construct a 50 m2 office-hotel.
This is referred to as a Building on Demand (BOD) project. Only its walls and part of its foundation are printed. The rest of the construction is traditional. A time-lapse video of the project is also available.
I don’t think I will have an opportunity to build and live in my own 3-D printed house. However, I am encouraging my children to consider the potential this technology offers. I would enjoy helping them.
Soon it will be time to end my surfing career, and I have been wondering what to replace it with.
Earlier, on Sunday (2018-07-01), I had read an article in The Independent about Deep Purple and their compelling song Smoke on the Water, which appears to reference the burning down of a casino at Montreux in 1971: https://www.independent.co.uk/news/long_reads/deep-purple-montreux-jazz-festival-lake-geneva-1971-a8418926.html
Later that day, I was on YouTube, and in addition to the usual mix of woodworking and computing videos, the second on the list of recommended videos was Rolling Stones time! Riffing on Gimme Shelter with my Bacchus BST-650, by Laura Cox. It had over 300 000 views, and was made 2018-05-26: https://www.youtube.com/watch?v=Zy09E1HC7lE
Unlike many people with tinnitus, I restrict the amount of music I listen to. My normal music consumption is somewhere around one track/song a week.
However, working on Project Retrograde at the time, I wondered why this particular video appeared so high up on my recommendation list. My thoughts were: I was reading about another famous guitar riff earlier. Google knows this. Both Google and YouTube are part of Alphabet. Then I wondered, why not the original version? Does their algorithm conclude that I would prefer a cover version by an unknown woman to the original by a famous rock band?
YouTube’s placement of the video worked. I decided that this 2-minute long video would be the one track I would listen to on Canada Day – and probably the only one for another week. I did play it, but what fascinated me was the guitar. It looked like a Fender Stratocaster. A little searching through the surface web and I discovered that Bacchus guitars are made in Japan by the Deviser Custom Shop. They are generally well made copies of famous brand names. They are handmade without using CNC equipment.
In an instant, the framework of a new project started to appear, but one that would only begin after: 1) the house was remodeled, 2) its furniture constructed, 3) the DIY CNC machine completed and 4) the electrically powered, jet surfboard made. Only then would I manufacture an electric guitar, using CNC equipment. No, probably not a Fender Stratocaster copy, but a bespoke design. One can only go so far in copying the works of someone else.
As for the amplifier and speaker system, one source of inspiration is Notes & Volts – Electronics, Guitars & Geekery: https://www.youtube.com/user/NotesAndVolts/videos?disable_polymer=1