On Environmentalism

British Prime Minister Margaret Thatcher with activist fashion warrior Katharine Hamnett at a reception at 10 Downing Street, in 1984. Please note the appropriate party shoes worn by Hamnett.

There are many different types of environmentalists. Few people engage with the full range of issues; instead, they focus on just one, or a few. For example, some people concentrate on nuclear energy, or policy decisions on bears and other carnivores, or preservation of the arctic fox.

For many, their most distinguishing garment is their hiking boots. Others are more comfortable in a lab coat. There are even people who prefer tailored suits, to cavort with members of political/ business elites. Fortunately, increasing numbers of people simply wear their ordinary school clothes to protest outside their favourite democratically elected assembly each and every Friday. Personally, I feel most comfortable outfitted in protective clothing suitable for a workshop. One can never be quite sure what type of clothing evokes the best environmentalist image, except to refer to the stunning success of Katharine Hamnett, dressed in a rather long sweatshirt with dress sneakers, at a reception at 10 Downing Street in 1984, now 35 years ago.

The reason for all of these different fashion statements is that people have their own individual environmental fashion style. Personally, I see a need for a flora of environmental organizations, each with its own approach. To help people understand this concept better, I’d like to use religion as an analogy.

There is a large segment of the population in Norway who are active – but more likely passive – members of a Lutheran church, still often – but incorrectly – referred to as the State Church. Many immigrant families are members of the Catholic church, while other immigrant families belong to a wide variety of Muslim organizations. There is also a variety of other religious organizations, associated with other faiths.

Membership in a religion involves a two-fold declaration. First, a potential member must hold a minimal set of core beliefs that are known in advance. Second, the religion must be allowed to adjudicate whether that person meets its membership requirements. It is insufficient for a person simply to declare that they are Jewish/ Christian/ Muslim/ Baha’i, and for the particular religion to be required to accept that person as a member.

Bridge building between the various religions is not undertaken by having every religious person join an ecumenical organization, and then allowing decisions to be made through democratic voting procedures. That would result in a tyranny by the majority. Instead, the different Faiths/ denominations become members, and areas of common interest are developed through consensus. There will, of course, be areas where these organizations agree to disagree.

My experience of Friends of the Earth is that it – like the Church of Norway – has a large number of passive members, who pay an annual membership fee more out of guilt than belief. Yet it also resembles The Council for Religious and Life Stance Communities, hoping to foster mutual respect and dialog between a variety of environmental perspectives, and working towards their equal treatment.

The Norwegian name of Friends of the Earth is not Verdens Venner or even Jordens Venner, as could be expected from a literal translation. Instead, it is Naturvernforbundet, which is officially translated as the cumbersome The Norwegian Society for the Conservation of Nature/ Friends of the Earth Norway. For linguists, one could cryptically add: natur = nature; vern = protection; forbund = for-bund = together bound = federation/ association/ society; et = neuter definite article = the, which is put at the beginning of the phrase in English. Note the general absence of norsk (adjective, not usually capitalized) = Norwegian, or Norge (noun, in Bokmål; spelt Noreg in Nynorsk or New Norwegian) = Norway. However, the name sometimes begins with Norges (possessive noun) = Norway’s, if there is a need to distinguish the organization from counterparts in other countries.

Because of the structure of Friends of the Earth, there is no need for the organization to build consensus. Instead, individuals can position themselves to become representatives attending bi-annual national meetings, and voting on policy decisions. In this internet age, this 20th-century approach means that a determined few can decide policy that could be offensive to a more passive majority.

Some of the more radical and active members are able to capture the votes of this passive majority, and to use them to change/ uphold policy decisions. What appears to be consensus can more properly be described as a tyranny by the few. This problem can be remedied by replacing representative democracy with direct democracy – one member, one vote. This is attainable using today’s internet technology.

Unfortunately, Friends of the Earth cannot be both dogmatic and ecumenical at the same time. If it opts to take a more ecumenical approach, then instead of communities of Buddhists, Hindus, Humanists and Sikhs (all groups not mentioned previously), there would be a place for different views of environmentalism: field naturalists, species preservationists and workshop activists, to name three. Each group would then be allocated an agreed-upon number of council members. A (bi-)annual meeting would appoint a board, which in turn employs a secretariat, and the organization would work towards consensus building.

Despite my role as leader of Friends of the Earth, Inderøy, there are days when I contemplate leaving the organization. This is related to one significant flaw with Hamnett’s photo (above): the negativity of her message. One never wins friends by telling people what not to do. Instead, there has to be a positive message that can be periodically reinforced.

Friends of the Earth, Norway, is on the warpath again against imported plant species, including those grown in private gardens. Instead of making positive suggestions to grow some under-rated, beautiful, endemic species, they want to induce guilt in people who choose immigrant species.

I think, in particular, of the sand lupine, Lupinus nootkatensis, which thrives on sandy and gravelly areas, growing to about 50–70 cm high. The species name originates from Nootka Sound in British Columbia, Canada, a place I am intimately familiar with. The species was first listed on the Norwegian Black List 2007 (SE). Yet the species came to Norway with The Norwegian State Railway (NSB), which used it to bind the slopes along the then (1878) newly constructed Jær Line, running south from Stavanger for almost 75 km to Egersund. From there, the plant has spread along the railway and road networks to large parts of the country. Today, it is found in 16 of the country’s 19 traditional counties.

The species started its expansion from Jæren in the southwest. It was observed in Stjørdal in 1911, which means it has been found in Trøndelag for at least 108 years. In a very short period of time, lupins grow densely, and where not limited by droughts, large, barren areas can be reclaimed quickly because of the plant’s nitrogen-fixing abilities. It can also extract phosphorus from compounds in poor soils. In spite of these good qualities, it has a tendency to become dominant and overtake the natural flora. Of course, the reason why lupins were used by the railway is that there were no native Norwegian species capable of taking on the reclamation duties required: to combat erosion, to speed up land reclamation and to help with reforestation.

The reason for my despair is that many environmentalists do not seem to understand that the world of 2050 will be vastly different from the world of 1950 or 1850. Unfortunately, many of the species previously thriving in Norway will be totally unsuited for continued life there in thirty years’ time.

The Crowther Lab at ETH Zürich has examined expected temperatures for 2050, and found that Oslo will experience a 5.6 degree increase in its warmest month, and a 2.2 degree increase annually. This could significantly weaken the viability of many species, including Norway maple, Acer platanoides, and strengthen an immigrant, the sycamore, Acer pseudoplatanus, which was introduced to Norway about 1750, and has become naturalized. There are suggestions that the sycamore is replacing species devastated by disease, such as the wych elm, Ulmus glabra, and the European ash, Fraxinus excelsior, which is at its cultivation limit at Trondheim Fjord.

NB Information about Lupinus nootkatensis has been updated. Apparently, it was already placed on the Norwegian black list in 2007.

Disruptive Technology: Micro-batteries

Christine Hallquist, CEO Cross Border Power.

World citizens intent on sustainability should rejoice that Vermont citizens were too dumb to elect Christine Hallquist as their governor. Allegedly, they were more concerned about a tax increase on fossil fuels than they were about entering the 21st century. This means that Christine can use her insights and other talents to help North Americans transition away from fossil fuels to clean electrical energy solutions.

Soon after she lost the election she wrote a white paper on a North-American solution to climate change, which has a lot to do with sustainable electrical energy. Wind and hydro are part of the equation, but so are batteries. Now, she emerges as CEO of Quebec registered, Cross Border Power. Its strategy is closely aligned with that of Bothell, Washington startup XNRGI.

XNRGI (exponential energy) exuberantly tells us that it “has developed the first-ever porous silicon chip based Lithium Metal rechargeable battery technology. XNRGI’s 15 patented technologies and 12 pending patents, were developed over a 15-year period with more than $80-million of investment from Intel, Motorola, Energizer, the United States Navy, Argonne National Laboratory / US DOE Department of Energy grant for advance manufacturing and Novellus Systems, among others. XNRGI’s technologies enable scalable, high-volume manufacturing at the industry’s lowest cost, by using existing semiconductor wafer manufacturing and contract assembly which have been perfected in Silicon Valley over the past 20 years. This combination of original technologies and proven manufacturing processes provides XNRGI with an unprecedented manufacturing scale and at a low cost with minimal capex.”

XNRGI has developed a new battery technology that prints micro-batteries onto silicon wafers. There are 36 million of these machined onto a 300 mm (12-inch) silicon wafer.
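As a rough sanity check of these figures, the area available per cell can be computed from the wafer geometry. The sketch below assumes, for simplicity, that the cells tile the entire wafer surface, ignoring edge exclusion and any support circuitry:

```python
import math

# Back-of-envelope check; assumes the cells tile the full wafer surface.
wafer_diameter_mm = 300
cells_per_wafer = 36_000_000

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2     # ~70 686 mm²
area_per_cell_um2 = wafer_area_mm2 / cells_per_wafer * 1e6  # mm² → µm²
cell_pitch_um = math.sqrt(area_per_cell_um2)                # side of a square cell

print(f"Area per cell: {area_per_cell_um2:.0f} µm²")
print(f"Cell pitch:    {cell_pitch_um:.0f} µm")
```

Each micro-battery would occupy roughly 2 000 µm², a square about 44 µm on a side, which is comfortably within the reach of standard semiconductor patterning.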

These batteries can scale from ultra-small batteries for medical implants to large-scale grid storage, and initially promise four times the energy density of lithium-ion batteries for half the price. XNRGI claims the technology completely eliminates the problem of dendrite formation which, if true, would make it a massively disruptive invention. Dendrites are responsible for most fires in lithium-based batteries.

Porous silicon gives about 70 times the surface area of a traditional lithium battery, with millions of cells in a wafer. The batteries are 100% recyclable. At the end of the product life, the wafers are returned, then cleaned to reclaim the lithium and other materials. They can then be reused.

XNRGI has worked with partners in an early adopter program to test 600 working samples in a variety of areas. These include: electric vehicles, with 3 – 6 times the energy density currently found, while being 2 – 3 times lighter, and at a considerably lower cost; consumer electronics, providing 1 600 Wh/liter; and internet of things applications, with micron-scale power and low discharge rates.

Grid-scale storage for intermittent renewables like Solar/ Wind and backup power is another focus area. The battery banks that Cross Border Power plans to sell to utility companies as soon as next year will be installed in standard computer server racks. One shipping container with 40 racks will offer 4 megawatts (MW) of battery storage capacity, in contrast to a comparable set of rack-storage lithium-ion batteries, which would typically only yield 1 MW.

Electrical grid stabilization is probably the one area in electrical engineering where battery density is irrelevant. Of course, everyone appreciates a price reduction, and this means that a 4 MW 40-foot container will cost twice the price of a 1 MW unit.

A 1 MW 40 foot container-based energy storage system typically includes two 500-kW power conditioning systems (PCSs) in parallel, lithium-ion battery sets with capacity equivalent to 450 kWh, a controller, a data logger, air conditioning, and an automatic fire extinguisher. When this is scaled to 4 MW, some of the details remain unknown, including the number and size of the PCSs. The total capacity should increase to 1.8 MWh.
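The container arithmetic above can be sketched as follows; note that the per-rack power figure is inferred from the stated totals, not taken from XNRGI documentation:

```python
# Sketch of the container arithmetic; per-rack power is inferred.
racks_per_container = 40
container_power_mw = 4.0
baseline_capacity_kwh = 450.0      # stated for the 1 MW configuration
scale = container_power_mw / 1.0   # 4 MW container vs 1 MW baseline

power_per_rack_kw = container_power_mw * 1000 / racks_per_container  # 100 kW per rack
scaled_capacity_mwh = baseline_capacity_kwh * scale / 1000           # 1.8 MWh
discharge_hours = scaled_capacity_mwh / container_power_mw           # 0.45 h

print(f"{power_per_rack_kw:.0f} kW per rack")
print(f"{scaled_capacity_mwh:.1f} MWh total capacity")
print(f"{discharge_hours * 60:.0f} minutes at full power")
```

Note that 1.8 MWh discharged at 4 MW lasts only about 27 minutes, a duration profile suited to grid stabilization rather than bulk energy storage.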

What is missing from any documentation I have found, is any mention of these batteries in aviation. Because of its importance, this will be the subject of an upcoming weblog post.

Hype

The difficulty with hype is knowing which technology has a basis in fact, and which is simply wishful thinking. Much hype is related to batteries specifically developed for electric vehicles. Despite chemical engineering studies, including physical chemistry, I lack sufficient insight to judge the veracity of any of these claims. One needs to be a specialist, with detailed knowledge and experience.

The first storage device that raised issues was one under development by EEStor of Cedar Park, Texas. It claimed to have developed a solid state polymer capacitor for electricity storage, that “stores more energy than lithium-ion batteries at a lower cost than lead-acid batteries.” Despite patents, many experts expressed skepticism.

The rise and fall of Envia is another example. This battery startup secured a contract with GM to supply its cathodes, made from nickel, manganese, and cobalt, to power GM’s Volt. Everything looked great until Envia’s cathodes failed to perform as claimed. Details about this can be found in an article by Steve LeVine in Quartz. Later, LeVine wrote The Powerhouse (2015), which more generally discusses the geopolitics of advanced batteries.

Phinergy, an Israeli company, has promoted an aluminum–air battery, where one electrode is an aluminum plate, and the other is an air cathode in a water-based electrolyte. When oxygen interacts with the plate, it produces energy. The good news is that these batteries could have 40 times the capacity of lithium-ion batteries. The bad news is that the aluminum degrades over time. Current only flows one way, from the anode to the cathode, which prevents the batteries from being recharged. This means that they have to be swapped out and recycled after running down.

Fisker Inc. claims it is on the verge of a solid-state battery breakthrough that will give EVs extended range and a relatively short charging period. Conventional lithium-ion batteries offer significant resistance when charging or discharging, which creates heat. Solid-state batteries have low resistance, so they don’t overheat, which allows for fast recharging. On the negative side, their limited surface area means they have a low electrode-current density, which limits power. Existing solid-state batteries can’t generate enough power, work well in low temperatures, or be manufactured at scale.

Fisker’s solution is to create a 3D solid-state battery, which they call a bolt battery, that is thicker, and with 25 times the surface area of a thin-film battery. This allows it to produce sufficient power to move a vehicle. Fisker claims it produces 2.5 times the energy density of lithium-ion batteries, at a third of the cost. Despite the hype, Fisker will not be providing solid-state batteries on its EMotion luxury sport vehicle, claimed to be available from mid-2020. Rather, it will come with proprietary battery modules from LG Chem.

Desertec

This is not the first time I have announced disruptive energy technology. I have been a keen advocate of Desertec solar-thermal power, where I had hoped that electricity generated in North Africa could be used to power Europe (as well as North Africa, the Middle East and elsewhere) with copious quantities of sustainable energy. By-products included desalinated potable water (not always appreciated as a benefit), opportunities for growing large quantities of food, and stabilized soils that prevent climate deterioration. A White Book has been written on it. It was, and still is, my hope that the introduction of this system would result in more sustainable, and democratic, societies in North Africa, without reliance on fossil fuels.

Readers eager to find weblog posts on Desertec will be disappointed. During the period when I was most interested in this technology (2004 – 2008) I did not have a weblog. Instead, material was presented in the form of lectures and activities in science class, typically for grade 11 students.

Empty Planet

This post looks at the basic premise of Darrell Bricker and John Ibbitson’s Empty Planet (2019), that the human population, now at 7.5 billion, will peak at 9 billion, before declining rapidly, later in the 21st century. The question addressed is how different nations/ regions will cope with a collapsing population.

All advanced and many emerging market economies, to use International Monetary Fund slang, have fertility levels below replacement, considered to be 2.1 offspring per woman. The Total Fertility Rate (TFR) in Norway in 2018 was 1.56, the lowest on record. In her New Year’s address in 2019, the Prime Minister of Norway, Erna Solberg, encouraged Norwegian women to have more children. The PM has had 2.0 children, while the two other women party leaders in her government have not had any children. Being generous, these three have a combined TFR of 0.7.

Is Canada the world’s first post-national country? Despite my Canadian heritage as well as Bricker and Ibbitson’s suggestion that this is the case, I would have to answer, not yet. Yes, Canada features individualism, combined with urbanism, low TFR and high immigration. Twenty percent of the population are immigrants. That is higher than any other country, including the United States of America. Yes, this contrasts with the stereotypical image of Canada as the vast, unpopulated, ice-encrusted North. But these characteristics are not sufficient to make a country post-national.

For a country to be post-national, it has to be multi-cultural. This means that it cannot display preferences for one culture to the detriment of another. Canada does this, by continuing to have a British monarch of German heritage as head of state. The fact that it uses a first-past-the-post electoral system also puts limits on representation in parliament. Amazingly, the Liberal Party, in government as of this writing in 2019, first promised a more democratic voting system, then reneged on this promise. Currently, cultural minorities have to find their place within a three-party system. Rather than having a group of people from an assortment of political persuasions representing citizens over a larger area, one person from a specific political party represents everyone in the riding, including those who did not vote for that person.

Despite the book’s Canadian chauvinism, it offers important insights. In urban societies, women become better educated and gain better knowledge about contraception; this results in fewer children, and leads women to better jobs, which makes them more financially autonomous. Ties to family, clan and religion deteriorate. The ties that are left are cultural.

Immigrants to Canada generally teach their children their original language, so that these children can continue to be entrenched in two cultures – Canada and something else. My daughter, who took her secondary education in Canada, had Norwegian as her second language. My old elementary school, named after gold prospector, journalist, sometime New Westminster resident and former British Columbia Premier John Robson, has been demolished. It has been replaced with École Qayqayt Early Education Centre. It is named after the Qayqayt First Nation, who originally lived in New Westminster, and it offers French immersion classes.

While most of the world is at or below replacement fertility, 2.1 children per woman, the one major exception is Africa. The 2019 African Economic Outlook reports economic prospects and projections for Africa and for each of its 54 countries. It offers short and medium term forecasts for the main socio-economic factors, noting challenges and progress.

The report states, “Africa’s economic growth continues to strengthen, reaching an estimated 3.5 percent in 2018, about the same as in 2017 and up 1.4 percentage points from the 2.1 percent in 2016.”

But it also cautions, “Africa’s labor force is projected to be nearly 40 percent larger by 2030. If current trends continue, only half of new labor force entrants will find employment, and most of the jobs will be in the informal sector. This implies that close to 100 million young people could be without jobs.” African fertility has halved to 4 since 1975, due to better female education and empowerment. However, this is still about twice replacement levels.
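The report’s two figures can be checked against each other. The sketch below works backwards to the current labor-force base they imply; that base figure is an inference, not a number stated in the report:

```python
# Cross-checking the report's figures; the current labor-force base is
# an inferred number, not one stated in the report.
growth = 0.40          # labor force "nearly 40 percent larger by 2030"
employed_share = 0.50  # "only half of new labor force entrants will find employment"
jobless_new = 100e6    # "close to 100 million young people" without jobs

implied_base = jobless_new / (growth * (1 - employed_share))
print(f"Implied current labor force: {implied_base / 1e6:.0f} million")  # → 500 million
```

The two statements are consistent with a current African labor force of roughly half a billion people.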

While declining population is a benefit in terms of relieving pressure on the environment, it will also swing economic power from capital to labour, reducing inequality.

There are four approaches that can be taken to the world population challenge. Approaches 1 and 2 both accept a perpetual decrease in population. Approach 1 is to continue relying on humans as before.

Approach 2 is to replace workers with robots. Japan is the poster child for working with this. There are many areas of the economy where robots can be used, especially in the transport sector and manufacturing. Some progress can undoubtedly also be made in terms of construction.

With these first two approaches, there will come a point when having a nation-state will become impractical. There will be so few people living in them, that land could be freed for other groups to use.

The third and fourth approaches allow increased immigration. There are a number of situations that have to be understood, when dealing with immigrants. One term bandied about is integration. Quite often it is understood to be something that someone else has to do. Many people in the host population confuse integration with assimilation, and expect immigrants to assimilate themselves into the wider society. That task is, of course, impossible. So, Approach 3, is a situation in which the host society/ culture envisions itself as superior to anything/ everything on offer from migrants. It also expects them to give up social and cultural values, for something these immigrants may not quite understand.

Approach 4, the last approach here, is to accept that society will ultimately become multicultural. This is the official Canadian approach. Apart from a disdain for radicalism, Canada is willing to accept large numbers of well educated and young immigrants, capable of engaging with other Canadians of divergent backgrounds. At the same time these immigrants are allowed, even encouraged, to preserve their original culture, so they can function as a link between the two societies. Immigrant children to Canada are taught that they have two feet, one planted in Canada, the second in the culture they come from. Both are equal, and both are relevant.

Society changes

If anyone were to enter a time machine, and go fifty years back in time, they would discover a completely different world. With respect to my own situation, it would be a world filled with cheap gasoline, smokers, mini-skirts, vinyl records, beef steaks and corporal punishment. Since then, that culture has become unrecognizable. Like everyone else born before the new millennium, I didn’t grow up with cell phones, and had to learn how to use them – as an adult. Dress codes prevented women from wearing trousers at school and at work. In fact, some women had to quit work if they married. I was strapped for turning around in my desk at school.

Corporal punishment is illegal in Norway today, but listening to Norwegians my age, it seems to have been quite common fifty or sixty years ago. Transitions to a new culture can be difficult and while I don’t approve of people hitting their children, I don’t think jail sentences are the correct response either, in many cases. Suspended sentences are a cost-effective way of expediting behavioural change, both for the individual and society.

Yet, prophecy is a tricky proposition. With hindsight, it is easy to see that pizzas are more than a passing fad. The same cannot be said of fondues.

This weblog post was originally written 2019-03-06, but not published until 2019-07-08. It continues a discussion begun in Workshop Activism (published 2018-02-03), and continued in Lotta Hitschmanova (published 2019-02-14).

The Charm of PLA

Biodegradable PLA cups in use at Chubby’s Tacos in Durham, North Carolina. (Photo: 2008-07-14, Ildar Sagdejev)

At Inderøy Techno-workshop, the standard plastic used on our 6 x 3D-printers is Polylactic Acid = PLA. This weblog post explains why.

The name Polylactic Acid is actually a misnomer. It does not comply with IUPAC (The International Union of Pure and Applied Chemistry) standard nomenclature (naming standards for chemicals), and is potentially ambiguous/ confusing: PLA is not a polyacid (polyelectrolyte), but a polyester.

What distinguishes PLA from other thermoplastics/ -polymers is that it is made from plant-based renewable feedstocks. PLA’s list of raw materials includes cassava, corn/ maize, sugar beet, sugarcane, potatoes and similar products. What is interesting, from an Inderøy perspective, is that the municipality has a potato processing plant that was established in 1844.

Despite its natural origins, PLA offers properties similar to other thermoplastics used industrially. One of the reasons for selecting PLA as a standard product, is that the workshop wants participants to reflect over their choice of materials, and to choose those that are least damaging to the environment and living things, including themselves and other human beans, as some people call them. PLA has less negative impact than most other plastics, so people can use it with a good conscience. If workshop participants want to switch to a different plastic, they will have to defend their choice!

There are several methods used to manufacture PLA plastic. Those interested in the fine details of PLA engineering should consult Lee Tin Sin, A. R. Rahmat & W. A. W. A. Rahman, Polylactic Acid: PLA Biopolymer Technology and Applications (2012), ISBN 9781437744590.

PLA is a thermoplastic, which means it can be melted and reshaped without significant degradation of its mechanical properties. Thus, it is easy to recycle.

PLA is biodegradable. Microorganisms transform it into natural components, such as water and carbon dioxide. The speed of the transformation is strongly dependent on temperature and humidity. At the Inderøy Techno-workshop, we will ensure that PLA is properly recycled. There is a special bin, clearly marked with PLA – and not with resin code 7 used to identify “other” plastics. The plastic recycling resin codes 1 to 6 are used for petroleum based plastics.

One of the projects this writer wants to prioritize in 2020 is to work with Innherred Renovation, the local waste recycling company, to examine the feasibility of processing PLA locally, to avoid excessive transportation costs, and to give the workshop a source of raw material for making coils of PLA filament – yet another project, scheduled for 2021. Disposal (as distinct from recycling) involves heating PLA to about 60°C and exposing it to special microbes that will digest and decompose it within three months. If these conditions are not met, PLA can take between 100 and 1 000 years to decompose.

Because PLA is derived from renewable resources, and is not petroleum-based, it offers many positive characteristics for manufacturers. It is almost carbon neutral: the raw material it is made from (plants) absorbs carbon. When oxygenated or heated, it does not release toxic fumes. Yet there is a downside. With the world’s population rising, at least until towards the end of this century, there are concerns about using agricultural land for the production of non-food crops, such as bioplastics. In addition, raw materials for PLA typically come from transgenic plants, plants that have genes inserted into them that are derived from another species.

Other challenges include agriculture based on monocultures; a lack of long-term testing; mixing/ contaminating PLA with petroleum based plastics (PLA plastic is brittle unless it is mixed with some petroleum based polymers.); decomposition of food storage PLA plastics during production, packaging, transportation, selling and consumption phases. There are also strength and crystallinity deficiencies.

PLA plastic is recognized as safe by the United States Food and Drug Administration. Its non-toxicity allows it to be used safely in all food packaging and many medical applications, including implants, which can biodegrade in the body over time if the PLA is in its solid form. There are some ventilation issues: fumes emitted by PLA are claimed to be harmless; however, there are suggestions that the release of nanoparticles can potentially pose a health threat. At Inderøy Techno-workshop, extractors will be fitted to our 3D-printers, with both HEPA and active charcoal filters.

Physical characteristics of PLA that are important to users are its mechanical, rheological (flow) and thermal (heat) properties. The makeitfrom.com database is a convenient site to get basic information about a number of materials. Here are the results for PLA.

PLA has good mechanical properties, often better than many petroleum-based plastics such as polypropylene (PP), polystyrene (PS) and polyurethane (PU). Its Young’s modulus, a measure of stiffness under tension or compression, is ~3.5 GPa, in contrast to 0.1 GPa for rubber and 200 GPa for steel. Its tensile yield strength, the stress at which it starts to deform permanently when pulled, is ~50 MPa. Its flexural strength, the stress it can withstand in bending before failure, is ~80 MPa. All of these are at the low end compared to other thermoplastics.
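To illustrate what these moduli mean in practice, the sketch below computes the strain produced by the same stress (10 MPa, an arbitrary illustrative load) in each material, under a simple linear-elastic assumption:

```python
# Strain produced by the same 10 MPa stress (an arbitrary illustrative
# load) in each material, assuming simple linear-elastic behaviour.
moduli_gpa = {"PLA": 3.5, "rubber": 0.1, "steel": 200.0}
stress_mpa = 10.0

for material, e_gpa in moduli_gpa.items():
    strain_pct = stress_mpa / (e_gpa * 1000) * 100  # GPa → MPa, then %
    print(f"{material:>6}: {strain_pct:6.3f} % strain")
```

The same load stretches rubber about 35 times more than PLA, and PLA about 57 times more than steel, which is why PLA parts feel rigid in the hand yet are far more compliant than metal ones.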

Rheology is the study of materials with both solid and fluid characteristics. PLA is a pseudoplastic, non-Newtonian fluid. Non-Newtonian means that its viscosity (resistance to flow) changes depending on the stress that it is subjected to. PLA is a shear-thinning material, which means that the viscosity decreases with applied stress.
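Shear thinning is often described with the power-law (Ostwald–de Waele) model, in which viscosity falls as the shear rate rises. The parameter values below are illustrative assumptions, not measured PLA data:

```python
# Power-law (Ostwald–de Waele) model of a shear-thinning fluid:
# viscosity = K * shear_rate ** (n - 1).  K and n are illustrative
# assumptions, not measured PLA data; any n < 1 gives shear thinning.
K = 1000.0  # consistency index, Pa·s^n (assumed)
n = 0.6     # flow-behaviour index, dimensionless (assumed)

for shear_rate in (1.0, 10.0, 100.0, 1000.0):  # 1/s
    viscosity = K * shear_rate ** (n - 1)
    print(f"shear rate {shear_rate:7.1f} 1/s -> viscosity {viscosity:7.1f} Pa·s")
```

With these parameters, each tenfold increase in shear rate cuts the viscosity by a factor of about 2.5, which suggests why molten PLA flows readily through a 3D-printer nozzle under pressure yet holds its shape once deposited.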

PLA’s thermal properties depend on its molecular weight. It is classified as a semi-crystalline polymer, with a glass transition temperature at ~55°C and a melting temperature at ~180°C. These are low compared to other thermoplastics such as ABS. PLA can burn. This means that heat and smoke detectors are necessary, if 3D-printers are to be used without people present.

Processing PLA requires humidity and temperature control to avoid unnecessary degradation.

Some sources recommend storing PLA in its original package at ambient temperatures but drying it before use, because of its hygroscopic tendencies.

The main usage of PLA at the techno-workshop will be 3D printing with filament. In addition, PLA can be extruded. While heat is needed to allow PLA to flow under pressure, more specific processes are needed to pump, mix and pressurize PLA. Related to this is injection molding, for small-series production. The main challenge is making inexpensive molds. Injection molding for PLA production is limited, because of its slow crystallization rate, compared to other thermoplastics.

Other processes include injection stretch blow molding, cast film and sheet and thermoforming.

Bioplastics such as PLA have a large economic potential, allowing job creation opportunities, especially in rural areas, such as Inderøy. There are estimates that the European bioplastics industry will provide 300 000 skilled jobs by 2030, up from an estimated 30 000 in 2020. Thus one of the key tasks of the Techno-workshop is to encourage young people to develop business ideas based on the use of PLA.

PLA is biocompatible: it can be used in the human body with minimal risk of inflammation and infection. It has been used to produce biomedical products for drug delivery systems and bone fixation, including plates, screws, surgical sutures and meshes. These can dissolve inside the body over a period of between three months and two years. This gives PLA great promise in solving problems such as tissue loss and organ failure.

There are efforts in the textile industry to replace non-renewable polyester textiles with PLA. Advantages include breathability, lower weight, and recyclability.

The cosmetics industry, facing a consumer backlash for using petroleum-based plastic products, has sought more sustainable solutions using PLA.

While there were hopes that PLA could be used for structural applications in the construction industry, the same characteristics that made it useful in biomedical applications, detracted from its use as foam for insulation, fiber for carpets and more generally in furnishings.

Hydrogen Station Explosion – Aftermath

The hydrogen station at Kjørbo is centrally located in Sandvika outside of Oslo, by two of the busiest roads in Norway, with 80 000 cars passing daily. It is in Bærum municipality, in Akershus county. It exploded on Monday 2019-06-10. Since then, a number of interesting – some might say alarming – facts have emerged.

The station was a joint venture between Uno-X, Nel and Nippon Gases (formerly Praxair), announced on 2016-04-01. It uses Nel technology for on-site hydrogen production by electrolysis. The station is co-located with Powerhouse Kjørbo, an energy-positive office building whose solar panels can supply upward of 200 000 kWh each year, twice the building’s annual energy consumption. Some of this excess energy was to be used to produce hydrogen.

The project had a total budget of NOK 28.4 million, of which NOK 5.7 million was support from the Akershus County Council and NOK 7.7 million was from the Norwegian public enterprise, Enova, responsible for the promotion of environmentally friendly production and consumption of energy. Other project partners included consulting firm Asplan Viak and Bærum Municipality.

Nel is an electrolysis technology company that has expanded into the hydrogen market. Its roots go back to technology developed by Norsk Hydro in 1927. It is the world’s largest electrolyzer manufacturer, with more than 3 500 units delivered in over 80 countries. It is also a world-leading manufacturer of hydrogen fuelling stations, with approximately 50 stations delivered to 9 countries.

Safety Assessment

Bærum municipality has clearly stated that it did not have the competence to say whether the station was safe or not. It pointed out that the operator Uno-X sent its risk analysis to the Directorate for Civil Protection and Emergency Planning (DSB), relying on that authority to intervene if it saw the station as a safety risk.

But DSB did not assess the analysis. Nor is it required to do so for anyone who stores or produces hydrogen in Norway. It emerges from DSB’s overview of hydrogen facilities in Norway that the threshold for needing approval from the professional authority is set so high that it applies to no one.

A total of 5 tonnes of hydrogen can be stored before it is subject to the major accident regulations. Above that, another regulation on the storage of hazardous chemicals applies, which requires consent from DSB. That said, 100 grams of hydrogen can cause a serious situation if it is handled incorrectly, and less than one kilogram can lead to a fatal accident.

The 5 tonne limit is taken directly from Seveso, the relevant EU directive, which has been incorporated into the Norwegian major accident regulations. DSB is nevertheless free to demand that organizations obtain approval even if they are below the limit. However, DSB must argue that the risk dictates it, and then make a decision. That was not done at the hydrogen filling station in Sandvika. DSB is now also asking whether the limit of 5 tonnes of hydrogen is reasonable.

The amount of hydrogen stored at the Uno-X station in Sandvika when it exploded is uncertain, but in the safety analysis the company estimated that it would store up to 100 kilograms during the first 1-2 years.

Leakage without Alarm

Perhaps the most disturbing fact to emerge is that there was a hydrogen leak for an estimated 2.5 hours that did not set off any alarms before the station exploded.

Nel installed the technology at the station and has admitted their responsibility for the explosion.

They are now reacting to the accident with a four point action plan. First, with a verified plug solution, they intend to inspect all high pressure storage units in Europe, and to check and re-torque all plugs. This should prevent the same circumstances arising in the future.

Second, they are updating their routines for assembly of high pressure storage units. This includes the introduction of a new safety system, and routines that follow an aerospace standard. This includes torque verification, double witness and documentation/marking.

Third, there is a need for improved leak detection, since it is estimated that hydrogen leaked from the tank for 2.5 hours without being detected. Thus, no alarm sounded before the tank exploded. Initially, this will involve a software update to increase the leak detection frequency. However, they will also consider additional detection hardware and/or modifications to the existing equipment.

Fourth, ignition control measures will have to be implemented. These are site dependent. A smooth surface, without gravel, should surround any high pressure storage unit. Additional ventilation may also be required, along with greater use of EX equipment, that is, electrical equipment specifically designed for hazardous locations. This type of equipment is specially designed and tested to ensure it does not initiate an explosion, including – but not restricted to – explosions due to arcing contacts or high equipment surface temperatures.

Incorrect Assembly of Equipment

The safety consulting company Gexcon, along with SINTEF and Bureau Veritas, is responsible for investigating the accident. They have found that a plug in one of the hydrogen tanks was mounted incorrectly and that this is why hydrogen leaked into the air and formed a cloud that eventually exploded.

On Friday, 2019-06-28, Nel, the company manufacturing the hydrogen distribution equipment, which has taken responsibility for the explosion, explained how the incorrect assembly took place. Their presentation – which appears to be part public relations information about the company and part explanation for the incident – is here.

  1. Materials OK
    1. Magnetic particle inspection
    2. Penetrant testing
    3. Verification of materials
  2. Design OK
    1. 1 000 000 cycle accelerated test
  3. Assembly NOT OK
    1. Bolt analysis
    2. Physical gap
    3. Opening torque

The failure sequence was as follows:

  1. Starting condition
    1. Green bolts torqued properly
    2. Blue bolts not torqued properly
  2. Red sealing fails
    1. Starting with a small leak in the red sealing area
    2. The small leak wears the red sealing out and escalates
    3. A large leak exceeds the capacity of the leak bore, causing pressure to increase inside the blue sealing area
  3. Bushing with plug lifts and the blue seal fails
    1. Insufficient pretension of the bolts leads to lift of the plug, and the blue sealings fail immediately
    2. Hydrogen spreads and leaks out in an uncontrolled way

There are two main candidates for ignition, which are probably impossible to distinguish between: (1) self-ignition, where static electricity ignited an optimal mixture of hydrogen and oxygen; (2) gravel on the substrate at the tank, which lay at the very bottom in one corner; wind acting on the gravel may have caused friction that led to ignition.

An additional report is expected to be released at the end of August 2019.

V2: The content was updated 2019-06-30 at 17:30.

Cut/Copy and Paste

The most influential computer ever made: the original Xerox Alto, featuring a bit-mapped black and white display of 606×808 pixels (the same proportions as a regular 8.5″×11″ sheet of paper, aligned vertically); a 5.8 MHz CPU; 128 kB of memory (at a cost of $4 000); a 2.5 MB removable cartridge hard drive; a three-button mouse; a 64-key keyboard and a 5-finger key set. It was on such a machine that Bravo and Gypsy were developed, and cut/copy and paste invented. (Photo: Xerox PARC)

Larry Tesler (1945 – ) invented cut/copy and paste. Between 1973 and 1976, Tesler worked at Xerox PARC (Palo Alto Research Center), in Palo Alto, California, on the programming language Smalltalk-76, and especially the Gypsy text editor, referred to then as a document preparation system. It was on this project that he implemented a method of capturing text and inserting it elsewhere.

Xerox PARC was initiated by Xerox Chief Scientist Jacob E. “Jack” Goldman (1921 – 2011), who previously worked at Carnegie Tech and directed the Ford Scientific Laboratory. He hired a physicist, George Pake (1924 – 2004), to create it in 1970.

Xerox PARC was largely responsible for developing laser printing, the Ethernet, the modern personal computer, the graphical user interface (GUI) and desktop paradigm, object-oriented programming, ubiquitous computing, electronic paper, amorphous silicon (a-Si) applications, and advancing very-large-scale integration (VLSI) for semiconductors.

For a more complete story, see: Larry Tesler, A Personal History of Modeless Text Editing and Cut/Copy-Paste (2012)

While most people focus on the cut/copy-paste tool, the concept of modeless software had even greater impact. A mode is a distinct setting within a computer program, in which the same user input produces different results because of other settings. Caps lock, when pressed, puts the user’s typing into a different mode: CAPITAL LETTERS. If it is pressed a second time, the original mode is reactivated, resulting in lower-case letters.

Most interface modes are discouraged because of their potential to induce errors, especially when the user is expected to remember which mode the interface is in. The situation is somewhat better if there is an on-screen state/mode indicator, such as a change in the colour of an icon, when a mode change is made.

If the user is unaware of an interface mode, there may be an unexpected and undesired response. Mode errors can be disorienting as the user copes with a transgression of user expectations. Not all mode changes are initiated by users.

Mode changes can be initiated by the system, by previous users, or by the same user, who has forgotten the state change. In such a situation, an operation performed with the old mode in mind will disrupt user focus as the user becomes aware of the mode change. This is especially serious when the user cannot find how to restore the previous mode.

Prior to Gypsy, Butler Lampson (1943 – ), Charles Simonyi (1948 – ) and others developed Bravo at Xerox PARC in 1974. It was a modal editor where characters typed on the keyboard were usually commands to Bravo, except when in “insert” or “append” mode. Bravo used a mouse to mark text locations and to select text, but not for commands.

Although similar in capabilities to Bravo, the user interface of Gypsy was radically different. In both, a command operated on the current selection. But Bravo had modes and Gypsy didn’t. In Bravo, the effect of pressing a character key depended on the current mode, while in Gypsy, pressing a character key by itself always typed the character.

In the Wikipedia article on Gypsy, the difference between Bravo and Gypsy is illustrated by three examples:

  1. Insert In Bravo’s Command Mode, pressing “I” entered Insert Mode. In that mode, pressing character keys typed characters into a holding area (“buffer”) until the Escape key was pressed, at which time the buffer contents were inserted before the selection and the editor returned to Command Mode.
    In Gypsy, no command or buffer was needed to insert new text. The user simply selected an insertion point with the mouse and typed the new text. Each inserted character went directly into the document at the insertion point, which was automatically repositioned after the new character.
  2. Replace In Bravo, to replace existing text by new text, the user pressed “R” to enter Replace Mode. That mode was just like Insert Mode except that the buffer contents replaced the selection instead of inserting text before it.
    In Gypsy, to replace text, the user simply selected the old text and typed the new text. As soon as the user began to type, Gypsy deleted the old text and selected an insertion point in its stead.
  3. Copy In the then-current version of Bravo, the user selected the destination, pressed “I” or “R” to enter Insert or Replace Mode, selected the source (which highlighted differently from the destination), and pressed Escape to perform the copy and return to Command Mode. While in Insert or Replace Mode, the user could scroll and could select a source, but could not invoke another command, such as opening a different document. To copy text between documents was more complex.
    In Gypsy, the user could select the source text, press the “Copy” function key, select the destination text or insertion point, and press the “Paste” function key. Between Copy and Paste, the system was, as usual, not in a mode. The user could invoke other commands, such as opening a different document.

Fewer modes meant less user confusion about what mode the system was in and therefore what effect a particular key press would have. Gypsy and Bravo both used a three-button mouse, where the second and third buttons were intended for experts.
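The modal/modeless contrast can be sketched in a toy model. This is not actual Bravo or Gypsy code; the key bindings (“i” for insert, Escape to leave insert mode) are illustrative assumptions based on the description above.

```python
class ModalEditor:
    """Toy Bravo-like editor: keys are commands unless in insert mode."""
    def __init__(self):
        self.mode = "command"
        self.text = ""

    def press(self, key):
        if self.mode == "command":
            if key == "i":          # 'i' enters insert mode
                self.mode = "insert"
            # other keys would be interpreted as commands
        else:                       # insert mode
            if key == "\x1b":       # Escape returns to command mode
                self.mode = "command"
            else:
                self.text += key    # only now does a key type a character


class ModelessEditor:
    """Toy Gypsy-like editor: a character key always types the character."""
    def __init__(self):
        self.text = ""

    def press(self, key):
        self.text += key


modal = ModalEditor()
modeless = ModelessEditor()
for k in "iab":        # the same three key presses in both editors
    modal.press(k)
    modeless.press(k)
print(modal.text)      # "ab"  - the 'i' was swallowed as a mode switch
print(modeless.text)   # "iab" - every key typed its character
```

The same three key presses yield different documents, which is exactly the kind of ambiguity that made Gypsy’s modeless design easier to learn.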

New users could learn to work with Gypsy in only a few hours. Drag-through selection, double-click and cut-copy-paste were quickly adopted elsewhere, and have become standard on most text editors.

This text was originally written in June 2009 as a draft for a weblog post. It was removed from the weblog, but subsequently revived without the original date and time stamps. New text was added at irregular intervals, including 13 May 2016, 23 April 2018, and 06 May 2019. The publication date of this weblog post celebrates the 10th anniversary of this weblog.

Hydrogen Station Explosion

2019-06-10, the Uno-X hydrogen station at Sandvika, near Oslo, Norway, was destroyed in an explosion. The explosion led to the activation of airbags in two nearby cars. (Photo: NRK)

An explosion, most likely in a single hydrogen tank, occurred at the Uno-X hydrogen station at Sandvika, near Oslo, on 2019-06-10. When writing this post, the cause of the explosion was not known.

While no one appears to have been directly injured in the explosion, two people driving in the vicinity were injured when their airbags activated because of air pressure from the explosion.

The explosion resulted in the closing, in both directions, of two major highways. European Highway 16 (E16) is the major east-west connection between Bergen and the Swedish border. The E18 connects southern Norway with Oslo.

For those interested in robotics, a LUF 60 wireless remote-controlled mobile firefighting support machine was actively used to suppress the fire that followed the explosion. More importantly, it was used to cool other unexploded hydrogen tanks, to prevent them from exploding. In addition, a platform lift with a water cannon assisted with this task. These two vehicles allowed firefighters to keep their distance.

Norway’s other two hydrogen stations, one in Skedsmo, another Oslo suburb, and the other in Bergen, have now been closed.

According to the Norwegian Hydrogen Forum, as of 2018-12-31 there were 148 hydrogen cars registered in Norway: 57 Toyota Mirais, 27 Hyundai Nexos, and 64 Hyundai iX35s. In addition to these there are 5 buses and 1 truck. In contrast, as of the same date there were 200 192 plug-in electric vehicles, plus 96 022 hybrid vehicles.

In another post, titled Methane vs Electricity, a significantly flawed study from the Munich-based IFO Institute for Economic Research was examined, along with its support for methane-based hydrogen vehicles.

With this explosion, hydrogen supporters in Norway will have lost much of the little good will that hydrogen fuel cells have built up. It has probably resulted in the last nail being put into the hydrogen car coffin.

Milk

These are the ingredients that are the basis of my breakfast six days a week. From the left: four-grain cereal – the four grains being oats, rye, wheat and barley – then containers with hazelnuts, walnuts, sunflower seeds (container courtesy of Sun-Maid Raisins), almonds and pecans. Cultured milk is available in a 1 liter carton. I usually eat an orange or half a grapefruit (on alternate days) before the cereal, and sometimes a banana after it. I drink one cup of green tea with breakfast.
This is what the bowl of cereal looks like after cultured milk has been added.

Welcome to yet another weblog post about Norwegian culture. All three definitions of culture apply here. According to the Merriam-Webster online dictionary, culture can refer to “the customary beliefs, social forms, and material traits of a racial, religious, or social group.” It can also refer to “the act or process of cultivating living material (such as bacteria or viruses) in prepared nutrient media.” A third definition refers to the “acquaintance with and taste in fine arts, humanities, and broad aspects of science as distinguished from vocational and technical skills.” Along the way, some norske ord = Norwegian words will be introduced.

Wikipedia tells us, “Milk is a nutrient-rich, white liquid food produced by the mammary glands of mammals. It is the primary source of nutrition for infant mammals (including humans who are breastfed) before they are able to digest other types of food.”

The average Norwegian consumes about 90 liters of melk = milk annually, along with 11 kg of yogurt = yogurt, 10 kg of krem = cream products and about 20 kg of ost = cheese. There are about 10 500 dairy farms in Norway, each with an average of 25 cows. Altogether there are about 230 000 dairy cows in Norway with each cow producing an average of 7 500 kg of milk each year. In addition, there are about 35 000 goats producing about 20 million liters of goat’s milk each year.
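A quick, hedged sanity check on the dairy figures quoted above. The averages are rounded, so the cow count implied by farms × cows-per-farm differs somewhat from the stated national total:

```python
farms = 10_500          # dairy farms in Norway
cows_per_farm = 25      # average herd size
total_cows = 230_000    # stated national total
kg_per_cow = 7_500      # average annual production per cow

implied_cows = farms * cows_per_farm      # 262 500, a bit above the stated total
annual_milk_kg = total_cows * kg_per_cow  # roughly 1.7 billion kg per year
print(implied_cows)
print(annual_milk_kg)
```

So Norway’s dairy herd produces on the order of 1.7 million tonnes of milk a year, which puts the 90 liters consumed per person in perspective.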

Sweet milk must be kept cool so as not to become sour. Without cooling technology, it was impossible to transport milk long distances, whether it was from farm to customer, from farm to dairy or from dairy to shop. The market for milk production was primarily in the cities. Therefore, the production of milk to drink was initially a niche for urban farming or for farms situated on the outskirts of the cities. Farmers who were farther away from the cities produced milk that could be processed into more durable and more easily transportable milk products such as smør = butter or cheese. Refrigeration technology has become increasingly important since the mid-1900s.

Until the middle of the 20th century, milk in Norway was sold by the churn or pail. The dairies transported spann = churns (milk containers) out to shops. The customers brought their own milk pails to the shops, where the serving clerks poured milk from the large dairy churns. The customers also had their own smaller cream pails.

In the 1930s, provisions were made that all milk sold in stores should be pasteurized. Milk bottles were used in the interwar period, first in the big cities. Around 1960, clear glass milk bottles were replaced with brown bottles that better protected the milk from light. The milk bottles were returned by the customers to the stores.

In the 1960s, the melkekartong = milk cartons came into use, and with this, disposable packaging was introduced. By 1980, all Norwegian dairies had replaced bottles with milk cartons.

Since the 1970s, the selection of dairy products in the Norwegian grocery trade has multiplied. Yogurt was introduced around 1970, including yogurt flavored with fruit and berries.

Around 1960, skummet melk = skimmed milk came on the market. Lettmelk = low fat milk was not on sale until 1984, and in the 2000s, extra low fat milk was introduced to the market. Since the 1980s, low fat milk has accounted for an increasing proportion of the drinking milk volume.

Kulturmelk = cultured milk was originally a general designation for soured milk, but from 2005 (together with skummet kulturmelk = skim cultured milk) it became a protected food name under the Norwegian agricultural industry’s public labeling scheme, and belongs to Tine SA. Cultured milk is referred to by many as surmelk = sour milk, as opposed to søtmelk = sweet milk, ordinary whole milk.

In contrast to North America, where similar types of milk can be made through acidification, in Norway pure lactic acid bacterial cultures are used to make cultured milk and to give it a distinctive taste and consistency, in contrast to regular homogenized milk. Regular cultured milk contains 3.8% fat, while skim cultured milk contains 0.4% fat.

Cultured milk is consumed as a drink, poured on assorted types of breakfast cereals, and is used as an ingredient in baked goods.

In Norway, one finds many other soured milk products. Tettemelk = dense milk and skjør are old varieties of Nordic cultured milk. Kefir is undoubtedly even older, but its use in Norway is more recent, as is that of yogurt. Cultura and Biola, which are Tine brands, are flavored cultured milk. Kesam or kvarg = quark, is a fresh cheese made from skimmed cultured milk.

I consume cultured milk almost exclusively, despite having to read the carton in Nynorsk = New Norwegian, the second and less widely used written form of Norwegian, which originates along the West Coast of Norway: Kulturmjølk. Syrna mjølk med lange tradisjonar. Heftig og frisk smak – ikkje ulik naturen mjølka kjem frå. (Nynorsk) = Kulturmelk. Surmelk med lange tradisjoner. Sterk og frisk smak – ikke ulik naturen melken kommer fra. (Bokmål) = Cultured milk. Sour milk with long traditions. Strong and fresh taste – not unlike the nature the milk comes from.

An irritation: Tine insists on telling me that cultured milk is traditional. I disagree. It is a modern, bacteriologically enhanced milk product that has some superficial similarities to historic varieties. I also object to statements about milk being a natural product. It is a product of industrialized agriculture.

More information about milk (in Norwegian) can be found at: melk.no

An aside about food security

“Food security exists when all people at all times have physical and economic access to sufficient, safe food for an adequate diet that meets their nutritional needs and preferences, and which forms the basis for an active and healthy life.” United Nations definition

The term is sometimes used indiscriminately to also cover food safety, which means that the food does not contain microorganisms, environmental toxins or additives that negatively impact health, when food items are prepared and consumed as intended.

Norway is a net exporter of sea food. It produces far more sea food than is needed domestically, and its sea food exports significantly exceed its sea food imports. Norway is self-sufficient in milk. It is largely self-sufficient when it comes to meat. However, where it fails is its considerable – and increasing – deficit with respect to plant produce. It is now able to provide considerably less than 50% of the plant produce it consumes.

It is this lack of self-sufficiency in plant materials that is prompting me to build a community greenhouse with other members of Friends of the Earth, Inderøy, and to experiment with hydroponic gardening.

At this point, I should probably add that I do not have anything against gene-modified organisms in principle. I would have no objection to using an artificial milk produced through bacterial processes, in vats. It seems much more humane than keeping cows. This does not mean that I support other gene modifications, such as Monsanto/Bayer and their use of glyphosate herbicides. However, I have studied genetic engineering and microbiology, and see both fields as important contributors to increasing stocks of nutritious foods, if done properly.

A Pedagogical Sail

Sabot sailing dinghy on a cradle, fully-rigged, in Sydney Australia 2007-12-16. With minor modifications, the same type of boat was also made in North America. (Photo: Peter4Truth)

The purpose of a sail is to propel a boat or ship forward. It can be used figuratively, to describe mechanisms to propel a person through an educational system alive, and capable of making a social and economic contribution to the world after completing the education. For me, industrial arts was such a mechanism, as it allowed me to survive an academic education. Looking forward through time, I see an engineering workshop (and more specifically a mechatronics workshop located in Inderøy) as a mechanism to help the generation with many names (Gen Z/ iGen/ Plurals/ Post-millennials) to cope with the world’s current insanity.

Yes, it would be more professional to keep personal thoughts at a distance, and to look at a mechatronics workshop’s pedagogy from a theoretical point of view, with comments and statements supported by references to well formulated documents based on impeccable research.

Unfortunately, that is not how the real world works. Everything I’ve ever done related to workshop pedagogy is based on reflections on a single personal experience. As a 13 year old, I decided to build my own boat, an 8 foot (2 400 mm) long Sabot dinghy, similar to a European Optimist dinghy. I made it alone, except for some help from my mother, who made the sail for the boat. Where did I get the self-confidence that let me start and complete such a comprehensive project at a relatively young age? The answer is: by taking industrial arts!

A note of thanks. The one person I would like to thank is Vincent Massey Junior High School guidance counsellor Allen, who encouraged me, and suggested I build this specific boat. He also told me where I could get the plans (Valley Lumber). I believe they cost me $2 (NOK 12) in 1962. In case one thinks that my boat building skills are inherited, let me assure readers that I never saw my father make anything in his life, until he started to make rya rugs as an 80 year old. Yes, he could do some interior house painting, but he rarely repaired anything, and never made anything for the house. Both of my parents prioritized spending their free time out in nature. They were both fishers and hunters, who enjoyed collecting berries and mushrooms, and walking in the wilderness. My mother, who was ten years younger than my father, had a slightly different education, including home economics as a subject. She made a lot of different things, but only in the realms of food and textiles.

Unlike my father, I had industrial arts at school, from the 7th to the 9th grade, before I chose electronics as a specialty from the 10th to the 12th grade. The school system in British Columbia divided all teaching into seven subjects, all of which got exactly the same number of hours of instruction. Of the five hours that we received over a seven-day period for industrial arts, one was reserved for draughting/drafting/technical drawing. The other hours were spent on electricity and electronics, woodworking or metalworking. One worked with each subject area for about a third of the school year before we pupils moved on to the next subject area.

Industrial arts is an educational program that includes the manufacture of wooden or metal objects using a variety of hand and machine tools. In addition, the subject could include other related subjects such as electronics, house building, motor repairs and car maintenance. All programs usually had some form of technical drawing as part of the curriculum. Industrial arts was reserved for boys. The girls got home economics, which included some of the same educational principles, but with a focus on food and sewing. Home economics could be described as second-class industrial arts, wrapped for girls!

As a pedagogical term, industrial arts came on the scene in 1904, when Charles Russell Richards (1865 – 1936) of Teachers College, Columbia University, New York, suggested it replace manual training. The intention was for all children (or at least all boys, in a gender-divided time) in all schools to gain a wide range of technical skills, rather than a single one that gave vocational training.

Most North American males born between 1920 and 1960 understood technical drawings, had used a lathe to make objects in both wood and metal, and had wired a house. This is not the case today. Industrial arts ended with most of the gender-divided education in the late 1970s and early 1980s. Girls were finally allowed to fix cars, while boys were allowed to learn how to cook.

The pedagogy used in Industrial Arts did not begin with Charles Russell Richards, but represents a tradition that can be traced back to Jean-Jacques Rousseau, Johann Pestalozzi, Friedrich Froebel, Edward Sheldon and John Dewey. These people are not unique to the history of industrial arts. Their names are invoked in many divergent subject areas.

Jean-Jacques Rousseau (1712-1778) is appreciated for the application of his educational theories in the classroom. He believed that knowledge was derived from nature, that reality was determined by gathering information through the senses and validating it by building relationships, and that people learn gradually and constantly throughout life, by doing.

Johann Pestalozzi (1746 – 1827) founded several educational institutions in German- and French-speaking regions of Switzerland, and wrote many works explaining his revolutionary modern educational principles. His motto was “Learning with head, hand and heart”. Thanks to Pestalozzi, illiteracy, widespread in 18th century Switzerland, was almost completely overcome by 1830. Pestalozzi is considered the first of Richards’ pedagogical predecessors, with an educational philosophy focused on the most effective ways of waking students’ ability to understand and process information. With this ability, young people could understand an ever-changing world.

Friedrich Froebel (1782-1852) is recognized for establishing (in theory) the first kindergarten in 1837, and for taking early childhood needs into account in education: showing is better than telling. He was very concerned with activities, and with activity plans, that he felt would develop childhood creativity.

Edward Sheldon (1823 – 1897) founded the Oswego School of Education in 1861. Sheldon believed that the basics should be taught through objects, and that students should build things that would benefit them in the classroom as they taught lessons. In 1886, Oswego had a form of manual training as a class, under the supervision of the school’s janitor. Oswego became the first teacher-training school in the United States to teach manual training. Today, SUNY Oswego prepares students to become technology teachers.

John Dewey (1859-1952) believed that students should do (his term, implying action) to trigger thoughts about what they are doing. Then they should reflect on what they have done, to stimulate learning. Dewey’s focus was on a methodology that began by identifying difficulties or problems, and ended by synthesizing and coordinating knowledge and desires, resulting in the control and recreation of the external world. This is essentially the vision of the industrial arts movement. Dewey was concerned about the limitations of manual training. He thought that if students were just doing something to make something, and not to solve a problem, their thoughts would stop and boredom would develop. The learning process would stop.

So I’d like to pause and reflect. The challenge with the American maker movement is political. Even the term maker has been hijacked by a for-profit industry. While many view the movement as a continuation of the industrial arts movement, others are eager to see it as something new and different. The latter would like to have Ayn Rand (1905 – 1982) as their inspiration. Debbie Chachra, in Why I am Not a Maker, warns us about this:
“A quote often attributed to Gloria Steinem says: ‘We’ve begun to raise daughters more like sons… but few have the courage to raise our sons more like our daughters.’ Maker culture, with its goal to get everyone access to the traditionally male domain of making, has focused on the first. But its success means that it further devalues the traditionally female domain of caregiving, by continuing to enforce the idea that only making things is valuable. Rather, I want us to recognize the work of the educators, those that analyze and characterize and critique, everyone who fixes things, all the other people who do valuable work with and for others – above all, the caregivers – whose work isn’t something you can put in a box and sell.” https://www.theatlantic.com/technology/archive/2015/01/why-i-am-not-a-maker/384767/

I suspect it is the Ayn Rand-friendly people in the maker movement who promote Jean Piaget (1896 – 1980) as the source of its educational philosophy. I am not among them. Piaget’s most famous statement about constructivism, “To understand is to invent”, is really just the title of his 1973 book, an English translation of Où va l’éducation (1971) and Le droit à l’éducation dans le monde actuel (1948). In short, the teacher’s role in constructivism is to create the conditions for invention rather than to provide ready-made knowledge. But this idea had already been expressed by Richards in 1904, almost 70 years earlier.

The reason for Piaget’s favor probably has more to do with his influence on American computer science and artificial intelligence. One of Piaget’s students, Seymour Papert (1928 – 2016), used Piaget’s work while developing Logo together with Wally Feurzeig (1927 – 2013) and Cynthia Solomon. Alan Kay (1940 – ) used Piaget’s theories as the basis for the Dynabook concept (and the Smalltalk programming language) at the Xerox Palo Alto Research Center (Xerox PARC). The work resulted in the development of the Alto, the first computer with a graphical user interface (GUI). The Apple Macintosh was built on the basis of Kay’s research at Xerox PARC.

In Norway, Piaget has had much less appeal than in the US, and much of his status as an educator has been transferred to Lev Vygotsky (1896 – 1934). Vygotsky regarded people as cultural beings. He was concerned with the zone of proximal development, and laid the foundation for a socio-cultural perspective on learning: there is a balance between what a child can learn on her or his own and what she or he needs assistance with. Vygotsky is considered a social constructivist, for whom learning takes place in social interaction between individuals.

One would almost believe that Industrial Arts has become an educational dinosaur. Since Ronald Reagan (1911 – 2004) became US president in 1981, inequality between US residents has only increased. Political priorities in North America have led schools to gradually lose their ability to teach costly practical subjects, so practical skills are now exercised in narrower and narrower fields.

The Charm of the Demoscene

A Commodore Amiga 2000 with 3.5 inch floppy drive, 20 MB hard drive, keyboard and mouse. The cathode ray tube (CRT) monitor is missing. (Photo: Trafalgarcircle)

Imagine home computing in the late 1970s and early 1980s. Machines are weak. Software is unrefined. Popular models include the Apple II and its clones, the ZX Spectrum, the Commodore 64 and the Amstrad CPC. The IBM PC, and its clones, have not yet taken over.

I remember a friend showing off his Apple II. It would show a line of text, Name? followed by a blinking cursor. When I typed in my name, and pressed return, it would respond by writing: Hello, Brock! It was easy to be impressed by technology in the late 1970s.

Inspiration for today’s demoscene first came in 1980, when Atari used a looping demo with visual effects and music to show off the features of the Atari 400/800 computers.

The demoscene is a form of computer art, described in more detail, and in roughly chronological order, later in this post. It has a darker past, but a lighter present. Many of the terms used in this weblog post are defined below. It is an art form that generally avoids mainstream exposure. According to some sources, about 10 000 people are involved in it.

Cracker = a programmer who alters video game code to remove copy protection. Cracking crew is used where more than one person is involved in the cracking process.

Cracktro = (crack intro) an introductory screen used by a cracker or cracking crew to claim credit for cracking a game. Cracktros became very complex: a medium to demonstrate superior programming skills, advertise BBSes, greet friends, snub rivals and gain recognition.

More so in Europe than in other parts of the world, the cracktro transmutes into the demo. A cracker community emerges, then evolves into an entity independent of gaming and software sharing.

New machines are better suited to support the scene, most specifically the Commodore Amiga and the Atari ST. Some IBM clones are acceptable, if they have sound cards. Not the Apple Macintosh.

More inspiration came in 1985 when Atari demonstrated its latest 8-bit computers with a demo that alternated between a 3D walking robot and a flying spaceship.

That same year, Amiga released a signature demo showing off the hardware capabilities of its Amiga machine: the famous Boing Ball, a large, spinning, checkered ball that cast a translucent shadow.

Demo = a self-contained, originally extremely small, computer program that produces an audio-visual presentation. Its purpose is to demonstrate the programming, visual art and musical skill of its producer.

Demoparty = a festival where demos are produced during a day- or weekend-long coding marathon, then presented, voted on by attendees, and released, originally on floppy disks and via bulletin board systems (BBSs).

Compo = a demoparty competition, traditionally divided into categories where submissions must adhere to certain restrictions: production on a specific type of computer, or a maximum data size. Submissions are almost always rendered in real time. This contrasts with animated movies, which simply record the result of a long and intensive rendering. The purpose of a compo is to push computing hardware to its limits.
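The real-time requirement is what separates a demo from an animated movie: every frame is computed as it is shown, never played back from storage. As a purely hypothetical illustration (not code from any actual demo, and in Python rather than the assembly language a real demo would use), here is a minimal sketch of the classic “plasma” effect, where each frame is a function of nothing but a frame counter:

```python
# A minimal, hypothetical sketch of a real-time "plasma" effect.
# Each frame is computed on the fly from the frame counter t,
# rather than being read back from prerecorded images.
import math

WIDTH, HEIGHT = 64, 32  # a tiny framebuffer, for illustration only

def plasma_frame(t):
    """Return one frame as a 2D list of palette indices (0-255)."""
    frame = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            # Sum several overlapping sine waves; v lies in [-4, 4].
            v = (math.sin((x + t) / 16.0)
                 + math.sin((y + t) / 8.0)
                 + math.sin((x + y + t) / 16.0)
                 + math.sin(math.hypot(x, y) / 8.0))
            # Map v to a palette index in the range 0-255.
            row.append(int((v + 4.0) * 255 / 8.0))
        frame.append(row)
    return frame

frame0 = plasma_frame(0)
```

On real hardware the same idea runs as a tight loop writing directly to video memory, cycling the palette each frame; the point is simply that nothing is stored, so the whole effect fits in a few kilobytes.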

Demoscene = computer art subculture focused on producing demos, international in scope.

Demoscener = a computer artist focused on technically challenging aesthetics, but with a final product that is visually and aurally pleasing.

Demogroup = a small, tightly-knit group of demosceners, centered around a coder/ programmer, a musician and a graphician. Some groups may have supporting roles and grow to tens of people, but this is the exception. Demogroups always have names. Individuals within the group have unique handles for self-expression. Demogroups use wordmarks, logos, catchphrases and slogans. They are skilled at public relations and even human resource management. The demogroup is undoubtedly the most important social unit in the demoscene.

While belonging to a group is often synonymous with being a demoscener, there are individual productions. Not infrequently, such an individual will adopt a group name. There are also fake groups, involving secret identities for making humorous, political or vulgar productions without harming the reputation of the original group. Individuals invent new handles, or pseudo-pseudonyms.

There used to be an American demoscene, but it barely exists today. Who killed the American demoscene? The simple answer is the American crackdown on software piracy. European copyright law only criminalized for-profit breaches, so in many European countries, including the Netherlands, Greece, Finland, Sweden and Norway, it was possible for a cracker to repent and transform into a law-abiding demoscener.

The Amiga 2000

Our first family computer was a Commodore Amiga 1000, on loan to us while we waited for our Amiga 2000 to arrive, which it did some weeks later. In 1986/87, these were the best home computers money could buy. If I remember correctly, the Amiga 2000 cost NOK 19 000 (a little over US$ 2 000 then, or about US$ 4 000 in 2019).

We bought the Amiga while living in Bodø, in Northern Norway. The company that sold it consisted of two young male idealists, who were among the most active Amiga enthusiasts in the country. In addition to selling machines, they developed software and published a Norwegian-language Amiga magazine. Some of my work appeared there. They had the largest collection of 3.5 inch Amiga floppy disks in Norway, containing software and content on every conceivable topic. They made cracktros.

The Amiga 2000 was an advanced machine. Some even claimed at the time that it would last into the 21st century. In contrast to the Amiga 1000, it allowed expansion cards to be added internally: SCSI host adapters, memory cards, CPU cards, network cards, graphics cards, serial port cards and PC compatibility cards were available. We used a SCSI adapter with a hard drive, and a PC card that allowed us to run both Amiga and PC-DOS programs. The Amiga 2000 had five Zorro II card slots; the motherboard also had four PC ISA slots, two of which were inline with Zorro II slots for use with the A2088 bridgeboard, which provided IBM PC XT compatibility.

About 4 850 000 Amiga machines of all types were sold. They were most popular in the United Kingdom and Germany, with about 1.5 million sold in each country. Sales in the high hundreds of thousands were made in other European nations. The machine was less popular in North America, where only about 700 000 were sold.