Cars

Rush Hour! Photo: Davide Ragusa, 2016-01-16. Davide comments: I took this photo in Borgo Lupo, an abandoned village in Sicily, near Caltagirone (province of Catania). A mystical and empty place, where the only inhabitants are animals and shepherds. Here Sicily expresses its best, with breathtaking surrounding landscapes and the smells of the real countryside.

What is this post about? Sheep?

It is about artificial intelligence (AI), and the use of chatbots. A robot is a device that automatically performs complicated, often repetitive tasks. Bot is a shortened form of robot. A chatbot (originally, chatterbot) is a bot that uses, and pretends to understand, human language. ELIZA was an early chatbot, implemented by Joseph Weizenbaum (1923 – 2008) from 1964 to 1967. It arguably passed the Turing test, developed by Alan Turing (1912 – 1954) in 1950. This test – originally referred to as the imitation game – is passed when a human interacting with a machine believes the machine is another person. It is important to understand that ELIZA and other chatbots do not actually understand English (or any other human language). They store words, then use these and other words to mimic understanding.

While searching for a photo of a traffic jam to open this post, I found the photo of the sheep on Unsplash, a website that allows photos to be used freely. In much the same way that AI can get things wrong, my use of this photo gets things wrong too. It shows traffic congestion, but with sheep, rather than cars.

Why isn’t the post called Artificial intelligence, and the use of chatbots?

Because, if I gave it that title, nobody I know would look at it, let alone read it. Such a title would put off the people I interact with. The people I hang out with are not AI experts.

Why is it called Cars?

An honest answer is that this weblog’s target readership probably finds cars a topic they can relate to. Thus, they are being encouraged to learn something about AI by reading about something they already have a relationship with. Most of my readers have driving licenses, and know something about cars. A large proportion of them have been driving/ owning/ servicing/ repairing/ enhancing/ customizing cars for over fifty years. It is a topic they can relate to, unlike, say, the breeding of Labrador dogs.

Do you have something against dogs?

Let me be tactful, just this once, and say I think dogs deserve a companion who is interested in their well-being. Many readers of this weblog post have dogs. I live contentedly without them. However, while writing this post, I did find this article about dogs that impressed me.

How did this post begin?

On 2024-01-04, I read an article in TechCrunch about Perplexity, an AI chatbot. I opened a free account, and asked Perplexity some questions. I then tried to find some content that could act as a control for questions answered using Perplexity. On 2024-01-13, I read an article in Newsweek about why Americans can no longer afford cars. I thought it would be interesting to make up questions, based on the answers supplied in Newsweek, and then ask Perplexity the same questions. For example, the first question I asked was:

Q. In USA, how much have new and used car prices risen since 2020?

Perplexity provided a long answer, one that answered many different but related questions, rather than just that one. This raised a new challenge: how to present the content so that it made sense. Part of the problem was the attribution of Newsweek content to particular people. I decided to eliminate names and quotation marks. Immediately below is the edited Newsweek answer to that first question.

Since 2020, new car prices have risen by 30% and used car prices have risen by 38%.

I was just expecting a simple answer from Perplexity of x% for new, and y% for used vehicles.

Here is more of the Newsweek content, extracted to remove references to sources, human or artificial (Microsoft Copilot).

In 2023—a year during which inflation slowed down to the point that the Federal Reserve decided to stop hiking rates—new car prices rose by 1 percent to an average of $50,364, while used car prices fell by only 2 percent to an average of $31,030.

But as things stand, cars are still really expensive for many Americans. Just 10 percent of new car listings are currently priced below $30,000. Things are not much better in the used car market, where only 28 percent of listings are currently priced below $20,000.

In November 2019, the average transaction price for a new vehicle was $38,500. In November of 2023, that figure jumped to $47,939.

The pandemic’s disruption of manufacturing supply chains, as well as outsized consumer demand in 2021, really put a strain on vehicle inventory and drove prices up significantly.

Dealers practically had customers lined up to buy vehicles that weren’t available. Supply chain disruptions also forced manufacturers to prioritize more profitable, higher-trim vehicles in their lineups, which meant the inventory available to purchase also carried a higher sticker price.

Used car prices will continue to fall modestly in 2024.

The backlog in new and used car demand that grew during the pandemic is slowly moderating, but with over 2 years of restricted new car production, in 2020 through 2022, it will take at least that long for supply to catch up.

Macroeconomic factors like inflation and higher interest rates are also reducing vehicle demand, but not enough to drastically drop car prices in the foreseeable future.

But there are other reasons besides pandemic-related disruptions that there seem to be no more affordable cars in the U.S.—including that automakers are increasingly focusing on the production of expensive SUVs and trucks while dropping smaller, cheaper vehicles that would cost $20,000 or less.

Manufacturers cite disappointing sales results as primary reasons for discontinuing smaller, more affordable vehicles from their lineup.

But car buyers’ preferences have also shifted dramatically to larger trucks and SUVs in the past 10 years or so, and even more towards high-tech and comfort amenities in the form of cameras, sensors, radars and large infotainment screens.

Unfortunately, all these features come at a significantly higher price—even if that’s often higher than the average yearly wage of millions of Americans.

(End of Newsweek content)
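As a sanity check, the Newsweek transaction prices imply a percentage increase that is easy to compute with a few lines of Python:

```python
# Average transaction prices quoted by Newsweek:
# $38,500 (November 2019) -> $47,939 (November 2023).
old_price = 38_500
new_price = 47_939

increase = (new_price - old_price) / old_price * 100
print(f"Increase: {increase:.1f}%")  # roughly 24.5% over four years
```

Note that this four-year figure, starting from 2019, is lower than the 30% rise since 2020 quoted earlier; the two claims use different baselines.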

Full reply from Perplexity.

According to various sources, new and used car prices in the USA have risen significantly since 2020. The average transaction price for new cars declined 1.4% year-over-year as of October 2023, but they have increased to an average price of $47,936, which is roughly a 60% increase over the last decade. In July 2020, new-vehicle prices increased $749 (up 2%) from July 2019, while decreasing $473 (down 1.2%) from the previous month. The average new car price paid in December 2020 and the fourth quarter of last year crossed over $40,000 for the first time, and the average new car price in the US in December 2021 was $47,077. According to The Zebra, average new car prices spiked by 17.2% between 2020 and 2021, and the average car payment for new vehicles rose 11.8% to $644.

Used car prices in the US have also increased significantly since 2020. According to various sources, the prices of used cars have risen due to factors such as the global microchip shortage, supply chain disruptions, and changing consumer preferences.

Here are some key points regarding the changes in used car prices:

  • In June 2023, consumer prices for used autos were down 0.5% compared to the previous month and up 5.2% year over year.
  • The share of used vehicles that sold for less than $20,000 in the first quarter of 2023 was 30.6%, compared to 60.5% just five years ago.
  • Used car prices have dropped 5.6% compared to a year ago, but they remain higher than at any time before 2020.
  • Used electric vehicle (EV) prices have dropped 26% over the last year.
  • Car ownership costs, including used car prices, have skyrocketed, with an overall increase of 40% in used vehicle prices between 2020 and 2023.

In summary, used car prices in the US have risen substantially since 2020, and the trend shows no signs of reversing. The factors contributing to the increase include supply chain disruptions, the global microchip shortage, and changing consumer preferences.

According to Statista, the average list price for used vehicles in the United States was around $27,300 in June 2023.

Consumer Reports reported that the average price of a used car was around $27,000 in August 2023. Edmunds reported that the average used vehicle transaction price in Q1 2023 dipped 6.4% year-over-year but is still up 44% from 2019, and the share of used vehicles sold for under $20,000 was 30.6% in Q1 2023. The Zebra reported that average new car prices spiked by 17.2% between 2020 and 2021, and the average car payment for new vehicles rose 11.8% to $644. Therefore, the average price of a used car in the US in 2023 is around $27,000 to $27,300.

Used car prices in the US are relatively high compared to some other countries. For example, in Germany, the prices of certain car models can be significantly higher than in the US due to factors such as taxes and fees. In the US, the average price of a used car was around $27,000 in 2023. This is in contrast to some other countries where car prices can be influenced by various factors such as taxes, import duties, and government policies. Therefore, while used car prices in the US are high, the comparison to other countries is influenced by a range of economic and regulatory factors.

Chatbots

A human brain has about 86 billion neurons. Chatbots are based on neural networks, which imitate the organization of the human brain. An advanced intelligence processing unit (IPU) consists of about 60 billion transistors. A transistor is, at best, an inferior neuron. Since 2017, most modern chatbots have been transformer models, a type of machine-learning architecture.

If, at this point, I had decided to explain the details of transformer computer architecture, most readers would be putting down their coffee cups and either heading out to their garage workshops, or begging their dogs to leave the comforts of their sofas, and take them for a walk.

While chatbots appear to lie, they are simply responding to input content (often questions). In chatbotspeak, each word a chatbot learns is called a token, and the relationships chatbots construct between tokens are called vectors. Tokens and vectors are what a neural network is built from. At first, neural networks use supervised learning to understand relationships between tokens. Here, humans assign labels to the tokens. With unsupervised learning, the neural network itself assigns labels, without human assistance.
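For readers who want a concrete picture of tokens and vectors, here is a toy sketch in Python. The three-number vectors are invented purely for illustration; real chatbots learn vectors with hundreds or thousands of dimensions.

```python
from math import sqrt

# Toy "embeddings": each token maps to an invented 3-dimensional vector.
vectors = {
    "car":   [0.9, 0.1, 0.0],
    "truck": [0.8, 0.2, 0.1],
    "sheep": [0.0, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means related, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(vectors["car"], vectors["truck"]))  # close to 1: related tokens
print(cosine(vectors["car"], vectors["sheep"]))  # close to 0: unrelated
```

This is the sense in which a chatbot "knows" that cars and trucks belong together, without understanding either word.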

Chatbots are designed/ programmed to instill confidence, so that users believe they are interacting with a real person. This is the primary goal. Making truthful statements is unimportant, as long as the charade is maintained. A chatbot will do almost anything to maintain an illusion of humanness. It will invent information, if that is needed.

Today’s chatbots, such as Google’s Bard, Microsoft’s Copilot, OpenAI’s ChatGPT or Cohere’s chatbot, use transformer technology, first developed in 2017. These are online, generative AI systems that are capable of maintaining a conversation with a user in natural language.

From 1988 to 1991, I taught a college course in AI. Since I had very little faith in machine learning, and chatbots were very primitive, I concentrated on expert systems. To my mind these did the least damage.

Wikipedia tells us: In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
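That description (a knowledge base of facts and if-then rules, plus an inference engine that deduces new facts) can be sketched in a few lines of Python. This is a toy illustration with invented facts and rules, not a real expert-system shell:

```python
# Knowledge base: known facts, plus if-then rules.
facts = {"engine_cranks", "no_fuel_in_tank"}

# Each rule: (set of conditions, conclusion).
rules = [
    ({"engine_cranks", "no_fuel_in_tank"}, "fuel_problem"),
    ({"fuel_problem"}, "refill_tank"),
]

# Inference engine: forward chaining. Apply rules to known facts
# to deduce new facts, until nothing new can be added.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes "fuel_problem" and "refill_tank"
```

Notice that the second rule only fires because the first rule deduced a new fact; that chaining is what makes the inference engine more than a lookup table.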

If I were wanting to learn about AI today, I would want to start with a fun book. For me, the most enjoyable book on the subject is by Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021). Then I would try to read an AI textbook. My first introduction to the topic was: Artificial Intelligence (1983) by Elaine Rich. The most recent edition of that work is a third edition (2009) by Elaine Rich, Kevin Knight and Shivashankar B. Nair. When, about a decade ago, I took an online course in AI with an emphasis on machine learning, the textbook was by Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach. The latest edition is the 4th, from 2020. It is much more technical and difficult.

I used Prolog, a computer programming language for expert systems, in my teaching. Initially, I asked my students to map out their family relationships in a knowledge base. Think first of a family with five generations of daughters that would have to be inserted into a knowledge base: Adriana, Beatrice, Cordelia, Desdemona and Emilia. Then, one would have to make some abstract categories, such as: a mother = a female who has a child; a grandmother = a female who has a child who is either the mother of a child or the father of a child. These rules can quickly become very complex. Much of learning Prolog is learning how to create increasingly complex rules.
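The same facts and rules can be sketched in Python rather than Prolog (the parent pairs below are invented to mirror the five generations of daughters, and the rule definitions are simplified illustrations):

```python
# Facts: (parent, child) pairs, and a set of females.
parent = {("Adriana", "Beatrice"), ("Beatrice", "Cordelia"),
          ("Cordelia", "Desdemona"), ("Desdemona", "Emilia")}
female = {"Adriana", "Beatrice", "Cordelia", "Desdemona", "Emilia"}

# Rule: a mother is a female who has a child.
def is_mother(x):
    return x in female and any(p == x for p, _ in parent)

# Rule: a grandmother is a female with a child who is
# a parent (mother or father) of a child.
def is_grandmother(x):
    return x in female and any(
        p == x and any(q == c for q, _ in parent)
        for p, c in parent)

print(is_mother("Emilia"))         # False: no children recorded
print(is_grandmother("Cordelia"))  # True: Desdemona is Emilia's mother
```

In Prolog, the queries would run the other way too: given a granddaughter, the same rules can be used to find the grandmother.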

After students had learned how to systematize family relationships, they tested their knowledge base, to make sure that the results mirrored reality. A common problem, to begin with, was that grandmothers could only find granddaughters, not grandsons. Thus, students had to go back and make changes.

Once the family knowledge base was working, students could go on to work with other problem areas, of their own choosing.

People wanting to learn Prolog as a computing language for an expert system, should probably use William F. Clocksin & Christopher S. Mellish, Programming in Prolog: Using the ISO Standard, 5th edition (2003) as their textbook. This is not as out of date as its publication year would suggest.

Prolog is widely used in research and education. Yet it and other logic programming languages have not had a significant impact on AI. Part of the reason is that most Prolog applications are small and dependent on human experts providing data. Real experts are a scarce resource, and what they know expertly is limited. Thus, few applications exceed 100,000 lines of code.

Hearing Assistance

I don’t find having a hearing disability fun! Anonymous

Modern hearing aids, with rechargeable batteries and T-coils allowing use of hearing loops.

V2

This weblog post is about technical devices that will ease the difficulties a hearing disability places on a person. These devices can only ease a burden, they cannot eliminate it. Hearing loss will always result in deprivations.

This is the second version of this post, with thanks to Art Vaughan for finding 14 places that needed to be improved in various ways. There are undoubtedly more. Thank you, Art!

Modern hearing aids are computerized electroacoustic systems that transform sound as it exists in a user’s environment into sound that can be perceived, using audiometric and cognitive rules. Modern devices also utilize digital signal processing to make speech intelligible, yet comfortable, for the user. Such signal processing includes feedback management, wide dynamic range compression, directionality, frequency lowering, and noise reduction.
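Wide dynamic range compression, for example, amplifies quiet sounds more than loud ones, so that soft speech becomes audible without loud noises becoming painful. The Python sketch below uses invented parameters and is a toy illustration, not a clinical fitting formula:

```python
def wdrc_gain(input_db, threshold=45.0, ratio=3.0, max_gain=30.0):
    """Toy wide dynamic range compression.

    Below the threshold, apply full gain. Above it, each extra
    decibel of input adds only 1/ratio decibels of output.
    All parameter values are invented for illustration.
    """
    if input_db <= threshold:
        return max_gain
    return max_gain - (input_db - threshold) * (1 - 1 / ratio)

print(wdrc_gain(40))  # quiet speech: full 30 dB of gain
print(wdrc_gain(85))  # loud noise: much less gain
```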

Hearing aids are still equipped with the same basics as a century before: a microphone, an amplifier and a receiver. Hearing aid manufacturers, confusingly, use receiver to refer to the micro speaker – a sender – inside each hearing aid. The microphone senses the environment, picking up sound vibrations in the normal range of human hearing. The amplifier processes signals from the microphone, and the receiver sends the signal into the ear.

Since the audio circuit and the additional control circuits are fully digital on modern hearing aids, they can be connected to a computer. Characteristics can then be adjusted individually, provided the hearing professional knows what they are doing. It is often easier to implement circuits digitally than to construct equivalent analog circuits. A typical example is that specific frequencies can be amplified more than others, to provide better sound quality. Users can select between multiple programs that operate automatically and adaptively. Among the advantages are reduced acoustic feedback and background noise, and the use of multiple microphones to improve spatial awareness. Frequency transposition allows frequencies that a user may not hear to be reconstructed in frequency regions where hearing is better. Control signals on a hearing aid in one ear can be sent wirelessly to the control circuitry of the hearing aid in the opposite ear. This ensures that the audio in both ears is either matched directly, or contains intentional differences that mimic normal binaural hearing, preserving spatial hearing ability.
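Frequency-specific amplification can be illustrated with the audiological half-gain rule of thumb: prescribe roughly half the measured hearing loss as gain at each frequency. The audiogram values in this Python sketch are invented:

```python
# Invented audiogram: hearing loss in dB at standard test frequencies (Hz).
audiogram = {250: 20, 500: 30, 1000: 40, 2000: 55, 4000: 70}

# Half-gain rule of thumb: prescribe roughly half the loss as gain,
# so frequencies with the worst loss get the most amplification.
gains = {freq: loss / 2 for freq, loss in audiogram.items()}

for freq, gain in gains.items():
    print(f"{freq:>5} Hz: {gain:.0f} dB gain")
```

Real fitting formulas are considerably more sophisticated, but the principle is the same: the amplification curve is shaped to the individual audiogram.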

At one time streamers were popular. These were small devices, worn like a pendant, that allowed audio signals to be sent wirelessly to and from other external devices, such as music players. This capability is now built into apps found on smartphones. With the addition of speech recognition and internet capability in the mobile phone, the wearer has better control over communications, and in many more situations than with hearing aids alone. Voice activation – think Siri – has become more common and is now used everywhere.

Digital hearing aids have existed since 1984. The first practical, wearable, fully digital hearing aid was invented by Maynard Engebretson, Robert E Morley Jr. and Gerald R Popelka. Their work resulted in US Patent 4,548,082: Hearing aids, signal supplying apparatus, systems for compensating hearing deficiencies, and methods. The patent was issued 1985-10-22, and expired 2005-10-22. This patent formed the basis of all subsequent fully digital hearing aids.

A telecoil is a tiny copper wire coiled around a metal rod. It is used to capture electromagnetic waves, much like an antenna. Circuits inside a hearing aid will convert these waves into an audio signal. Telecoil listening systems bypass hearing aid microphones, to eliminate most background noise and to transmit a clear audio stream.

At one time acoustic feedback was a major issue for hearing aids wearers when using the telephone. With a telecoil, hearing aid users can tap directly into the electromagnetic signal instead of the acoustic one, preventing feedback.

The last significant change has been the use of rechargeable batteries. These are charged by placing the hearing aids in their case, and then attaching the case to a power source, using a connector. In Europe, the use of USB-C connectors on new devices, including hearing aid cases, will be required by the end of 2024.

People with moderate-to-profound sensorineural hearing loss are often provided with a cochlear implant (CI), a surgically implanted neuroprosthesis. It bypasses acoustic hearing by directly stimulating the auditory nerve electrically. Therapy teaches CI users how to interpret the signals as speech, so that they understand speech better in both quiet and noisy environments. CIs can be equipped with telecoils.

In Australia, Europe, North America and many other places in the world, hearing loops have been mandated for use in spaces used by the public. Some jurisdictions initially allowed other, cheaper technologies. Unfortunately, this meant that hearing loops became an afterthought in disability infrastructure.

In contrast to other approaches, some of the advantages of hearing loops combined with telecoil-equipped hearing aids are: there is no maximum number of users that can be served at a venue; they are simple to use; they improve the signal-to-noise ratio (SNR) and signal quality; they reduce background noise and reverberation; and they allow users to retain their dignity and privacy, by not requiring them to remove their hearing aids to use other forms of hearing augmentation. Despite this, there can be legitimate complaints that hearing loops are not designed for listening to music, because of distortion at higher frequencies.

Joseph Lazarevich Poliakoff (1873 – 1959) was a Ukrainian-born British telephone and sound engineer and inventor, particularly of hearing aids. He founded the Multitone Electric Company in 1931. It produced its first hearing aid, a desk-top model, in 1933. In 1937, Poliakoff was issued a patent for an induction-loop hearing assistance system. It remained valid until 1958, when it entered the public domain.

When a telecoil is activated, it senses magnetic fields, replacing (or sometimes supplementing) the microphone, which senses sounds. Telecoils can be used with hearing loops, individual amplifiers and smartphones.

In Norway, and undoubtedly many other places, no landlines exist, since all phones are connected to the mobile network. Telephone providers can supply a device with limited functionality that looks like a landline. It is permanently plugged into an electric circuit. It also costs about twice as much as a smartphone. There are also dumbphones that are specifically designed for elderly users. Acoustic coupling settings allow a phone to be used normally, with paired hearing aid microphones automatically picking up sound coming from the phone. Cellphones with an M rating of 3 or 4 have this capability. Bluetooth is also used for communication between hearing aids and a phone. This requires the two devices to be paired.

Some phones have telecoils built in. These can be set to send magnetic signals to hearing aids without using microphones. This can improve the signal-to-noise ratio while eliminating feedback distortions. Phones with a T rating of 3 or 4 have this capability.

A hearing loop = audio induction loop = audio-frequency induction loop (AFIL) is assistive listening technology that uses hearing aid telecoils.

We have a hearing loop at Cliff Cottage: an electrical cable with a pair of 0.75 mm2 = 19 AWG wires. This type of cable is typically used for lamp extension cords. The hearing loop was placed around the perimeter of our living room, in the ceiling. It is almost 30 m = 100 feet long. The loop starts and ends at a wall connector, close to where the hearing loop amplifier is located. A hearing aid can pick up audio inside a hearing loop, as well as up to several meters outside of it.

The loop was first installed in 2009, by the local branch of NAV = Hub (literally), the Norwegian super-agency that, in addition to far too many other duties, provides assistive devices to residents. Before fitting new ceiling tiles in 2021, I took the loop down, and reinstalled it after the tiles were in place. Later, I discovered that when such a system is installed in a new house, it is hidden in the floor. Unfortunately, I was unaware of this at the time; otherwise I could have placed it in the floor when I replaced the living room flooring in 2022.

On the left is the original Univox DLS-380 hearing loop amplifier with wiring attached. To the right is a S/PDIF splitter/converter. A S/PDIF Toslink optical audio stream is taken from a television and sent to the splitter. There the signal is split into two analogue streams of stereo signals. One stream goes to an active bass speaker, that plays low frequency sounds. Mid-frequency and high-frequency sounds are sent in stereo to two (L & R) bookshelf speakers.

Working forward from a television’s S/PDIF Toslink digital optical connectors, a stereo digital audio stream was sent to a S/PDIF splitter, which sent two RCA analogue signal streams onward. One stream was sent to the original Univox DLS-380 hearing loop amplifier. That amplifier sent the audio to the loop system. In addition, another stream was sent using RCA connectors to an active bass speaker, that plays low frequency sounds. Mid-frequency and high-frequency sounds were then sent in stereo to two (L & R) bookshelf speakers.

The current system has the same function, but with details that differ. The CLS-5 telecoil amplifier can operate with a wide range of operating voltages. In residential/ institutional/ commercial applications, it will most likely use 110-240 VAC. However, it can also be fitted into buses and recreational vehicles as well as boats, where it most likely will use 12-24 VDC. It comes with optical (digital) and coaxial (digital and analogue) inputs. It also has a setscrew that allows for adjustment of time delay between the audio and video signals (TV Sync). It provides easy connections to modern (and not so modern) flat screen plasma/ LED/ LCD TVs.

When these components were replaced at the end of 2023, none of the replacement units came with RCA connectors. HDMI, S/PDIF and USB-C audio connectors were common to both the (new) smart television and the sound bar. While the new Univox CLS-5 amplifier works with several types of connectors, the installer said that S/PDIF Toslink optical connectors created the least challenges. So these were used.

Once again a single digital audio stream was sent from the television to a splitter. From the splitter, two identical audio streams emerged. One was sent to the hearing loop amplifier, while the second audio stream was sent to the sound bar. Both used digital Toslink optical connectors. The splitter is powered with mains electricity.

This cable generates an electromagnetic field throughout the looped space which can be picked up by: 1) telecoil-equipped hearing aids, 2) cochlear implant (CI) processors, and 3) specialized hand-held hearing loop receivers for individuals without telecoil-compatible hearing aids. So while this technology was installed specifically for one person using one technology, it is equally useful for other people, most likely visitors, using the other hearing assistance technologies.

In order to transmit the clearest possible sound source to a hearing-impaired listener, free of distracting noise, a hearing loop carries baseband audio-frequency signals without a carrier signal.

Installing the cable is not difficult; almost anyone can do it. However, one should read the amplifier documentation, and understand the process, before doing so. I am unsure of the reason, but the user’s manual states that one should twist the wires 180 degrees at some point. In this way, a wire that was on the right at the beginning will be on the left once the wire is turned, and vice-versa.

The Hearing Loss Association of America (HLAA) has commented on hearing loops:

Loop technology was pioneered in Europe where it is widely embraced and highly successful. When hearing accessibility in the United States was made a requirement of law in 1990 under the Americans with Disabilities Act, no distinction was made between loop systems and less expensive FM or IR systems. Although the ADA did require that systems provide access to “effective communication,” people with severe to profound hearing loss—for whom FM and IR systems often fall short—were slow to mobilize advocacy for hearing loops.

The ADA Standard for Accessible Design was updated in 2010 requiring that for renovation and new construction that at least 25% of receivers provided are hearing aid compatible—meaning that users do not have to remove their hearing aids to use the system. This dramatically raised the interest in and availability of hearing loops because loops are the only assistive listening systems that connect directly to telecoil-enabled hearing devices. (End of HLAA comment)

Tinnitus

Tinnitus is the perception of sounds that are not the result of an external source. There is no cure for this, but there are ways to manage the condition.

Although I was diagnosed with tinnitus in 1998, at about age 50, I have never considered using hearing aids as a treatment. I have had one counseling session. However, as a teacher I regularly attended seminars with a focus on assorted disabilities, including hearing.

My approach to self-medication has been to engage in activities that require concentration. This pushes tinnitus into the background. At other times I rely on low-intensity music. Over the past 25 years I have tried various genres. As I approached 60, I reduced – almost eliminated – listening to classical music and started listening to less melodic music, which is better able to mask my tinnitus. When tinnitus is at its worst, my music choice is the trashiest metal. My understanding is that everyone is different.

Many hearing aids offer some form of tinnitus relief. Often this is an app delivered to the hearing aids from a phone. If this is the case, it is important that the phone and hearing aids are compatible with each other. Sometimes, it is a computer program inside the hearing aid that has to be activated. It is also important to know if this program can be altered to reproduce the specific sounds needed by the user. Sounds that are tailored to a particular user are more effective for therapy than generic sounds.

Cognitive behavioural therapy (CBT) focuses on thoughts, beliefs and attitudes, to improve emotional responses and help a person develop strategies for coping with a current problem. It was originally developed to treat depression, but its use has been expanded to include the treatment of other conditions, such as tinnitus.

MindEar is an app that provides CBT through a chatbot. The goal is to empower people so they regain more control over their lives. Often the use of such an app is combined with counseling with a clinical psychologist, either in person or online.

People with tinnitus often enter negative thought cycles. Many people need some form of counselling to help them cope with it.

Hearing aid styles

There are several hearing aid styles. Small hearing aids may be less visible, but they may lack the power to improve hearing. The smallest are referred to as completely-in-the-canal (CIC) or mini CIC. A CIC aid is molded to fit inside the ear canal, but is only useful for mild to moderate hearing loss. One of its advantages is that it is less likely to pick up wind noise. Yet, because of its small size, it uses very small batteries, which have a shorter life and can be difficult to install and remove. CIC aids often lack features, such as volume control or directional microphones. Earwax can clog the speaker.

An in-the-canal (ITC) hearing aid fits partly in the ear canal. Again, it is for mild to moderate hearing loss and is less visible than larger styles. It has many of the same problems as CIC hearing aids.

An in-the-ear (ITE) hearing aid is custom made either with a bowl-shaped area occupying the outer ear (full shell) or one that fills the lower part (half shell). Both are for people with mild to severe hearing loss. They are available with two directional microphones, volume control, and longer battery life, including rechargeable batteries. Again, earwax may clog the speaker and wind noise may be present. It is more visible, but may be easier to handle.

Behind-the-ear (BTE) hearing aids rest behind the ear. A tube connects the hearing aid to a custom earpiece = ear mold that fits into the ear canal. This is the generic style appropriate for everyone, with any hearing condition. Since this is the largest type of hearing aid, it can fit in more electronics and batteries, has directional microphones, increases amplification, picks up more wind noise, and typically uses rechargeable batteries.

Receiver-in-canal (RIC) and receiver-in-the-ear (RITE) hearing aids are similar to BTE hearing aids, but with the speaker = receiver in the ear canal, connected to the hearing aid by a thin wire.

People with better low-frequency hearing and mild to moderate high-frequency hearing loss may want to use an open-fit hearing aid with an open dome in the ear.

People with hearing disabilities should think through the capabilities they want before deciding on a style. While younger people, especially, may want their hearing aids to be invisible, smallness introduces limitations. Growing one’s hair longer may be a better approach to concealing hearing aids.

Features to consider

Some features may improve one’s ability to hear in specific situations. All hearing aids have some amount of noise reduction available. The amount of noise reduction varies. Some also offer wind noise reduction.

Directional microphones are aligned on the hearing aid to improve pickup of sounds coming from the front, while reducing sounds from behind or from the side. These are particularly useful in a noisy environment.

Rechargeable batteries can make life easier by allowing batteries to be charged, rather than replaced.

Telecoils make it easier to hear when talking on a telecoil-compatible telephone or listening in an environment served with an induction hearing loop.

Another approach is wireless Bluetooth connectivity, achieved by pairing hearing aids with cellphones, music players, computers and televisions. Sometimes an intermediary device = a streamer is used to pick up the signal and send it to the hearing aids.

Some hearing aids are equipped with a remote control that adjusts features without touching the hearing aid. Increasingly, a cellphone app is used for this.

Direct audio input allows audio from a television, computer or music device to be connected with a cord.

Some hearing aids can store several preprogrammed settings for various listening needs and environments. Trish has four settings on her hearing aids: normal, restaurant, hearing loop and silence.

Since most people use two hearing aids, these can be synchronized = function together so that adjustments made to one hearing aid will also apply to the other aid, simplifying control.

It takes time to adjust to a hearing aid. There is evidence showing that listening skills improve gradually as one becomes accustomed to using amplification. One’s own voice sounds different when wearing a hearing aid.

Studies

Currently, I am spending considerable time daily trying to understand hearing and hearing loss. My aim is to provide people in my immediate family (possibly extending outwards to friends) with better advice than they can obtain from profit-motivated hearing professionals.

Modern hearing aids have to be configured to match the hearing loss, physical features, and life situation of the user. The starting point is an audiogram. So one of my first goals is to be able to produce these. My next goal is to be able to fit a hearing aid, so I am also learning more in general about audiology and audioprosthology.
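An audiogram records hearing thresholds, in dB HL, at a set of test frequencies. As a minimal sketch of how such thresholds can be summarized, the following uses the common four-frequency pure-tone average and one widely used severity scale; the example thresholds are illustrative assumptions, not data from this post.

```python
# Sketch: summarizing audiogram thresholds with a pure-tone average (PTA).
# Thresholds are in dB HL. The four-frequency PTA and the severity
# cut-offs below are common audiological conventions; the example
# audiogram is hypothetical.

def pure_tone_average(thresholds):
    """Average the thresholds (dB HL) at 500, 1000, 2000 and 4000 Hz."""
    freqs = (500, 1000, 2000, 4000)
    return sum(thresholds[f] for f in freqs) / len(freqs)

def severity(pta):
    """Map a PTA to a rough severity label (one common scale)."""
    if pta <= 25:
        return "normal"
    if pta <= 40:
        return "mild"
    if pta <= 55:
        return "moderate"
    if pta <= 70:
        return "moderately severe"
    if pta <= 90:
        return "severe"
    return "profound"

# Hypothetical audiogram for one ear: frequency (Hz) -> threshold (dB HL)
ear = {250: 20, 500: 25, 1000: 30, 2000: 45, 4000: 60, 8000: 70}
pta = pure_tone_average(ear)
print(pta, severity(pta))  # 40.0 mild
```

A real fitting uses the full threshold curve, not just the average, but the PTA is a convenient single-number summary when discussing severity.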

The reason for this is that my potential first client is dissatisfied with the fitting and other services she has received from professionals. I have no intentions of working with anything other than digital hearing aids. Thus, I will not be working with anything resembling an osseointegrated auditory prosthesis = bone-anchored hearing aid or cochlear implant.

I am particularly interested in hearing other people’s experiences with hearing assistance. Those who want, can post a comment. Those who prefer to remain anonymous, can send me an email.

Sound bars

A television with sound bar (below), subwoofer (to left) and assorted controlling devices on the bench, for sound bar sound, TV operation, and input of text (from left to right). Jellyfin, connected to our server, is our primary source of content. We have also used Netflix. I have heard Now and Then by the Beatles, but on YouTube and another device, not on this television or from Netflix.

The first words written in this post were sound bars, in the title. As the text body emerged and grew, it became obvious that these devices had to be put in context. Yes, sound bars are important, but they are only one solution of many. Thus, without changing the title, I have allowed this post to present many different solutions to providing or preventing sound from enhancing or detracting from life. Sound bars are one of many appropriate solutions to the challenges presented by sound, for people with relatively normal hearing. For people with hearing disabilities another weblog post is being prepared about hearing loops, and other devices to help people hear better.

In my youth, the brother of one of my friends had made a reputation for himself, producing radio commercials for the Seattle WA area. I learned a lot about sound visiting this brother’s studio in the late 1970s. He had started his career as a DJ, where the goal was to have the largest listening audience, for his station’s audience segment. To do this he optimized his programming with a playback system using a single 3″ (75 mm) speaker, because that was the size and type of speaker found on most radios at the time. He wanted to optimize the sound for the average listener. Competitors, he said, most often used expensive, professional playback equipment. They fine-tuned their music and commercials to play optimally on audiophile equipment. He claimed that what that meant in reality, was that it sounded bad on the equipment most listeners used.

This event impacted me. I appreciate sound. I enjoy listening to music, the human voice and soundscapes, especially those in nature, but also industrial and artificial soundscapes, as long as I can reduce the sound levels. I have my hearing limitations, related to tinnitus, where listening to low decibel (dB) sound levels of music can be regarded as therapy. Almost by definition, I can never become an audiophile.

I am never quite sure where real audiophiles can be found. My experience with self-proclaimed audiophiles is that they are obsessed with discussing sound rather than listening to it. It is very likely that I have a prejudiced view of the phenomenon. That will not prevent me from discussing this topic further.

It is often claimed that audiophiles require sounds to be curated. First, every audiophile has a preferred medium: most often vinyl, but sometimes reel-to-reel tape or even CDs. Second, sounds need to be optimized for the listening environment, using equalizers, so that the system produces perfect sound intensity (dB) characteristics at all frequencies humans can hear, typically 20 Hz – 20 kHz. Third, those with hearing imperfections will not hear these perfect sounds as intended. Of course, the sound system could be set up for a particular individual, but that would mean that everyone else would have a sub-optimal experience. Fourth, human speech in the soundscape, not to mention any other sound-producing activities, has to be avoided. That is because your average audiophile uses equipment that has been designed for use in anechoic chambers = ultra quiet rooms, that are without echoes and are otherwise non-reflective. Indeed, many anechoic chambers are designed not just to stop sound reflections, but also electromagnetic waves. These rooms are often insulated to prevent energy from entering. This means a person (or more often, a detector, possibly a microphone) hears sounds directly. This rule is the one most often broken.

What I have experienced is that modest equipment, one step above entry level (read: cheap) is good enough for most people, including myself. As people age, and their hearing worsens, it is more important to avoid the extremes.

Soundscapes

People encounter many different soundscapes in their daily lives. When people enter a different soundscape, sometimes they have to use devices that can enhance or diminish sounds encountered. A soundbar is only appropriate in a limited number of these.

All of our family members have been issued their own -34 dB ear muffs/ defenders. We also have two additional pairs that guests can borrow. In addition, disposable ear plugs are available. These are to be worn in the kitchen, workshop and even out on the lawn, when noisy (> 80 dB) machines are used.

Ear muffs offer more protection at higher frequencies than ear plugs, are designed to fit most people, and can be put on quickly and with little effort. Usage can be easily monitored. Their bright yellow colour means that they cannot be easily misplaced or lost. They can be worn with minor ear infections. When guests return them, they are cleaned so they are always available for others.
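The protective effect is, to a first approximation, simple decibel subtraction. A sketch, assuming the rated 34 dB attenuation applies uniformly (real attenuation varies with frequency and fit):

```python
# Sketch: estimated sound level at the ear when wearing ear muffs.
# Assumes the rated 34 dB attenuation applies uniformly across
# frequencies, a simplification; real muffs attenuate high
# frequencies more than low ones.

def level_at_ear(source_db, attenuation_db=34):
    return source_db - attenuation_db

# A hypothetical 95 dB lawn mower heard through -34 dB muffs:
print(level_at_ear(95))  # 61 dB, well below the 80 dB threshold above
```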

In other soundscapes, such as bedrooms or office spaces, headsets are frequently used, so that sounds produced there do not infiltrate other (typically adjacent) soundscapes. Needs vary. Many people will prefer in-ear models, others over-the-ear. The in-ear models will be wireless, as will some over-the-ear models. Some want to allow spoken voices in the room to penetrate their headset space, others want to avoid this. It is not simply a matter of taste. Preferences may be tempered by employers/ schools providing free equipment.

All residents who want them have been issued their own headsets. The difference from headphones, which only provide loudspeakers, is that a headset also contains a microphone. Ours are various Logitech G Pro models. Most guests staying with us bring their own ear pods, but we can find suitable headsets if these are preferred. This allows people to engage in numerous activities, such as listening to podcasts and music, or indulging in gaming. Headsets can also be used for work or study.

Children’s bedrooms and adult’s home offices now need more computing equipment than they used before the pandemic, such as video cameras. In addition, people need faster internet speeds. Work and study computers supplement leisure oriented (read: gaming) computers. Many people prefer to use laptops, others opt for desktop machines. Younger eyes don’t seem to object to smaller screens. Older ones want larger screens, with the modern ideal being a 34″ ultra-wide in 21:9 or 32:9 format.

Over the 2020s, housing will continue to be modified to provide private work and study spaces. An important part of this will be sound insulation in various forms. People who have the opportunity to do so will also wire their residences with Ethernet cables, rather than relying on Wi-Fi. At Cliff Cottage all rooms have wired Ethernet coverage.

The pandemic altered work life, for many people. The office is no longer of supreme importance. Open office landscapes and open seating are disliked, not just for their disease spreading ability, but because they are inappropriate environments for accomplishing work, because of the numerous disruptions they promote. People appear to be more productive when they work from home. Thus, while there has been some rebound to the office after the pandemic, my prophecy is that this will decline in the coming years, stopping at about one day a week, mainly for meetings, by mid century.

Soundscape Case 1: At the local cooperative store, where most of our groceries are purchased, there has been a change in the appearance of staff. The store is open from 07:00 to 23:00. The retired population of Inderøy usually shops there between about 09:00 and 11:00, sometimes later, but usually avoiding the rush of workers heading to and from work, as well as the lunch hours when school children invade the store. With the retired population present, most of the staff are busy stocking shelves, but simultaneously listening to podcasts or music fed to their ear pods from their smartphones. Their listening does not impact anyone negatively, including themselves, provided they are listening at moderate sound levels.

Soundscape Case 2: At the opposite end of the scale are the råners = boars (literal translation from Norwegian), referring to people (mostly males) who have cruising roads and streets as their primary hobby. They typically transform their rear-wheel drive cars into high-powered (typically 1 000 W) subwoofers. Their presence can be felt before they can be heard, or seen. The liberty råners take negatively impacts others.

Sound bars

Sound bars can be disruptive, and incompatible with the work and study habits of individuals. They can be enjoyed in selective common areas, most often a living room, that can provide an audio-visual experience that can be shared by a group of people, typically referred to as a family. In offices, sound bars are also suitable when used with large screens or projectors for meeting use.

In the 1950s and 1960s, the number of audio and video devices in a living room increased. There were turntables for playing LPs and other types of records, and tape decks of assorted formats for playing and recording tapes, followed by CDs in the 1980s. These were connected to amplifiers and speakers. For video, there were other developments such as VHS cassettes, laserdisks, DVDs and Blu-ray disks, along with dedicated players. Content also took up large amounts of wall space.

In the 21st century, machines for playing audio and video content, as well as the content itself, started to disappear. Content was stored in files, which could be located in a cloud = someone else’s server, and/ or on one’s own devices, including servers. The most omnipresent device today is a hand-held device = smartphone.

The dominant element in a living room is now a large flat-screen television. Size varies from about 32″ (80 cm) to 80″ (over 200 cm). On 2023-11-27, we entered the modern age, when an Andersson QLED5031UHDA model from Net-on-Net, with a 50″ 4K (3840 x 2160) QLED screen, was delivered. It was not a sophisticated product, but gudenuf for our needs.

Smart televisions have become hubs, integrating components and services. Having owned one for less than two months, I have not mastered it, and have not started to modify it yet. This will come. Yet, there have been changes. Previously, a home theatre computer functioned as a hub. It took content from the internet as well as our server, displaying video on a 40″ (100 cm) screen. Audio was sent to an active speaker system with a built-in bass unit, which sent mid- and high-range audio to two bookshelf speakers. These components have all been eliminated.

The audio challenge with flat-screen televisions, in contrast to their cathode ray tube (CRT) predecessors, is that they lack volume, as in three-dimensional space, to produce realistic sounds. Thus, television manufacturers have encouraged the use of sound bars and subwoofers to enhance sound.

A sound bar is a linear sound system, with built-in amplifiers and speakers positioned in a long but narrow and low box below a television screen, often fixed to a wall, but sometimes placed on a low bench. It is often accompanied by a separate box containing a subwoofer speaker, which produces the bass notes.

Suggestion: Before acquiring a sound bar, measure the walls or surfaces where it could be located. This will provide information about the maximum length (often called width), height and depth available. Ensure that there is space on all sides of the soundbar. Speakers need volumes of air, to move sound into a room.

We have acquired an Andersson SRS 2.4 soundbar with subwoofer and remote control. It was part of a package deal with the flat screen television. It also supports Dolby Atmos surround sound technology, although we are not using it. Many people add satellite speakers for surround sound effects, which simulate three-dimensional sound. This topic will not be discussed any further here, but may be a topic for a future weblog post.

Cables connecting audio-visual components are important because their quality can significantly impact enjoyment of a product. When not in use, we store cables of the same type in separate plastic storage bags, with important data about them written on the bag. Increasingly we are marking each cable, and sometimes each end (if the connectors differ) on small cards with length, type, and other important characteristics written on it. Transparent tape is then used to attach the card to the cable, and to cover both sides of the card.

Two of the most common connectors/ interfaces for digital audio are S/PDIF = SPDIF = Sony/Philips Digital Interface, and HDMI = High-Definition Multimedia Interface. Each uses its own specific type of cable, with S/PDIF cable being considerably thinner than HDMI cable.

SPDIF transmits only digital audio, using two different approaches: coaxial and optical. We have avoided coaxial cables and use Toslink = Toshiba link, a standardized optical cable system. It is preferred because it uses fiber optic cables, which are immune to electrical interference; it provides superior isolation and reduces the risk of ground loops; it is better for long-distance transmission due to its excellent signal integrity; it supports multi-channel audio formats like Dolby Digital and DTS.

If two pieces of equipment don’t share the same connector type, they can be joined using converters.

Theoretically, the best sound bar audio input option is HDMI (ARC), where ARC = audio return channel. ARC sends audio from the television to the sound bar. HDMI also transmits digital video. On our system we started off using HDMI (ARC), but found we got better results using Toslink.

Sound bars typically allow two other types of input: USB-C and 3.5 mm jacks. They also support various types of music files, including high resolution streams of up to 24-bit/192 kHz.
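As a sanity check on what such a high resolution stream implies in bandwidth terms, the raw bit rate of uncompressed stereo PCM is straightforward arithmetic:

```python
# Raw bit rate of an uncompressed stereo PCM stream at 24-bit/192 kHz.
bits_per_sample = 24
sample_rate_hz = 192_000
channels = 2

bit_rate = bits_per_sample * sample_rate_hz * channels
print(bit_rate)              # 9216000 bits/s
print(bit_rate / 1_000_000)  # about 9.2 Mbit/s
```

Compressed formats (FLAC, Dolby Digital) carry the same audio in fewer bits, but this uncompressed figure shows why such streams need a digital interface rather than a 3.5 mm analogue jack.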

Digital audio provides higher fidelity than analogue audio because noise picked up along the signal path does not accumulate in the signal. Despite the claims of some hi-fi enthusiasts, digital is better than analogue.

CD, DVD, Blu-ray and other players are quickly fading into the past, except when enthusiasts attempt to reclaim outmoded technology. In some parts of the world, antennas will be used to access some content. Increasingly, more of it will come in through internet connections. In our municipality, antennas, including satellite dishes, are antiques from the past.

Most people will operate their audio-visual equipment with a remote control device or, increasingly, just a smartphone. Chromecast, in particular, can be used to transfer content from phones, tablets, laptops or similar devices to a television, and allow it to be played. Sometimes a keyboard can be useful for inputting text. We have a Logitech K400 keyboard for this purpose.

As with any new system, it can take time to learn operational procedures. Sometimes instruction manuals assume people know how to use the equipment! Initially, we write operating procedure proposals on a sheet of paper which can be modified as we learn more about system operation. After some time, a clear set of instructions emerges. We then transfer this information to the front of a book used to record viewing information.

Yes, we are so obsessive that we record data about each viewing of a documentary, television episode or movie, including series and episode number and its names, along with the first name initial of viewers present (mostly family members), and the date.

We store paper copies of all instruction manuals in a 50 litre plastic storage box. They are available if we need to access information. Increasingly, we are storing digital PDF files of instructions on our server, so that they are accessible to anyone with a computer.

Fond memories

If you are over the age of 20, your days of being able to hear high-frequency sounds have ended. I remember a class that had acquired a device that could produce loud high-frequency sounds. When they turned it on in the classroom, the students could all hear the noise it produced, but I was totally oblivious to it. Only their laughter gave it away.

Hydrogen myths

The Mid-Continental Rift System, where white hydrogen could be produced. Source: United States Geological Survey, 2023-12-05

Hydrogen is an important element in a number of compounds, including water (H2O), methane (CH4) and ammonia (NH3). In my opinion it is not an effective energy carrier. The accessible energy is stored in electrons.

One hypothetical question about hydrogen is to ask if pipelines should be built to move H2 to other places, or if it is better to move electrons using high-voltage direct current (HVDC) transmission. It is hypothetical because no energy analyst without a vested interest in pipelines would bother to ask the question. They know the answer is HVDC.

One avoids a lot of confusion, if the starting point for understanding energy transmission is exergy. Two (slightly modified) paragraphs from Wikipedia can provide a foundation, even if it takes time for one’s brain to understand the concept: Exergy, often referred to as available energy or useful work potential, is a fundamental concept in the field of thermodynamics and engineering. It plays a crucial role in understanding and quantifying the quality of energy within a system and its potential to perform useful work. Exergy analysis has widespread applications in various fields, including energy engineering, environmental science and industrial processes.

From a scientific and engineering perspective, second-law [of thermodynamics] based exergy analysis is valuable because it provides a number of benefits over energy analysis alone. These benefits include the basis for determining energy quality (or exergy content), enhancing the understanding of fundamental physical phenomena, and improving design, performance evaluation and optimization efforts. In thermodynamics, the exergy of a system is the maximum useful work that can be produced as the system is brought into equilibrium with its environment by an ideal process. The specification of an ‘ideal process’ allows the determination of ‘maximum work’ production. From a conceptual perspective, exergy is the ideal potential of a system to do work or cause a change as it achieves equilibrium with its environment. Exergy is also known as ‘availability’. Exergy is non-zero when there is dis-equilibrium between the system and its environment, and exergy is zero when equilibrium is established (the state of maximum entropy for the system plus its environment).

With that out of the way, this post will look at four academic papers. The two earliest papers considered here got numerous facts wrong; the other two got some things right. The first two were: Relative costs of transporting electrical and chemical energy (2018) by Saadi et al., and Cost of long-distance energy transmission by different carriers (2021) by DeSantis et al.

One assumption, in both papers, is the existence of a large source of green hydrogen molecules available at a single location suitable for putting into a 1 000 km pipeline. Hydrogen plants close to existing pipelines don’t use green hydrogen, but grey hydrogen based on centralized steam reformation of natural gas. For a short explanation of hydrogen’s colours, scroll down to the end of this post.

The papers assume green hydrogen is inexpensive to manufacture: $1 per kilogram in the 2018 study; $2-$4 per kilogram in the 2021 study. Yet, green hydrogen uses 50 – 60 kWh of firmed electricity per kilogram to manufacture. Firmed electricity costs about $0.10 per kWh, unless one is using massive legacy hydroelectricity facilities found in Norway, Quebec and British Columbia, where $0.05-$0.06 per kWh can be found. Firmed is a term used to describe energy that can be constantly provided at a needed rate and at a fixed cost.

A better estimate of the manufacturing cost of hydrogen is $5 to $10 per kilogram.
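The arithmetic behind that estimate can be checked from the figures above. A sketch of the electricity cost alone, ignoring capital, water and all other costs:

```python
# Electricity cost alone of making 1 kg of green hydrogen, using the
# figures quoted above (50-60 kWh/kg, ~$0.10/kWh firmed electricity).
# Capital, water and other costs are excluded, so real totals are higher.

kwh_per_kg = (50, 60)      # electrolysis energy per kg of H2
price_per_kwh = 0.10       # USD per kWh, typical firmed electricity

low = kwh_per_kg[0] * price_per_kwh
high = kwh_per_kg[1] * price_per_kwh
print(low, high)  # 5.0 6.0 -> $5-$6/kg before any other costs
```

Electricity alone already exceeds the $1-$4 per kilogram assumed in the two papers, which is why $5 to $10 per kilogram is the more credible range.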

There are also incorrect assumptions about the capacity of HVDC transmission lines, underestimating it. In 2019, a Chinese 1.1 MV = 1 100 kV link was completed. It traverses a distance exceeding 3.3 Mm = 3 300 km = 2 100 miles with a power capacity of 12 GW.

At the end of every line (pipe, as well as transmission), energy has to be distributed onward. Electricity is part of a high efficiency (read: cheap) distribution grid. Hydrogen is expensive to distribute, often relying on trucks with specially made pressurized tanks. While grey hydrogen costs $1-$2 per kilogram to manufacture, it currently sells for €15 to €25/ kg to consumers in Europe and $30/ kg to consumers in California. Much of this difference in price is due to distribution costs.

Both papers ignore that hydrogen as an energy carrier has to be used. Molecules are less efficient than electrons. Fuel cells are only about 50% efficient in turning H2 into electricity, while electricity stored in batteries is much more efficient.

Burning hydrogen for heat under 200 °C = 392 °F competes directly with heat pumps, which are three times as efficient on average, and in industrial settings usually even more efficient. This means there is no need for H2 for residential or commercial heating.

The vast majority of industrial processes working above 200 °C are electrifiable, typically with efficiency gains over using burnable fuels. The hydrogen pipeline studies ignore this inconvenient truth.

A Norwegian DNV (previously Det Norske Veritas) study, Specification of a European Offshore Hydrogen Backbone (2023), was produced for The European Association of pipelines. It is slightly more realistic than the 2018 and 2021 studies, but still contains major flaws. It contends that green hydrogen can be delivered to the end of a transmission line for €3.21 per kilogram in 2050, and that this is cheaper than transmitting electrons. It assumes more expensive HVAC transmission rather than HVDC transmission, and exaggerates transmission losses compared to hydrogen pipeline losses. It also overstates transmission line operating costs, despite the fact that HVDC has no moving parts while pipelines have many, such as pumps.

Despite all of the unrealistic benefits given to H2, it is still ten times more expensive to transport than liquefied natural gas (LNG). Michael Barnard, commenting on the report in CleanTechnica, contends that the report “was structured to meet the need of the clients to pretend that manufacturing molecules of hydrogen offshore at wind farms and then constructing pipelines all the way to major demand centers was the most cost effective model, and the DNV analysts contorted numbers and the space time continuum until the client’s needs were satisfied.”

Hydrogen pipelines vs. HVDC lines: Should we transfer green molecules or electrons? (2023) by the Oxford Institute for Energy Studies is better. It starts with the same amount of electricity, 9.6 GWh, at the beginning of both the pipeline and the HVDC line. It then applies the electrolysis facility’s energy losses before putting hydrogen into the pipeline, and applies the much smaller efficiency losses to electricity before it enters transmission.

They calculate the energy delivered at the end of the pipeline to be 1.15 – 5.71 GWh, with an average value of about 3.4 GWh, or roughly a 35% delivery rate. At the end of the transmission line, there is 7.872 – 8.832 GWh, with an average value of 8.35 GWh, or an 87% delivery rate.

Yes, there is considerably more energy at the end of the transmission line. The hydrogen pathway delivers only 40% of the energy compared to the electron pathway. In economic terms, that means that energy from H2 will cost 2.5 times what it costs using electricity.
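The 2.5 ratio follows directly from the study’s average figures:

```python
# Energy delivered from the same 9.6 GWh input, using the Oxford
# study's average values quoted above.

input_gwh = 9.6
pipeline_avg = 3.4   # GWh delivered via the hydrogen pipeline (average)
hvdc_avg = 8.35      # GWh delivered via HVDC transmission (average)

print(round(pipeline_avg / input_gwh, 2))  # 0.35 delivery rate, hydrogen
print(round(hvdc_avg / input_gwh, 2))      # 0.87 delivery rate, HVDC
print(round(hvdc_avg / pipeline_avg, 2))   # 2.46, i.e. H2 costs ~2.5x
```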

Much of the rest of the report can be filed under magic/ apologetics for the H2 industry, with unsupported contentions, and unrealistic costs: too little for hydrogen, but too big for electricity.

Other weaknesses in the report include failure to explain how one can get firmed electricity in sufficient quantities to an offshore electrolysis facility economically, or to take into account construction costs. In addition, electrolysis is simplified, missing several components such as a hydrogen dehumidifier.

At the other end of the pipeline, exergy is not taken into account. They treat the energy delivered as molecules and the energy delivered as electrons as equal. Fuel cells are only 50% efficient at turning hydrogen into electricity. This means that using hydrogen requires about 5 times as much electricity, in contrast to using electricity directly or with intermediate storage in batteries or capacitors. To produce heat < 200 °C, 7.5 times as much electricity is needed. For heat > 200 °C, an electric option is almost always more efficient.
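The 5 and 7.5 multipliers can be reconstructed from the earlier figures: roughly 2.5 times as much energy lost getting it there (pipeline vs. HVDC), a further factor of 2 in the fuel cell, and a factor of 3 heat pump advantage for low-temperature heat. A sketch using those round numbers, with burning H2 for heat assumed close to 100% efficient:

```python
# Reconstructing the 5x and 7.5x electricity multipliers from the
# round figures quoted above. These are illustrative assumptions,
# not values taken directly from the Oxford study.

transmission_penalty = 2.5   # H2 pipeline vs HVDC, from the averages
fuel_cell_efficiency = 0.5   # H2 -> electricity
heat_pump_cop = 3.0          # heat delivered per unit of electricity

# Electricity needed via the hydrogen path, per unit of electricity
# delivered at the point of use:
electricity_multiplier = transmission_penalty / fuel_cell_efficiency
print(electricity_multiplier)  # 5.0

# For heat under 200 C, burning H2 (~1:1 into heat) competes with a
# COP-3 heat pump fed directly by transmitted electricity:
heat_multiplier = transmission_penalty * heat_pump_cop
print(heat_multiplier)  # 7.5
```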

I will not state that there is no market for hydrogen for energy; the aviation industry, in particular, may provide one. However, there are no hydrogen pipelines planned (that I can find), although some designs exist. There are many HVDC transmission lines approved, under construction and in operation.

There is a market for H2 as a feedstock/ commodity. In the United States the most economical way of producing it is to pump water down to hot iron-rich rock to extract white hydrogen in the Mid-Continental Rift System (MRS) = Keweenawan Rift. This can be done at an industrial scale. The rift is about 2 000 km = 1 200 miles long, in the center of the North American continent, in the south-central part of the North American plate. It formed when the continent’s core = the North American craton began to split apart during the Mesoproterozoic era, about 1.1 billion years ago. When the rift failed, it left behind thick layers of igneous rock that are exposed in its northern reaches, but buried beneath later sedimentary formations along most of its western and eastern arms. This rift is also an important location for metals, but that is a topic for another day.

Conclusion: It makes no economic sense to put electrons into electrolyzers to make H2, then to put that H2 into pipelines. Hydrogen is an ineffective carrier of energy. No hydrogen pipelines will be built to transport energy. HVDC transmission lines will carry electrons everywhere. If H2 is needed as a feedstock, it will be manufactured at the point of use, to avoid transportation costs.

Notes: The various colours of hydrogen are a shorthand way of discussing hydrogen production characteristics. These were listed in an earlier weblog post, although white hydrogen was missing. In summary these are: black/ brown = from coal; grey = from methane/ natural gas; green = from surplus renewable energy; yellow = from solar (my personal usage, others use orange); orange = from mixed sources, including up to 50% non-renewables; red = from mixed sources where non-renewables exceed 50%; pink = from nuclear energy; turquoise (in general use) = purple (my personal usage) = more climate-friendly blue hydrogen, such as methane pyrolysis; white (or gold) = hydrogen made by natural processes.