Open Source Hardware Hearing Aid Part 1 (shapr.github.io)
148 points by zdw on Aug 4, 2019 | 54 comments



This is a really cool project. The one issue I see is that the Teensy isn't what you'd call power efficient (compared to some more domain-specific DSPs, that is).

>That seemed incredibly expensive. I haven’t done much electronics, but digital signal processing doesn’t seem that expensive?

The cost is in R&D, not just for human trials but for DSP engineer salaries. It's an extremely specialized skillset and the companies that hire for it look for PhDs most of the time.

In particular: optimizing adaptive filtering to clock in at dozens of cycles per buffer on fixed-point DSPs to maximize battery life, while providing feedback reduction, noise cancellation, workarounds for the infamous "cocktail party" problem, and modern features like Bluetooth... all with audio fidelity on par with some serious audiophile gear.

My understanding is that there are also other market forces, like very long lifecycle management and low volumes for new products, but I don't know how real that is.

Point is, I don't think those devices are really overpriced - at least no more than any other medical device paid for by insurance companies.

EDIT:

I'm reading some of their documents and a few things jump out at me. They mention supporting FFT/IFFTs on the platform. This stands out to me, since doing that on a battery-powered device is unusual: firstly because it's not cheap in cycles, and secondly because useful FFTs require rather large memory buffers, which hurts your latency and may come at a premium. When I've seen them required (e.g., in a codec), it was typically on a dedicated FPGA with the algorithm burned in, not running on the DSP.
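To make the latency point concrete, here's a back-of-envelope sketch; the sample rate and block sizes are illustrative assumptions, not figures from Tympan's documents:

    # Buffering delay of block-based FFT processing.
    # Sample rate and block sizes are assumptions for illustration.
    sample_rate_hz = 16_000  # a common rate for speech processing

    for fft_size in (64, 256, 1024):
        # A full block must be captured before it can be transformed,
        # so the block alone adds fft_size / sample_rate of delay,
        # before any compute or output buffering.
        delay_ms = 1000 * fft_size / sample_rate_hz
        print(f"{fft_size:4d}-point FFT: {delay_ms:5.1f} ms of buffering delay")

Hearing aid latency budgets are usually quoted in the single-digit milliseconds, since the wearer hears the processed sound mixed with the direct sound leaking past the earpiece.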


Hearing aid processing chips are incredibly sophisticated and specialised - just check out the block diagram in this datasheet:

https://www.onsemi.com/pub/Collateral/SA3291-D.PDF

That chip is doing graphic equalisation, multi-band dynamic range compression, noise cancellation, feedback cancellation, data logging and wireless communication with an absolute maximum power dissipation of 50mW. It's a $100 chip for good reason.


Every single chip function cited here is available in any given wearable/hearable chipset, many of which are available in consumer products that cost less than $100. Economies of scale suggest that the price of the actual chip that does all that is less than $10. Personal experience suggests it's likely less than $5 - maybe even less than $2 in large quantities.

A big part of why hearing aids haven't faced much disruption, as I understand it, is their qualification as a medical device. A major competitor of my current employer had a policy of many years that they would not do hearing aids, in spite of extensive audio and audio processing experience internally. The reason? Amar Bose wanted nothing to do with medical device certifications.


>Every single chip function cited here is available in any given wearable/hearable chipset

All but one - low power. A typical pair of true wireless headphones runs for three hours on a charge; a typical hearing aid runs for a week on a zinc-air cell. Any Turing-complete processor will do whatever kind of DSP you need it to do, but the trick is doing it incredibly efficiently.
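A quick power budget makes the gap vivid. The cell capacity below is an assumption based on a typical size-312 zinc-air battery, not a number from this thread:

    # Average current budget implied by "a week on a zinc-air cell".
    # 180 mAh is an assumed capacity for a common size-312 cell.
    capacity_mah = 180.0
    runtime_h = 7 * 24

    avg_current_ma = capacity_mah / runtime_h
    avg_power_mw = avg_current_ma * 1.4  # zinc-air cells sit around 1.4 V
    print(f"~{avg_current_ma:.2f} mA average draw, ~{avg_power_mw:.2f} mW")
    # -> roughly 1 mA / 1.5 mW for mics, DSP, amplifier and radio combined.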


>All but one - low power

Even that part is becoming less true. I've seen multiple datasheets in the last year that promise worst-case FFT/IFFT performance at 2mA. Measured performance has been less than a quarter of that.


So what you're saying is that if someone put in the effort to get through device certification, they could disrupt a rather profitable market. I feel like it would be relatively easy to get approval with a 510(k) (basically saying your device is similar to a pre-existing device).


Yeah, I suppose I am.

Easier said than done, though.


The cost of a chip has very little to do with what's in it. It's a $100 chip for a good reason: profits. If you can't hear, then $100 extra is very little money. Once the first silicon samples are in and the numbers are large enough, it is very rare to see a chip command such figures in other fields; there is no reason I'm aware of that would justify this price.


I think cost is a large factor because retirees have fixed incomes. If insurance pays for a two-thousand-dollar hearing aid that lasts three years, you get it. If insurance doesn't pay, you become socially isolated.

Do you know of other hearing aid options cheap enough to not require being covered by insurance?


> numbers are large enough

How many people have hearing aids, though? Seems like the market for this would be much smaller than other things and therefore the price higher.


More people would buy them if they were cheaper. That's simple economics. They are priced right now to extract maximum $ from the market, not to ensure that the largest number of people have access. Hundreds of millions of people worldwide have hearing issues - some 15-20% of the population, IIRC.


> That's simple economics.

Sure, it's simple economics, but in this case it is too simple.

It turns out that the hearing aid market has inelastic demand [1][2]. In 2010, market penetration for hearing aids in the U.S. was around 24% (8.2 million users). Amlani estimated that with a complete subsidy of hearing aids, market penetration would only increase to 34% (11.2 million users) [1].

Combined with a lifespan of 5-8 years, that is quite a small scale for an ASIC. (Apple will sell 40+ million AirPods this year alone.)

[1]: Amlani, A. M. (2010). Will government subsidies increase the US hearing aid market penetration rate? Audiology Today, 22(2), 40-46.

[2]: Lee, K. & Lotz, P. (1998). Noise and silence in the hearing instrument industry. Working Paper, Department of Industrial Economics & Strategy, Copenhagen Business School.


> Amlani estimated that with a complete subsidy of hearing aids, market penetration would only increase to 34% (11.2 million users) [1].

That's an interesting use of the word 'only': you're looking at a 30% or so increase, and at that scale this would not have a huge effect on the price of the chip itself, because once the development and start-up costs have been borne the rest is marginal. These chips probably cost < $2 to produce even at this quantity.

What you seem to forget is that once it is worth doing an ASIC that is pretty much proof that the economies of scale are there. The very rare cases where an ASIC is still expensive is when they are top of the line in switching speed, density or pin count and these devices have none of that.


> That's an interesting use of the word 'only'

A ~thousand to several thousand dollar product with a market penetration of 24% (low) becomes completely free, and the market penetration shifts to 34% (still low). I think 'only' is justified.

> once the development and start-up costs have been borne the rest is marginal

The NRE costs have not been 'borne' until the product is EOL. They are distributed across the price of each unit. My hypothesis is that this chip's volume is low enough that the NRE cost per chip is substantial.

> What you seem to forget is that once it is worth doing an ASIC that is pretty much proof that the economies of scale are there

When you have strict power requirements (i.e. need a battery life of a month), it can still make sense to go for an ASIC even with relatively low volume. Add to this an inelastic demand curve (i.e. you will sell the same number independent of price), and there isn't a compelling reason to try to do it with a DSP or FPGA.

> These chips probably cost < $2 to produce even at this quantity

If we assume that OnSemi could design this chip for $10M, then they would have to sell 5M of them to have your proposed unit cost of < $2 (assuming a wafer cost of zero, which is obviously wrong). I would guess that $10M is a lowball for the total development cost, and that 5M is way optimistic for volume (that would pretty much require this random chip is in 100% of hearing aids sold in the U.S. in the last few years). They've probably sold an order of magnitude less than that.
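To spell out the arithmetic behind that (all inputs are the guesses stated above, nothing more):

    # Amortized NRE per chip under a few volume guesses.
    # $10M design cost is the assumption from the comment above.
    nre_usd = 10_000_000

    for units_sold in (500_000, 1_000_000, 5_000_000):
        per_chip = nre_usd / units_sold
        print(f"{units_sold:>9,} units -> ${per_chip:6.2f} NRE per chip "
              f"(before wafer, test and packaging costs)")
    # -> $20.00 / $10.00 / $2.00: the $2 figure needs ~5M units sold.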

Maybe I'm wrong here, but it isn't as obvious to me as it apparently is to you.


While writing this blog post I've discovered that way more people have hearing problems than actually buy hearing aids. The largest part is the expense, but another part is the stigma. No one wants to wear a hearing aid if they're under the age of forty or so.


50 yo congenitally profoundly deaf person here. Cochlear implant fitted last month. My quality of life has been utterly transformed: I can't imagine what life would have been like if I had had one from birth. If appropriate for the patient, the healthcare system should skip the hearing aid and go straight for an implant. If you happen to be reading this, and have a hearing loss, explore cochlear implants whatever your age.

Whilst I wish the OS hearing aid project every success, what we really need is open source implants - the pricing is even more insane than hearing aid technology, and the technology is relatively well understood. The existing, certified implants could be used, but with an OS/DIY processor - circumventing many of the potential certification issues.


Thanks for sharing your story and glad to read about your positive experience. Just a comment regarding your advice to others.

I went to bed with perfect hearing and then woke up with 90% hearing loss in one ear (sudden nerve deafness). Some of it has come back, but the gain is not flat across frequencies. It is incredibly disorienting. I was almost hit by a car the other day, since I now look in the opposite direction when I hear a car accelerate. I no longer enjoy listening to music, and the loss limits my ability to be witty in casual conversation, since I now often second-guess what was said. Music sounds weird.

I know a hearing aid would help but I don't want anyone to know I have this problem. Hopefully my brain can just recalibrate the frequencies especially since I have one pretty normal ear.

Regarding your suggestion: I believe a good cochlear implant has about 22 channels, while a healthy human ear can discern 300,000+ frequencies. This is why cochlear implants are only given to totally deaf patients. The technology is nowhere near where it needs to be to be used on patients with partial hearing loss.


I have a friend who is the perfect candidate for a cochlear implant, but refuses to have closed source inside his head.

He does not wish to have his body hostage to any company, and I can understand that.


That block diagram looks like any other embedded device available today. Less complicated than most. No video processing in there, so lots less complicated.

And most are under $10.


> This stands out to me since doing that on a battery powered device is unusual

This is absolutely not true. I would bet that > 50% of all devices shipping with a digital signal processor are doing FFT/IFFT at some point.

> useful FFTs require rather large buffers of memory

Loads of realtime FIR filtering is implemented with overlap-add or overlap-save - i.e., block-based FFT/IFFT.
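For anyone unfamiliar with the technique, here's a minimal overlap-add sketch in NumPy; the block and filter sizes are arbitrary, and a fixed-point DSP implementation would look very different while following the same structure:

    import numpy as np

    def overlap_add_filter(x, h, block_len=256):
        """FIR-filter x with impulse response h, one block at a time."""
        m = len(h)
        n_fft = block_len + m - 1          # room for linear convolution
        H = np.fft.rfft(h, n_fft)          # transform the filter once
        y = np.zeros(len(x) + m - 1)
        for start in range(0, len(x), block_len):
            X = np.fft.rfft(x[start:start + block_len], n_fft)
            yb = np.fft.irfft(X * H, n_fft)   # this block, convolved
            end = min(start + n_fft, len(y))
            y[start:end] += yb[:end - start]  # add the overlapping tail
        return y[:len(x)]

    # Sanity check against direct convolution:
    rng = np.random.default_rng(0)
    x, h = rng.standard_normal(2048), rng.standard_normal(64)
    assert np.allclose(overlap_add_filter(x, h), np.convolve(x, h)[:len(x)])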


Plus, many FFTs actually benefit from fewer points and less bandwidth - especially in voice applications, where much of the spectral content is under 4 kHz.


The question is how much these additional fancy features actually improve the product over what is feasible DIY. How much longer is the battery life?


I'm reasonably sure it would be possible to implement appropriate DSP Voodoo Magic (aka finite impulse response filters) on the Teensy that does a _pretty_ good job of all of these things.

However, I agree with the OP that it does have a fairly large number of shortcomings:

-- The number of man-years that have been spent on commercial products is huge, and they provide a highly optimised solution in a competitive domain. My late grandfather had, in the 1970s, a hearing aid with feedback reduction powered by an analogue computer (aka "electronics") that hung around his neck.

-- A lot of the tuning parameters of these algorithms really are hardware-specific and would require quite a lot of tuning/iteration.

-- At the end of the day, a Teensy is a moderately large rectangular board that will not fit behind your ear, has nonzero power requirements, and is a general-purpose CPU. A 3D-printed case is an expensive way of making a plastic box to put it in. If you were going to go down the open hardware route, you'd start somewhere very different, with power-efficient dedicated DSP units on a small, thick multilayer board milled to be a bit more ergonomic. A modern hearing aid needs a new battery every month or so, and is powered by a 0.54 g, 1.4 V, 180 mAh battery (a 4x6 mm cylinder [h x d]). You're not going to get anything like that from a general-purpose CPU.

Still, this is a fun project, and I commend people doing it. As ever with anything to do with the USA and healthcare products, however, I can't help but think that their efforts would be better spent trying to get universal healthcare. The cost to the NHS for two hearing aids, multiple fitting appointments included, is around £400.


I'm no expert in the field, but it's my understanding that those features are why modern devices allow people to hear in noisy environments. Feedback reduction especially, which is not easy to deal with (tl;dr: the mic and speaker are mechanically coupled; with a poor seal and a loud enough environment you get feedback, and the DSP has to compensate).
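For a flavor of what "the DSP has to compensate" means, here's a toy LMS adaptive filter of the kind used to estimate and subtract a feedback/echo path. It's floating-point NumPy with synthetic signals, nothing like the cycle-counted fixed-point code production devices use:

    import numpy as np

    def lms_cancel(reference, observed, taps=32, mu=0.01):
        """Adapt FIR weights so that filtered `reference` tracks the
        leakage present in `observed`; return the cleaned signal."""
        w = np.zeros(taps)
        out = np.zeros(len(observed))
        for n in range(taps - 1, len(observed)):
            x = reference[n - taps + 1:n + 1][::-1]  # newest sample first
            err = observed[n] - w @ x  # mic minus estimated feedback
            w += mu * err * x          # LMS weight update
            out[n] = err
        return out

    # Demo: speaker output leaks into the mic through an unknown path.
    rng = np.random.default_rng(1)
    speaker = rng.standard_normal(20_000)
    path = 0.2 * rng.standard_normal(16)             # unknown coupling
    mic = np.convolve(speaker, path)[:len(speaker)]
    residual = lms_cancel(speaker, mic)
    print(f"residual power after convergence: {np.mean(residual[5000:]**2):.2e}")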

And in terms of battery life, I don't know. I do know that most battery-powered audio devices with DSP throw floating-point math out the window from the get-go, and I haven't seen a job opening for DSP in hearing aids that didn't mention fixed-point math in a while. I don't know of any processors that fit the bill there, however; those things usually have a proprietary IDE/debugger/flasher you need to pay for.


I wanted to applaud the hackability aspect here.

Traditional aids are aimed at (1) old people who can't handle more than 4 buttons on a TV remote, who (2) are willing to accept whatever they're given and put up with inconveniences, and (3) don't notice their audiologist and aid vendor are kinda in cahoots to their detriment.

There's all kinds of interesting things aid users might want to experiment with, once an audiologist gives them a profile of their hearing loss. Aids have programs that help for different situations, like crowded room, quiet conversation, etc, and the user may want to adjust those settings themselves.


The solution to killing off outrageously-priced hearing aids is to leverage the smartphone. It has everything that's required to make a superlative hearing aid: excellent microphone, earphones, powerful CPU, and an all-day battery. A modern smartphone might not have a DSP, but it has sufficient processing power to do the required audio processing. Many people who'd be embarrassed by visible hearing aids wouldn't mind walking around with AirPods or other visible Bluetooth earbuds these days. The hurdle is that the hearing aid manufacturers will complain to regulators that you can't sell a smartphone as a "medical device".

Apple AirPods with the "live listen" feature on iPhones (and iPads) are already being used by some people as a substitute hearing aid. You can use "live listen" with an actual hearing aid as well, but I've seen people using it with just AirPods, and they seem to find it quite helpful. What's missing is the software to do the audio processing specific to that person's hearing loss. Plus overcoming the regulatory burden.


And the likely reason this is not happening (in the commercial space at least) is the FDA regulatory risk of having a product (and probably an entire company) shut down for making "unapproved" medical claims.

I've got a set of Bose Hearphones, about $500, which use the Bose Hear app. You can modify treble and bass but that's about it; even so, if you are hard of hearing, the default iPhone sound profiles can be used to fine-tune the in-ear audio, and they work well as a low-cost hearing aid. Same deal with my AirPods. Today Tim Cook could wave a magic wand and literally cure deafness for hard-of-hearing iPhone + AirPod users; ain't happening.

No question Bose Hearphones with the Bose Hear app, or AirPods with a complementing iPhone app, could easily perform spectral analysis with an easy-to-use equalizer to increase and/or decrease the amplitude of specific areas of the auditory spectrum; this would be ideal for tinnitus patients and would without question put the entire hearing aid industry out of business. Current-generation hearing aids are about $15 worth of analog parts selling for thousands of dollars each, just by virtue of having invested in some horseshit FDA regulatory process, when infinitely more capable technology has been available to hearing-impaired individuals for well over two decades at this point.

This is not complex, and what sucks about tinnitus is that it affects each person differently, which in turn requires specific auditory tuning for each individual. But companies such as Bose or Apple that have the technology, with more than sufficient computational horsepower to replace hearing aids, simply refuse to do so, for reasons that are likely FDA regulatory hurdles.

On a side note regarding the OP: $300 for a Teensy-based hardware platform with a BT stack is f'ing outrageous, probably a worse rip-off than the actual hearing aid vendors.


> specific auditory tuning

I use a site called MyNoise to play nature sounds to drown out background noise (neighbors, traffic). The thing that sets MyNoise apart from the others I've tried is that it has an EQ you can tune, and specific instructions for tuning it: for each frequency range, you find the lowest volume at which you can still hear it. The result is a curve matched to your hearing -- the tuning process accounts for hearing loss and tinnitus. Now, if that curve could be combined with "live listen" / Bose Hear... then you've got yourself a hearing aid, using hardware you already own!
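The idea behind that calibration, in sketch form; the band centers and threshold numbers below are made up, and MyNoise's actual procedure is more careful than this:

    # Turn per-band audibility thresholds into a rough EQ curve.
    # All numbers below are illustrative, not real measurements.
    bands_hz       = [125, 250, 500, 1000, 2000, 4000, 8000]
    your_threshold = [-40, -42, -45,  -44,  -30,  -22,  -25]  # dB sliders
    ref_threshold  = [-40, -43, -46,  -45,  -44,  -40,  -38]  # reference ear

    # Bands you need extra level to hear get boosted by the difference.
    for f, yours, ref in zip(bands_hz, your_threshold, ref_threshold):
        print(f"{f:5d} Hz: {yours - ref:+3d} dB")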


It can't give you the one thing that is more valuable than sound: silence. I wonder what happened to practical applications of 'anti-sound'.


Parent of 2 deaf children here. When our first was born, we were handed a very thick packet of information warning us that without intervention (i.e., hearing aids) our child would be at much higher risk of falling behind in school and suicide because of social isolation. So, y’know, respectfully, nah.


Of course silence by choice is a different thing than silence by force. I have a huge concentration problem and silence is what I crave most when I'm working or trying to read. That says nothing at all about a child that is born with a hearing defect.


I agree that the FDA is a huge problem, and that Something Needs To Be Done, much like the CPUC in California which is really just an arm of industry. I don't know how to approach implementing a complete overhaul of the FDA without some truly atrocious people taking advantage of any loopholes which inevitably appear.


> People who'd be embarrassed with visible hearing aids wouldn't mind walking around with AirPods or other visible Bluetooth earbuds these days.

Had a relative who tried that with an app. The problem is that most folks still assume that if you've got headphones in your ears, you're listening to something else (not them). Reactions varied from impatience (people who thought he was impolite for not removing his headphones to focus on them) to anger (refusing to interact with him until he removed the headphones).


Google's official hearing aid app works fantastically well. Latency is fairly low, and you can filter out parts of the spectrum. If the phone has a decent microphone, this app gives you super-hearing.

https://play.google.com/store/apps/details?id=com.google.and...


The problem with AirPods is that people are not inclined to talk to people wearing them (thinking that they are listening to music and don't want to be interrupted, etc.). I know this because half the people in the gym wear AirPods, and I feel reluctant to talk to them, even though they seem OK with it when I do.

You probably don't want to give hearing-impaired people another barrier in communication.


There are a couple of issues with this; none are dealbreakers.

1. As others have mentioned in this thread, people are reluctant to talk to people wearing earphones. Although perhaps this is less of a problem than the stigma of "hearing aids".

2. Earphones don't usually contain microphones. If you want directionality, the mikes have to be located in or on the ear, not on your phone. Some earphones do contain microphones, and they are the place to start.

3. iOS has pretty low latency in its audio channel, but Android is (or was) terrible. There were so many layers of software in the Android audio path that there was no way to process live sound without so much delay that a hearing aid wearer, hearing both live and processed sound together, would just hear mush. I read somewhere that Android has fixed this, but I don't know.

4. Bluetooth introduces some latency into the live audio path as well. I don't know how much.


I agree that the smartphone could potentially be leveraged to do some audio processing in this context, but you have to consider the latency introduced by the round trip over Bluetooth. Not sure such latency is a pleasant experience for the user.


Latency.


Pretty sure that in my country, Western imported hearing aids start around 300 or 500 dollars, mostly European ones. And AFAIK those aren't the 'just blare at 5x the volume' type, but are tuned by the audiologist to the patient's hearing profile. And they go behind the ear.

2000 bucks gets you the type with a dozen advanced effects, a Bluetooth connection to work as music-player headphones, and adjustability via a remote or an app.

So, people in the US might find it cheaper to order European aids or smuggle something from Canada. (BTW, afaik plane tickets cost way less if you buy them in the airport just before the flight.)

----

Also, personally I'd be wary of twiddling the params of a hearing aid, since I'm not an audiologist or any kind of doctor and don't know if my choices would cause more damage in the long term. And if audiologists mostly reside at hearing aid dealerships, I'm not sure they will just give you a chart of your hearing so you can walk home and tune your aid by it.

Ah, and azalemeth's comment reminds me: if the board just has jacks for headphones, then you can't even use the hearing charts without also knowing the frequency response of the headphones.


If it were that simple, people would already be importing and selling them, black market or not. But high-end hearing aids take multiple in-person visits with experts for in-depth hearing tests, then impressions for a perfect fit, then hand adjustment to tune the fit, and return visits to tune the sound and fit.

The technology is probably not the most expensive part of high-end hearing aids. It's the service.


My eye doctor puts me in front of a bunch of lenses and then says "better... flip or worse?" over and over and we end up with a set of lenses. Without access to a trained doctor, at first glance (no pun intended) it seems like you could do a 95%+ job of fitting a hearing aid's sound profile using a better/worse UI on a phone app and some prerecorded sounds/speech samples. I have zero experience with hearing loss, though.


I wear a special type of contact lens because it gives me much better vision than any glasses ever could. My doctor starts with "better or worse", and that gives her a starting point. Then we measure and fit, and then of course the weight and shape of the lens affects the shape of the eye, which changes the lens requirements, so it usually takes several fittings and trials to get a good fit.

Glasses are more like a hearing aid that just makes everything louder. Which will work in most cases. But many people need or want more.


Yeah, let me tell you about something that people in the Quality Assurance industry might call an "indicator": the US Army doesn't do much more than what you're describing for hearing aids.


Well, you see, sound has this magic quality: the more you jack it up, the more you hear. Ever been to a concert by a warm, homey Ninja Tune artist, such as Coldcut or Amon Tobin? They turn out to be eye-popping industrial hardcore on a club's 10,000-watt system.

The problem is, it's not good to put too much sound into the ears.


The person you're replying to didn't say anything about jacking up the sound. He said, "you could do a 95%+ job of fitting a hearing aid's sound profile using a better/worse UI on a phone app and some prerecorded sounds/speech samples". You'd train it--at a constant sound level--by listening to sample audio recordings and repeatedly saying "better" or "worse". The same way you need to train a speech-to-text dictation system by repeating a bunch of phrases, or a fingerprint scanner by repeatedly swiping your finger, or voice recognition by repeating some words. It sounds like a totally reasonable way to eliminate high-priced fitting sessions.


Hearing testing for the most part isn't about the patient giving better/worse feedback. It's about determining the quietest levels at which you can hear each frequency, and how much loudness you need to accurately identify sounds and speech. The tester has to assess when the patient's performance is better or not, because it's blind testing. The patient's feedback is in the form of either signalling "I heard that", or repeating the words that were just played.
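For what it's worth, the "quietest level at each frequency" part is a textbook adaptive staircase, which is easy to automate in principle. This is a generic down-10/up-5 sketch with a simulated listener, not any approved audiometry protocol:

    import random

    def staircase_threshold(respond, start_db=60, up_db=5, down_db=10,
                            n_reversals=6):
        """Drop the level 10 dB after each 'heard', raise it 5 dB after
        each miss; estimate the threshold from the reversal levels."""
        level, last_heard, reversals = start_db, None, []
        while len(reversals) < n_reversals:
            heard = respond(level)  # play tone at `level`, get yes/no
            if last_heard is not None and heard != last_heard:
                reversals.append(level)  # direction just flipped
            level += -down_db if heard else up_db
            last_heard = heard
        return sum(reversals) / len(reversals)

    # Simulated listener with a true threshold of 35 dB and rare lapses:
    random.seed(0)
    listener = lambda db: db >= 35 or random.random() < 0.05
    print(f"estimated threshold: {staircase_threshold(listener):.1f} dB")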


> The patient's feedback is in the form of either signalling "I heard that", or repeating the words that were just played.

Why is a human tester needed? I'm not seeing any reason that couldn't be automated.


Because a person tuning an aid for themselves is rather likely, IMO, to simply choose more amplification, since they can hear better that way. Which will then backfire when they lose even more hearing. I've seen braggart reports of people with strong hearing loss and aids being able to hear speech from a greater distance than healthy people - but is that good in the long term?

Firstly, people already have to be told not to play music too loud with headphones. And secondly, when you buy an aid, the audiologist tells you that you won't be comfortable with it for some time until you get used to it - even though the aid is supposedly tuned to the exact profile of your hearing loss. People aren't good at getting used to uncomfortable things without adjusting them to their short-term liking.

On top of that, audio engineers, musicians and graphic artists know that it's difficult to do fine adjustments of audio or graphics for long, because the senses become tired and 'burned out' after a while and you don't see or hear the same (even just five to ten minutes is enough sometimes). Novices are likely to be unfamiliar with these effects, have less stamina for them, and be unable to counteract them without overcompensating.


Agreed - for that last 5% of fine-tuning frequency bands etc. you need a professional. But if you were going to roll out $12 hearing aids to all the low-income areas of the world, 1 billion people at a time, using an Android app on a cell phone, you could probably give a lot of people back their hearing. Volume levels could be hard-limited in software, and people who need to exceed the preset limits could be referred to a senior technician in those 5% of cases or whatever it might be.
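The hard limit itself would be the easy part; a minimal sketch, where the 25 dB ceiling is a placeholder and a real cap would come from clinical guidance:

    # Clamp user-requested per-band gains to a hard software ceiling.
    # The 25 dB cap is a placeholder, not a clinical recommendation.
    MAX_GAIN_DB = 25.0

    def limit_gains(requested_db):
        """Apply each band's requested gain, clamped to the ceiling."""
        return [min(gain, MAX_GAIN_DB) for gain in requested_db]

    print(limit_gains([10.0, 18.0, 32.0, 40.0]))
    # -> [10.0, 18.0, 25.0, 25.0]; bands that hit the cap are the
    #    cases to refer on to a technician.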

Being able to hear even every other word would improve the quality of life of many people. Probably one in three words is the acceptable limit of "good enough", where suddenly they're worth the hassle. My grandparents now hate family gatherings because they might understand only one in five sentences spoken directly to them, because of hearing problems and the quality of the hearing aids they can afford with their insurance.


I'm not saying it can't be automated, just pointing out that the testing process is pretty different from what you seemed to be describing. And I left out some of the steps that aren't easy to automate: checking for fit, deciding what style of hearing aid would work best, visual inspection of the ear canal.


Because automation trims excess, which I think is something they want to avoid at all costs.


> No way I can get that kind of money, so I gave up on that idea.

I'm curious whether the Tympan project has done this testing. What is the perceived downside of not doing human testing on what is presumably going to be a hobby project?


I asked Tympan about this; they said they've partnered with Boys Town National Research Hospital and with the National Institute on Deafness and Other Communication Disorders.

Those two powerful partners are working on relaxing the laws somewhat, so that small companies/teams could create something like the Tympan.

I'm still not sure if the Tympan is a "human use approved hearing aid" or if it's just for research purposes. Hopefully I'll have discovered that before I publish part 2.


Ah, interesting! Thanks!



