Norway switches off FM radio, but a station is defying government order (cbc.ca)
230 points by rocky1138 on Dec 15, 2017 | 252 comments



I really dislike digital audio broadcasts. They offer higher quality, but the lag when changing channels (I switch between preset channels a lot) makes it really annoying. Also, when driving between cities, I can receive FM stations over a much longer distance than the digital ones.


The nice thing about analog is that signals may get weaker but they don’t just go away as easily so you hear continuous music, etc.

Digital can be maddening because it alternates between crystal-clear and OFF, which ultimately I think is worse. Some streaming services will also decide to auto-next-song after the slightest glitch which is annoying too.


> Digital can be maddening because it alternates between crystal-clear and OFF, which ultimately I think is worse. Some streaming services will also decide to auto-next-song after the slightest glitch which is annoying too.

I wonder if there are audio codecs that work similarly to wavelet-based video codecs (DWT instead of DCT), where you could have stepwise degradation.


There is a somewhat similar concept, bitrate peeling [1], in Ogg Vorbis. You can drop a portion of the stream without transcoding and still decode the original audio at a lower bitrate. With a proper over-the-air modulation and multiplexing scheme (like ADSL interleaving), one should be able to get graceful degradation of digital audio.

[1] https://en.wikipedia.org/wiki/Bitrate_peeling
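To illustrate the interleaving half of that idea, here's a toy block interleaver (a minimal sketch, not the actual ADSL or DAB scheme): a burst of channel errors gets spread thinly across many frames, so each frame loses a little instead of one frame losing everything.

    # Toy block interleaver: write 8 frames of 16 symbols row-wise,
    # transmit column-wise, so adjacent transmitted symbols belong
    # to different frames.
    import numpy as np

    depth, width = 8, 16                        # 8 frames of 16 symbols each
    data = np.arange(depth * width)
    tx = data.reshape(depth, width).T.ravel()   # interleave: read out by columns

    rx = tx.astype(float)
    rx[40:48] = np.nan                          # burst kills 8 consecutive symbols

    # De-interleave (the inverse reshape) and see which frames were hit:
    deinterleaved = rx.reshape(width, depth).T.ravel()
    frames_hit = np.flatnonzero(np.isnan(deinterleaved)) // width
    print(sorted(set(frames_hit.tolist())))     # [0..7]: one lost symbol per frame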


The problem is fading. You could drop the bitrate to boost the link margin a few dB, which works fine for SATCOM, but it won't help with deep fades. Putting two antennas on the car for diversity reception would work much better; I'm surprised no one is doing it.


Only cheap car radios lack diversity. A lot of cars have three antennas and three or more tuners. You don't see all the antennas; on many cars you don't see any radio antenna at all.


Ha ha, I guess my cars are too cheap. I have never had a car with more than one VHF antenna.


Bitrate peeling is a beautiful concept; it's too bad that it was never implemented.


The original DAB codec had more error protection for the more important parts of the audio data (unequal error protection), so while not the same it would give a more gradual reduction in quality instead of a sharp drop-off. DAB+ doesn't do this though.


It's a broadcast; you don't know the bandwidth of the receiving device, so you can't adjust accordingly.


Nah you can do something like:

1. Sort your bits by importance.

2. Encode them as a real number using arithmetic coding.

3. Transmit that real number in an analogue way.

That means the more noise you have in your signal, the fewer bits you receive correctly.

There are probably much cleverer ways to do it than that, but unfortunately I don't think much work has been done on it, because it's quite complex. A toy version of the idea is sketched below.
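A minimal sketch, skipping the arithmetic-coding step: pack bits most-important-first into one real number, "transmit" it as an analogue value, and watch how many leading bits survive increasing channel noise. All names and parameters here are illustrative.

    # Toy "analogue bits" scheme: the most significant bits of the packed
    # number are the most noise-resistant, so degradation is gradual.
    import random

    def pack(bits):
        """Interpret a bit list, most important first, as a real in [0, 1)."""
        return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

    def unpack(x, n):
        """Peel n bits back out of the received analogue value."""
        out = []
        for _ in range(n):
            x *= 2
            bit = int(x)
            out.append(bit)
            x -= bit
        return out

    random.seed(1)
    bits = [random.randint(0, 1) for _ in range(16)]
    for noise in (1e-5, 1e-3, 1e-1):
        x = min(max(pack(bits) + random.gauss(0, noise), 0.0), 1 - 1e-12)
        rx = unpack(x, 16)
        ok = next((i for i, (a, b) in enumerate(zip(bits, rx)) if a != b), 16)
        print(f"noise {noise:g}: first {ok} bits survive")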


The broadcast equivalent would be to transmit proportionally more error-correction bits with the important bits (i.e. those present in the lower constant-bitrate stream). This is implemented in codec2 or the related modem system, I think.


Well, digital protocols CAN be designed to degrade.

Why the digital FM standard doesn't do that, I'm at a loss.

There are definitely progressive codecs. Hell, look at JPEG (which uses wavelets). Older kids will remember waiting for JPEGs of naked ladies to get "clearer and clearer" as the interlaced mode progressively builds the picture.

p.s. Wavelets are a super fun Google/Wiki to read up on over a night.

[edit] IT SHOULD BE NOTED, not all countries may have adopted the same codec, so this "popping out" issue may only affect certain countries.


I think you're referring to progressive (regular) JPEG, which is not wavelet-based but just incremental DCT. JPEG2000 is wavelet-based, requires many times more processing power, and is hardly encountered on the Internet due to nearly nonexistent browser support (see https://caniuse.com/jpeg2000).

> Why the digital FM standard doesn't do that, I'm at a loss.

The form of progressiveness in JPEG and other file formats is conceptually quite simple --- you send the "rough draft" first, then incrementally improve quality by sending more bytes. "Degradation" is basically limited to truncating the stream.

Broadcast radio reception degrades in a very different way --- (unpredictable) parts of the signal vary in strength or actually become corrupted. Either the decoder can decode the bits correctly or it can't, and which parts get weakened or corrupted is not easily predictable.


Thanks for the insight. I guess I never realized JPEG (as opposed to 2000) wasn't wavelet.


It makes it harder to "tune" it by moving the antenna around, too.


> Digital can be maddening because it alternates between crystal-clear and OFF

You mean between 1 and 0?


> The nice thing about analog is that signals may get weaker but they don’t just go away as easily so you hear continuous music, etc.

I think it is worth mentioning that this is called fault tolerance, aka graceful degradation.

https://en.wikipedia.org/wiki/Fault_tolerance


Anyone who's driven cross-country knows that FM will do that just as badly.


Anyone who's driven cross-country knows that FM is nice and clear for a while, then kind of fades out as you reach the edge of the transmitter's range. There's at least a sense of continuity with the drop in signal strength. Digital has a sharp transition between crystal-clear and completely dead. That's the distinction being made above.


We have regressed on so many audio-related fronts. Our phone conversations may have far better sound today than they used to, but anyone who remembers true analog telephony knows what we have left behind in terms of synchronicity and natural flow of dialogue.


> anyone who remembers true analog telephony knows what we have left behind in terms of synchronicity and natural flow of dialogue.

You mean yelling into the phone really loud and then also knowing you're paying $0.29 a minute? The technology was bad and the quality was bad unless you were calling locally. I think what you're missing is people talking to you without being distracted by the internet or some other diversion?


You know how I can tell that I accidentally used the normal phone app, instead of VoIP, when I call my family internationally? Because the sound quality is good. Low latency, no dips, no buffering, smooth. To the point that my reflex is now: “hmm, the quality is too good, this must be the normal phone.”


I make quite a few international calls on my phone every day in rural areas and in terms of quality and reliability I would rank the available options as: FaceTime > normal phone (I’m on Verizon) > Skype > most other VoIP > Google Hangouts (absolutely the worst).


FaceTime Audio is so crystal clear for me that it feels like I am sitting next to my friend/family, vs. analog lines where there is generally an echo and other weird delays. When I talk to my grandparents it always feels like we need to put pauses in our conversation to make sure we don't start talking at the same time.


Doesn't Verizon use VoLTE? That would mean that your normal phone app isn't your "analog" phone anymore... (https://en.wikipedia.org/wiki/Voice_over_LTE)

Basically there aren't many phone calls over non-LTE-based services, especially since VoLTE can be used over WLAN via WLAN calling techniques. (The latter isn't supported in all networks.)


VoLTE is only between your phone and your carrier; if you're calling someone on another carrier as far as I know it still goes over legacy garbage with a shitty codec (or worse, an analog link, with in-band signaling).


How do you tell if it's VoIP or not? A lot of phone operators use VoIP for calls, so choosing the default phone app doesn't necessarily mean that you are not using VoIP. And the phone operators' infrastructure for VoIP is probably way ahead of most other call apps.


> A lot of phone operators use voip for calls.

I think the problem is that with "analogue", each hop can use a hodgepodge of its own VoIP, going analog -> digital -> analog. So you can get multiple lossy compressions with varying settings multiplied together. With user-initiated VoIP, it's a single compression over the whole connection. So operator VoIP could be better than user-initiated VoIP, or a lot worse.

At least that's my layman's understanding of legacy phone systems.


There is no such thing as analog telephony anymore, and there hasn't been for a long time. While you might still have analog local loops, the network has been completely digital for a long time, be it ISDN or VoIP, and VoIP is perfectly capable of transmitting ISDN voice data without recoding.


I think what might be being referred to as "analog telephony" is actually the "original digital" system using 64Kbps "uncompressed" PCM, vs. VoIP compressed audio at much lower bitrates. It's like the difference between CD and MP3.


I would be very surprised if any telco were using any sort of compression internally (other than G.711, obviously). 64 kb/s with RTP/IP framing overhead is about 100 kb/s; that's just not enough bandwidth anymore for it to be worth installing hardware to do de-/compression. Even a gigabit link can carry ~10000 such calls--that's a drop in the bucket compared to the internet needs of the many more than 10000 subscribers that you need to ever have 10000 concurrent calls (probably more in the range of hundreds of gigabits, possibly terabits).
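A quick sanity check of those figures (the 20 ms packetisation interval and header sizes are typical assumptions, not anything specific to one telco):

    # G.711 voice over RTP/UDP/IPv4, back-of-the-envelope:
    g711 = 64_000                    # bits/s of raw G.711 audio
    pkts_per_s = 1000 / 20           # 20 ms packetisation -> 50 packets/s
    hdr_bytes = 12 + 8 + 20          # RTP + UDP + IPv4 headers per packet
    ip_rate = g711 + pkts_per_s * hdr_bytes * 8
    print(ip_rate / 1000)            # 80 kb/s; ~100 kb/s with link-layer framing
    print(int(1e9 // 100_000))       # ~10000 concurrent calls per gigabit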


I'm pretty sure that most operators either use analogue or digital end to end, no mix.

VoLTE (Voice over LTE) has some other advantages over other calling applications. VoLTE is most likely on a very high QCI (priority), which means it probably has less delay and less packet loss than other traffic. VoLTE also often uses header compression over the air, which means the packets get a lot smaller and therefore also suffer less packet loss. I don't think VoLTE usually has a very high bitrate though, maybe around 10 kbit/s. This is one area where other apps could be a lot better.

EDIT: I am of course only speaking about mobile calls. For landlines I agree that other applications provide better quality.


The issue is that VoLTE only works for calls made to the same carrier's subscribers; as soon as you call someone on another carrier it will go through a shitty interconnect with a bad codec (usually a variant of G.711, which is a joke in 2017).


Really? My experience has been the opposite. I can tell I'm not on a Facetime audio call or what have you when the call is clearly undergoing the super lackluster compression used by the traditional system.


Not prior to 1990. Also, you are on digital with your mobile phone; the OP I was responding to was talking about old analog lines.


Most POTS is digital, and I assume a lot of operators just put your call through some kind of VoIP today (not necessarily SIP).


Depends on the provider and codec. Who are you using for VoIP now?


Frankly, all I miss from the pre-VoIP world is being able to start with "Hey, how are you" instead of "Hey, can you hear me".


No. Local calls were always free here. The conversation could flow. The calls were of relatively poor audio quality but perfectly adequate for conversation. No delays, no distortion and no dropped calls. Now, about half the time, I have calls with quality so poor that meaning is lost, requiring a “sorry, what was that... are you there?”


No. What I miss is telephone conversations without anyone constantly interrupting each other because we automatically interpret even tiny delays as pauses inviting us to speak.


That seems more like a matter of culture though? It's not that hard to wait several seconds before starting to speak, right?


Unright. That is not how humans talk to each other.


Two things:

1. I'm not sure that people are universally incapable of talking like that, i.e. with a bit of self-restraint and without cutting in at every opportunity. I strongly suspect that this is related to the culture of the people involved. As an example (or an anecdote): I work remotely full-time and participate in a lot of conference calls, yet I was interrupted just a couple of times. Never twice by the same person, though.

2. People are able to talk (and communicate) in so many different circumstances and environments, like over the radio, on a boat during a storm, on paper, with a flashlight and Morse code... that I don't think it's that hard for them to assimilate yet another protocol to follow when talking. It really shouldn't be impossible, I think.

I really strongly suspect it's a matter of culture. I was taught, from a very young age, that interrupting someone when they're speaking is simply rude, and that if you respect someone enough to talk to them, you should also respect them enough to let them say whatever they wished to say.

It's also true for many of my acquaintances, at least those from my generation and older. People who talk too much, too often and are unable to let others finish what they're saying are regarded as hyperactive, perhaps with ADHD, or really bad marketers, sect recruiters and such. This is still just an anecdote, but it at least shows that it's possible for groups of people to use a different mode of talking.

An honest question: listening to what the person has to say from start to finish is so tightly correlated (in my mind) with respect for that person, that I never considered that you can get interrupted, possibly many times in a row, and not feel offended. What do you think about this notion? Do you think that it's the speaker's fault if they're out of words for a few seconds, and that they're responsible for maintaining a steady stream of speech from start to finish? In other words, how do you justify people cutting in? Or is that simply a non-issue to you?


Your argument sounds awfully like a "you're holding it wrong" argument. Demanding that people adapt to newer technology with more restrictions than previous iterations of the technology is a tall order that many will not entertain.

Rude people with poor manners are orthogonal to this issue.


> Demanding that people adapt to newer technology with more restrictions than previous iterations of the technology is a tall order that many will not entertain.

After thinking it through while writing a sibling post, I now think that it's very probable to depend on a country. I think people from (to give an example I'm familiar with) most Warsaw Pact countries may be able to tolerate doing this much better than people in countries accustomed to high-quality products and services.

Or it may be just a kind of bubble I live in or something, I'm not entirely sure :)


I think you're asking the wrong questions. If there's a pause in speech that reaches a culturally- and situationally-defined length of time, that's basically a signal that someone is done talking, especially if what they've said seems to constitute a complete thought. Imagine that they start talking toward the middle of that pause-period, but the 1/2 second delay on the line stretches the silence past the time where it would be reasonable for you to respond...so you start your response, just in time for the rest of their response to start, then stop 1 second later, when they hear that you started talking. It's not a matter of rudeness or a lack of consideration, it's a misunderstanding partially brought about by the technology in use.

Conversation is a fluent flow. It isn't always quick, but it's often constant. There's a rhythm. Break the rhythm, and you're breaking the expectations of your conversational partner.

> An honest question: listening to what the person has to say from start to finish is so tightly correlated (in my mind) with respect for that person, that I never considered that you can get interrupted, possibly many times in a row, and not feel offended. What do you think about this notion?

I was taught that avoiding interruption was a sign of respect, as well. Do you actually get offended if someone interrupts you without meaning to?

> Do you think that it's the speaker's fault if they're out of words for a few seconds, and that they're responsible for maintaining a steady stream of speech from start to finish?

Usually being the one at a loss for words myself, my answer is "yes". If you don't make some signal that you're working on formulating a response, you can reasonably expect someone else to jump in and fill the space. I don't feel that it's something that needs to be justified; one can't really "justify" social norms.


> If there's a pause in speech that reaches a culturally- and situationally-defined length of time

That's kind of my point! Talking over a weak connection is a situation which warrants making that "length of time" longer. My argument is that doing so is not a big deal, not hard to learn, and quite a practical (if temporary, hopefully) solution to problems with the transmission during calls.

> Conversation is a fluent flow. It isn't always quick, but it's often constant. There's a rhythm. Break the rhythm, and you're breaking the expectations of your conversational partner.

Yes, that's all true, but I don't want to break the flow, just slow it down a little. If we both know that we're talking over an unreliable connection, is it really that hard to adjust and give your partner an additional few seconds in cases where you suspect they're not finished yet?

> Do you actually get offended if someone interrupts you without meaning to?

I'm not sure. As I said, I don't have much experience with this kind of situation. But I think that, if I said (when joining the conversation) that "I'm on a rather bad connection right now and I'd like to ask you guys to take that into account if I happen to fall silent for a second, sorry for the inconvenience" and still got interrupted many times in a row, then I guess yes, I'd feel somewhat offended (that may not be the best word for it, though; I'd feel rather uncomfortable).

My reasoning is that we're all adults, we know that the technology we use is imperfect but decide to use it anyway, so we should learn to mitigate the problems with it. They may get solved one day, maybe even soon, but, in the meantime, we can make the experience much better by ourselves. (Now I'm seriously considering whether that kind of mentality seems natural to me because I was brought up in a country behind the Iron Curtain, where nearly everything by default was crap and you needed a lot of creativity and skill to make these damn things even usable. Interesting thought.)

> Usually being the one at a loss for words myself, my answer is "yes". If you don't make some signal that you're working on formulating a response

Yes, but that's assuming a nearly perfect connection at all times. But we know we're using something that can be rather shitty at times, so we should just adjust accordingly. Or rather, I have a hard time understanding why you wouldn't want to do just that. Isn't it frustrating to get interrupted (even if only by mistake) constantly?

Anyway, returning to the beginning, the various "parameters" of a spoken discussion, like how long you wait for your partner to start responding, how long you wait to ascertain they're done speaking, how loud you need to speak (or how closely you need to listen on the other side), what kinds of interruptions are expected, and so on are there to be tweaked depending on circumstances.

I just now realized that in principle we could be in agreement, but differ in how far we're willing to tweak these parameters just to talk comfortably. I'm obviously inclined to accept even relatively rigid rules of conversation (the worse the connection, the more of a protocol I would be ok with), but I realize it may be just my personal preference. Well, food for thought for me anyway, so thanks :)


Long-distance audio quality in the 80s and 90s was excellent everywhere I lived in the US, no matter where I called. Remember the pin drop commercial?


YES, 100% right: that was when they switched from analog to digital. The OP was saying that prior to the digital conversion it was better.


I have an old-fashioned landline at work. The sound quality vs. my cell phone is night and day.


I'm tired of these so-called "phones" that do everything well - except being a phone. My "phone" is a great camera, plays games, etc. - but how I hate talking on them. I suppose a good part of the problem is the network, not my specific phone.


> yelling into the phone really loud and then also knowing you're paying $0.29 a minute

This does not describe any landline I've ever used ever, either locally or long distance, even to this day, even at 00:00:01 on any given New Year's Day.


On the flip side was getting busy signals because of there not being enough capacity somewhere between you and the person you were calling, and also cross-talk issues, where you could hear other people's phone conversations on your line.


Absolutely. But the natural, two-way, un-echoed flow of conversation, just as if you were talking to someone in the room - we have lost that, and it's a major loss.


For me it's the slurring compression artifacts as much as the lag that makes cell phone conversations, web meetings, etc. inferior. Local digital cell phone calls mimic the old cross-country long distance analog calls of yesterday in terms of delay.

W.r.t. broadcast video, the eye/mind can still pick out an image from a static-filled snowy picture, but digital artifacts from a low/poor signal result in freeze-frames and a garbled mess.

Summary: Digital. When it's good, it's very good. But when it's bad, it's awful.


This is it exactly. Calls through the landline failed rather more gracefully.


Agreed, heck, I still lament low latency dial-up shell access. Even at 9600 baud it was still more responsive and felt like using a computer in the same room. Chatting with somebody like that, in talk/ytalk/etc, where you could see their every keystroke was a totally different experience because you could even get a vague sense of their thinking process through their pauses and edits.


I was using a 9600 baud connection about an hour ago. It sucks rocks. You can see the ASCII menu updating. Everything flickers. (Serial console access to a server, if you really care.)

If you run ytalk on a modern machine, it doesn't flicker, but you can still get every keystroke, because that's the protocol.


I have a server with ntalk for a few friends and it is my favorite method of non face-to-face communication.


I still get the busy signal calling into the Seattle area from the Midwest once every 4-5 months or so when it's clear the other end wasn't on the phone. I'm not sure I got that before. So I wouldn't say the problem is solved, though the scale of communications has surely increased.


Cross-talk was an analog problem related to center frequency drift on muxed lines. As soon as phone companies put in ATM networks it went away.


Did you ever call internationally? My parents would call our grandparents using those phone cards in the 90s, and it would sound like the good ol' two-cans-and-a-string technology.


Absolutely. Phone conversations now are awkward and annoying because of the lag and sound-activated cancellation (simplex? I don't know if this is a feature of the handset or something else).

I find older people on the phone get frustrated with the modern phone system and they don't know why.


I'm older. I get frustrated. But I do know why. You're right, though - many don't. I find the whole thing a neat example of how not every change is undilutedly for the better, and how we tend unwittingly to accept those creeping degradations. Being old and crotchety, I could name examples...


I would be shocked to find many young people with such intricate knowledge of the inner workings of phone systems. This is not something taught in school so I don't know what age has to do with it.


The worst part about working from home is how difficult it is to have a conversation with anybody over the phone. The lag makes it very difficult for things to flow naturally.


HD Voice isn't common where you live yet?


No idea. No idea what it is (Well, yes, HD, I get it). They can hidef or lodef as much as they please, I don't really care. All I want is for the latency to be gone.


I suspect that will require more powerful processing in the mobiles themselves.


> the lag associated with changing channels

This isn't even an audio thing. With a classic television, you can crank the channel dial and (along with the instant audio output) see a stable frame within 5-20 frames of vsync (so like 300ms or so worst case).

My TV at home can't switch an input (e.g. between two HDMI inputs) faster than about 3-4 seconds. No, I don't know why either. It's actually much worse if the device being switched away from is a Chromecast. Don't know why there either.


I've heard newer HDMI standards have HDMI Fast Switching. Skimming over the first URL Google gave me[1], it sounds like the slowness is because of HDCP (High-bandwidth Digital Content Protection). Although, it sounds like it will theoretically take it from ~10 seconds to a maximum of 2 (average of 1) for proper implementations.

https://www.synopsys.com/designware-ip/technical-bulletin/hd...


Looks like "HDMI fast switching" just means "open all the input connections all the time", which is sort of a hack. But yeah, that seems plausible.

Of course, in the time it takes for my television to initiate a secure transaction channel with the input hardware on the other side of the cable, my ?!%!? Netflix app has done an arguably more secure transaction with a server on the other side of the continent. Sigh.


You're in good company

> I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?

John Carmack

https://twitter.com/id_aa_carmack/status/193480622533120001?...


I've read a comment here that the HDMI lag on source switch is caused by DRM. I haven't investigated the claim but I consider it plausible... ^__^;


I've noticed that my TV takes a while to change resolution+refresh rate, which typically happens in concert with switching inputs to/from my Chromecast.

Hardcoded "everything is NTSC" vs a mutual handshake/negotiation and mode change makes sense, though I too wish it was much faster.


Plus, when they cut out it's actual silence. When FM cuts out it's a little bit of a fade in or static or fade out. Sure both are bad but there's just something about the silence during a digital broadcast interference that is just horrible.


My car's XM (digital satellite in the US) radio generates artificial white noise when the signal drops out. If the signal doesn't come back within a couple of seconds, then you get silence.

I'm told GSM phone calls do the same thing, apparently so you can tell the difference between a temporary dropout and a dead line.

It's kinda funny.


Yeah, interruption with noise is much less disruptive than interruption with silence. It's not a drawback of the technology though; it's a design flaw in the receiver not to mask the dropouts with some white noise.


The quality is not guaranteed. Many of the DAB stations in the UK are broadcast at such a low bitrate they are noticeably lower quality than good quality FM reception.


The industry trend is clear: Choice over quality.

Almost everyone wants sub streams so they can offer more options.

Never waste a bit on quality that could be delivering choice.


I believe there was a deliberate choice to keep DAB quality low to avoid people getting clear digital recordings for piracy.

Subsequently rendered redundant by Napster, but we are stuck with DAB.


That was a factor, but the core was more choice. Broadcast used to be about quality.

It's quantity now, with "just good enough" translating to what people will tolerate.

Thought experiment for you:

Which do you prefer?

Low quality, but compelling. Or high quality, not so compelling?

That answer is what drives everything. People will take compelling every time. Very few will take quality as a primary factor.

More choice = more opportunity to be more compelling = higher ad rates = more revenue per bit.


Sounds like a recipe to kill radio in favour of higher quality cell data streaming.


It's never higher quality.


In the US everybody wants sub streams so they can sell more commercials. It's all about the money. Thankfully FM is still going strong here.


My mom just switched from DirecTV to Comcast (...) and one of the "hilarious" parts of the Comcast (X1) experience is their "Changing channel to N..." interstitial when you try to change to another channel.


> They have a higher quality

That's how it is sold indeed, but it's not entirely true. There is definitely a better signal-to-noise ratio, but the bitrate is pretty low, which makes the quality lower than normal FM. It's only a 128 kb/s MP3/AAC stream!

It's only because people don't hear noise anymore that they think the quality is better. Analog FM is, by the way, much more robust as a signal too.

A shame the government forces people to invest in lower quality gear...


Most DAB stations are MP2, not MP3. MP3 is not supported in DAB. AAC is supported in DAB+, though you need to upgrade your radio again.

MP2 is transparent once you get to 256kbit, so the actual codec isn't really the problem, it's the bitrate choice. I don't have any experience of EU DAB, but in the UK, most DAB stations are 128K, which isn't even close to being transparent. The audio quality on FM is considerably better.


Almost every station in Norway is DAB+ instead of DAB, most between 64 and 96 kbps.

Source: http://www.fmlist.org/sendertabelle/dab-ww.php


This bothers me a lot, but strangely, traversing a menu on my Roku to select a station (or youtube video or whatever) does not bother me. If the radio had a menu I could quickly traverse that showed station information and perhaps some kind of preview, then I might be OK with this latency.

On the other hand, having to look at a menu while driving a car is not good.


Is the lag inherent to the protocol, or just a matter of poor receiver implementation?


It's probably similar to switching TV channels (which isn't instant anymore either). The receiver needs to wait for a new "key frame" (one that isn't compressed) before it can begin displaying/sounding?/outputting the rest of the compressed stream.


True that. I feel like we've really gone backwards on that aspect concerning TV. I remember a time when I could turn on the TV and be 3 channels further along, all within the space of 2 seconds. Now my "smart" TV takes 5 seconds just for a first paint, and changing a channel once takes a second. You just can't beat the latency of analog circuits!


The radios from my various cars throughout the years are just as bad. My 1970s-era radio turned on instantly and there was zero delay as I tuned through channels with a nice smooth analog dial. My 1990s-era radio took about 500ms from power-on to audio output. The "dial" had steps in it though and was not continuous, and there was a noticeable (again probably around 500ms) delay between the 'click' tactile feedback and when the station was successfully tuned. My current car takes 15 seconds just to boot (WTF), and then I have to navigate some horrible menu just to get to the radio controls. It's like these guys are actively trying to make things worse.


Even the 1980s car radios with digital controls had no perceptible delay, despite containing microcontrollers with a tiny fraction of the speed and memory of the modern ones.

Maybe it's the whole "let's run Linux on it because we can" phenomenon. The microcontrollers in the old electronic radios didn't run an OS but interacted with the hardware directly. The number of instructions executed from sensing the dial change to updating the tuner and display would probably be less than 100. Now it's half a dozen layers of abstraction and millions of instructions to do the same thing, on a processor which is maybe "only" 1000x faster at most...


My new car has a radio with a touch-screen interface. When I turn the car on there's a space of about 10 seconds where the radio is operating but I can't change the station or volume or even turn it off. It's quite annoying when you were rocking out last time you were in the car and now you have passengers.


To me this is symptomatic of a long or broken feedback loop in the development process. E.g., a classic waterfall approach where somebody writes a big spec, a team spends a bunch of time implementing, and then sometime at or after the deadline, a working model is produced.

In that setting, nobody's going to say, "Hey, let's just pop this radio into our cars, live with it for a few weeks, and then figure out where we need to go next." It's too late. Instead I think the thing to do is force convergence early and often in the product timeline, so that the supposedly-little things like this have plenty of time to get noticed and fixed.

A good example of this is Apple. I'm told that in consumer electronics the typical number of iterations with physical prototypes is 3-7. For the first iPod, Apple went through more than 100. The difference was enough for them to crush the competition so thoroughly that people these days are surprised to hear that there were MP3 players before the iPod (and smartphones before the iPhone).


> a classic waterfall approach where somebody writes a big spec, a team spends a bunch of time implementing, and then sometime at or after the deadline, a working model is produced.

...which is actually quite suitable for the safety-critical parts of a car like the ECU and other controllers that control the actual driving aspect, but not for everything else that doesn't need such a level of process.

On the other hand, I suppose it could also be blamed on the lack of "performance is a feature" --- if they specified the radio to be as responsive as the accelerator and brake, for example.


I think the choice of process is basically orthogonal. Waterfall and Agile processes, done well, have both been demonstrated to produce safety-critical software. (See, e.g., Nancy van Schooenderwoert's work in medical devices.) Both can produce dangerous software. (E.g., Toyota's "unintended acceleration" issues.) The question is much more about what practices you put in place to assure quality.

I do think short-cycle methods have some intrinsic advantages, though. By making critical functionality available much earlier for testing, you get more time and more chances to make sure it's really safe. If early versions are bad, you get early warning signs that aren't available in a Waterfall process. And if testing turns up issues early on, it's much cheaper to make fundamental architectural changes: less code to change and more time to change it in. And Waterfall processes aren't as good at dealing with unknown unknowns; some things you only learn by trying them out.


Same, and it's all the more frustrating with the realization that what is effectively a tablet should be able to sleep for an inordinately long time, especially in "airplane mode" (mine has a SIM card, but there's no reason to be running an AP or Bluetooth when off), rather than cold booting, off a car battery.


You mean uncompressed data. Yes, analogue is uncompressed, but digital could also be uncompressed. However, everyone wants to stuff more video, more audio over the link, so we compress... Bandwidth goes up, latency also goes up, and things get more complicated...


Analogue video signals are absolutely compressed (interlacing, PAL, NTSC, SECAM). It's just not as efficient as MPEG-X combined with hyper-aggressive modulation schemes and forward-error correction, channel-capacity wise.


Audio compression doesn't work that way. It doesn't use keyframes.


It has some kind of equivalent from my understanding. MP3 has frames https://en.wikipedia.org/wiki/MP3#File_structure and I'd assume all methods do, otherwise how could you start decompression if you don't have a starting point?


MP3 frames are all full frames, whereas a key frame is a full frame that is followed by incremental updates.


The prediction gain of audio codecs is <1, which basically means that if you start at a random packet, once you play enough packets it'll converge to sound the same as if you had started playing from the beginning. It'll sound bad at the beginning though, so you usually crop off the first bit after seeking (80ms for Opus, much longer for other codecs).

Another source of delay is just the length of audio packets themselves (have to wait for the beginning of one) and radio limitations.
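A toy model of that convergence (a one-tap leaky predictor, not the actual Opus predictor): a decoder joining mid-stream starts with the wrong state, but the error decays geometrically because each output only partially depends on the previous one.

    # Prediction gain < 1 in miniature: starting from a random packet
    # converges to the same output as decoding from the beginning.
    import numpy as np

    rng = np.random.default_rng(0)
    residuals = rng.normal(size=100)     # the transmitted "packets"
    alpha = 0.8                          # leaky prediction coefficient, |alpha| < 1

    def decode(res, state=0.0):
        out = []
        for r in res:
            state = alpha * state + r    # startup error shrinks by alpha per step
            out.append(state)
        return np.array(out)

    full = decode(residuals)             # decoded from the start
    late = decode(residuals[50:])        # joined 50 packets in, state unknown
    err = np.abs(full[50:] - late)
    print(err[0], err[-1])               # big at the join, vanishingly small after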


It sounds like you could build a TV/device that caches key frames for all channels on the client/consumer side.

Or am I totally misreading this?


There are set top boxes that have many tuners. The STB uses the extra tuners to anticipate a channel change and tune ahead to the next channel. (Fast Channel switch).


For satellite TV you would need to handle over a gigabit of MPEG-2. Not easy.


Caching without decoding would still be a nice boost, wouldn't it?

Coupled with a decoder that can manage double the actual frame rate, it should cut the switch time by something like 1/4 on average, and better than 1/2 in the worst case.


You'd still need the additional tuner(s) to provide the data to cache.


Technical detail: for video, key frames are compressed. And this technique isn't used for audio.


> for video, key frames are compressed

But they're compressed in such a way that doesn't require any previous frame. It's compressed like a picture would be.


I can't speak about the DAB protocol, but the HD Radio scheme that is used for digital in the United States seems to be a protocol problem. I say this because I've owned four HD Radios each from a different manufacturer, and each one had the tuning lag problem.

The sad part is that HD Radio is actually startlingly better on AM, while FM is an incremental improvement. But few AM stations use HD because most AM stations operate on a shoestring, and it ruins your fringe coverage.

But when you find an AM station that's in HD and you hear the receiver switch from analog to digital -- holy cow!


Some buffering is probably inherent. The protocol is at http://www.etsi.org/deliver/etsi_en/300400_300499/300401/01.... if anyone wants a long read.

At a guess the critical number is "The transmitted signal has a frame structure of 96 ms duration (Transmission mode I)", which implies that most systems will need to buffer at least that much and probably several times that.

Theoretically you could decode all the signals in a particular DAB multiplex at the same time (e.g. all the BBC stations are on the same multiplex in the UK), and then change instantly between them, but I don't think consumer receivers let you do this. Might be able to try it with SDR.


It's probably also doing some kind of interleaving along with forward error correction, and would need to collect a few frames before it can start decoding. There's a section on interleaving under convolutional coding in the spec, but it looked pretty complicated and I've got no idea how stations are usually configured.


I'm quite ignorant about wireless signals, but do you know why they chose 96ms? That sounds huge for a single frame.

I guess an optimization the receivers could do is decode all the presets, then at least switching between those would be instant.


There's a tradeoff between latency and efficiency. Longer frames can compress better, and you amortize any per-frame header over more stuff. Shorter frames incur more overhead from headers, and may not compress as well. 96ms seems like a reasonable choice for a realtimeish system without tight latency requirements, but I couldn't say why they chose exactly that number.


> ...but I couldn't say why they chose exactly that number.

I would speculate that it's because it's fairly close to 100 and is also divisible by a fairly high power of 2 (32). Buffer management is easier (less arithmetic) when using powers of 2.

I could be completely wrong, but 96 is the kind of number that crops up in computing quite a bit for this reason. Having said that, the number of samples may be more important than the number of milliseconds.


I dug into the PDF above and it looks like it's mostly due to the underlying codec. They use MPEG-1 Audio Layer II, which has a frame size of 1152 samples. They support 24kHz or 48kHz sample rates, which means that a 1152-sample frame is either 48ms or 24ms. Apparently they stuff 2 or 4 of these audio frames into each transmission frame.
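The durations follow directly from the frame size, and either packing fills exactly one 96 ms transmission frame:

    # 1152-sample MPEG-1 Layer II frames at the two supported rates:
    frame = 1152
    for rate in (48_000, 24_000):
        print(frame / rate * 1000, "ms")   # 24.0 ms at 48 kHz, 48.0 ms at 24 kHz
    # Either way: 4 x 24 ms or 2 x 48 ms = one 96 ms transmission frame.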


That just punts the question. Why does the codec have a frame size of 1152 samples? Is it because that brings the frame size to sums of powers of two milliseconds? :)


Damn, I was hoping nobody would notice that.

I mean, good point. I dug a bit farther and found this document which discusses the design in some detail:

https://web.archive.org/web/20040919073530/http://www.cs.col...

It looks like MP1 has 32 subbands and works on 12 samples each, which results in a frame size of 384. MP2 works on 3 groups of 12 samples for each subband, resulting in 1152 samples.

Of course this just punts it to the question of why 32, 12, and 3? To which I can only answer: quick, look behind you! <runs away>

At least the mystery numbers are smaller.

But seriously, I'm at (beyond) the limits of my understanding of this stuff, so if anyone with more knowledge would care to chime in and explain those, I'd be most interested to know.


Also, currently, all decoder chips (at least the 'official' ones you need for an 'HD Radio' designation) are controlled/manufactured/sold by one group, 'Xperi'. The protocol itself is closed/proprietary, and it's not trivial to build any kind of receiver/decoder, like you could with analog FM/AM.


HD Radio != DAB


I don't even think it sounds very good, at least the US HD Radio implementation. Most channels have dialed the bitrate way down so they can jam in subchannels of useless stuff like weather, and the audio sounds horrible, worse than analog FM.


Oh yes. I'm this close to canceling my Sirius subscription because some of the music channels are 32kbps or even 16kbps.


This is called the Cliff effect btw: https://en.wikipedia.org/wiki/Cliff_effect


I'm not going to take a position on whether DAB is worth it, however:

> ...the lag associated with changing channels...

It seems like this is mostly the fault of the receiver, to me it doesn't seem like there's any reason that the receiver can't just have several filters (tuned to adjacent or saved channels) and demodulators running, so that you can tune instantly to them.

I think the bigger problem with DAB, to my eyes, is codec selection and graceful degradation.


Driving long distances I get continuous mobile coverage but radio is non-existent in many places. I can now stream Spotify on a 5 hour drive with no interruptions.


Agreed. To me radio is not for _listening_ to music, which is next to impossible due to constant advertising interruptions, shortened tracks etc (there are online streaming stations for that), so getting very good audio quality instead of good doesn't mean much, while being able to sustain partial loss of signal and switch faster from one station to another is much more important to me.


I lament the loss of AM too. I used to get broadcasts from over 500km away. And the sound quality was such that it stayed in the background rather than intrude on conversation.


XM Radio quality was embarrassingly bad. Quantized robot talk. And their sales were so relentlessly annoying, trying to upsell my free trial for my new car.


XM Radio could be better if they killed about half of the channels. It's just a matter of how much sausage they want to cram into the pipe.

When I worked on an XM receiver project, I recall they had about 1.5 Mbit/s of bandwidth to work with. Most stations were 16 or 32 kbit/s AAC+.

The talk stations used 8kbit AMBE, which is pretty much the same as a vocoder. You weren't far off by calling it robot talk.


It doesn't help that they broadcast everything twice, because some of their older receivers have different frequency ranges from before the merger.

I think they could do a lot better if they used their modern position to their advantage. Most receivers are in cars, and many cars are coming with always on internet. Hook up to that and a) report on coverage deficiencies, b) fill in from internet streaming when in poor coverage areas, c) get metrics on listener numbers so things that nobody hears can be dropped, d) use streaming to boost bitrate.


I've had XM for a few years. I don't have any problems with audio quality. Reception can go mute under bridges, in tunnels, or near other large obstacles, but that's that. Otherwise, it's the only thing that works in the mountains, on remote roads, or anywhere really.

As for the sales, I call them every ~5 months, say I will pay $25+tax for 5 months or something like that, and set the next calendar event.


My neighborhood is filled with trees so there is a 5 minute part of my commute that XM can't get a signal for more than 5 seconds in a row. It's as if it only takes a single leaf to block the signal.


Their bitrate is embarrassing. Dither and audio-frequency drops so terrible I asked myself "why was I paying for this?"

That was 2005

It's not any better now.


> It's not any better now.

Well, of course. Sirius XM, where's the competition?


It wouldn't be hard to start a company that builds a radio that buffers every reasonably-strong signal for instant switching...


I remember the signal as being super crappy for anything that wasn't pop music or talk. It was particularly bad if anybody clapped: it sounded like someone blowing into a piece of aluminum foil.

The solution is to kill radio and use the spectrum to increase internet access; then everybody can hear what they want.


Unfortunately, that's just a sliver of spectrum; the entire FM broadcast band is about 20 MHz wide, equivalent to a single 802.11b/g channel.

The entire AM (MW) band is a little over 1.1 MHz wide.


Keep in mind that comparing bandwidth between bands that are very far apart isn't super useful. You can easily select down to fractions of 1 kHz at 1 MHz but not at 2.4 GHz, so 1.1 MHz of bandwidth means something different at UHF vs HF/MF.


My memories about radio are a bit faint, so forgive me if I am wrong, but the fact that you are able to discern narrower channels does not mean that you can transmit more information, does it?

As far as I remember, the upper limit to the quantity of information you can send is solely set by the bandwidth, and is not dependent on the carrying frequency.


That's mostly right, the theoretical limit to the amount of information you can carry over a channel is set by the bandwidth and the channel signal to noise ratio. See the Shannon Limit [1]. I should add, however, the lower frequency you build your radio to operate on, the narrower your channel usually is. Antennas and filters tend to only be well matched over one or a few narrow frequency bands. These bands of good impedance get smaller as you go down in frequency, logarithmically.

[1] https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem
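In code, the Shannon limit from [1] makes the point directly; the 20 dB SNR below is just an illustrative number:

    # C = B * log2(1 + S/N): capacity depends on bandwidth and SNR,
    # not on where the band sits in the spectrum.
    import math

    def capacity_bps(bandwidth_hz, snr_db):
        return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

    # The ~1.1 MHz MW band at 20 dB SNR has the same ceiling
    # whether it sits at MF or at UHF:
    print(capacity_bps(1.1e6, 20) / 1e6)   # ~7.3 Mb/s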


There's also the limitation of oscillator precision IIRC. 1 ppm deviation is 1 Hz at 1 MHz and 1 kHz at 1 GHz.


Modern frequency synthesizers can easily do a few Hz step size up in the gigahertz range and be within a few ppm. That doesn't really matter that much though, since modern radios (at least these digital ones) are built as SDRs. They can shift the signal around and select whatever band they feel like.


Sure, but that's on the receiver side for carrier recovery. Wouldn't it be problematic if you mashed a bunch of channels/allocations sufficiently close together that they started wandering on top of each other?


Modern DAB/DVB receivers are usually zero-IF, the wanted band of frequencies is mixed down to baseband and filtered with a sharp cutoff Surface Acoustic Wave filter then sampled with a fast ADC. Selectivity is then dependent on DSP.

On the transmit side, you have to carefully combine the outputs of your various transmitters with a complex filter network, which at high powers looks something like:

http://admin.mb21.co.uk/tx/userimages/8448proc20120403120238...


The spectrum used by FM/UKW radio really doesn't matter. It's not particularly wide, nor does it supply much bandwidth.

I frequently listen to a classical music station using a radio from the 80s and the audio quality is very good, apart from the noise floor I'd call it almost close to a 128 kbit/s MP3. I know because I own (CD) copies of a whole bunch of recordings the station has in their inventory as well.


Random anecdote: I like shitty FM quality. It kills lots of signals but... I like it. It's like film grain to me.


My main issue with deprecating FM in favour of digital radio is its use for emergency broadcasts or in war. Not because of the protocol itself, but because of adoption: almost everyone has a working FM radio available, but few people buy dedicated radios that support digital radio. I would guess that new cars is the main source of compatible receivers.


Norway has a nationwide network of warning sirens, built out during the Cold War and tested regularly, that could be used to convey a wider range of critical messages without the need for a radio receiver. (During my childhood we heard these very regularly, but with the collapse of the Soviet Union the test frequency was reduced significantly, and now they're only tested twice a year.)

However, with the rise of the internet and the decline of the printed phone book, which used to be distributed to absolutely every household and listed the (small) set of warning patterns the sirens would use (including e.g. a warning about air raids), they've now reduced it to only one signal, to avoid depending on people remembering them:

"Important message - listen to radio"

...

I wonder how exactly they'll resolve that with a diminishing proportion of the population having "radio" that's independent of internet access being up.


...and guess how this system is activated? Using traditional FM (some form of RDS, I think). Which is why not all the transmitters have been turned off yet.


I've only now realized how odd this is: Boulder, CO is one of the safest places in the United States (crime and natural disasters), but they test the super loud warning system every Monday during the summer.


These days they're mostly used to warn about tornadoes, which Colorado ranks #9 in the nation for.


This was true a decade ago. After several moves, I no longer have a VHS player, floppy disk drives, a CRT, or a radio. The only AM/FM radios are in our cars. If anything, AM should be kept on, as you can broadcast farther on lower power; or we should get younger generations into ham radio.


Radios are still sold in stores. Also, phones and MP3 players often have FM radios. Heck, my best radio is probably my OP-1 synthesizer.

The real "magic" of FM radio is that it's something that Just Works. You don't have to pay, you don't have to connect to some network, and it's something that has worked since the late 1930s. No matter what people say, any current digital radio format will almost certainly be obsolete in less than 20 years.


Yeah but... Way more people have phones. Just send your emergency broadcast by SMS. There's even an official way to do that.


How do you like your OP-1?


Love it. It's one of the most well-designed electronic devices I've ever owned. Really fun to use. I think it's worth studying as an example of great user experience and designing for "emergent" features - even if you don't care about synthesizers all that much.

Also, OP-1's synthesis capabilities are much deeper than it would seem from reading the manual or watching a typical review. And it has an amazing online community.


> If anything, AM should be kept on, as you can broadcast farther on lower power

And receivers can be built with extremely simple parts so for the war/disaster situation it's a hedge against the complexity of our technology stacks.


Actually you can build a simple FM receiver with the same parts you would use for a simple AM receiver, if you are willing to accept a fair amount of distortion. It would not be great for music, but would be OK for receiving news or weather or emergency instructions.

One way to do this is to use something called an "FM slope detector" [1]. The idea is that you have a tuned circuit tuned to slightly above or below the carrier frequency of the FM signal. As the FM signal frequency moves around, it also moves nearer and farther from the center frequency of the tuned circuit, which means the output of the tuned circuit rises and falls. The result is that the frequency modulation of the FM signal gets turned into amplitude modulation of the output of the tuned circuit.

You can demonstrate this if you have one of those cheap SDR dongles. Tell your SDR software to decode as AM, but tune to an FM broadcast station, and then start slowly moving the tuning up, and you'll find a point where the broadcast becomes quite intelligible.

[1] https://www.electronics-notes.com/articles/radio/modulation/...
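A minimal slope-detector sketch on a synthetic FM signal, in the spirit of [1]; every number here is illustrative, not a real broadcast parameter:

    # FM slope detection: a bandpass tuned slightly off the carrier turns
    # frequency swings into amplitude swings, which an AM-style envelope
    # detector can then demodulate.
    import numpy as np
    from scipy.signal import butter, lfilter, hilbert

    fs = 1_000_000                            # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)
    msg = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone
    fc, dev = 100_000, 5_000                  # carrier and peak deviation, Hz
    fm = np.cos(2 * np.pi * (fc * t + dev * np.cumsum(msg) / fs))

    # The "tuned circuit": a bandpass centred above the carrier, so the
    # instantaneous frequency rides up and down its skirt.
    b, a = butter(4, [fc + 2_000, fc + 12_000], "bandpass", fs=fs)
    am_like = lfilter(b, a, fm)

    # Envelope detection, exactly as an AM receiver would do it:
    recovered = np.abs(hilbert(am_like))      # crude and distorted, but intelligible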


On the transmitting side, there might be advantages to medium-wave AM for coverage compared to FM on 98/108 MHz. In the UK, on long wave, we have (or perhaps had) basically one large transmitter covering a large percentage of the population.

https://en.wikipedia.org/wiki/Droitwich_Transmitting_Station


Except here in Belgium, they already switched off DAB in favour of DAB+. Now everyone can go out and buy a new radio again.

Meanwhile, they also rally against planned obsolescence. How's that going to work?

(Luckily, they haven't yet switched off FM; that's scheduled for 2020.)


Same story in Norway - first they tried very hard to push DAB receivers, and I, being a firm believer in radio over TV for mass communication, duly bought DAB radios for the car, kitchen, bathroom, living room hi-fi - then they (in fairness, very sensibly) switched to DAB+. Off to ye olde electronics shoppe I went again.

I wonder how long DAB+ will last; I expect that sooner rather than later it will be scrapped for streaming via cellular networks - because doing so will be cheaper still than operating a DAB network.

Sigh.


The problem is that the average age of the engineers with knowledge of high-power AM at LF/MF/HF is probably well into the 50s, and some of the actual transmitters are not far off that either.


You might really enjoy this blog, then. The author has been watching both the decline and advancement of radio over the last few decades. It was interesting to see these big-iron transmitters being replaced by half a rack of solid-state gear, but then still taking up four racks because of all the other gear.

His last post is poignant, though: "...the absolute soul crushing mediocrity of automated programming is killing the entire industry."

http://www.engineeringradio.us/blog/


For uses like that, AM on the regular MW band (530 to 1600 kHz) is much better. FM receivers are relatively complicated compared to AM receivers, require more (and more complex) parts, and in general are a lot harder to put together without substantial skill. Then, finally, the range of FM transmitters is limited, and during war you'd want to stay in touch with 'the free world' as much as possible, which quite likely is substantially over the horizon.


Young people don't have FM radios --- those who don't own cars, that is.

No one I know does.


Many Android phones contain built in FM tuners, carriers just don't currently utilize them:

https://www.cnet.com/how-to/unlock-the-secret-fm-tuner-in-yo...


Interesting. This was pre-installed on all smartphones I've owned (and I expect it to be), so perhaps this is something carriers are incentivized to disable in some countries?


My phone can play FM radio by plugging in a pair of wired headphones. And while most young people don't have cars, pretty much everyone around here has one of those.


My cellphone is an FM receiver if you plug in earphones to act like an antenna. It's not like the headphone jack is going anywhere...


That hard "if" really bothers me, too. My phone has the FM receiver unlocked, but the software will not allow it to be turned on unless there is something plugged into the headphone jack (which my phone is also progressive enough to still possess). I am fully aware that the reception is going to be poor without a good antenna, but it really bothers me to have that option taken away. I'm standing right next to a kilowatt transmitter. Let's give it a try and see what happens, no?


Trust the computer, the computer is your friend.


And here I am with my iPhone 7 that does not have a headphone jack.


Your iPhone doesn't have an FM receiver anyways:

https://www.theverge.com/2017/9/28/16379316/fcc-iphone-fm-ra...

> “iPhone 7 and iPhone 8 models do not have FM radio chips in them nor do they have antennas designed to support FM signals, so it is not possible to enable FM reception in these products,” an Apple spokesperson said in a statement.


I'm not really "all change is bad" kind of a guy, but I've already lost the damn dongle 2 times and had to buy a new one. I think bluetooth headphones are the future, but for now they're simply way too expensive if you're going for quality :/


Pixel too :(


It seems like that use case is being taken over by cell phones, which the government can mass-message in an emergency. Of course, that requires a lot more infrastructure that could potentially get taken out by that same emergency.


FM is just a mode of modulation. A general purpose receiving radio can have multiple modes (AM, FM, WFM, SSB, digital, etc.). I would not worry about receivers dropping support for FM.


This is driven by a desire to cut costs; operating a DAB network is orders of magnitude cheaper than FM.

The station in question ceased broadcasting at midnight, by the way, after having been threatened with stiff fines.

NRK (PBS/CBC equivalent) has lost 1/5 of its listeners after the transition to DAB. Tough luck.


Do you have a good source on the cost diff?


Nope.

The reason, however (doubly so in Norway, which is basically as inconvenient as countries come for broadcasting purposes - lots of mountains and hardly any people) is that in DAB, you interleave a lot of channels in one data stream, transmitted from a single transmitter, whereas in FM, you need a dedicated transmitter for each channel.

The lower power bill is the major benefit seen from an operator's point of view.

Additionally, with FM you need to make sure no transmitters on the same frequency can cover the same area, as that would lead to interference. With DAB, they performed the brilliant trick of using multiple carriers to get the symbol rate on each channel very low; this means that if two DAB transmitters cover the same area, you get constructive interference - a signal boost instead.

The benefit of this in Norway cannot be overstated - we have (literally!) thousands of low-power FM relays to cover every nook and cranny with broadcast radio. (Denmark, home to a larger population, had a handful of FM transmitter sites.)

Hence network planning is a lot easier in a DAB network.
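
A back-of-the-envelope illustration, assuming DAB Transmission Mode I figures (a guard interval of roughly 246 microseconds per OFDM symbol): any echo, or second transmitter, arriving within that much extra path delay adds energy rather than interference.

    c = 299_792_458          # speed of light, m/s
    guard = 246e-6           # DAB Mode I guard interval, seconds (approx.)
    print(c * guard / 1000)  # ~74 km of path difference tolerated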


Ok, great answer. I live in Norway btw.

Both the constructive interference and reduced-number-of-transmitters arguments sound reasonable. I'm a bit more skeptical about the power cost - how much more expensive can that really be for FM? Electric power is pretty cheap. It doesn't really matter what the answer is, though, given the other arguments.

At the moment I have to say I'm not very impressed with the coverage and quality of DAB in Norway; in my opinion it seems worse than FM. But I'm guessing this will only get better over time.

My main issue with the adoption is the requirement to buy new receivers - it's just so much waste. The adapters they sell for cars are in general ugly and technically inelegant (they rebroadcast locally on FM, which is pretty ironic). A funny anecdote I read in the news was that some of these adapters come with a bluetooth handsfree solution, which also gets broadcast locally on FM. This means that other people driving close to you can listen in on your conversations if you use that functionality.


Hi,

Agreed - DAB coverage is seemingly much more spotty than what is claimed by Norkring (the operator) - makes you wonder what models (or antennas, in the case of field tests) they use to produce the maps.

Add to this the annoying property that whereas an FM signal degrades gracefully (as the signal gets poorer, you simply get more hiss, but can still make out what is being said), DAB basically drops off a cliff - it is fine until it, well, isn't, at which point it is pretty much unlistenable.

As for power, there are two factors, both tipping in DAB's favour. First of all, you get good area coverage with lots of channels from one transmitter, seeing as the different stations are interleaved in one stream - say, a dozen channels, for a realistic example. To have that same dozen channels on FM, you'd need a dozen transmitters, all illuminating the same area.

Additionally, an FM tuner requires a significantly stronger signal for proper decoding than a DAB receiver does.

(This benefit is somewhat offset, power-bill wise, by the fact that DAB final amplifiers must be very, very linear in operation (because of the number of carriers) whereas an FM amplifier need not bother about linearity at all - and, hence, can be run at higher efficiency.)

(For a very rough comparison - the DAB transmitter on Tryvannshøgda providing local radio to Oslo boasts less than 5kW output. The Radio Norge FM site next to it? 88kW.)
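
Plugging those figures into a rough ratio for the dozen-channel example (transmitter output only; this ignores ERP, antenna gain, and the amplifier-efficiency caveat above):

    fm_kw_per_channel = 88   # Radio Norge FM site, from the comparison above
    dab_mux_kw = 5           # Tryvannshøgda DAB transmitter
    channels = 12            # the "realistic dozen" example
    print(fm_kw_per_channel * channels / dab_mux_kw)  # ~211x the DAB power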


Thanks for the additional information, I completely agree with your description of coverage and the graceful degradation of FM.

DAB has obviously arrived and is here to stay in Norway, but I'm interested to see what the rest of the world does. My guess is that internet-connected vehicles will be both a cheaper and a better solution for consumers in many regions. Given the amount of data people use on their phones today, it seems likely to me that providing audio streaming of DAB-equivalent bandwidth is feasible - even in Norway.


My complaint is that the usability sucks. Early adopters had to throw out their DAB receivers and buy DAB+ receivers. You regularly have to factory reset them because they keep moving the frequencies. Bad UIs for handling 20+ stations. The stations are just niche playlists, no content. I can just stream those ad free instead...

The car adapters are even worse.


> Additionally, with FM you need to make sure no transmitters on the same frequency can cover the same area, as that would lead to interference.

In the UK the BBC transmits their FM channels in the same range across the whole country [0]. In most cases the frequency between transmitters is only slightly different (e.g. 88.5 vs 88.1), so it can be picked up without having to retune the receiver. It requires a bit more thought up front, but it's not that difficult (compared to other considerations you need to have for transmitting radio), and it's a much better UX than having to remember different frequencies for the same station in different cities.

[0] http://downloads.bbc.co.uk/reception/pdfs/FMradiotrans.pdf


I suspect (mind, suspect!) that approach wouldn't work here in Norway because of the (generally) different geography; in most of the UK, the main problem would be that you got too far away from the transmitter site for proper reception, in which case overlapping like you describe would work fine.

Over here, multipath interference is the main problem with FM - signals bouncing off any number of mountainsides before arriving at the receiver, causing a signal transmitted at time T to arrive at [T+t1], [T+t2]...[T+tn] at the receiver, with no way for the receiver to tell them apart as they are all on the same frequency. Garbled mess ensues.

Hence, rather than a few huge transmitter sites, we rely on thousands of smaller sites, the smallest of which used to operate at 200mW, IIRC.
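
The arithmetic behind the garbled mess, using a made-up but plausible 15 km of extra path for a single echo:

    c = 299_792_458              # speed of light, m/s
    extra_path = 15_000          # metres of extra bounce, hypothetical
    delay = extra_path / c
    print(delay * 1e6)           # ~50 microseconds
    # Frequency-selective fading notches repeat every 1/delay:
    print(1 / delay / 1e3)       # ~20 kHz - well inside a ~200 kHz FM channel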


The UK suffers from similar issues; we are not that flat. If you look at a map of TX sites like [0], you will see many small TV and radio relays covering valleys.

[0] http://tx.mb21.co.uk/mapsys/google/alltx.php


> The reason, however (doubly so in Norway, which is basically as inconvenient as countries come for broadcasting purposes - lots of mountains and hardly any people) is that in DAB, you interleave a lot of channels in one data stream, transmitted from a single transmitter, whereas in FM, you need a dedicated transmitter for each channel.

Would it be possible to broadcast multiple FM channels with one magic SDR transmitter?


It would be technically possible, but would probably not make much financial sense.

Each FM channel would still require as much power as if it was operated separately to give adequate coverage; so rolling [n] channels into one SDR exciter, you'd need a power amplifier with [n] times the output.

And that is only the beginning of your probl... eh, challenges. This monster amplifier would need to let go of one of FM's greatest advantages - the ability to use high-efficiency, non-linear amplifiers for the final output stage. Now that you have multiple carriers, you would need to ensure the amplifier operated very linearly (and, in practice, at much lower efficiency) to keep the various carriers from interfering with each other.

Also, you'd probably be unable to use the same antenna for all channels, unless they were very closely spaced - or were all meant to cover the exact same area - and then you would need to apply extensive, expensive filtering to separate the carriers before feeding them to different antennas. (Which would be much simpler and cheaper to do just after the SDR, since the signal there would be very low power - but then you could use separate power amplifiers and make it even easier on yourself. (At which point you might as well use separate exciters, too - and be back where we are today. :))
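
For the curious, a minimal numpy sketch of what such a single SDR exciter would compute - two stand-in stations FM-modulated at complex baseband, offset and summed. The peak-to-average power ratio of the sum is precisely why the amplifier must stay linear:

    import numpy as np

    fs = 2_000_000                      # baseband sample rate, Hz
    t = np.arange(fs // 10) / fs        # 100 ms of signal
    dev = 75_000                        # broadcast FM deviation, Hz

    def fm_baseband(audio):
        # Complex-baseband FM: audio integrated into the phase
        return np.exp(2j * np.pi * dev * np.cumsum(audio) / fs)

    # Two stand-in "stations" (test tones), parked +/-200 kHz from centre
    s1 = fm_baseband(np.sin(2 * np.pi * 440 * t)) * np.exp(2j * np.pi *  200_000 * t)
    s2 = fm_baseband(np.sin(2 * np.pi * 880 * t)) * np.exp(2j * np.pi * -200_000 * t)
    composite = (s1 + s2) / 2           # what the SDR's DAC would be fed

    # One FM carrier has 0 dB PAPR; the sum does not - hence the linear PA
    p = np.abs(composite) ** 2
    print(10 * np.log10(p.max() / p.mean()))  # ~3 dB for two carriers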


I found a link which shows on slide 6 that it's about 41x cheaper, but mostly because you can fit more stations on it.

http://www.gatesair.com/documents/slides/2015-09-Anderson-Ad...


It's a similar debate as when we North Americans decided to switch to DTTV [1]. Many people disliked that; they had to buy a box for their old TV, it was more complicated to use, etc.

Digital radio [2] has its advantages. For one, it's more efficient in its use of the spectrum. In many densely populated towns, there is little or no spectrum left for new broadcasters.

It's a matter of time, and people will adapt and forget they "hated it and didn't want to change".

[1] https://en.wikipedia.org/wiki/Digital_terrestrial_television

[2] https://en.wikipedia.org/wiki/Digital_radio


Just as a point of clarification, people didn't have to buy boxes to make their old NTSC televisions work with ATSC. The gub'mint gave them away for free through distributors like Radio Shack, and even via mail. You only had to pay for a box if you waited something like five years after the program began, when it was all over.


One difference is that most TVs are big, bulky appliances that could be cheaply retrofitted to the new technology. Sure, portable analog TVs existed but didn't really take off.

If FM is turned off, every single pre-digital clock radio, portable radio, integrated FM cellphone radio, car radio and boombox suddenly stops working and needs replacing. There's no government program to hand out vouchers to replace my boombox; I just have to junk it and buy another.

If DAB is so great, why does hardly anyone in the US bother with it? Unlike DTTV, it's broadcast alongside regular FM, so people can go out and buy a digital receiver any time they like. But they don't, because they have a perfectly good clock radio already and don't see a benefit.


> If DAB is so great, why does hardly anyone in the US bother with it?

'cause it isn't? It just generates a huge pile of electronics waste [1] with marginal if any advantages for users... legislation here is an obvious money grab in my eyes.

[1] Because that's what we need! Another pile of never-to-be-recycled waste, sky-high, in the name of efficiency (maybe some lobbyist even argued for it with the energy savings - assuming they actually exist - look how good this move would be for Mother Earth!)


People will still remember when they were able to get more OTA channels, because analog signals degrade when weak instead of becoming undecodable.


Where I used to live, we went from NBC, CBS, PBS and Fox to about 15 channels. 8 or 10 of them are channels that play old sitcoms all the time, but ABC and Create were nice additions.

(ABC rarely came in on analog and was added as a secondary channel on the NBC transmitter)


And here, we went from being able to get ABC from literally anywhere in the house without the antenna extended to having to get the position right when in a good location; we also lost plenty of other channels.


So basically, urban areas get more channels because the new tech uses less spectrum, and everywhere else gets fewer, because people there can no longer pick up the channels physically located in urban areas that the old tech let them receive.

Does the new tech make bandwidth sufficiently cheap that more stations can broadcast their content on multiple frequencies in physically separate locations (which is not uncommon for large FM stations to do today)? If so, you'd basically get the same coverage using less spectrum by broadcasting the same station from more sites using the newer tech (not that this would happen if the initial investment in the physical equipment is anything large enough to care about).


My folks (living in the Upper Peninsula of MI) went from being able to pick up all the stations from Green Bay, across the lake, to being able to pick up one random analog station out of Canada.

Everyone ended up on satellite to compensate. But it was really viewed as yet another Urban vs. Rural fight with Rural getting the short end of the stick again.


I agree. Also I heard on one of my local repeaters the other day that ham radio operators were protesting TV before TV went mainstream (1930s?). It was funny to think about that.


The TV transition benefited from the conversion to flat screen that happened at approximately the same time. Increased resolution was also a huge draw. None of that applies in the case of digital radio.


People hate change.


People hate change that screws us. Like the ATSC 3.0 changes that are coming, bringing with them the ability to track consumers.


I hate spending lots of money on DAB infrastructure right before it turns outdated and irrelevant. The future is internet. Many are now switching from FM to Spotify or internet radio on 4G and ignoring DAB.


DAB doesn't cost me per minute listened. I'm on a very good mobile plan here in Australia (10GB for $40/month) that would finally mean I could stream (128kbit, blech) music all day and not immediately hit the data cap. Many people aren't on that kind of plan though. Not to worry - there wasn't ever any mobile data net neutrality, so Spotify, Apple Music and Netflix aren't included in the cap (with a quality limit, of course).
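
A quick sanity check on that cap (128 kbit/s against 10 GB/month):

    kbps = 128
    mb_per_hour = kbps * 3600 / 8 / 1000   # ~57.6 MB/hour
    hours = 10_000 / mb_per_hour           # ~174 hours per month
    print(hours / 30)                      # ~5.8 hours/day before the cap bites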



People hate costs being foisted on them (new receivers) by the government. If DAB is that much better (and cost-effective), the government should've paid for new receivers with the FM->DAB cost savings (the US gov subsidized digital TV broadcast receivers when the transition occurred).


You know "the government" is us, the people of the country, right?


In a game of telephone sort of way. The will of the people appears to get mangled severely on the way to government leadership.

Norway's government cut off 50% of its population from public broadcasts with this change. And their electorate is okay with that? I find that hard to believe but perhaps their electorate finds that acceptable for the cost savings.


It does. Maybe we should fix that instead of constantly trying to disempower it. Conversations in the US in particular always get conducted as if government had to be bad and is by definition disconnected from the people. We have technologies - approval voting, to name one - that could improve this. It's insane that you have to hijack one of two parties to bring any change. We must have a functioning government! We must stop working around it.

How the government makes decisions aside, for this concrete case, the money the government pays with is ultimately taxpayer money. So why not buy the receiver yourself? I don't want a receiver, and I don't want to pay for yours. I don't even understand why it's not all just internet at this point.


> Conversations in the US in particular always get conducted as if government had to be bad...

Yes, that is generally the classical conservative point of view (as distinct from the current Republican party, which has been imploding for a while now).

This point of view holds that government should be small and basically its responsibilities are common defense, protecting and defending individual rights, and bringing bad actors to justice.

> We must have a functioning government! We must stop working around that.

Beyond the above definition, there can be no "functioning" government, because the more power given to government the more corrupt and self-serving it becomes. This is due to inherent imperfection in human nature and can't be avoided.


So how did that work out for you when the FCC decided to kill net neutrality?


>So how did that work out for you when the FCC decided to kill net neutrality?

Exactly as it’s supposed to? The Republicans have had eliminating Net Neutrality as part of their platform since at least 2012, and are generally hostile to consumer protection and the like. Conversely, the Democrats have supported it, to the extent that Hillary Clinton made it an explicit part of her technology platform. It’s right there on her website still.

Even more than that, when the political coalitions were being formed, the American people had every chance to influence what would go into each coalition. America could have ensured that both supported it. They didn’t, so instead we ended up with two very distinct coalitions on a range of issues, with NN as one of them. The Republicans promised to scrap it. The Democrats promised to keep and enforce it. America democratically voted to hand total power for this cycle over the Federal government to a single coalition, the Republicans. There was no significant fraud whatsoever. There were no major national emergencies causing disruption. The Republicans won and have proceeded on policy just as they said they would (as expected, since most politicians work to keep their campaign promises).

Please explain what in this doesn’t reflect “our will” as expressed by the electorate? We all know how it works, what the dates are, etc. It’s easier now to get information, organize, and get involved than at any time in history. If America had voted differently we’d have had a different result, simple as that. It’s our government; we are not ruled by anything but a democracy. If we don’t like the results it’s our responsibility to make it change.


My statement wasn't about the decision-making part, but about the other commenter's suggestion to have "the government" pay for the new receivers.


They should just operate both.


This is a decent dummy introduction to what DAB is:

https://en.wikipedia.org/wiki/Digital_audio_broadcasting

> DAB is more efficient in its use of spectrum than analogue FM radio, and thus may offer more radio services for the same given bandwidth. DAB is more robust with regard to noise and multipath fading for mobile listening,[3] since DAB reception quality first degrades rapidly when the signal strength falls below a critical threshold, whereas FM reception quality degrades slowly with the decreasing signal.


That's funny. It should say FM is more robust, for exactly the same reason!


DAB can handle receiving the signal from multiple broadcasting antennas on one frequency (or reflections of the signal from one antenna), where with FM you get interference.

So if the infrastructure is good using enough base stations you can get much better coverage. That’s how it is more robust.


I thought that too. Funny how they spin it the other way, like "hey, signals either win and get transmitted perfectly or lose and aren't receivable, so there's no icky mixing of signals!"


I love listening to analog radio on my (now old-school) iPod Nano 6th generation - just enough technology to bring something new, like live-pause so you can pause the radio.


Hell I'm building an FM radio as we speak - I'm literally just waiting for my soldering iron to heat up and checking the internet.

Of course I'm making an IoT radio, which can be controlled via my PC, mobile, or via buttons. But it still counts as old-school! ;)


Your own design or someone else's?

Schematics? :)


My own design, no schematics as of yet.

If it means anything to you, it is based around a TEA5767 FM-receiver module, an ESP8266 device for controlling it (Wemos D1 mini), a 4x20 LCD display for output, and a small opamp for driving speakers.

Most of the project is in the software; the hardware bits are essentially plugged together with no specific design - except for the opamp.
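
For anyone curious, tuning the TEA5767 over I2C is only a handful of bytes. A rough, untested MicroPython sketch of the idea - the pin choice and station frequency are placeholders, not my actual code; the address (0x60) and register layout follow the datasheet:

    from machine import I2C, Pin

    i2c = I2C(scl=Pin(5), sda=Pin(4))  # D1/D2 on a Wemos D1 mini (placeholder pins)

    def tune(mhz):
        # PLL word per the datasheet, high-side injection, 32.768 kHz crystal:
        # N = 4 * (f_RF + 225 kHz) / 32768
        pll = int(4 * (mhz * 1_000_000 + 225_000) / 32_768)
        i2c.writeto(0x60, bytes([
            (pll >> 8) & 0x3F,  # mute off, PLL high bits
            pll & 0xFF,         # PLL low byte
            0x10,               # high-side injection
            0x10,               # XTAL = 32.768 kHz
            0x00,
        ]))

    tune(99.3)  # any local station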


I don't see why this was really necessary. It strikes me as the same move the cable companies made, which was more for their benefit than the consumer's.


Digital takes up less bandwidth per channel than FM. This allows the government to resell the frequency allocations. This is what happened with analog broadcast TV in the US.


> Digital takes up less bandwidth per channel than FM. This allows the government to resell the frequency allocations. This is what happened with analog broadcast TV in the US.

I'd be all for it if it meant more of the spectrum went for unlicensed use like 2.4GHz or 5GHz. Anyone know what happened to the white space thing? Sounds like a really good thing.


The technology isn't there yet (or more to the point, the technology is burdened by unreasonable regulatory requirements). The government has been pretty insistent that white space devices protect all incumbents on existing channels, but that's very hard to do. Imagine, for example, that you've got a TV receiver between you and the TV tower. You might be too far away to hear the TV tower, but the receiver might be in range. If you decide that frequency is empty, your transmission might interfere with the receiver's reception. (And TV receivers are pretty dumb devices that aren't very resistant to interference.)

For white space to work well, you've got to kick off the incumbents and force all devices to follow certain very basic rules. (Think the rules you have to follow to drive on the road.) See: http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=5C9.... This is a political impossibility.


In the U.S., the switch freed up frequencies that were used for cellular LTE coverage.

Right now, the American TV stations are going through a second frequency shift (called a re-pack). Everyone's being shoved into a smaller chunk of the spectrum, made possible by the transition to digital.

Some stations are sharing channels. Others are being paid by the federal government to go off the air entirely. The resulting frequencies will be auctioned off to the wireless carriers for 5G data.

Re-packing has happened before, even in analog days. When I was a kid, analog TV sets had channels up to 74. Then channels 70-74 were reassigned; my guess is around the early 80's.

And if you go way way back, there used to be a channel one in the United States, but it got reassigned, too. These things happen.


> In the U.S., the switch freed up frequencies that were used for cellular LTE coverage.

My country (Bosnia & Herzegovina) still hasn't introduced LTE because we still haven't gone through the entire process of switching to digital television.

In the local media, that means we're "the only ones in Europe", but I can't find an external source to confirm that.


You don't have to wait for the digital TV switchover to free up frequencies before introducing LTE. Many European/Asian countries launched LTE on 2600 MHz before their digital switchovers were completed, and others have recycled GSM/UMTS frequencies, reducing the capacity/coverage of the legacy networks.


I am most certainly aware of that, but that's the official reason according to the government, the reason served up in the media, and, finally, the reason mobile providers were given for not being able to offer LTE service (two of the three most popular mobile providers ran tests in the most populous cities half a decade ago).


> The resulting frequencies will be auctioned off to the wireless carriers for 5G data.

The spectrum auction has come and gone: https://www.fcc.gov/about-fcc/fcc-initiatives/incentive-auct...


TV uses a _lot_ more bandwidth than radio. The analogue TV spectrum gets repurposed for 3G/4G etc., but the FM radio band doesn't provide remotely enough capacity to justify it for this reason (though there are other reasons, such as being able to have overlapping transmission towers).


There is no payment involved when getting your frequency in Norway.


Pedantry mode: there is, but it is basically irrelevant - IIRC the one-time fee is NOK 2000 / USD250 or so.


You are right, of course. I completely forgot about that fee, and apologize for the misinformation.


>"Officials say the move to digital will save money"

Could someone explain the economics of how switching from FM to DAB saves Norway money?


Without addressing the net cost, one place where it saves money is in electricity for transmitters; digital broadcasts get similar geographic coverage with a fraction of the power.


I found a link[1] that says DAB is about 41x cheaper, on slide 6 - but mostly because you can fit more stations on it.

[1] http://www.gatesair.com/documents/slides/2015-09-Anderson-Ad...


Sure, that's economies of scale. Does Norway have enough demand for radio that they would reach that?



Norway switching off FM is one of the most stupid political decisions made in the country in decades. And for God's sake, let's not let politicians make technology decisions ever again.


If that's true, you must be blessed with a fantastic group of politicians.


Why do you say that?


Is it easy to find free (as in freedom) DAB receivers? Analog radio circuits are easy to examine, but I suspect that DAB receivers must be impossible to examine thoroughly, for obvious reasons. There might be some security concerns because of that. What have the Norwegian authorities said about it?


In the US, companies are dumping their analog assets.

Check out this video for the end of 100.3 FM "The Sound" in LA:

https://www.youtube.com/watch?v=L4uj2kBdfP8

(Look at the video description for related links; also, the final moments of The Sound are at the end of the video.)


We would need FM radio for the war against Skynet



