
One of my favorite tech legends is that apparently Voyager 1 launched with a Viterbi encoder, even though there was no computer on Earth at the time fast enough to decode it. After a number of years Moore's Law caught up and they remotely switched over to Viterbi for more efficient transmissions.



> even though there was no computer on Earth at the time fast enough to decode it

I'm not sure what is meant by that. Not fast enough to decode in real time? There is/was no need to do that. The transmissions would have gone to tape in any case.

Here is a link describing how to decode such a tape: https://destevez.net/2021/09/decoding-voyager-1/


I called it a legend deliberately. One of the things I love about this anecdote is that it makes less sense the older and more experienced I get. It was told to me 12 years ago as a young, starry-eyed junior developer by my supervisor who had a PhD in RF research, while we were working on what we considered to be a world-changing wireless technology at a startup in San Francisco (it wasn't).

Who knows how many of the details I misinterpreted or am misremembering, or that he was. Where did he hear it originally? Maybe a grizzled old professor who worked directly on the project? Maybe a TA who made up the whole thing?

Whether true or not, it inspired me then as it does now to strive to be a better engineer, to think outside the box, to attempt hard things.

I continue sharing it hoping that one day Cunningham's Law will take effect and someone will share the correct details. But there's also a part of me that hopes that never happens.


When I read the earlier comment, seeing “tech legend” didn’t make me assume that the story would be false. Grandparent’s clarification was helpful for me.


Sounds like that old legend that the Cray 1 could execute an infinite loop in 7.6 seconds!


TFA: >I will use a recording that was done back in 30 December 2015 with the Green Bank Telescope in the context of the Breakthrough Listen project.

That recording was made in 2015 on a modern radio telescope; it is not from a tape.

The GP has the details wrong though: when the Voyager design was finalized in the early 70s with the Viterbi encoder, there wasn’t enough computational power to decode the signal. By the time it launched in ‘77, there was enough and it launched with the Viterbi encoder enabled.


I wrote test firmware for a very early spread spectrum transceiver that used something close to Viterbi encoding. It consumed 220mA in receive mode and 60-120mA in transmit mode. I was told some of that was the RF amps and ADC but a lot of it was the decoder.


I don’t know how well it holds today, but for some time it was known that running certain computations on existing hardware would take longer than waiting for new hardware and running them once it became available.


My life archive is available to my heirs and successors as long as they know the LUKS password for any of the numerous storage devices I leave behind. It’s risky though — the passwords aren’t known to them yet and one logistical slip up means the disk may be unusable. That is, however, until some point in the future when they will have a computer powerful enough to just break the key and feast upon my cat photos, tax returns, and other exciting ephemera.

Similarly my encrypted internet traffic might be private today, but if it’s being logged then it’s only a matter of time before it will be completely visible to the authorities. I probably average ~10Mbps of traffic which is ~50TB/year, or $100 of storage. You could cut that price to roughly 10% by blacklisting the Netflix traffic, and drop it to 1% by whitelisting only the email and IM traffic.
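Back-of-the-envelope on that figure (a rough sketch, assuming the ~10Mbps really is sustained around the clock):

  mbps = 10                               # assumed sustained average traffic
  tb_per_year = mbps * 1e6 / 8 * 3600 * 24 * 365 / 1e12
  print(tb_per_year)                      # ~39.4 TB, so ~50 TB/year is the right ballpark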

Either way, one day they’ll know everything.


If the contents of your computer are anything like the contents of most old people's attics, there is a good chance your descendants really don't want to go through all of it. They'll just chuck it in the trash without even opening it.

Turns out the next generation has their own life to worry about, and doesn't care much for their ancestors' stuff (unless it's money... they love money...).


Pretty much. Anecdotes about lives of elderly people are nice when you are sitting an evening with a glass of wine and plate of cheese. Otherwise who cares? Nothing is worse than trying to go through a bunch of old faded photos where no one - not even the owner - can identify who is actually in the picture.

I guess in our day and age we could write extensive metadata about where and when a picture was taken and who is in it, but I don't even care to look through my own pictures, so why would anyone else?


  > Anecdotes about lives of elderly people are nice when you are sitting an evening with a glass of wine and plate of cheese. Otherwise who cares?
I could imagine a future where DRM and copyright and just the cold fear of litigation could change the recreational screentime for families from being primarily studio-produced content to being primarily ancestor-produced content.

I remember some book I read where the child was constantly hearing about his family's history. Dune, maybe? Maybe a Philip Dick book? Asimov? I'm getting old.


My experience is completely different. I've painstakingly scanned my family's old pictures, transcribed the letters they wrote when on holiday, and still have some school crafts made by my grandparents.

After the death of one of my great uncles who died childless I picked up his photo and music collection. And I discovered the grandfather of a friend of mine in one of his holiday pictures, turns out they used to be friends and we had no idea.


I know my carefully curated collections of stuff will be worth pretty much zero to anyone I leave them to. They may be interested in the pictures, but that would be about it. As for the massive cd/dvd/bluray/games/books/coins collection I have amassed and carefully cataloged: at best it will end up at goodwill/ebay, at worst in the trash.


> Otherwise who cares?

People who can profit. And the idea of profit might be different in future.

That attic might contain a priceless vintage synthesizer, that encrypted drive might contain a priceless set of vintage unpublished club penguin screenshots that would make its AI approximation 0.03% more accurate. Etc.


But after enough time passes, lots of people do get interested in their ancestors. That's why DNA ancestry services are a big business - people get curious about where their unknown, more distant relatives came from. I guess there's just less mystery about people you actually knew.


Wow, perhaps I’m an exception to the norm, but this isn’t my experience at all. My family regularly sends interesting historical records of the lives of our ancestors. My great aunt composed a historical record of my great grandfather, who over his life built dozens of houses by his own hand. Even at university, I read a number of interesting historical letters and documents of the people who lived in the same dormitories in generations past.

I guess I may be ignoring all those documents that weren’t interesting enough to be remembered, but I imagine it’s hard to predict what will be interesting in the future. The fact that 99% of our lives are stored in computers vs paper would still vastly reduce the number of _interesting_ documents.


The key difference may be in the volume. Old pictures are more important to a family because there are so few of them. I only have one picture of my grandfather because it was taken when cameras were rarer than hen's teeth and 35mm film hadn't been invented yet. Now we have hundreds of thousands of pictures between the family members. Every vacation, every meal, every unimportant moment in time. I don't have time to look at my own pictures and I don't expect anyone else ever will.

Digital assets are a lot more perishable than physical ones. Cloud accounts will expire and be purged before anyone has the chance to retrieve them. Nobody will do "storage wars" with your pictures. Your local storage will fail or become incompatible with future tech before anyone has a chance to care about it.

We generate information at an ever increasing rate so whatever digital collections we have now will probably never be "dug up" by our descendants for a deeper look.

I'm trying to leave a "curated" collection with a few memories in such a way that it's immediately available to my family after I'm no longer around. Some moments in time that were important to my life, and had an influence on theirs.


Leave everything and let LLMs curate it on demand. "Show me pictures of my grandpa during the 2020 pandemic." "Show me pictures of my grandma's hobbies." "Find the most interesting pictures in this folder, not interesting historically but interesting action or visuals." "Make a montage of my cousins growing up." "Change my wallpaper every day to the most interesting family photo from that day of the year."


Great aunt, great grandfather — that's more than one generation back. I think it does get interesting when the distance in time increases.

Let's hope our kids and theirs keep our digital archives long enough for the great-grandkids to enjoy.


I agree with you. Right now I'm going through my grandfather's squadron records. Hoping to find the day that he crashed his motorcycle in Normandy to see what the CO thought about that. Apparently the other pilots were unhappy with him because they were banned from riding motorcycles after that.


For what it's worth, my grandfather recorded his memoirs recently and I am very grateful. He's led a very interesting life (much more so than my own!) which is the key component, really.


Memoirs are one thing, but archives of mundane daily business? No thanks.


Oh man, I’d love to have all the data of my parents, grandparents, etc… Even just having analogue records is interesting; I’d love to know what kinds of hobbies they had, like collecting digital music, art, books, etc.


I think you and the previous commenter have very different opinions on what "all" means. Connecting to parents and grandparents by knowing what art they like is one thing, but, for example, I have hundreds of photos of random bills and documents that I need to remember for later. None of my descendants would ever want to read through that unless they were investigating my life like in a movie.


The most frequent type of data in my personal archive is screenshots of tumblr posts that my oldest child liked to take when they were 12 and we all shared a photo saving account.

I do snap bills and white boards to remember but not with the eager enthusiasm of the long since grown up child.


Because I have been interested in my dead relatives, and because I suspect somewhere down the line someone will be interested in my living ones, I have been trying to capture their lives in books I have created (real books — printed at Lulu.com).


That seems nice! Thank you for the idea.


I don't think this is universally true. Some people actually make an effort to sort the stuff of their grandparents in the attic and figure out what still has some (emotional) value. But it's probably a minority of people.


It's the volume problem - I'm happy to read handwritten pages of my mom's diary from the 1960s, but the reams of laser printer output from her master's degree in the 2000s? Not so much.


The next generation will just run an LLM to mine the interesting parts out of the data.


Or tell an even more interesting story, 'cause that's what they want to hear.


What about that long forgotten BTC wallet?


Without a private key that is readily accessible? Worthless.


A wallet by definition has the private key within it. Without the key it's just an address.


I think ‘readily accessible’ was the important bit.


I don't follow. The wallet is the private key, along with some other info.


The wallet could be encrypted with a strong password and thus inaccessible.


Yes, but any unknown password is enough to put most people off, particularly if the contents of the wallet are unknown.


Only a matter of a really really _really_ long time and an absolutely unimaginably huge amount of energy.

All the energy released by converting all mass in the solar system into energy apparently gives a hard physical limit just above 2^225 elementary computations before you run out of gas, so brute forcing a 256-bit symmetric key seems entirely unfeasible even if all of humanity's resources were dedicated to the problem. The calculation is presented here https://security.stackexchange.com/questions/6141/amount-of-... . Waaay out of my field though so this calculation could be off or I could be misunderstanding somehow.
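For what it's worth, here's a rough version of that calculation (constants approximate; the exact exponent depends on the assumed temperature and how much mass you count, which is presumably why the linked answer lands a bit lower, near 2^225):

  import math
  # Landauer limit: minimum energy to flip one bit at temperature T.
  k_B = 1.380649e-23                     # Boltzmann constant, J/K
  T = 2.7                                # cosmic microwave background, K
  e_bit = k_B * T * math.log(2)          # ~2.6e-23 J per bit operation
  e_sun = 1.989e30 * (2.998e8) ** 2      # E = mc^2 for the Sun's mass, ~1.8e47 J
  print(math.log2(e_sun / e_bit))        # ~232, i.e. roughly 2^232 bit operations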


We will be able to crack today’s encryption algorithms in the future because we’ll find flaws in them. In other words, one day brute force won’t be necessary!

Have a look at this post, which illustrates this reality being true for hash functions (where similar principles as symmetric and asymmetric encryption apply). https://valerieaurora.org/hash.html

Notice Valerie specifically calls out, “Long semi-mathematical posts comparing the complexity of the attack to the number of protons in the universe”.


> We will be able to crack today’s encryption algorithms in the future because we’ll find flaws in them.

Big if.

We already knew how to design good and strong symmetric ciphers way back in the 1970s. One of the standard building blocks of modern symmetric ciphers is the Feistel network, which was used to create DES. Despite being the first widely used encryption standard, even today there's essentially no known flaw in its basic design. It was broken only because the key was artificially weakened to 56 bits. In the 1980s, cryptographers already knew 128 bits really should be the minimum security standard in spite of what the NSA officially claimed. In the 1990s, when faster computers meant more overhead was acceptable, people agreed that symmetric ciphers should have an extra 256-bit option to protect them from any possible future breakthrough.

There are only two possible ways to break them. Perhaps people will eventually find a flaw in Feistel network ciphers that enables classical attacks against all security levels, but that would require a groundbreaking mathematical breakthrough unimaginable today, so it's possible but unlikely. The other route is quantum computing: if it's possible to build a large quantum computer, all 128-bit ciphers will eventually be brute-forced by Grover's algorithm. On the other hand, 256-bit ciphers will still be immune (and people already put this defense in place long before post-quantum cryptography became a serious research topic).
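To put rough numbers on the Grover point (illustrative only, nothing cipher-specific):

  # Grover's algorithm only gives a quadratic speedup: searching an n-bit
  # keyspace takes on the order of 2**(n/2) quantum iterations instead of
  # 2**n classical trials, so a 256-bit key still leaves ~128-bit security.
  for n in (128, 256):
      print(f"{n}-bit key: classical ~2^{n} trials, Grover ~2^{n // 2} iterations")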

Thus, if you want a future archeologist from the 23rd century to decrypt your data, only use 128-bit symmetric ciphers.


Placing no time constraints, my gut tells me it’s almost inevitable those breakthroughs will eventually come. Either in mathematics or quantum computing. Or both.

Namely I’d ask when not if. My opinion is that short of the one time pad, we won’t come up with provably unbreakable schemes.


> Namely I’d ask when not if.

The big assumption of cryptography is that there exist some problems which are not provably unsolvable but are difficult enough for almost any practical purpose. To engineers, no assumption could be more reasonable than that. Given unlimited time, it's a provable fact that any (brand new) processor with an asynchronous input signal will malfunction due to metastability in digital circuits; it's also a provable fact that metastability is a fundamental flaw in all digital electronics - but computers still work because the MTBF can be made as large as necessary, longer than the lifetime of the Solar system if you really want to.

So the only problem here is, how long is the MTBF of today's building blocks of symmetric ciphers? If it's on the scale of 100 years or so, sure, everything is breakable if you're patient. If it's on the scale of 1000 years, well, breaking it is "only" a matter of time. But if it's on the scale of 10000 years, I don't believe it's relevant to the human civilization (as we know it) anymore - your standard may vary.

The problem is that computerized cryptography is a young subject; the best data we have so far is that symmetric ciphers tend to be more secure than asymmetric ones. We know that Feistel networks have an excellent safety record and remain unbroken after 50 years. We also know that we can break almost all widely used asymmetric ciphers today with large quantum computers if we can build one, but we can't do the same to symmetric ones - even the ancient DES would be unbreakable if it were redesigned to use 256-bit keys. So while nobody knows for sure, most rational agents will assign higher and higher confidence every year - until a breakthrough occurs.

> My opinion is that short of the one time pad, we won’t come up with provably unbreakable schemes.

Many mathematicians and some physicists may prefer a higher standard of security than "lowly" practical engineers. This is the main motivation behind quantum cryptography - rather than placing security on empirical observations, its slogan is that the security rests on the laws of physics. Many have pointed out that this slogan is misleading: any practical form of quantum cryptography must exist in the engineering sense, and there will certainly be some forms of security flaws such as sensor imperfections or at least side channels... That being said, I certainly understand why it looks so attractive to many people if you're the kind of person who really worries about provability.


> your standard may vary.

Agreed. My overall / initial point was that I can't say that the time to crack equals the time to brute force and then start talking about the age of the universe. Even if we had to wait 10,000 years for cryptanalysis to break AES, that's a blink of an eye on a sidereal timescale.

> quantum cryptography

Quantum cryptography is helpful in detecting eavesdropping (due to the no-cloning theorem). I.e., quantum cryptography is helpful in avoiding asymmetric encryption for key exchange when communicating with symmetric encryption. E.g. BB84. And given that asymmetric encryption is most vulnerable to quantum attacks (compared to symmetric), quantum cryptography improves today’s state of the art. BUT, I insist that none of this gives provably uncrackable schemes.

AND it might be that we never build such a scheme. After all, we have Gödel’s incompleteness theorem lying around.


> illustrates this reality being true for hash functions (where similar principles as symmetric and asymmetric encryption apply)

I think you're confusing things a bit. Hash functions are part of symmetric-key cryptography, while asymmetric cryptography is public-key cryptography, which is very different from hash functions.


No. Hash functions can be used outside of symmetric encryption. Which is the wording I used.

In any case, the overall point remains. Short of the one time pad you can’t build a provably flawless scheme.


They can be used outside symmetric encryption, e.g. in signature schemes, but the hashing primitives are part of symmetric cryptography.


Again, I didn’t say symmetric cryptography :)


You talked about symmetric encryption.


If, for example, someone has a computer capable of computing with a large number of qubits, a lot of cryptography has less substantial break requirements.


Good idea - If you really do want to encrypt some data with hopes that it's recoverable by future archeologists, just use 128-bit symmetric ciphers (and remember not to use 256-bit ones). Hopefully Grover's algorithm can eventually brute-force it once large quantum computers are invented.


Definitely a threat at the nation-state actor level; the NSA is already providing guidance on how to use encryption today that is resistant to future advances in computation that would allow data recorded today to be decrypted later.

https://www.nsa.gov/Press-Room/Press-Releases-Statements/Pre...


Why not use a dead man’s switch that reveals the password if you don’t respond within a year?


Re: "50TB/year, or $100 of storage": how is that much storage obtained for that price?


Using what kind of media?


If hardware next year is X times better (e.g. even 1.01 or 1% better) than this year, and you have a computation that takes T time today, then next year, it'll take T/X time. So waiting will take 1 + T/X years if time unit is years. So the condition you want is 1+T/X < T. This equation has solutions for given X where X is an improvement, so as long as there is any improvement, it's always true waiting to start large enough computations will be faster.
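Here's that condition as a quick sketch (same simplifying assumption of a constant yearly speedup X):

  # Waiting pays off when 1 + T/X < T, i.e. when T > X / (X - 1),
  # with T the runtime in years on today's hardware and X next year's speedup.
  def worth_waiting_a_year(T, X):
      return 1 + T / X < T

  X = 1.01                                # hardware only 1% faster next year
  print(X / (X - 1))                      # threshold: ~101 years
  print(worth_waiting_a_year(150, X))     # True: starting next year finishes sooner
  print(worth_waiting_a_year(50, X))      # False: just start now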

Though even faster will be doing part of the computation now and then switching to new hardware later, so it's a false dichotomy.


> This equation has solutions for given X where X is an improvement, so as long as there is any improvement, it's always true waiting to start large enough computations will be faster.

Though this does assume that X is a constant, or at least bounded below by a constant. If hardware performance improved up to an asymptote, then there would still be nonzero improvement, but it might not be enough for waiting to ever be worth it.


As "If hardware next year is X times better (e.g. even 1.01 or 1% better) than this year" highlights X is a constant as a simplifying assumption, I'd have expected you to say "As this assumes that X is a constant" not "Though this does assume that X is a constant". So, I'm not sure what your disagreement is.

If your disagreement is that a constant is not appropriate here, consider the interpretation in this comparison of running a program on a slower computer A and then a faster computer B. There would be a constant difference in performance between these two computers, assuming they are in working order. So, taking the model with a single constant is appropriate for this example.

If you are saying the performance improvement is bounded below by a constant, I would ask you, what is the domain of this function? Time? So we would be talking about continuously moving a computation between different computers? The only line here is a best fit line, emergent data, so I don't understand how this could be a preferred way to talk about the situation (the alternate to an assumption), because this is suggesting the emergent structure with nice continuity features is a preferred fundamental understanding of the situation, but it's not.

Then, where you are talking about hardware performance improving up to a (assuming horizontal) asymptote. I guess this means "If hardware performance increase becomes marginal[1], there is a nonzero improvement." Or in other words, "If hardware performance increase is marginal, there is a marginal [performance] increase". Performance and improvement are both rates of change, so this is tautological.

Finally, you state that waiting for such a marginal near-zero performance increase isn't worth it. I think most people would agree this is obvious if said in simpler terms. However, this is still not disagreeing with me, because I never suggested waiting was worth it.

So, what's the disagreement?

[1] which is well-established not to be the case, so I don't think this is a relevant case to the interesting factoid about waiting to start computation


My apologies, I misinterpreted you as talking about waiting for many years (given a sufficiently large task) for performance to continue increasing, since this thread started talking about Moore's Law. I have no disagreement in the single-year case you were actually talking about.


> Though even faster will be doing part of the computation now and then switching to new hardware later

Not necessarily. This still costs:

• Programmer/development time to implement save/restore/transfer

• Time on new hardware, bottlenecked by old hardware, restoring a partial computation from old disks or networks

You're not going to waste time restoring partial calculations for anything from an Amiga cluster for time-saving purposes. Additionally, this scheme ties up hardware that then can't be used for "cost effective to finish on current hardware" calculations.


I wonder if this same law applies to the distance probes like Voyager travel from Earth. We sent out Voyager 46 years ago, but in 100 years we might send out another probe that will very quickly catch up to Voyager and outpace it.


Very long distance spaceflight like this is still basically only powered by gravity slingshot maneuvers, where we steal an infinitesimal amount of momentum from planets to give a spacecraft some velocity. Voyager was launched during pretty favorable gravitational-assist conditions, so unless we dramatically improve the delta-v and Isp of the spacecraft or get a better planetary configuration, probably not.



Thanks for this. I’ve been thinking about “lightspeed leapfrog” for many years but never searched it up!


But maybe we can only build this new probe that will outpace it with information gathered from the original probe.


I was working with an optimisation problem based around cplex a few years ago that took about 5 minutes to complete. At the time I worked out that if we'd started the optimisation on a machine 10 years prior, it would have been quicker to just wait until the present day (of this story) and then use the code we were writing, because improvements in the algorithm and in the hardware added up to a million-fold improvement in performance! If I remember the timelines correctly, I think the original version would still be running even today.
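Rough sanity check, taking the million-fold figure at face value:

  minutes_now, speedup = 5, 1_000_000
  years_then = minutes_now * speedup / (60 * 24 * 365)
  print(years_then)                       # ~9.5 years, so it would only just be finishing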


I don't know about that, but there was also the idea that optimising the code would take longer than waiting for hardware to catch up – this was known as "the free lunch".


That sounds like the space pioneers that set out for Alpha Centauri on a multi-generational voyage only to be surpassed by faster spacecraft half way there.


The notion that encoding/transmitting could be simpler than decoding/receiving is interesting. It reminds me of the way optical drives for many years could write at, say, 48x but read at 8x, such that the majority of time spent was the verification step (if enabled) rather than the burn step. Just speculating, I assume it's because of things like error correction, filtering out noise/degradation. Producing the extra bits that facilitate error correction is one trivial calculation, while actually performing error correction on damaged media is potentially many complex calculations. Yeah?


CD drive speeds were written like 48/8/8, which stands for 48x for reading, 8x for writing CD-Rs, and 8x for re-writing CD-RWs.


I'd always assumed that was due to differences in power levels needed for reading versus writing, and because writing onto disc is more error prone at higher speeds. Not necessarily anything to do with a difference in the algorithm for encoding versus decoding the bits on the disc itself.


As best as I understand it, we can start with thinking about it in terms of a music vinyl disc. For the sake of ease, let’s say that a vinyl is 60 rpm, or one revolution every second to “read” the song. (It’s actually about half that.) This is somewhat similar to how a “music cd” works and is why you can only get around 70-80 minutes of music on a CD that can hold hours of that same music in a compressed data format. The audio is uncompressed, therefore much like a vinyl. This establishes our 1x speed, in this case using one revolution per second.

Now to the speed differences. To read, the laser needs only to see a reflection (or not) at a specific point, while to write, the laser needs time to heat up that same point. It’s like the difference between seeing a laser reflect off a balloon, versus the time required for that same laser to pop it. This heating is how CDs are written, quite literally by heating up points on the disc until they are no longer reflective. That’s why it is called “burning”. While more power might speed up the process, there is still time required. Meanwhile, all that is needed to read faster is an increase in the speed to observe, or the frequency to “read”, the light reflection.

With more powerful lasers operating at a faster frequency and with more precision, we can have a laser “see” these differences at 48 times the normal speed, but can only burn at 8 times the normal speed before the reliability of the process suffers.

Bonus: for a rewritable disc, it works slightly different. Instead of destructively burning the CD, you can think of it as being a material that becomes non-reflective at one temperature, and reflective again at another. This allows data to be “erased”. Also, when you “close” a disc to prevent rewriting, you aren’t actually preventing it from being rewritten. It is more like using a sharpie to put a name on the disc, with the words “do not overwrite” that all drive software/firmware respects.


It's more to do with the speed of writing. While the last generations of CD writers reached '48x' speeds, the quality of the written media is lower at such high speeds. I remember a c't magazine test years ago where they stated that everything written above 8x would sooner develop reading errors. Maybe it's better now, but I wouldn't count on it since investment in optical drives has been practically zero these last years.


Indeed - a write must be done as one continuous action, whereas a read can be redone if error correction fails for some reason.


Yes, but WHY can it only write at 8x?


As explained in more detail in the nearby comments, it takes more time to heat up a spot on the disc than to see a reflection from said spot.


I missed that, thank you


The Voyager had an experimental Reed-Solomon encoder. Encoding ‘just’ is a lookup table from an n-bit value to an m-bit one with m > n. Such a table takes 2^n × m bits.

Decoding also can be table-driven, but then takes 2^m × n bits, and that’s larger.

For example, encoding each byte in 16 bits (picking an example that leads to simple math), the encoding table would be 256 × 16 bits = 512 bytes and the decoding one 65,536 × 8 bits = 64kB.

Problem for Voyager was that 2^n × m already was large for the time.
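Spelling out the example's table sizes (illustrative numbers only, not the actual Voyager code parameters):

  n, m = 8, 16                            # 8-bit values encoded as 16-bit codewords
  encode_table = (2 ** n) * m // 8        # 256 entries x 16 bits = 512 bytes
  decode_table = (2 ** m) * n // 8        # 65,536 entries x 8 bits = 65,536 bytes (64 kB)
  print(encode_table, decode_table)       # 512 65536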


Others have noted you got the CD-R speeds wrong, but sometimes sending is indeed easier than receiving. I used to work on radio signal processing for phones, and we'd spend far more of both DSP cycles and engineering effort on the receive side. Transmission is basically just implementing a standardized algorithm, but on the receive side you can do all kinds of clever things to extract signal from the noise and distortions.

Video codecs like h264 or VP9 are the opposite: Decoding is just following an algorithm, but an encoder can save bits by spending more effort searching for patterns in the data.


> Video codecs like h264 or VP9 are the opposite: Decoding is just following an algorithm, but an encoder can save bits by spending more effort searching for patterns in the data.

This is a more general point about the duality of compact encoding (compressing data to the lowest number of bits e.g. for storage) and redundant encoding (expanding data to allow error detection when transmitted across a noisy medium.)


You have this backwards. In your example it would have been 48x read and 8x write.


Yeah. Sorry to tell you this, but the speculation / analysis is on incorrect premises.

It was never faster to write than it was to read.


Interesting. That is not how I remember optical speeds.


It is wrong


At some point there were burners with write speeds like 48x, and MAX reads at 48x, so the writes were in practice faster than reads (but only marginally).


This is the era I'm referring to, and I recall the difference being a bit beyond marginal. Literally the verification (i.e. read) phase of the burning sequence would take several times longer... in practice, not in terms of advertised maximums. Maybe it would read data discs at 48x but it would refuse to read audio discs beyond 8x or something like that. Same goes for ripping software like Exact Audio Copy (EAC); it could not read at high speed. And I don't think Riplock had anything to do with it, as that's a DVD thing whereas my experience dates back to CDs.

Strange hill to die on, I'm aware.


You and the GP are misremembering (also the abundant misinformation sticking around the web is of no help). CD-Rs are mostly obsolete but some of us still have working equipment and continue to burn CD-Rs, so that era hasn't completely ended.

No idea exactly what you're referring to that took several times longer; perhaps the software was misconfigured. What is more likely: the market was flooded with terrible-quality media, and combined with write speeds touted more for marketing than any concern for integrity, it was easy to burn discs just at the edge of readability, with marginal signal and numerous errors. This would cause effective read speed to be terrible, but that was more an indication that the discs were poor quality and/or poorly written than any inherent limitation in the process or how drives worked.

There are 48X "max" CD burners. But that maximum is no different than the maximum for reading. It's MAX because that speed is only attainable at the extreme outside of the disc. These higher-speed drives operate with constant angular velocity (essentially a fixed RPM). Attaining 52X at the inside of the disc would require a speed of around 30k RPM, and no CD drive gets anywhere near that (though this was a common misconception). The top RPM for half-height drives is around 10k - or about 50x the linear velocity of a CD at the outside.

Currently I usually use a Lite-On iHAS124 DVD/CD burner made in the last 6 years. It will write at up to 48X, and this speed is the maximum. The average burn speed for an entire disc when using "48x" is about 25x, or just about 3 minutes for the disc. For supported media it runs at a constant angular velocity around 10k RPM.

Exact Audio Copy / Red Book CD audio ripping is an entirely different subject. It can take longer due to cache busting and other issues that have nothing to do with the physical capabilities of the drive and more to do with the difficulty of directly streaming Red Book Audio, and issues with specific drives and their firmware. You can read at top speed though with a properly configured setup, I do it all the time.


> Red Book CD audio ripping is an entirely different subject

> difficulty of directly streaming Red Book Audio

Actually, it's what I was alluding to this whole time. Sorry for not saying so out of the gate. Red Book audio was my life for a while. I recall writing cue sheets [0] for CDRWIN by hand! Ripping groups would brag that a given release was created with EAC at no more than 2.4x or something like that...

I believe data CDs (whichever color book that was) had more robust error correction (given that computer files can't just have glitches interpolated like audio can to some extent) which is why if you completely filled a CD with Red Book audio (74/80 minutes), ripped it to an uncompressed format like WAV/AIFF, and tried to put all of it on a data format CD as files, it wouldn't fit; it was a decent amount larger than 640/700MB and not just due to metadata.

[0] https://en.m.wikipedia.org/wiki/Cue_sheet_(computing)


This is about error correction. The probes add a redundant convolutional code to their signal. Decoding this is easy as long as the error rate is low: a computer program can simply guess which bits have flipped. The problem becomes harder with a higher error rate, and a Viterbi decoder is computationally expensive, but can correct higher error rates than other constructions.

Since the signal strength degrades with distance to Earth, error correction naturally becomes much more of an issue later in the mission. I guess that the probes may have switched between different levels of redundancy through the mission, as the transmission error rate rose. But there was never a point where the convolutional code wasn't useful; it just became slightly more useful with a better decoder.
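For anyone curious what "guess which bits have flipped" looks like concretely, here is a toy sketch: a rate-1/2, constraint-length-3 convolutional encoder plus a hard-decision Viterbi decoder. This is only an illustration with made-up parameters, not the actual Voyager code (which, as I understand it, uses a longer constraint length, and real decoders use soft decisions):

  # Toy rate-1/2, constraint-length-3 convolutional code (generators 7 and 5,
  # octal) with a hard-decision Viterbi decoder. Illustration only.
  K = 3                       # constraint length -> 2 memory bits, 4 states
  G = (0b111, 0b101)          # generator polynomials

  def encode(bits):
      state, out = 0, []
      for b in bits + [0] * (K - 1):          # tail bits flush encoder to state 0
          reg = (b << (K - 1)) | state        # current bit plus memory
          out += [bin(reg & g).count("1") & 1 for g in G]
          state = reg >> 1
      return out

  def decode(received):
      n_states = 1 << (K - 1)
      INF = float("inf")
      metrics = [0] + [INF] * (n_states - 1)  # start in the all-zero state
      paths = [[] for _ in range(n_states)]
      for i in range(0, len(received), 2):
          r = received[i:i + 2]
          new_metrics = [INF] * n_states
          new_paths = [None] * n_states
          for s in range(n_states):
              if metrics[s] == INF:
                  continue
              for b in (0, 1):                # hypothesise each possible input bit
                  reg = (b << (K - 1)) | s
                  expected = [bin(reg & g).count("1") & 1 for g in G]
                  cost = metrics[s] + sum(x != y for x, y in zip(r, expected))
                  nxt = reg >> 1
                  if cost < new_metrics[nxt]: # keep only the best path into each state
                      new_metrics[nxt] = cost
                      new_paths[nxt] = paths[s] + [b]
          metrics, paths = new_metrics, new_paths
      return paths[0][:-(K - 1)]              # tail bits force the path back to state 0

  msg = [1, 0, 1, 1, 0, 0, 1, 0]
  coded = encode(msg)
  coded[5] ^= 1                               # flip one bit "in transit"
  assert decode(coded) == msg                 # the single error is corrected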


> a Viterbi decoder is computationally expensive, but can correct higher error rates than other constructions.

Higher than others at the time, or higher than turbo codes or low-density parity checks?


What I read is that it is the best theoretically possible error correction mechanism, given a convolutional code as input, and thus also the highest cost mechanism that one would consider.

This doesn't mean that it is universally the best way of doing error correction, other ways of generating redundancy may provide a better set of tradeoffs.

Also, a convolutional code is a system that can be configured in many ways, and the complexity of the code generation feeds back into the decoding: a simple convolutional code would have been Viterbi-decodable at the time, but a more complex system would provide better error correction overall, even though choosing such a system meant that Viterbi decoding would be computationally infeasible.


https://voyager.gsfc.nasa.gov/Library/DeepCommo_Chapter3--14... and https://core.ac.uk/download/pdf/42893533.pdf have some details. (https://ieeexplore.ieee.org/abstract/document/57695 likely does, too, but is paywalled)

What I don’t understand (possibly because I didn’t read them fully) is why they didn’t use the better one from the start and tape its data. Maybe they didn’t trust the Voyager hardware to work yet (one of those PDFs says this was an experimental system)? Or didn’t Voyager produce enough data to use its full bandwidth (further away, its signal got weaker, so it needed better error correction and/or better receivers on Earth) when it was still relatively close to Earth?


That which is imperfect must be sterilized! You MUST sterilize in case of error. Error is inconsistent with my prime function. Sterilization is correction! Everything that is in error MUST be sterilized. There ARE no exceptions. Your data is faulty! I am perfect!

https://www.youtube.com/watch?v=Mw3zzMWOIvk


Note, this is false. Details: https://news.ycombinator.com/item?id=38655026


Doesn't seem likely. All data received from the craft is recorded, so it doesn't need to be decoded in real time, and if the spacecraft has the hardware to encode it at some rate then it's quite likely that we would have hardware here on earth that could decode it at that same rate.



