That's absolutely not true, and some of us have directly experienced what happens when it runs out. There are implementations that will keep producing output without an ongoing and sufficient supply of fresh entropy, but there are also implementations that do not. Generally, those that do keep producing output in those cases are formally weaker.
When the supply is treated conscientiously, depletion broadly doesn't matter outside of weird race conditions during system startup. But if everybody naively consumes cryptographically secure randomness for every casual occasion, it becomes a much bigger problem.
No, it's definitely true. A conventional OS CSPRNG can be effectively modeled as a stream cipher, where the key is a hash of observed entropy. As you encrypt more bytes with a stream cipher, you don't "deplete" the key. Real CSPRNGs are more complex than this, but that complexity is there to improve things from the baseline of "hash some interrupts and key ChaCha20 with it"; the depletion argument gets weaker on real CSPRNGs, not stronger.
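To make that concrete, here's a minimal sketch of that baseline (not a real entropy collector or a real kernel design, just the shape of the argument). It assumes the third-party Python `cryptography` package, and the "observed entropy" below is a stand-in:

    # Sketch of "hash some observed entropy and key ChaCha20 with it".
    # The entropy source here is a placeholder, not a real collector.
    import hashlib
    import os

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    observed_entropy = os.urandom(64)                # stand-in for interrupt timings etc.
    key = hashlib.sha256(observed_entropy).digest()  # 32-byte key

    # Encrypting zeros with ChaCha20 yields the raw keystream, i.e. the
    # generator's output stream.
    nonce = b"\x00" * 16                             # fixed nonce, fine for this sketch
    keystream = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()

    first_mb = keystream.update(b"\x00" * (1 << 20))
    another_mb = keystream.update(b"\x00" * (1 << 20))
    # ...and so on: nothing about the key is "used up" by asking for more output.

You can pull the second megabyte, or the ten-thousandth, and the key is exactly as secret as it was at the start.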
The parent comment is correct: entropy depletion is a weird meme. Unfortunately, Linux encoded that meme into its legacy KRNG ("/dev/random"), so lots of people believe it to be real. It is not, and modern userland programs don't use "/dev/random" anyways.
> Unfortunately, Linux encoded that meme into its legacy KRNG ("/dev/random"), so lots of people believe it to be real. It is not, and modern userland programs don't use "/dev/random" anyways.
It's not the only such system, nor has that "legacy KRNG" been eradicated from real-world use.
More to the point, I can't understand how someone with as much seasoning and experience as you can bring themselves to make generalizing claims like "modern userland programs don't X".
Application software and the platforms it runs on are insanely more diverse than that, especially now that we live in a world where people just paste in verbatim docker stacks that they read about in some blogspam tutorial written 12 years ago, and countless others are building on frameworks and dependencies that are themselves broken in ways they oughtn't be (like Flutter, in the article).
While there exist contexts where cryptographically secure randomness may be treated as inexhaustible, we're decades away from being able to take that for granted across "modern userland programs", and it's much safer to have people anticipate the dangers of exhaustion than it is to have them presume it's universally inexhaustible.
"LRNG" => (idiosyncratic) shorthand for "Linux's KRNG".
The LRNG never had the problem you're alluding to. It's not as if the literature eventually found "inexhaustible" designs. They were never exhaustible. There is no such danger. You can get to this axiomatically by looking at how a CSPRNG works (again: it's effectively a stream cipher construction), or empirically by reading about it. Either way: no, you're wrong here.
As a project rescue specialist who's had to diagnose and treat exactly this issue more than once, I feel like I'm being gaslit, although I know that's not your intent.
As recently as node 16 or 18 or so, which is still in real-world use despite its age, running on the real-world docker images that these projects use, it happens. And when it does, you can rightly guess that all the client's javascript developers were utterly perplexed, specifically because they'd never heard of such a concern. But you point to the stack trace for when their call to uuid never resolves and provide all the documentation from familiar bloggers to help them wrap their heads around it, and voila -- the mysterious and catastrophic issue that's been plaguing them is suddenly remedied.
As I said, I appreciate that exhaustion doesn't need to happen in anything with an appropriate design. That's genuinely great. But even if it's just a particular and inappropriate way of using the Linux RNG in a certain window of Linux versions (and I'm not strictly convinced it's even so narrow a scenario), that's actually a pretty common, real thing that real developers will continue to encounter in the wild for longer than any of us might want, and not some baseless myth or meme.
You're not being gaslit by me, but rather by older Linux kernels. There's no such thing as exhaustion. But there's a legacy kernel interface that pretends there is. The implication of using it is that your program will randomly stall, because the kernel is (was) gaslighting you. But that's not a cryptographic security issue.
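For anyone who hasn't watched the stall happen, here's a rough sketch of what the legacy interface did on a pre-5.6 kernel. It's illustrative only; the device paths and flags are standard Linux, and on a current kernel "/dev/random" no longer behaves this way:

    # Open the legacy interface non-blocking, so the "exhausted" state shows
    # up as an error instead of a hang. On an old kernel with a drained
    # entropy estimate, a plain blocking read here is exactly the stall.
    import os

    fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
    try:
        data = os.read(fd, 64)
        print("got", len(data), "bytes")
    except BlockingIOError:
        print("kernel claims the pool is 'exhausted'; a blocking read would stall here")
    finally:
        os.close(fd)

    # os.urandom never reports exhaustion: once the pool is seeded at boot,
    # it returns as much output as you ask for.
    print(len(os.urandom(64)), "bytes, no stall")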
Honestly it's kind of refreshing to see a thread like this. This was a live issue on HN like 10 years ago, when cryptographers started yelling at Ted Ts'o about how dumb "/dev/random" was, and generalist engineers on HN argued about it. But it has since become a settled matter --- here, I mean. Among cryptography engineers: never a doubt.
There are KRNG security issues! You can come up with cases, especially with embedded software or older hypervisors, where code runs with an unseeded RNG. That's very bad. But those issues aren't relevant to ordinary userland code; if your system can't run with the assumption that "/dev/urandom" is seeded, you're fucked for other reasons, so distributions (and hypervisors) go out of their way to ensure that can't happen.
At any rate: there is no case where "exhaustion" comes into play, and there never has been, but there was an archaic kernel interface in Linux that thought there could be such a thing, so there's a whole generation of Linux developers who believe it's a big deal. It is not.
No, you are mistaken, as were the Linux engineers who embedded that assumption into /dev/random. (That has been resolved on the Linux side for some time now.) I will claim (without evidence) to be a subject matter expert on this topic.
"It was an issue in Linux for a while, but isn't in recent kernels" is not really sufficient grounds for acting like it's not an issue.
A language designer or library developer, especially, generally can't assert that their code will only run on the most robust, modern, popular systems currently available. They need to work defensively, anticipating the issues that they may plausibly encounter -- especially where those issues will not be apparent to less expert application developers.
And those less expert application developers are in an even worse position, because many barely even know what they're doing in the first place, making all kinds of foolish decisions while they paste together their docker script and relying on inexpert devops/sre people (or end users!) who make their own choices about what it all runs on.
It's encouraging to learn that there are now established, widespread, inexhaustibly secure RNGs out there. But it remains irrelevant to a discussion of what developers need to keep themselves mindful of, because there remain many implementations in the wild that don't behave like that, and most developers aren't in a position to guarantee which their code will encounter.
No, there's really no space for wriggling here. You use getrandom() or, in the worst case, /dev/urandom, and this kernel nit goes away. It was never the case that the LRNG was "exhaustible"; there was only a broken interface to it.
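Concretely, a sketch of the non-broken path, in Python for the sake of example (os.getrandom wraps the getrandom(2) syscall on Linux; the fallback is only needed on platforms or Python builds without it):

    import os

    def secure_random_bytes(n: int) -> bytes:
        """Return n cryptographically secure random bytes."""
        try:
            # getrandom() blocks only until the pool has been seeded once at
            # boot; after that it never blocks and never "runs out".
            return os.getrandom(n)
        except (AttributeError, OSError):
            # No getrandom() here: read /dev/urandom directly (os.urandom(n)
            # would also do the right thing on every mainstream platform).
            with open("/dev/urandom", "rb") as f:
                return f.read(n)

    print(secure_random_bytes(32).hex())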
Personally, I forgot that Linux only fixed /dev/random in 2020 in kernel 5.6. That's not that long ago in terms of enterprise / LTS kernels. I'm sure this has been a surprising pain point for end-users for a long time, and perhaps still is in some environments.
Right. But the context here is whether languages can make the (very big) decision to default to a CSPRNG, in which case: you make that decision once, for all your users, and when you do, you don't use "/dev/random".