I love <random>. Really. Sure, some other languages make the simple cases a little simpler, but no other system offers up the flexibility and control that <random> does.
I think one of the main criticisms of <random> is that the distributions are implementation-defined. That means you can get different outputs depending on your compiler's standard library, even when the seed and the engine itself are the same.
Let's take PCG's author's comments one by one:
https://www.pcg-random.org/posts/cpp-seeding-surprises.html
Re: Predictability. I don't care. Truly. If unpredictability is important, the application must use a cryptographic random number generator; those are the only known family that is hard to predict. Touting unpredictability from a non-cryptographic PRNG has always been one of PCG's demerits in the xoshiro-versus-pcg debacle.
Re: seed_seq's problems. The biases she makes out to be catastrophic are really quite small. In actual scientific Monte Carlo simulations, they don't affect things at all. I want to see the difference between 10^6 floats drawn from a Gaussian distribution, one run with seed_seq seeded optimally and one with it seeded sub-optimally. Because the "motivating scenario" isn't one. You shouldn't ever be using a PRNG if all you need is one random number; just grab it from a strongly-random source like /dev/random.
The argument about seed_seq failing to be a bijection is completely irrelevant. You just need it to uniquely select one internal state of MT from one initial value. So long as each sequence of the first 624 values drawn from it is unique, you've done that.
The demonstrated bias in MT's initial conditions doesn't matter one whit.