It makes no sense. The only difference between /dev/random and /dev/urandom is that /dev/random runs an entropy estimator over the seed (which doesn't make that much sense to begin with) and blocks when it "estimates" that there wasn't enough entropy in the seed; apart from that, it gives you exactly the same numbers urandom does.
Maybe I'm not following your response. What I'm saying is that there are reasons to use /dev/random, at least right after boot (to block until there is enough seed entropy). Saurik gives a real-world example of the "cold boot" problem, as other articles have done in the past. Are you saying we should be able to simply stop there and then just use /dev/urandom? If so, then why is the CSPRNG re-seeded every 60 seconds? It would seem that there are reasons for doing so.
> Are you saying we should be able to simply stop there and then just use /dev/urandom?
Yes.
> If so, then why is the CSPRNG re-seeded every 60 seconds? It would seem that there are reasons for doing so.
From what I understand (IANACryptographer), it's mostly to safeguard against situations where an attacker manages to learn the internal state of the CSPRNG at some moment: re-seeding limits how long that knowledge remains useful, because outputs produced after the next re-seed can no longer be predicted from the captured state. But I don't think this is too relevant for the problem at hand.
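To make that concrete, here is a minimal, hypothetical sketch of the idea (my own simplification, not how the Linux kernel actually implements its CRNG): a generator that mixes fresh entropy into its state on a timer, so a snapshot of `state` stops being useful after the next re-seed. The names `ReseedingGenerator` and `RESEED_INTERVAL`, and the SHA-256 construction, are assumptions for illustration only.

    # Hypothetical sketch (NOT the Linux implementation): a hash-based generator
    # that mixes fresh entropy into its state every RESEED_INTERVAL seconds.
    # An attacker who captures `state` at some moment cannot predict outputs
    # produced after the next re-seed.
    import hashlib
    import os
    import time

    RESEED_INTERVAL = 60  # seconds, mirroring the "re-seeded every 60 seconds" above

    class ReseedingGenerator:
        def __init__(self):
            self.state = os.urandom(32)         # initial seed, assumed to be good
            self.last_reseed = time.monotonic()

        def _maybe_reseed(self):
            if time.monotonic() - self.last_reseed >= RESEED_INTERVAL:
                fresh = os.urandom(32)          # stand-in for hardware/event entropy
                self.state = hashlib.sha256(self.state + fresh).digest()
                self.last_reseed = time.monotonic()

        def read(self, n):
            self._maybe_reseed()
            out = b""
            while len(out) < n:
                # Derive an output block from the state...
                out += hashlib.sha256(b"out" + self.state).digest()
                # ...then ratchet the state forward so earlier outputs cannot be
                # reconstructed from a later state capture.
                self.state = hashlib.sha256(b"next" + self.state).digest()
            return out[:n]

    gen = ReseedingGenerator()
    print(gen.read(16).hex())

The point of the periodic mix-in is exactly the state-compromise scenario above; it has nothing to do with the quality of the numbers between re-seeds.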
The only difference in Linux between /dev/random and /dev/urandom is that /dev/random always plays a guessing game about the seed entropy. The main argument here is, I guess, that this is a silly game. How can you really estimate entropy? It depends not only on what the physical sources in the machine give you, but also on what the attacker knows, or can learn, about those sources and the state of the machine. How do you estimate that?
This was the main motivation behind the Fortuna CSPRNG by Schneier, Ferguson and Kelsey. There is no guessing going on; rather, the algorithm is designed so that it never needs to guess: incoming events are spread across many pools, and the pools are drained for re-seeding at exponentially spaced intervals, so some pool eventually accumulates enough entropy unknown to the attacker no matter how poor or compromised the sources are. Interestingly, the way Fortuna works seems to me like it would also somewhat mitigate the kind of attack DJB talks about in the article referenced by OP[1].
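Here is a minimal sketch of that accumulator idea, assuming a simplified version of the scheme Ferguson and Schneier describe (the real design also specifies a block-cipher generator, a minimum pool size, and timing rules; the class and method names below are mine):

    # Simplified Fortuna-style accumulator: events go round-robin into 32 pools,
    # and the k-th re-seed drains pool i only when 2**i divides k. Pool i is
    # therefore used only every 2**i-th re-seed, so no entropy estimate is needed:
    # eventually some pool has gathered enough unknown entropy between drains.
    import hashlib

    NUM_POOLS = 32

    class FortunaAccumulator:
        def __init__(self):
            self.pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
            self.next_pool = 0
            self.reseed_count = 0
            self.key = b"\x00" * 32                   # generator key, replaced on re-seed

        def add_event(self, source_id, data):
            # source_id and len(data) must each fit in one byte, as in Fortuna.
            self.pools[self.next_pool].update(bytes([source_id, len(data)]) + data)
            self.next_pool = (self.next_pool + 1) % NUM_POOLS

        def reseed(self):
            self.reseed_count += 1
            material = b""
            for i in range(NUM_POOLS):
                if self.reseed_count % (2 ** i) == 0:
                    material += self.pools[i].digest()
                    self.pools[i] = hashlib.sha256()  # reset the drained pool
                else:
                    break                             # higher pools keep accumulating
            self.key = hashlib.sha256(self.key + material).digest()

Because the higher pools are drained so rarely, an attacker who knows or controls most of the event sources still gets locked out again once a rarely-drained pool has collected enough entropy they can't see, which is why the design doesn't have to estimate anything.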