I think the OpenSSL situation you're talking about arose from a mistake by a maintainer.
MD_Update(&m,buf,j);
Kurt Roeckx found this line twice in OpenSSL. Valgrind moaned about the code, and Kurt proposed removing it. Nobody objected, so Kurt removed both lines in Debian's package.
One of these occurrences is, as you described, mixing uninitialized (in practice likely zero) bytes into a pool of other data; removing it does indeed silence the Valgrind error and fixes the problem. The other, however, is how real random numbers get fed into OpenSSL's "entropy pool": removing it left no entropy, and the result was the "Debian keys", predictable keys "randomly" generated by affected OpenSSL builds.
I haven't seen OpenSSL people claim that the first, erroneous, call was somehow supposed to make OpenSSL produce random bits on some hypothetical platform where the contents of uninitialised memory doesn't start as zero; it looks more like ordinary C-programmer laziness to me.
The odd thing about that incident is that the "PURIFY" define long predated it; the correct fix in Debian would have been to just compile with -DPURIFY. I believe Red Hat was already doing so at the time.
> I haven't seen OpenSSL people claim that the first, erroneous, call was somehow supposed to make OpenSSL produce random bits on some hypothetical platform where the contents of uninitialised memory doesn't start as zero
I had an OpenSSL dev explain to me in person, when I complained about the default behavior, that there had been platforms that depended on that behavior, that they weren't sure which ones did, and so it didn't seem safe to eliminate it. (I'd complained because users running OpenSSL built without -DPURIFY couldn't use Valgrind as part of troubleshooting.) IIRC the use of uninitialized memory was intentional and remarked on in comments in the code.
- The "uninitialized" data might actually be some kind of interference, i.e. it could contain something other than predictable zeros.
- In LLVM, using an "undef" value will not always yield the same result each time; however, the "freeze" instruction can be used to avoid that problem. (I don't know whether this feature of LLVM can be accessed from C code, or how similar things work in GCC.)
- If the code seems unusual, you should write comments explaining why it is written the way it is. (Then you also know what to consider if you want to remove it.)
- Whether or not uninitialized data is mixed in, you still need to gather proper entropy from genuinely random sources.