VeraCrypt 1.24 (veracrypt.fr)
200 points by h0ek on Oct 7, 2019 | 75 comments



> Use Hardware RNG based on CPU timing jitter "Jitterentropy" by Stephan Mueller as a good alternative to CPU RDRAND (http://www.chronox.de/jent.html)

I have never heard of Jitterentropy but it sounds vaguely like previous attempts at RNGs.

/r/crypto had a good discussion about this topic a while ago: https://www.reddit.com/r/crypto/comments/9dln0v/whats_the_pr...

My intuition is that this is dangerous and will likely result in security bugs in the future.


From https://github.com/smuellerDD/jitterentropy-rngd

> Using the Jitter RNG core, the rngd provides an entropy source that feeds into the Linux /dev/random device if its entropy runs low. It updates the /dev/random entropy estimator such that the newly provided entropy unblocks /dev/random.

This is a red flag. Entropy doesn't run low. https://www.2uo.de/myths-about-urandom

I'm calling it now: Don't use VeraCrypt.

They made a very questionable decision based on the sort of ignorance that leads people to use /dev/random and haveged rather than RtlGenRandom (Windows), getrandom(2) (new Linux), or /dev/urandom (old Linux).

Cryptography engineering requires care and this sort of ignorance tends to undermine secure implementations.


Entropy running low is a frequent and actual problem for me on virtualized or headless servers serving https content. It slows everything, including ssh, to a halt.

You can see your current entropy level, which is probably high due to having a keyboard, with this command:

cat /proc/sys/kernel/random/entropy_avail

You can watch this number drain by reading from /dev/random continuously.

Haveged (http://www.issihosts.com/haveged/) has existed since 2003 and has lots of documentation and discussion of its randomness.


> Entropy running low is a frequent and actual problem for me on virtualized or headless servers serving https content. It slows everything, including ssh, to a halt.

Entropy running low is not an actual problem. AES-CTR doesn't "run out of key".

If your OS's "entropy estimator" is producing small numbers and your userspace applications are using /dev/random, yes, that will degrade your performance. That's the actual problem.

The solution is for the developers of your software to stop using /dev/random, wholesale.

Saying that the actual problem is "entropy running low" is like saying "water is flammable". That might be true in extreme cases, but isn't in the general case.


You're right, except for an uninitialized entropy pool.

Once the generator has been seeded (once), sure, read from urandom and don't worry.

The main issue is trying to read from it at boot time.


See https://news.ycombinator.com/item?id=21187044

> If you're on an ancient Linux kernel, you can poll /dev/random until it's available if you're uncertain whether or not /dev/urandom has ever been seeded. Once /dev/random is available, don't use /dev/random, use /dev/urandom. This side-steps the "/dev/urandom never blocks" concern that people love to cite in their fearmongering. This is essentially what getrandom(2) does.

This is outlined here as well: https://paragonie.com/blog/2016/05/how-generate-secure-rando...

This is what randombytes_buf() does in libsodium on older Linux kernels.

This is actually the best of both worlds: Although /dev/urandom on Linux will happily give you predictable values if you try to read from it before the RNG has been seeded on first boot, once /dev/random is "ready" to be read from once, you know that the entropy pool powering /dev/urandom has been seeded. And then you can guarantee that /dev/urandom is secure and nonblocking henceforth.

If you can't just use getrandom(2), do the "poll /dev/random, then read /dev/urandom" dance and even the fearmonger's favorite issue to cite becomes a non-issue.
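A minimal C sketch of that dance, assuming an old Linux kernel without getrandom(2) (error handling trimmed; the reply below points out caveats around the kernel's wake-up threshold):

  #include <errno.h>
  #include <fcntl.h>
  #include <poll.h>
  #include <stddef.h>
  #include <unistd.h>

  /* Block until /dev/random reports readable (i.e. the kernel pool has been
     seeded at least once), then read the actual bytes from /dev/urandom,
     which never blocks after that point. */
  int get_random_bytes(void *buf, size_t len)
  {
      int rfd = open("/dev/random", O_RDONLY);
      if (rfd < 0)
          return -1;
      struct pollfd pfd = { .fd = rfd, .events = POLLIN };
      while (poll(&pfd, 1, -1) < 0) {
          if (errno != EINTR) { close(rfd); return -1; }
      }
      close(rfd);

      int ufd = open("/dev/urandom", O_RDONLY);
      if (ufd < 0)
          return -1;
      unsigned char *p = buf;
      while (len > 0) {
          ssize_t n = read(ufd, p, len);
          if (n <= 0) { close(ufd); return -1; }
          p += n;
          len -= (size_t)n;
      }
      close(ufd);
      return 0;
  }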


I don't think using poll on /dev/random is a good idea, here's why:

1. /proc/sys/kernel/random/read_wakeup_threshold is 64 by default [1,2], but the kernel only considers the pool initialized with >128 bits [3]. So in an early userspace you're likely to be reading from an uninitialized /dev/urandom after waking up from poll if you're not also checking the entropy count to be >128.

2. /dev/random could be a symlink to /dev/urandom, so the call to poll would return as soon as possible.

3. The system could only be providing /dev/urandom.

So if you're going to have to check the entropy count anyway, a less convoluted approach would be to repeatedly retrieve it via the RNDGETENTCNT ioctl [1] on a /dev/urandom file descriptor, sleeping until it has reached >128 bits.

Don't take my word for the feasibility of this fallback method as I'm not a cryptographer or implementer of cryptographic interfaces. Instead, consider BoringSSL (Adam Langley et al.) that does almost the same in its /dev/urandom fallback. [4]

[1]: http://man7.org/linux/man-pages/man4/random.4.html

[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

[3]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

[4]: https://boringssl.googlesource.com/boringssl/+/refs/heads/ma...
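A rough C sketch of that fallback (illustrative only; it assumes the RNDGETENTCNT ioctl from <linux/random.h>, and BoringSSL's actual fallback [4] is the reference to study):

  #include <fcntl.h>
  #include <linux/random.h>   /* RNDGETENTCNT */
  #include <sys/ioctl.h>
  #include <unistd.h>

  /* Sleep until the kernel's entropy estimate crosses 128 bits, then treat
     /dev/urandom as seeded. Returns an open /dev/urandom fd, or -1 on error. */
  int wait_for_seeded_urandom(void)
  {
      int fd = open("/dev/urandom", O_RDONLY);
      if (fd < 0)
          return -1;
      for (;;) {
          int ent = 0;
          if (ioctl(fd, RNDGETENTCNT, &ent) < 0) {
              close(fd);
              return -1;
          }
          if (ent > 128)          /* the threshold discussed above */
              return fd;
          usleep(250 * 1000);     /* wait a bit and ask again */
      }
  }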


Thanks, I saw this comment right after I posted mine.


You are not observing the "problem" of entropy running low. Because it doesn't.

You are observing the misguided attempts of fixing this non-problem.


At the risk of getting downvoted for this, I have urged caution against VeraCrypt in the past, because of the ties its developer's company (IDRIX) has with the French government, the developer's lack of a history in cryptography before he took over VeraCrypt, and the overall possibility of closeness to the French security apparatus. People say that you can audit the code and compile it on your own, but who does that? Who checks the binaries?

To me with my overly vivid imagination, the fast forking of TrueCrypt to VeraCrypt looked like a hasty power grab by DGSE. Pure speculation, though.


DGSE is not your opponent, they use rubber-hose decryption, works every time.


What do you suggest as an encryption tool for full-drive encryption?


BitLocker is great, especially as it can also take advantage of a TPM to protect against evil maid attacks.

In contrast, VeraCrypt developers did not seem to understand what a TPM is when I tried to discuss the topic with them.


Dubious: I would not trust BitLocker, as it is not open source and you cannot have it audited.


Entropy doesn't run low, but /dev/random does.

It's a valid statement in context, and using it as a dependency is not a red flag.

(Though VeraCrypt doesn't seem to get enough randomness to be safe if jitter fails, which is pretty worrying.)


What about https://github.com/nhorman/rng-tools ? Would this tool be problematic too?


What do you recommend that has similar functionality?


Currently, nothing. I used to recommend VeraCrypt for the exact set of use cases it satisfies.

Having an answer to this question is going to require research, which might take time.


How about just the previous older version of VeraCrypt for now?


The problem is that a design decision like this calls into question the competence of the developers, and therefore the safety of any modifications they've made in the past.


How about the original truecrypt? AFAIK it was audited and no bugs were found.




Whether or not the kernel should be doing this is one issue, but VeraCrypt should be using whatever the kernel provides rather than rolling their own.


Maybe they need their own random number generator for the pre-boot decryption process? (Although come to think of it, I guess you only really need random numbers to generate the key when encrypting.)


You might be interested to know that Linux just merged a patch that falls back to jitter entropy when nothing else is available.

See 50ee7529ec4500c88f8664560770a7a1b65db72b.


Does jitter entropy mitigate the need to install haveged [1] on systems with high entropy requirements?

[1] https://issihosts.com/haveged/


What is a "high entropy requirement"? It is hard for me to come up with a secure design that somehow depends on having haveged installed; in practice, you should never be using it.


The most common is an internal CA server (certificate authority) that needs to generate many certs on demand.


What about that environment needs "extra" entropy? You either have entropy or you don't; the volume of signings you do doesn't have anything to do with it.


Reading from `/dev/random` (true randomness) will block if there is insufficient system entropy available. You can read the currently available amount via `cat /proc/sys/kernel/random/entropy_avail`.

If you have an SLA that guarantees true randomness, and a certain response time or availability, you cannot really afford to block indefinitely while Linux builds up more entropy via its natural mechanisms.

Augmenting with haveged is pretty common; all it does is add more sources of randomness. I was hoping that jitter entropy (which seems kinda like the same thing) would alleviate the need to install one more package in these cases, but it's not clear. I will have to try it out sometime.


> Reading from `/dev/random` (true randomness)

/dev/random doesn't provide "true randomness".

/dev/random and /dev/urandom provide the same kind of randomness. The difference is that, for 99.999% of developers, /dev/random blocks for no good reason.

All haveged does is pollute the kernel entropy pool to make /dev/random less unstable in production.

https://www.2uo.de/myths-about-urandom



No, it's not. The old man page was incredibly confusing and misleading and has created a weird culture of fear around a simple issue.

What you care about is whether the RNG is securely seeded or not. Once it is, you don't care anymore. Think of an RNG like a stream cipher (which is what many of them are under the hood). Key AES-CTR with a 128 bit random key and run it to generate all the "random bytes" you will ever realistically use in any usage scenario. AES never "runs out of key". In the same way, a CSPRNG never "runs out of entropy".
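To make the stream-cipher analogy concrete, here is a toy C sketch using OpenSSL's EVP interface: a single 128-bit seed keys AES-128-CTR and yields an effectively unlimited keystream. This is only an illustration of the point, not a replacement for the kernel CSPRNG.

  #include <openssl/evp.h>
  #include <string.h>

  /* Toy illustration: derive an arbitrarily long "random" stream from one
     128-bit seed by running AES-128-CTR over zeros. The stream never "runs
     out of entropy"; its unpredictability is that of the seed. */
  int keystream_from_seed(const unsigned char seed[16],
                          unsigned char *out, int outlen)
  {
      static const unsigned char iv[16] = {0};  /* fixed nonce: toy code only */
      EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
      if (!ctx)
          return -1;
      int len = 0, ok = 1;
      memset(out, 0, outlen);                   /* "encrypt" zeros, in place */
      ok &= EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, seed, iv);
      ok &= EVP_EncryptUpdate(ctx, out, &len, out, outlen);
      EVP_CIPHER_CTX_free(ctx);
      return ok ? 0 : -1;
  }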

We regularly re-key the system CSPRNG (by updating it, in the kernel, with "more entropy"), but not because entropy is depleted; rather, because it provides a measure of future security: if our machine is compromised, but only briefly, we don't want attackers to permanently predict the outputs of the RNG.

What you want are random/urandom interfaces that block at system startup until the RNG is seeded, and never again. What you do not want are userland CSPRNGs and CSPRNG "helpers"; those aren't solving real problems, have in the past introduced horrible security vulnerabilities, and perpetuate confusion about the security properties we're looking for.

Sign one X.509 certificate or ten million of them; the same initial secure dose of "entropy" will do just fine.


An archived email to a mailing list from one person in 2013 that can never be corrected or amended isn't the most reliable way to spread information about cryptography engineering.

What does Gutmann say in 2019 about /dev/urandom vs /dev/random?

Which of the two do JP Aumasson (author of Serious Cryptography and inventor of several cryptography algorithms used today, including BLAKE2 and SipHash), Dan Bernstein (Salsa20, ChaCha20, Poly1305, Curve25519, Ed25519, etc.), Matthew Green (professor associated with the TrueCrypt audit), et al. prefer in their own designs?

I can promise you the answer is /dev/urandom. Why do they prefer /dev/urandom? Because of the reasons outlined in the article I linked (which, unlike the mailing list post you linked, is occasionally updated with corrections).

It's not really that complicated: Use /dev/urandom.

If you're on an ancient Linux kernel, you can poll /dev/random until it's available if you're uncertain whether or not /dev/urandom has ever been seeded. Once /dev/random is available, don't use /dev/random, use /dev/urandom. This side-steps the "/dev/urandom never blocks" concern that people love to cite in their fearmongering. This is essentially what getrandom(2) does.

If you're on a recent Linux kernel, you can say "just use getrandom(2)" instead of "just use /dev/urandom", but the premise of the discussion is whether to use /dev/random or /dev/urandom not which of all possible options should be used.
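For completeness, a minimal sketch of the getrandom(2) path (glibc 2.25+ exposes the wrapper in <sys/random.h>); it blocks only until the pool has been initialized once and then behaves like /dev/urandom:

  #include <errno.h>
  #include <sys/random.h>   /* getrandom(), glibc >= 2.25 */
  #include <unistd.h>

  /* Fill buf with len random bytes via getrandom(2). Blocks only until the
     kernel RNG has been seeded once; no file descriptor needed. */
  int fill_random(void *buf, size_t len)
  {
      unsigned char *p = buf;
      while (len > 0) {
          ssize_t n = getrandom(p, len, 0);
          if (n < 0) {
              if (errno == EINTR)
                  continue;    /* interrupted while blocking; retry */
              return -1;
          }
          p += n;
          len -= (size_t)n;
      }
      return 0;
  }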

See also: https://paragonie.com/blog/2016/05/how-generate-secure-rando...

The belief that /dev/random must somehow be better than /dev/urandom is, frankly, security theater.


It's worth maybe providing some detail as to why, so I will do so.

Entropy is a measure of unexpectedness present in a system. It is not a property of the data you have, but of the physical process that generates some signal. In other words, a given password doesn't have entropy, but the process that generated it does. What entropy measures is how unexpected an outcome is on average. If a process produces the same outcome every single time, it has no entropy, because you know exactly what it will do. The best you can do is a 50% chance that any bit will be 0 or 1 - you can't get more unexpected than that. Failure to achieve something close to this is called having bias, and the result is that you must go directly to the NSA. Do not pass go, do not collect 200 dollars.

Then you have functions called deterministic random bit generators. These are deterministic - they behave the same way for a given set of input parameters. They also produce a stream of output that is sufficiently random-looking for cryptographic use, and they can produce very large volumes of it - up to 2^44 128-bit blocks, for example, from NIST's CTR_DRBG before you have to abandon the stream and reseed.

The entire sequence of random bits has exactly the same entropy as the seed that started it. I said data doesn't have entropy and I stick by it: the process has expanded beyond "sample environmental data" to "sample environmental data, shove it through a function good enough for cryptographic pseudorandomness, then generate a nice big stream of 2^44 128-bit blocks of data from that". The _entire output_ has the same entropy, as it is part of the same process.

I'm not saying the Linux kernel does this exactly; I'm more using it as a simple explanation for what I'm about to say: once you've got some entropy up to some nice amount like 256 bits, you can use this as a seed for a deterministic random bit generator. What you get is a huge stream of randomness, almost certainly unique to you, never to be seen again. And you only needed a very small amount of real entropy from hardware to get so much that you will never run out.

Of course, there is nothing stopping you from "reseeding". You constantly have more entropy available and there's no harm mixing more into what you're generating (more entropy available comes from doing some more sampling of the environment and throwing this in).

Entropy is never used up. The randomness you get from /dev/random is not better. The randomness from a €1000 quantum random number generator is not better. Randomness doesn't rot like eggs, and it is not known to the state of California to cause cancer. In fact, if you generate a key, it isn't random. It's known. It's static. What you get is some data that was generated with a certain amount of "surprisingness" in the process involved in getting it. If you repeat the process, you get more. There are obviously limits to the amount you get depending on the amount of data you ask for, because there are only finitely many possible outcomes, so it can only be so surprising on average. So you ask for enough that there are sufficiently many outcomes that enumerating them all would be as hard as brute-forcing AES by trying every possible key (we would not have enough energy to do this if we converted the entire mass of the earth to energy to power a classical computer, let alone to do it before the heat death of the universe) - approximately speaking, this will have more than enough entropy.

Throwing away those generated seeds because a single application has used them is extraordinarily wasteful and stupid, because there is no need to wait. You can use that seed and the massive number of "permutations" it generates in the DRBG pretty much forever - unless, of course, you happen to need 2^44 or so 128-bit AES keys (that's over 2,000 AES keys for everyone on earth). In that case, you can start generating them while you wait for another seed, in your own time. You can probably nip to the beach or leisurely read a book. You'll still have enough time for your random process to give you another seed generated with a process of suitable entropy about a billion times over, and then you've got so much cryptographic-quality random material it's leaking out of your computer all over the floor!!!! Bits! Bits everywhere? Have you ever spilt rice? Kinda like that!

I endorse the above answer by Scott entirely, especially the bit about using /dev/urandom. This applies to any desktop or laptop computer system and probably most smartphones. The only time you can have problems is in getting the initial seed in the first place, and this only happens in embedded contexts with little variation in their runtime and no way to observe environmental noise; even in that case /dev/urandom is preferred, because getting more seeds is hard work and basically requires more time waiting for data to be seen. So /dev/urandom is objectively the better choice.

Perhaps you don't care, but OpenSSL's new random number generator is based on NIST's CTR_DRBG. It samples entropy from the environment using the approximations I have glossed over here (it matches the NIST Special Publication's entropy estimation methods). It then generates a master CTR_DRBG for the application context. Every SSL context object gets its own CTR_DRBG seeded from the master one, which is also valid to do! I told you! Randomness all over the kitchen floor, like the bag of rice I just dropped: so much of it you can't even vacuum it all up again. How did it get in my washing exactly!!!????
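From an application's point of view, all of that DRBG machinery in OpenSSL sits behind one call; a minimal usage sketch:

  #include <openssl/rand.h>
  #include <stdio.h>

  int main(void)
  {
      unsigned char key[16];
      /* RAND_bytes() draws from OpenSSL's DRBG, which (re)seeds itself
         from the OS; it returns 1 on success. */
      if (RAND_bytes(key, sizeof key) != 1) {
          fprintf(stderr, "RNG not seeded / failed\n");
          return 1;
      }
      for (size_t i = 0; i < sizeof key; i++)
          printf("%02x", key[i]);
      printf("\n");
      return 0;
  }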

Stop it. Cease and desist using /dev/random.


> Augmenting with haveged is pretty common

So was snake oil.


No, it doesn't prevent /dev/random from blocking when estimated entropy runs out.


Thank you. This is what I was asking.


> My intuition is that this is dangerous and will likely result in security bugs in the future.

Dangerous in what way? If it's properly mixed into the pool, it shouldn't make the pool more predictable.


It can be dangerous in a lot of ways.

Is JitterEntropy actually a CSPRNG or just a PRNG?

Is it fork-safe?

Is VeraCrypt's implementation secure?


Sorry, I mixed up the comment thread in which I was responding. I thought you were replying to the comment mentioning Linux adding a jitter entropy source during early boot in an effort to mitigate the lack of other entropy at that stage.


This is quite dishonest and truly perplexing. If you took 2 minutes to read the code and the docs, you'd know that the Jitter Entropy is NEVER the only source of entropy VeraCrypt uses; it is added to other sources (/dev/urandom under Linux). Instead of just assuming things and jumping to conclusions, you'd better take your time and read. Also, wouldn't it be better if you created a ticket or issue on SourceForge / GitHub if you think this is a very serious issue, or, better, if you contacted the VeraCrypt team directly, instead of going ham like this? Truly dishonest.


Linux kernel devs did exactly the same: using jitter entropy for randomness. If it is OK for kernel devs, then it should be OK for VC.


That's why I like to use a keyfile. Unless it is badly coded, taking my 1kb of random bytes should help.


TrueCrypt used CRC32 to mix keyfiles into the KDF. I never saw an entry in VeraCrypt's changelog addressing this deficit.

I originally proposed BLAKE2 for this purpose on their issue tracker, circa 2014.
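Purely as an illustration of that proposal (this is not VeraCrypt's actual code or keyfile format), a BLAKE2-based mix-in using libsodium's crypto_generichash might look like this:

  #include <sodium.h>

  /* Hypothetical sketch: hash the whole keyfile with BLAKE2b and XOR the
     digest into the password/pool buffer, instead of folding the file in
     through CRC32. Illustrative only - not VeraCrypt's actual scheme. */
  int mix_keyfile(unsigned char *pool, size_t pool_len,
                  const unsigned char *keyfile, size_t keyfile_len)
  {
      unsigned char digest[crypto_generichash_BYTES];  /* 32-byte BLAKE2b */
      if (sodium_init() < 0)
          return -1;
      crypto_generichash(digest, sizeof digest, keyfile, keyfile_len, NULL, 0);
      for (size_t i = 0; i < pool_len; i++)
          pool[i] ^= digest[i % sizeof digest];
      return 0;
  }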


It would be a good improvement. I'll try to check the source to see how it's done.


Does it mean that no matter the size of the key file, only the 32 bits of its CRC32 is being utilized?



slightly off-topic, but could someone verify a memory I have regarding the shutdown of TrueCrypt?

I vaguely remember that, after it was announced that the project was disbanded, there was one single commit from the authors whose only change was a textual replacement of "United States" with "USA" everywhere it appeared.

Coming from the highly anonymous and secretive TrueCrypt group, this was interpreted in the most scandalous way possible, i.e. that the US govt was somehow strong-arming them. I'm not claiming that part is true, I'm just curious if my memory is correct about the whole incident.



> It's pretty likely that Paul Le Roux was behind TrueCrypt.

He worked on E4M, but I can't imagine that he did TrueCrypt, given the stuff he was doing at the time (post-E4M). It simply doesn't fit. Also, from the Wikipedia link:

"Le Roux himself has denied developing TrueCrypt in a court hearing in March 2016, in which he also confirmed he had written E4M."

Also:

https://en.wikipedia.org/wiki/TrueCrypt#E4M_and_SecurStar_di...

Also:

https://www.securstar.com/en/about-us.html

Also:

https://groups.google.com/forum/?_escaped_fragment_=topic/al...

And:

https://groups.google.com/forum/?hl=en&_escaped_fragment_=ms...


It's interesting that he is from Zimbabwe (then Rhodesia). Of course, his name gives it away (that's why I checked). But what I wanted to say was something else.

> he [liked] the video game Wing Commander [1]

Somewhere between 1990 and 1995 I recall that Windows PCs in South Africa shipped with a games CD. You would get Ultima (immediately banned by your parents), Wing Commander, Sea Wolf and other games and this would be your staple for a while. When Warcraft 2 (or it may have been Starcraft) came out I think it cost about R 350 (USD 90 in 1995) so you would need to be more picky in the games you could buy than today. One should mention the rand's historic dissonance with regard to the exchange rate and PPP, but whatever.

I remember buying some shitty Disney game for the same amount and regretting it instantly (I played Warcraft 2 instead). It was not until 1999 that I saw games "in bulk" with Tiberian Sun, ReVolt and a multitude of other games being shared on DC.

[1] https://en.wikipedia.org/wiki/Paul_Le_Roux, edited for sensational language. Note that he had the console version, but I am not sure which console that would be. The PS1 was the first major console in South Africa (apart from "video games", i.e. Famicom or NES clones), but I may stand corrected.


I’m reading the Atavist piece and noticed that he was extremely racist towards Asians and yet two of his ex-wives were Asian. I wonder how he reconciled that.

I feel really bad for his kids. When I was studying in Asia, I met a few mixed-race kids that had racist expat parents, and they often had major identity issues. Those who didn't stay in the expat bubble would shun their “Western side” and try to completely blend in with the locals.


Well, where I come from, the racist people have a tendency to do that, so they themselves have identity issues.


Wow that Wikipedia article reads like a synopsis of a Bond movie.


You do yourself a disservice if you don't read the Atavist piece.


Yes that is the Bond movie script with all the juicy bits.


Satoshi Nakamoto Could Be Criminal Mastermind Paul Le Roux

https://news.bitcoin.com/satoshi-nakamoto-could-be-criminal-...


It doesn’t fit with any of his skills. People think “oh, he knows cryptography, so he must be a good candidate to be Satoshi”. It’s not that simple: the perfect candidate is someone versed in the ecash literature, as well as someone who can write an academic paper (even if lacking, the form is there).


Wow. That Wikipedia entry on Le Roux is wild. I never knew any of that.

Something straight out of a complex spy thriller.


From the TrueCrypt shutdown page: http://truecrypt.sourceforge.net/

> WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues

Which if you are into conspiracy theories can be rendered as

Using TrueCrypt is Not Secure As => using TrueCrypt is NSA



No need to read the blog post; the "end" page is still reachable:

http://truecrypt.sourceforge.net/

and the Wikipedia article is accurate:

https://en.wikipedia.org/wiki/TrueCrypt


I really tried to use VeraCrypt on my main desktop Windows machine. But every Windows 10 major update was a chore to get working with decrypting everything, especially with GPT. (Did you know that if you leave updating Windows too long the calculator just straight up stops working?) Eventually I caved in and bought Windows Pro so I could use BitLocker (I'm aware of the problems of using closed-source encryption software but it's good enough for me). Another sad victory for Microsoft's anti-competitive practices.


I had that issue for a while, but VC finally fixed it and I haven’t seen any problems since.


My computer had a problem (not uncommon, apparently) that forced Windows to be the default UEFI boot option. Admittedly this is perhaps more of a laptop manufacturer bug than a problem with Microsoft.

I used a workaround suggested on the VeraCrypt forums: renaming the Windows boot loader, replacing it with the VeraCrypt one, and pointing VeraCrypt at the renamed option. This meant that any Windows update that updated the boot loader would break the whole arrangement and to fix it I needed to boot from a USB stick.

I think I most recently tried on version 1.22. It does sound like this version has some relevant fixes though.


One of my computers successfully updated from 1803 to 1809 without having to decrypt and re-encrypt the boot drive. So I thought the problem had been fixed, but it wasn't. I couldn't go from 1809 to 1903 without decrypting. I'll cross my fingers and try it again when 1909 comes out.

Another machine couldn't even update to 1809 without decrypting. After updating and re-encrypting, it refuses to mount any system favorite volumes. VeraCrypt is a mess. Or rather, Windows keeps breaking things.


I have the same problem at the moment... I want to update my Windows 10 (from 1803 to 1903), but it is simply not possible. Every time I try to update I get an error (0xc190012e)... I uninstalled VeraCrypt and decrypted all of my drives (it took quite a long time), but no success.

Probably this does not even have to do with VeraCrypt: after a bit of research I found some posts suggesting that this error is caused by my SSD... I guess I have to reinstall my Windows.


>Did you know that if you leave updating Windows too long the calculator just straight up stops working?

Doesn't make sense to me, cos my laptop is running a 2015 version of Windows 10. I refused to update it. Yet the calculator still works there.

It's possible that you're right and mine only works cos I did a lot of firewalling.


After reading this thread, what alternatives do we have to VeraCrypt? Or do we simply put our effort into improving the current version?


I don't think there are any decent alternatives to replace TrueCrypt/VeraCrypt. There are some once you narrow the scope, like being fine with Linux-only, etc. With the same scope as VeraCrypt, there's nothing better.

Additionally TrueCrypt has been out there and well known for a long time. It even had a pretty decent audit done. Even with its flaws this is a strong foundation to build on. Any brand new undertaking is pretty much guaranteed to have more flaws. Thus we should definitely focus on improving VeraCrypt. It would also be easier to do an audit on just the changes from TrueCrypt to VeraCrypt as opposed to a whole new program.


macOS Catalina related: can VeraCrypt read and write a drive that was fully encrypted with TrueCrypt? Asking because the new macOS will not support the latest TrueCrypt anymore, because it is 32-bit software :/


I'd suggest using an encrypted volume file; that's what I currently do, and I am using macOS Catalina. So far, no problems.

The added advantage is that you can easily do backups of it.


I have some legacy external drives which need encryption (just for loss protection, so nobody can steal the content) and need to support multiple OSes. Otherwise I would agree.



