In the past I was part of an attempt to make asymmetric crypto easier to use for a commercial product. My role was to help design the product (simple, collaborative file sharing; you've heard it before) and to let the experts ensure the crypto was used properly. What drove me absolutely nuts was the level of paranoia and uncertainty around the safety of the libraries and algorithms used. The available experts were confident in the concepts, but feared every implementation. "Par for the course" you might say, but the status quo around OpenSSL and other libraries is unacceptable.
From an application developer's perspective crypto libraries look like a lightning rod even for the experts. No one wants to get too involved or make recommendations lest they back the wrong horse (admittedly an easy thing to do). So kudos to the OpenBSD team for rolling up their sleeves and attempting to build a solid foundation for the future.
I'd be happy if OpenSSL is simply fixed. I'd be overjoyed if there was a solid crypto system underneath an OpenSSL compatible API that gives us a path towards an open source, reusable crypto platform.
Can any crypto experts comment on whether it is feasible / how much work it is to implement SSL on NaCl? Maybe the issue is that NaCl doesn't support all the ciphers you need.
I do, and I hope to give it a try in the latest product I'm building. Like everyone else I'd prefer to see some expert validation of NaCl before I put a ton of trust into it. That said, DJB's track record is pretty good in my eyes (I liked the design of qmail, a lot).
I have been wondering the same thing. This link suggests that there are problems with NaCl preventing adoption, and puts forth a repackaged alternative called Sodium:
The SSL/TLS protocol unfortunately uses some known-bad constructions, which lead to intractable issues (see BEAST and Lucky13 for examples).
NaCl's goals are vastly different from those of SSL/TLS. NaCl aims to provide a simple, clean interface with sane defaults for the majority of simple use-cases, whereas SSL/TLS aims to provide an interface with near-infinite flexibility for the case of providing an encrypted, authenticated tunnel.
NaCl also deliberately supports only a small set of ciphers, because a long menu makes it easy for developers to choose poorly, for example (Alleged) RC4, which OpenSSL supports.
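To make that contrast concrete, here's a minimal sketch of the "sane defaults" side, written against libsodium's crypto_box_easy wrapper (the Sodium repackaging mentioned upthread) rather than the raw NaCl calls, which use a clunkier zero-padded-buffer convention. The key names and the message are purely illustrative:

    #include <stdio.h>
    #include <sodium.h>

    int main(void)
    {
        if (sodium_init() < 0)
            return 1;                       /* library failed to initialize */

        /* Key generation: the library picks the primitives (Curve25519,
           XSalsa20, Poly1305); the caller never selects a cipher or mode. */
        unsigned char alice_pk[crypto_box_PUBLICKEYBYTES], alice_sk[crypto_box_SECRETKEYBYTES];
        unsigned char bob_pk[crypto_box_PUBLICKEYBYTES],   bob_sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(alice_pk, alice_sk);
        crypto_box_keypair(bob_pk, bob_sk);

        const unsigned char msg[] = "attack at dawn";
        unsigned char nonce[crypto_box_NONCEBYTES];
        unsigned char boxed[crypto_box_MACBYTES + sizeof msg];
        randombytes_buf(nonce, sizeof nonce);   /* must never be reused for this key pair */

        /* Encrypt-and-authenticate in one call from Alice to Bob. */
        if (crypto_box_easy(boxed, msg, sizeof msg, nonce, bob_pk, alice_sk) != 0)
            return 1;

        /* Decrypt-and-verify; any tampering with 'boxed' makes this fail. */
        unsigned char opened[sizeof msg];
        if (crypto_box_open_easy(opened, boxed, sizeof boxed, nonce, alice_pk, bob_sk) != 0)
            return 1;

        printf("%s\n", opened);
        return 0;
    }

Compare that surface with what a TLS library has to expose: cipher-suite negotiation, certificate chains, renegotiation, session resumption, and so on. The flexibility is the point of TLS, but it's also exactly where developers get to make the bad choices.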
This might sound totally irrelevant, but it kind of reminds me of the situation with NASA. Once a certain technology has been proven to work, people do NOT like to deviate from it, since hey, it might fail and no one wants to take the blame.
The end result is that NASA is still using computers from the 60s, and it took someone like Elon Musk to change the status quo which resulted in reduced costs (which is why NASA is currently contracting SpaceX, which can do things cheaper).
Bullshit. SpaceX dropped the launch cost, which is only a fraction of the overall mission cost that NASA has to deal with. Dropping the launch cost means decreasing the cost of red tape and infrastructure (such as using fuel stored at higher temperatures), and increasing modularity and economies of scale, but it doesn't help decrease the cost of R&D and manufacturing of satellites and Mars rovers.
NASA uses the RAD750 [1] as well as a few other still-in-production CPUs. Remember Voyager? That probe is still flying and sending back priceless data after many decades. To launch missions (and critical infrastructure like GPS) you need years, decades even, for usage and reliability data to accumulate and for industry to perfect the manufacturing of parts that can handle environments that are, by any definition, extreme. But, please, show me a mission launched in the last 5 years with a 50-year-old CPU design.
The other side of the coin is that NASA is rarely rewarded for success but is punished for failure, and it's a problem rife in the space industry: you can't have a mission fail.
One of the things the Deep Space Industries people were feeling confident about was that they were signing all their contracts with a 1/3 success criterion: they could launch 3 spacecraft, but would have fulfilled the contract if only 1 succeeded. That little bit of leeway, they think, will allow them to try new things and definitely lower costs.
Obviously some of this you can't do with manned missions, but it's still a big cost saver when you change how you balance the equations. One of the things driving the CubeSat movement at the moment is the ability to fail: the low cost is encouraging people to take non-space-rated components, launch them, and see what happens, which then turns them into space-rated components (for a given mission envelope).
Thanks for bringing this up! In my general aerospace engineering class, taught by several active engineers and project managers from JPL, one of the professors did a rough estimate and figured that NASA could double the number of launches without affecting budget or success rate.
He thinks that, save for a few critical infrastructure missions, the diminishing returns get to such an extreme that they add no benefit to most missions. However, because NASA has to play politics for an ever-shrinking pie, they can't afford to take no-brainer risks like launching ten missions instead of five, because four failures look worse than two, even though both represent a 40% failure rate.
> (which is why NASA is currently contracting SpaceX, which can do things cheaper).
And they're also contracting Orbital Sciences and Sierra Nevada and Boeing and probably a bunch of others of whom I have no knowledge.
SpaceX is following in the footsteps of many other private companies that have developed spaceflight hardware on their own initiative. They might be 'younger' and more agile but it's not a new innovation.
I'd be surprised if NASA is using, on a widespread basis for missions, computer hardware older than 20 or 25 years at this point. I'm extremely skeptical they are still using much (any?) hardware from the 1960s or 1970s. Your exaggeration goes way too far.
It was an exaggeration, but it seems the Shuttle was using computers that were a '70s, and later a '90s, implementation of a '60s design: https://en.wikipedia.org/wiki/IBM_AP-101 At least the '90s upgrade removed the ferrite core memory.
Part of the problem here is that it is very hard to audit something of OpenSSL's size. It has 450,000 lines of code. Are you going to skim, let alone audit, a mathematics-heavy codebase of that size? No.
Some of that is OpenSSL's fault, but a lot of it is inherent in implementing a 200-page (before extensions!) standard. GnuTLS is 170 KLOC, CyaSSL is 205 KLOC, and even PolarSSL, commonly thought of as the gold standard in trimming the fat, is 55 KLOC.
If you want to reduce the level of paranoia, you would have to pare down the size and scope of TLS by an order of magnitude. For the record, I support that kind of thing, but you would have to cut a lot of things people actually use, not to mention breaking backwards compatibility.
While I understand where you're coming from, I think your attitude is dangerous. Implementing the SSL/TLS spec(s) is always going to be difficult while they are in widespread use. That said, refactoring the existing libraries (to make them more readable and understandable) and then auditing them (to gain confidence in their efficacy) is especially important given the nature of their applications.
Doing the hard work of getting this right, or accepting the status quo and getting it wrong, will have a very real impact on the future of many, many people. Should the time come that changes have to be made to the standards themselves then we can debate the merits of breaking backwards compatibility. I value pragmatism more than most (I think) but there comes a point where the cost of not changing is higher than the cost of changing. I think we're there.