On the Impending Crypto Monoculture (2016) (lwn.net)
246 points by dankohn1 on Jan 12, 2017 | 69 comments



As the article says, Bernstein's stuff won out because his work is at the intersection of solid crypto, clean and performant code, and sane API design. This, coupled with the NIST curve debacle that shook public confidence, makes for a compelling case.

Part of the problem is there is often a wide gulf between cryptographers and the authors of libraries who then go on to use those algorithms. This is perhaps more acute with approved-by-committee, standardized crypto because library authors may have to implement it to be compatible, despite not being experts in the particular algorithm.

In Bernstein's case, he ships (through NaCl) a reference implementation that is good enough to be used in production, and packages complete constructs in ways that are resistant to misuse. This is his innovation -- other libraries tend to offer only the primitives and rely on the user to chain them together in ways that make sense and never in the ways that don't.

There are other cryptographers who ship decent reference implementations -- the Keccak team being one -- but their work is largely restricted to sponges and the constructs that can be formed from them (e.g. PRFs, stream ciphers, hash functions). There is perhaps a shortage of people who look at holistic cryptosystems and possess both the domain knowledge and the ideal attitude towards misuse-resistant design.


> As the article says, Bernstein's stuff won out because his work is at the intersection of solid crypto, clean and performant code, and sane API design.

As a casual observer, my impression has been pretty different. Here's an excerpt from the README of curve25519-donna, which it seemed like everyone was using for a while:

curve25519 is an elliptic curve, developed by Dan Bernstein, for fast Diffie-Hellman key agreement. DJB's original implementation was written in a language of his own devising called qhasm. The original qhasm source isn't available, only the x86 32-bit assembly output.

Since many x86 systems are now 64-bit, and portability is important, this project provides alternative implementations for other platforms.

My impression has always been that what we get from DJB is some wacky implementation written in a language of his own devising, or just the 32-bit assembler output of that, or some partial code fragment that has to be disentangled from his benchmarking library, and the only thing that makes this usable is the people who are motivated to do the work of making it digestible by mortals.


I'm not sure we need to litigate this, because it's not like John Viega and David McGrew contributed the production versions of AES-GCM that everyone uses.

More importantly: whatever you think of Bernstein's packaging, an area of expertise he clearly shares with just a small subset of cryptographers is the design of cryptographic primitives optimized for consumer compute hardware. There's a reason his primitives tend to outperform the ones they supplant: until relatively recently, Bernstein was the cryptographer who took this challenge most seriously.

Finally: whatever you might think of things like qhasm, it's just a fact that the only mainstream crypto library a majority of crypto engineers are comfortable having generalist developers use is designed (in part) by Bernstein. When you use libsodium, you're (usually) using programming interfaces and constructions he designed.


It is also worth mentioning it is all public domain.

He has gone to great lengths to ensure the algorithms are all side channel resistant. The breadth of his concern and the care behind the decision making is really impressive and most users of his software only really understand the tip/visible portions of it all.

I will gladly put up with the idiosyncrasies to get all the benefits, compared to the current stew of crypto primitives I see getting misused almost constantly.


> My impression has always been that what we get from DJB is some wacky implementation written in a language of his own devising

Go look at TweetNaCl. It's a very small, very clear implementation.

In my opinion, the reason djb was always doing "wacky" stuff is that everybody was always bashing him for being the slowest (it's hard to compete with an AES primitive in hardware).

Suddenly, however, performance isn't the boogeyman that it was back when nobody else had any useful crypto.



Short unreadable identifiers, abuse of macros, lots of magic numbers. Not really the best example of C, IMO.


> This is his innovation -- other libraries tend to offer only the primitives and rely on the user to chain them together in ways that make sense and never in the ways that don't.

It's a little bit more than that. When you look at something like OpenSSL, it gives you two classes of thing. One is very basic primitives (AES, SHA256) that are useless for mortals and begging for misuse, and the other is the monolithic ball of tar that is TLS with X.509 and export ciphers and the whole crooked mess.

Bernstein gave us something in the middle. Safe primitives that compose.


That's not entirely true, OpenSSL also gives you something in the middle - the "Envelope Encryption" functions like EVP_SealInit(). They're just not widely known (and still allow you wide latitude in choosing obsolete primitives).


> the "Envelope Encryption" functions like EVP_SealInit()

That thing is a footgun.

Those functions encrypt a block of data to a public key without authentication. And encourage you to encrypt to more than one public key at the same time, which is generally a bad idea. The example cipher in the man page is EVP_des_cbc().

It looks like the fact that they're not widely known is the good news. :)


It does exactly what it says on the tin - it encrypts something so that only the recipient(s) can access it. It is very far from the low-level primitives: it takes care of generating the symmetric key, implementing the underlying cipher mode (so, for example, if you use a type like EVP_aes_128_gcm() you'll get correctly implemented GCM) and generating the IV. It takes care of padding for the asymmetric encryption step. In short, it protects you from a whole slew of low-level implementation mistakes that await you if you use the low-level primitives.

There's also EVP_Sign() which also does what it says on the tin - ensures the message comes from who it says it does, but anyone can read it. You can compose these together as well, to create a message with both properties.

What they don't protect you from is designing an insecure protocol on top, but then NaCl doesn't either: for example, it's easy to create a protocol with trivial replay attacks. I believe the classes of errors you allude to fall into this category.

Perhaps you think there should be an EVP_Box*() that combines signing and sealing, but regardless it is simply not true that OpenSSL provides nothing between raw AES and full-blown TLS.


> Those functions encrypt a block of data to a public key without authentication.

How is that a problem? I mean, this is not symmetric crypto where the 2 parties are supposed to share a secret session key. While you generally do want authentication, I can imagine a use case for anonymous messages.

Anonymous messages cannot have authentication. Even if they did, the attacker could just block your message and forge another with a public key of his own. The receiver couldn't tell the difference; it's a throwaway key to begin with.

Did I miss something? In crypto, I often do.


> How is that a problem?

The problem is it has the wrong default. The equivalent in NaCl is crypto_box(), which requires the sender's private key to authenticate the message. If you really wanted anonymous messages then you could generate a new private key for each message and immediately throw it away, which gives you a much stronger signal that you're doing something unusual.

What those OpenSSL functions do is generate a symmetric key, encrypt the symmetric key with the public key (to give to the recipient), and then encrypt the data with a symmetric cipher using the symmetric key. Those are each already basic primitives, so all the combined function is really doing is making it easier to do something insecure that most people would rarely actually want.

It's possible to use it correctly, but it's too easy to use it incorrectly. Doing something uncommon with bad security properties should be possible but not easy.


libsodium, on the other hand, provides a function that does exactly that underneath - crypto_box_seal().


EVP lets you shoot yourself in the foot in various ways. OpenSSL also doesn't provide any API to e.g. validate a TLS cert chain, so everyone does it their own way and often gets it wrong.


>"but their work is largely restricted to sponges and any construct that can be formed out of it"

What does "sponge" mean in the context of crypto implementations? I haven't heard this before which is why I ask.


It's a function that takes an input of any length and produces an output of the requested length.

From the Keccak (SHA-3) authors [1]: A sponge function is a generalization of both hash functions, which have a fixed output length, and stream ciphers, which have a fixed input length. It operates on a finite state by iteratively applying the inner permutation to it, interleaved with the entry of input or the retrieval of output.

(...)

On top of its original goal as a security reference, we realized that the sponge construction could also be used to build efficient cryptographic primitives. An important aspect is that the cryptographic primitive to be designed is a fixed-length permutation rather than harder-to-build structures such as block ciphers or dedicated compression functions. This is rather good news in itself, as all the symmetric cryptographic services can be realized using only a single primitive: a fixed-length permutation. As opposed to a block cipher, a fixed-length permutation makes no distinction between data and key input and hence can treat all input bits on an equal footing and at the same time can be made simpler.

(...)

With its arbitrarily long input and output sizes, the sponge construction allows building various primitives such as a hash function, a stream cipher or a MAC. In some applications the input is short (e.g., a key and a nonce) while the output is long (e.g., a key stream). In other applications, the opposite occurs, where the input is long (e.g., a message to hash) and the output is short (e.g., a digest or a MAC).

[1] http://sponge.noekeon.org/
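The absorb/squeeze structure is simple enough to sketch in a few lines of Python. The permutation below is a stand-in (SHA-256 over the whole state) rather than the real Keccak-f, and the padding is simplified, so this illustrates the construction only, not a secure primitive:

```python
import hashlib

RATE = 8        # bytes of state exposed to input/output each round
CAPACITY = 24   # hidden bytes; total state = RATE + CAPACITY = 32

def permute(state: bytes) -> bytes:
    # Stand-in fixed-length permutation; a real sponge uses a dedicated
    # permutation like Keccak-f. SHA-256 here just keeps the toy runnable.
    return hashlib.sha256(state).digest()

def sponge(data: bytes, out_len: int) -> bytes:
    state = bytes(RATE + CAPACITY)
    # Simplified padding: append 0x01, zero-fill to a multiple of RATE.
    padded = data + b"\x01" + bytes(-(len(data) + 1) % RATE)
    # Absorb: XOR each block into the outer part of the state, then permute.
    for i in range(0, len(padded), RATE):
        block = padded[i:i + RATE]
        state = bytes(a ^ b for a, b in zip(state[:RATE], block)) + state[RATE:]
        state = permute(state)
    # Squeeze: read RATE bytes at a time, permuting between reads.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = permute(state)
    return out[:out_len]
```

Asking for 16 bytes gives a hash-like digest; asking for thousands gives a keystream, which is exactly the generalization the Keccak team describes.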


Thanks for the great explanation and the link. I'm glad I asked. Cheers.


To add to my point, once a few key parties chose Bernstein's crypto, network effects did most of the rest.

Adam Langley [1] of Google authored the very first draft [2] of ChaCha20 and Poly1305 in TLS in September 2013 and started the feature request in Mozilla NSS that same month [3], while it presumably shipped in Chromium and BoringSSL earlier. A 2014 Google blog post credits the Poly1305 implementation to Andrew Moon (floodyberry) [4], and the ChaCha20 implementation to Ted Krovetz of CSU Sacramento [5][6].

Meanwhile, a very similar Chacha20+Poly1305 was added [7] to OpenSSH in November 2013 by Damien Miller (djm) [8], who is a longtime OpenSSH contributor but since 2006 has worked for Google.

Google likely did this to find a fast cipher that works on hardware that lacks AES acceleration (e.g. phones), but once they did it everyone else followed. This was of course during that awkward time in mid-2013 when no browser supported TLSv1.2 yet, and all TLSv1.1 ciphersuites were insecure [9]. This also coincided with Brian Smith of Mozilla blogging about cutting down on the number of different ciphersuites browsers would use in the future [10][11], which began a big purge of ciphersuites [12].

[1] https://www.imperialviolet.org/ [2] https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-0... [3] https://bugzilla.mozilla.org/show_bug.cgi?id=917571 [4] https://github.com/floodyberry?tab=repositories&q=poly [5] http://krovetz.net/csus/ [6] https://github.com/coruus/chacha/tree/master/ext/supercop/kr... [7] http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/cip... [8] http://www.mindrot.org/~djm/ [9] https://news.ycombinator.com/item?id=12393830 [10] https://news.ycombinator.com/item?id=12393683 [11] https://briansmith.org/browser-ciphersuites-01 [12] https://bugzilla.mozilla.org/show_bug.cgi?id=1036765


I get the point, but the current state of affairs encourages me to make it even worse. I'm currently working on a tweetNaCl-like library that will integrate not only Chacha20, but also Blake2b and Argon2i, based on the same ideas.

The repeated application of simple ARX based rounds is a damn good idea: it's efficient even for software implementations, it's often "naturally" constant time, and it's simple. Implementing this stuff is easy: if it's Valgrind clean and passes the test vectors, it probably works. It also makes audits and reviews much easier.
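To make "naturally constant time" concrete, here is ChaCha20's quarter-round in Python (a sketch of the core operation only; the full cipher applies it to a 16-word state in column and diagonal patterns). Everything is a 32-bit add, rotate, or XOR, with no data-dependent branches or table lookups:

```python
MASK = 0xFFFFFFFF  # keep every intermediate value in 32 bits

def rotl32(x: int, n: int) -> int:
    # Rotate a 32-bit word left by n bits.
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a: int, b: int, c: int, d: int):
    # The ARX core of ChaCha20: additions, rotations, XORs only.
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

This reproduces the quarter-round test vector in RFC 8439, section 2.1.1, which is the kind of "passes the test vectors, probably works" check the parent comment is talking about.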

Similarly, Wegman-Carter authenticators such as Poly1305 are kind of a no-brainer: no forgeries are possible unless you break the cipher itself! Also, fast. Modular arithmetic and potential limb overflow make them harder to get right, but still.
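The Wegman-Carter idea can be sketched in a few lines: interpret the message as coefficients of a polynomial, evaluate it at a secret point r modulo a prime, and blind the result with a one-time value s. This toy omits Poly1305's key clamping and exact block-encoding rules, so it is not interoperable with the real thing:

```python
P = (1 << 130) - 5   # the prime 2^130 - 5 used by Poly1305

def poly_mac(r: int, s: int, msg: bytes) -> int:
    # Horner evaluation of the message polynomial at the secret point r,
    # blinded by the one-time value s and reduced to a 128-bit tag.
    acc = 0
    for i in range(0, len(msg), 16):
        block = msg[i:i + 16] + b"\x01"   # length marker, as in Poly1305
        acc = (acc + int.from_bytes(block, "little")) * r % P
    return (acc + s) % (1 << 128)
```

The security argument is information-theoretic: a forger who doesn't know r must produce a different message polynomial that agrees with the original at the secret point, which succeeds with only negligible probability, regardless of the forger's computing power.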

Finally, there is simply no escaping elliptic curves. I don't trust the NIST curves. 25519 is not too complex, fast, and doesn't leave enough space in its constants for a backdoor (DJB made sure the design constraints were tight).

I mostly agree with this article, but I don't mind a monoculture if it means I get boring crypto.


None of "simple", "fast", and "not backdoored" captures the real reason why Curve25519 is so deservedly popular; the real issue is that Curve25519 is misuse-resistant: random strings are, for 25519 ECDH, valid public keys, and the arithmetic rules for the curve he chose permit straightforward constant-time implementations.

I haven't spoken to a cryptographer who believes the NIST curves are backdoored (or, for that matter, in any of the elaborate hocus-pocus around the Brainpool curves being untrustworthy). But the NIST curves are difficult to use safely, which is why nobody likes them.

The NIST backdoor stuff is an argument that appeals more to message board nerds than to practitioners.

(Is NIST itself trustworthy? Fuck no. The Dual_EC CSPRNG seems pretty clearly to have been a backdoor at this point.)


I don't think the backdoor in the NIST curves, if any, is that the curves themselves are not secure, but that they are so hard to implement that it's practically guaranteed that implementations have more-or-less critical issues / side channels.


But that's pretty much true of all the curves of that vintage, right?


Yes. It's also hard to know whether "they" didn't try "hard enough" to get "better" curves or whether "they" didn't know any better(†) or whether "they" intentionally made these "bad" curves. That's why I don't want to call it a backdoor.

(†) We know that historically NSA had secret cryptography knowledge, the most famous example being differential cryptanalysis and the subsequent hardening of DES (although IBM also knew about that). Since it's a secret agency we cannot know the current state of their secret knowledge; my gut tells me though that since crypto has become a much more open/scientific discipline in the last 10-15 years that they probably don't have much of that anymore.


Good point, forgot how brittle the other curves were. I believe many require unbiased randomness or something?

> I haven't spoken to a cryptographer who believes the NIST curves are backdoored

From what I can infer, I would say this is highly unlikely. That still leaves enough room for things to be interesting, so… nope. I won't touch NIST curves willingly with my current knowledge.


No, the NIST curves simply can't use random strings as points at all: randomly chosen vectors can be points that aren't valid on the intended curve, but rather on a curve of much smaller order; repeatedly submitting insecure curve points can allow attackers to recover private keys via the Chinese Remainder Theorem. It's a pretty awesome attack and not hard to implement in a test environment.

The unbiased randomness issue is with the ECDSA construction, which requires a totally, uniformly random k value for every signature. Bernstein's Ed25519 construction is deterministic, and doesn't have this problem.


Why Blake rather than Keccak? (Maybe I missed some reason why it's a good idea to avoid the latter.)


I have no good reason to avoid Keccak. As far as I know, it is safe. It did win SHA-3 after all. On the other hand, all SHA-3 finalists were deemed good enough, and the reason for Keccak's victory was in part to promote diversity, which I can understand. I couldn't just ignore the other finalists.

I've heard that Blake was faster than MD5, and just as safe as Keccak. That decided it. I'm glad I did it, because the code is simple, and turned out to use the same ARX design as Chacha20.

Now I can imagine using Keccak instead. But then I'd be tempted to base the entire symmetric cipher-suite on the sponge construction, to share code and cryptanalysis.


Reminder, because the term "monoculture" has a powerful negative valence for those of us who remember Slashdot from the 1990s: Gutmann isn't arguing that this is a dangerous development, but rather that it's sad that the rest of the cryptographic research community hasn't given Bernstein a run for his money.

I suspect the situation will look different 10 years from now: CAESAR will probably spit out a couple strong non-Bernstein AEADs, for instance. And remember, we're still not using a Bernstein hash (though BLAKE traces back to Salsa).


> BLAKE traces back to Salsa

Just reviewed some Blake2b, the round structure even integrates the Chacha20 improvements: the quarter-round has the same structure (besides the extra 2 parameters), and their application follows the same SIMD friendly column/diagonal rounds we see in Chacha20.


> What's more, the reference implementations of these algorithms also come from Dan Bernstein (again with help from others), leading to a never-before-seen crypto monoculture in which it's possible that the entire algorithm suite used by a security protocol, and the entire implementation of that suite, all originate from one person.

We've had crypto monoculture before. 15-20 years ago, we were using RSA, RC4, and MD5. Ron Rivest had a large hand in all of them. They weren't bad for the time, and the bad parts weren't bad because of common authorship.


I think the problem is ultimately the Bus Factor. If we're all relying on one implementation and that implementation is compromised (hit by a bus, a bug, NSA involvement, one bad 0-day, etc), there isn't anything to fall back on. That's my take given the history provided. This may be fixed as time passes and more implementations and crypto experts come together, but it does leave us with a Single Point of Failure nonetheless at this time.


When a crypto primitive breaks (like MD5 did), we're kinda screwed anyway.

On the other hand, bug-free implementations are possible. We have the tools to audit the hell out of them. That kind of pedestrian can be made immune to buses, so to speak.


Let's not forget that the designer of Rijndael (AES) is also the designer of Keccak (SHA-3).

RC4, by the way, was bad even for the time. It would be worth studying how it managed to survive as long as it did, because flaws in RC4 were well known long before the browser drama.


In the case of SHA-3 it's particularly ironic that the desire to avoid a monoculture in designs led to a monoculture in designers.

(The authors of AES had one of the few submissions that was very different from AES and this was a large factor in it getting picked)


I'd guess it was a combination of not being patented (they tried to protect it as a trade secret instead, but it leaked), the general ease of applying stream ciphers, and the fact that RC4 in particular is so easy to implement you can memorise it and bash it out in a few minutes.
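For what it's worth, the whole cipher really is short enough to memorise. A Python rendering (RC4 is broken; this is only to illustrate its brevity, never use it for anything real):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key scheduling: shuffle the 256-byte state using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Keystream generation, XORed with the data (same for en/decryption).
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because it's a plain XOR stream cipher, applying it twice with the same key decrypts, which is part of the ease-of-application the parent mentions.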


RC4 was a state of the art stream cipher in 1987; that it took so long for a practical attack to appear, despite its wide use, is a testament to the fact that it was clearly not "bad even for the time".


RC4 was a trade secret unknown to the rest of the industry until it was leaked on Usenet in the mid-90s.


I have the fortune of working in a security group whose members are recognised in their own right and have worked with Bernstein, Rogaway et al.

From my experience, too many cryptographers lack the applied skills (API design, knowing the issues faced by developers, designing and implementing performant crypto primitives). Conversely, too many applied folk lack the crypto experience: knowing the state-of-the-art of elliptic curves, MPC, lattices and so forth.

Bernstein has the experience to bridge both, which provides an enormous advantage. JHU's Matthew Green is someone else who does both.


For those newbies like me wondering about AEADs:

https://www.imperialviolet.org/2015/05/16/aeads.html



FYI, discussion 10 months ago: https://news.ycombinator.com/item?id=11355742


  the single biggest, indeed killer failure 
  of the [GCM], the fact that if you for some
  reason fail to increment the counter, you're
  sending what's effectively plaintext (it's
  recoverable with a simple XOR).
Is this property particularly unique to GCM? I'd have thought all encryption algorithms would fail if you didn't give them the right inputs?


GCM is uniquely brittle. DJB's Chapoly also falls to nonce misuse (there are NMR schemes, like SIV, but they aren't in wide use), but if you so much as breathe on GCM the wrong way you get key recovery. A good example of this is Ferguson's GCM short-tag attack, which is one of Sean Devlin's Set 8 cryptopals exercises.

GCM is so bad that when Sean and Hanno gave their GCM talk at Black Hat last year, they were able to inject their slides directly into the TLS session of a GCHQ website and serve them from there.


The ways in which encryption algorithms fail with bad inputs vary widely.

One older and real-world example is DSA vs. RSA: if you use the same random number twice when making DSA signatures, an attacker can recover the private key. Or if you use different random numbers but don't keep them secret, an attacker can also recover the private key. This is a real problem if you happen to use, say, Debian when it had the OpenSSL RNG bug. RSA has no such flaw; the attacker would need you to have made a series of much-more-unlikely mistakes to recover a private key out of RSA.
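The nonce-reuse break is simple enough to show with toy numbers. Given two DSA signatures that share the same k, subtracting the two s equations cancels the private key and reveals k, after which the private key falls out. A sketch with tiny, illustrative parameters (real DSA uses 2048-bit p and 224/256-bit q):

```python
# Toy DSA parameters, far too small for real use.
p, q, g = 607, 101, 64   # g = 2**6 % 607 has order q modulo p

def sign(x, k, h):
    # x: private key, k: per-signature secret nonce, h: message hash.
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

def recover_key(h1, sig1, h2, sig2):
    # Two signatures with the same k share the same r; subtracting the
    # s equations cancels x and reveals k, and then x follows.
    (r, s1), (_, s2) = sig1, sig2
    k = (h1 - h2) * pow(s1 - s2, -1, q) % q
    return (s1 * k - h1) * pow(r, -1, q) % q

x, k = 57, 23                       # private key, fatally reused nonce
sig1, sig2 = sign(x, k, 5), sign(x, k, 9)
assert recover_key(5, sig1, 9, sig2) == x
```

With the Debian RNG bug the attacker didn't even need two signatures: the possible k values were so few they could simply be enumerated.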

See also https://www.imperialviolet.org/2013/06/15/suddendeathentropy...

If you're wondering why GPG, OpenSSH, etc. started defaulting to RSA in the past few years, that's why. (If you're wondering why they originally picked DSA, patents.)


Worth mentioning that ECDSA suffers the same problems as classic DSA if you reuse a nonce (for different messages). This led to the extraction of a PlayStation 3 signing key.

Why does DSA suck? Patents. It was designed to work around the patent on the much simpler Schnorr signature scheme (which is immune to nonce reuse due to the way message content and nonce are hashed together).


DSA also has much smaller signatures, which surely was a nice advantage.

On top of that, DSA was limited to 1024 bit keys and 160 bit hashes until the standard was extended and the extension propagated to implementations again. RSA signatures gained a lot of use while this was happening.


There are many ways an (authenticated) encryption algorithm can fail, including

1. attacker learns the key, and can encrypt and decrypt all messages

2. attacker can encrypt any message

3. attacker can decrypt any message

4. attacker can create a valid random encrypted message, but doesn't know what it says

5. attacker can decrypt only the messages that were wrongly encrypted

6. attacker can determine whether two encrypted messages are the same, but can't decrypt them

7. attacker can determine whether the two wrongly encrypted messages were the same, but can't decrypt them

Some of these situations are clearly less bad than others; I'm pretty sure at least #5 can be guaranteed.


Yeah, Chacha20 would fail the same way if we used the same key and nonce for different messages. I'm not sure what he was getting at.


With Chacha20 the worst that can happen is plaintext recovery, which is a lot less bad than full key recovery.


> In adopting the Bernstein algorithm suite and its implementation, implementers have rejected both the highly brittle and failure-prone current algorithms and mechanisms and their equally brittle and failure-prone implementations.

Right for the throat.


So djb writes crypto code that works and everybody switches to it but it isn't because his stuff is any good? Through the entire article he doesn't list anything wrong with the djb crypto stuff and ends up calling it 'brackish' and 'camel dung'? Really? It isn't because djb stuff is good, it is because everything else is crap? WTF?


Gutmann writes a bit colloquially, and the text was a message on a somewhat informal mailing list. So keep that in mind regarding the tone.

The overall point of the post was not to blast DJB, but, well, to point out the crypto monoculture, like it says on the tin. DJB's stuff is good, which is why it is popular. The problem Gutmann is highlighting (mostly) is that nobody's competing with him.

That problem? Monocultures are high-risk. A bug or a cryptanalysis breakthrough could render "everybody's" security broken at the same time. Or, DJB is eaten by a bear; who supports that code now?


If DJB is eaten by a bear I think we'll manage to use Ed25519 and Chapoly just fine

Moreover, you can't cryptanalyze Daniel Bernstein himself to break Curve25519 or Chapoly, so he's not a single point of failure. :)


To be honest, nonce reuse with Bernstein's authenticated encryption algorithms will lead to the same problem the author points out with GCM (i.e. plaintext recovery). However, the biggest issue with GCM isn't that the plaintext leaks when reusing nonces, it's the fact that reusing nonces lets an attacker forge arbitrary ciphertexts.


But… Poly1305 has the same "problem"…


Gutmann's wording here is imprecise. GCM and Poly1305 are not comparably brittle. Both have nonce misuse issues, but GCM has additional problems. See:

https://news.ycombinator.com/item?id=13384762


> Compare this with old-fashioned CBC+HMAC (applied in the correct EtM manner), in which you can arbitrarily misuse the IV (for example you can forget to apply it completely) and the worst that can happen is that you drop back to ECB mode, which isn't perfect but still a long way from the total failure that you get with GCM.

It is not. As Dan Boneh stresses in his cryptography course, a cryptosystem is either secure or “terribly, terribly, insecure”.


> "This isn't just theoretical, it actually happened to Colin Percival, a very experienced crypto developer, in his backup program tarsnap."

Colin was using AESCTR+HMAC rather than GCM, but there was a nonce reuse bug six years ago:

http://www.daemonology.net/blog/2011-01-18-tarsnap-critical-...


(2016)


TLAs will love this. Only one stack to break and/or subvert.


Why personify crypto? Let the math alone be the driver in this domain. What is solid and true will eventually trickle up.


> Let the math alone be the driver in this domain. What is solid and true will eventually trickle up.

Unless you have some way of proving that one-way functions exist, which would imply as a consequence P ≠ NP, this is literally untrue (unless by "eventually" you mean "one day centuries hence, maybe we'll prove P ≠ NP"). There is not a single piece of crypto in use that can be proven secure today; all the crypto we use is safe only because mathematicians have so far been unable to find a practical way to break it.

This requires mathematicians to be actively trying to break the crypto, so personalities matter because you can't get a worldwide community of experts to study every random person's algorithm. This also requires mathematicians to have a sense of which ways a particular algorithm might be fragile, even if they can't break it quite yet, which requires some non-mathematically-provable trust in the cryptanalyst's good sense.

And, of course, if you believe that a certain cryptographer might be inserting back doors or prone to approving crypto that has back doors (see for instance Dual_EC_DRBG), you might distrust other output from that same cryptographer even if you can't prove anything yet.


Not only is the one-time pad proven secure, the proof is also obvious without much knowledge of math.


It is not secure: it doesn't provide authentication (integrity protection) and it isn't nonce-reuse-resistant.


Integrity can be provided with a Wegman-Carter scheme, which comes with a security reduction. The combination of one-time pad + Poly1305, for instance, is provably secure. It's just impractical, and you have to rely on your chacha20 random number generator from /dev/urandom anyway.

Also good luck finding a nonce-reuse resistant algorithm.


Does the correctness of Poly1305 not depend on P ≠ NP? I guess this boils down to whether the correctness of AES depends on P ≠ NP, which it might not since it's symmetric?

The failure modes of reusing a nonce are never good, but some are far worse than others. The one-time pad's failure mode is particularly disastrous; it leaks the XOR of two plaintexts. There are a good number of algorithms that only leak whether the same message was repeated.
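The two-time-pad failure the parent describes is one line of algebra, easy to see in a quick Python sketch:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)        # a perfectly random one-time pad
p1 = b"attack at dawn!!"
p2 = b"retreat at nine!"
c1, c2 = xor(p1, key), xor(p2, key)  # pad reused: the fatal mistake
# XORing the two ciphertexts cancels the key entirely...
assert xor(c1, c2) == xor(p1, p2)
# ...so knowing (or guessing) one plaintext reveals the other outright.
assert xor(xor(c1, c2), p1) == p2
```

In practice even p1 XOR p2 alone is usually enough: crib-dragging known words through it recovers both messages.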


> Does the correctness of Poly1305 not depend on P ≠ NP?

Not that I know of. As far as I know the security reductions are unconditional.

Even worse than revealing the XOR of 2 plaintexts is the key recovery enabled by GCM (source: other comments in this thread).

Nevertheless, it doesn't matter much: avoiding nonce reuse is easy. And if you're really scared, just use a random nonce from a big enough space (192 bits, like XChacha20's, is good). That way, given access to a random source, you can turn any encryption algorithm into a nonce-less algorithm.


Security issues often come from engineering and culture, not math.



