Hacker News
Important security vulnerabilities in OpenVPN (guidovranken.wordpress.com)
266 points by guidovranken on June 21, 2017 | 93 comments



Vulnerability #2 is a good example of why OpenSSL is a minefield even for a competent coder. For such a security-critical library it's pretty insane that the API is so unfriendly, bordering on hostile.

> The correct way to do this is to call GENERAL_NAMES_free. This is because sk_GENERAL_NAME_free frees only the containing structure, whereas GENERAL_NAMES_free frees the structure AND its items.

And later:

> Here, the code assumes that a return value that is negative or zero indicates failure, and ‘buf’ is not initialized, and needs not to be freed. But in fact, this is ONLY the case if ASN1_STRING_to_UTF8 returns a negative value. A return value 0 simply means a string of length 0, but memory is nonetheless allocated, so there are memory leaks here as well.

It mirrors my experience working with OpenSSL: you have to quadruple-check each function invocation against the docs to make sure you got it right. You're never sure at a glance what's an input or an output parameter, what needs to be freed and how you're supposed to free it. What's the return value in case of error? 0? -1? <0? <=0? And then you have the macro soup with their STACK_OF "abstraction" (in the leakiest sense of the word) that just serves to make the code harder to reason about and is a huge pain when you want to create OpenSSL bindings for another language.

Let me be clear: I don't blame the OpenSSL devs in any way. It's free, it's open source, they don't get a ton of money for that. Instead I blame all the big corporations who use OpenSSL "as-is", directly or indirectly, and don't invest some money to improve that mess. Maybe after a couple more Heartbleed-like vulnerabilities they'll take it a little more seriously.


The LibreSSL fork tries to do exactly that: Although they still provide the original OpenSSL API for existing applications, they add a much simpler and hassle-free API on top of it ("libtls" [1]), with the goal that all applications will be switched part-by-part to the new interface.

[1] https://www.openbsd.org/papers/libtls-fsec-2015/


Does it make sense to stop recommending OpenSSL and start recommending LibreSSL now?


Yes. Or tell people to use mbed TLS or s2n or the other one that doesn't allocate. Don't use OpenSSL.


BearSSL


Yes, if only because LibreSSL is still a working drop-in replacement (which is nice to have as a first step of adoption), and because in LibreSSL they threw away lots of legacy code.

This throwing away of legacy code should not be underestimated. There were quite a lot of recent OpenSSL security issues from which LibreSSL had just a fraction, simply because the vulnerable code is no longer present at all in LibreSSL.


> you have to quadruple check each function invocation with the docs to make sure you got it right.

If the docs contain that information ¯\_(ツ)_/¯


Ow. The memories.

Back in 2004 I was writing a piece of networking code that had to do handshakes with parameters that were not exposed in OpenSSL API. After many days of code diving and mailing list digging, I came across an old thread.

Some kind soul had written up the man pages for the family of low-level functions I needed to use. He sent the doc update as a patch to the mailing list, and the response from the devs has seared itself into my memory: "NAK. If you use these functions, you need to understand all the code around them anyway."

That day I learned everything I needed to know about OpenSSL and its development.


There's always a trade-off between how complicated the docs and usage are and how many people will use it. OpenSSL is problematic because it has very complicated docs and usage, but people still used it anyway. This leads to a situation where people use it incorrectly, and for a crypto library, that's a very bad thing.

If OpenSSL wasn't the de facto king of OSS crypto libraries for so long (partly because it was the only one for so long), then people might have been able to make more informed choices about what library was better.


> If OpenSSL wasn't the de facto king of OSS crypto libraries for so long (partly because it was the only one for so long),

Netscape released the NSS library in 1998, and as far as I can tell it hasn't even changed its ABI incompatibly since 2000, but perhaps this is too recent for programmers to take notice.


I have no idea why NSS never got more use, but it didn't, for the most part. I assume there's a reason, but major adoption seemed to have been limited to Mozilla projects, Sun Enterprise Java, and OpenOffice (possibly because it also originated from a Sun project, and it seems to have enjoyed some popularity at Sun). If I had to make a wild guess from that info, it would be that it's more complex to work with, as odd as that sounds given the current context (although I guess it's possible OpenSSL got more complex over time, and was much simpler a decade ago).


For one, NSS is not the kitchen sink that is OpenSSL. With OpenSSL you can do anything with any file you can get from anywhere.

Secondly, when using openssl-based servers it's usually easier to specify certificates and keys. You don't have to import them into a database, you only configure the filesystem path.


This mirrored my confusion when `AES_cbc_encrypt` modified my IV data when _decrypting_.


"The correct way to do this is to call GENERAL_NAMES_free. This is because sk_GENERAL_NAME_free frees only the containing structure, whereas GENERAL_NAMES_free frees the structure AND its items."

Holy moly. It took me about five attempts just to spot the difference in the two names. Can it even be fixed at this point, or does it need to start fresh?


> You're never sure at a glance what's an input or an output parameter, what needs to be freed and how you're supposed to free it.

Terrifying; would a more functional style of programming help here?


Using "const" consistently would be a good start. Better and more thorough docs would be nice as well.

If you're willing to break everything, a more consistent API in general, especially when it comes to error handling, would make it less easy to shoot yourself in the foot.

It's the overall inconsistency of the OpenSSL API that's the main issue IMO. If the coder or code reviewer doesn't know all the function signatures and return values by heart, it's very easy to write code that looks absolutely fine, and might even seem to run fine, but is thoroughly broken. TFA is a good example of that.

That's why nowadays I much prefer using the Rust bindings over the raw C library. The error handling and ownership rules are checked by the compiler; it's like putting blanks in your footgun. It's liberating, really.


Maybe a wrapper API for OpenVPN/SSL that attempts to Do It Right under the hood would be worthwhile?


Proper API design and documentation would be quite sufficient, but breaking and expensive.


There's no reason they couldn't use more object-oriented return methods to make it easier to grok what's going on.


So I just installed OpenVPN on my phone. First place I go to test it is HN. And this story is literally at the top of the list.

Sigh.


I'm not sure these are so dangerous as all that. I see some server and client crashes, but nothing that'd allow transparent MITM, RCE, or the like. Perhaps there's something I have missed, and if so I hope someone more knowledgeable here will point it out - but right now I don't intend to stop using OpenVPN, because even if it's possible that a malicious network might crash the VPN stack on my phone or similar, that's still preferable to sending unprotected traffic over an untrusted network. If my VPN clients die, at least I know something's up!


A double-free can traditionally allow remote code execution. Maybe modern libc mitigations will protect you, maybe not.


Fair point. But it looks like the scope on that one is pretty limited:

> There are several issues in the extract_x509_extension() function in ssl_verify_openssl.c. This function is called if the user has used the ‘x509-username-field’ directive in their configuration.

That option, per [1], appears to exist in order to support odd TLS certificates that don't use the CN field as an identifier, and is also behind a configure #define that appears not to be enabled by default. So that, if I'm reading this right, is a pretty narrow attack surface in general - not inconsiderable, to be sure, but something of a corner case.

[1] https://community.openvpn.net/openvpn/attachment/ticket/124/...
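For context, the directive in question looks like this in a server config (a sketch: the configure flag name below is from memory and the exact spelling may vary by version, per the parent it is not enabled by default):

```
# Only honored if OpenVPN was built with --enable-x509-alt-username;
# authenticates clients on e.g. emailAddress instead of the CN field:
x509-username-field emailAddress
```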


Someone else trying out Protonvpn, I see :)


You don't need necessarily to be using a specific service.

You can configure your own OpenVPN server quite easily, and use the available vanilla client applications to connect to it.


The fun part is that a somewhat premium (read: more bandwidth) VPN option will cost around the same as a lower tier VPS. So, if you're willing to put up with the 10-15 commands, editing a couple configuration files, and copying config and cert files over, you can get your own VPN and a Linux server to use.


Right, but for certain classes of users, VPN as a service offers anonymity and endpoints around the world.


Yup. Most VPNs with decent bandwidth cost $4/month. You can get pretty much the same service for $4/year if you buy a cheap NAT VPS and spend 10 minutes setting up OpenVPN.


Could you point me towards a $4/year VPS? Would gladly buy it.


I bought mine from https://i-83.net/ It's £4.50/year, but you can probably find promotion codes that'll make it cheaper.

Have also heard good things about http://lowendspirit.com/

(sorry for the late reply)


Since it's not clear from the blog post or the other HN comments: the vulnerabilities are fixed in OpenVPN 2.4.3 and 2.3.17.

OpenVPN users are advised to upgrade[1] "as soon as possible."

[1] https://openvpn.net/index.php/open-source/downloads.html


I don't understand why the ASSERTs were commented out; wouldn't they actually potentially protect against some of the listed issues?

The article mentions they interfere with libFuzzer; but isn't a fuzzer expected to detect and handle crashes as part of its core functionality?


You have two kinds of fuzzers:

1) out of process, like AFL: The application is launched for each test, and there's no problem handling a crash, whatever the cause (asserts are ok). But for each test there's the application start-up (process creation, etc.) overhead.

2) in process, like libFuzzer: The application is launched only once. Then, inside the application context, the library iterates over the tests. So no application start-up overhead (nice), but a brutal exit is a problem. That's where asserts become a problem.

I guess that's what's at play here.
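For reference, a minimal sketch of the in-process harness libFuzzer expects (`parse_packet` is an invented stand-in for the code under test):

```c
#include <stddef.h>
#include <stdint.h>

/* hypothetical function under test: pretend 0x17 marks a valid
 * record type and anything else is rejected */
static int parse_packet(const uint8_t *data, size_t len)
{
    return (len > 0 && data[0] == 0x17) ? 0 : -1;
}

/* libFuzzer calls this once per generated input, in-process.
 * An assert() firing inside parse_packet would abort() and kill
 * the entire run, which is exactly the problem described above. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_packet(data, size);
    return 0;   /* non-zero return values are reserved by libFuzzer */
}
```

Built with a recent clang as e.g. `clang -g -fsanitize=fuzzer,address harness.c`; the fuzzing runtime supplies main().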


Why not change asserts (they are macros after all) to tell the fuzzing library that it found an error? Finding bugs is the whole point of the exercise, isn't it?


From a quick look at the doc (http://llvm.org/docs/LibFuzzer.html) this doesn't seem to be supported.

Aborting from an assert deep in the code has some challenges: unwinding the stack is ok, but how would one avoid memory leaks for heap data? It would require trapping all mallocs to track allocated data and free it. Not impossible, but it adds very significant complexity, and that doesn't seem to be the philosophy of libFuzzer. It seems to me that the goal of libFuzzer is to be very easy to use (just define one function for the test harness, compile with Clang and the fuzzing flag, done) and efficient on an already reasonably well-behaved library.

It makes sense to me: just start with AFL, which has no problems with crashes. When you reach AFL's limits and need more efficiency, only then move to libFuzzer (both can share state / the test framework). Then the assumption that the app is reasonably well behaved makes sense.

Caveat: I haven't used libFuzzer yet. For the reason above, I'm just using AFL for now. Maybe one day but I'm not there yet ;)


> But for each test there's the application start-up (process creation, etc.) overhead.

That's not completely true: for AFL you have __AFL_INIT(), which defers AFL's initialization until after the program under test has done its own expensive initialization, greatly improving performance.
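A sketch of how that looks in practice (`expensive_setup` and `run_one` are invented stand-ins; `__AFL_HAVE_MANUAL_CONTROL` is the guard afl-clang-fast defines, so the file still builds with an ordinary compiler):

```c
#include <stddef.h>

static int ready;

/* runs once, before AFL starts forking test cases */
static void expensive_setup(void)
{
    /* parse configs, load certificates, warm caches ... */
    ready = 1;
}

int run_one(const unsigned char *data, size_t len)
{
    if (!ready) {
        expensive_setup();
#ifdef __AFL_HAVE_MANUAL_CONTROL
        __AFL_INIT();   /* AFL forks each test case from here on */
#endif
    }
    /* ... feed data/len to the code under test ... */
    (void)data;
    return len > 4096 ? -1 : 0;   /* arbitrary placeholder check */
}
```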


As I understand it, he disabled asserts in places which would certainly fail (doing system stuff, modifying system state and such), but not in the actually relevant parts.


You might want to preserve the test case as it's found an interesting code path. You could keep it around so tests generated from it cover similar paths, possibly without reaching the assertion.


It's good to see that 2 donations, totaling 0.80260293 BTC (~2000 euro), have been made to Guido's wallet [0]. Probably not worth his time, but since he states it was "a labor of love" it's still a nice extra!

Edit: corrected BTC amount.

[0] https://blockchain.info/address/1D5vYkiLwRptKP1LCnt4V1TPUgk7...


Private Internet Access VPN provided $1,000 of that and thanks OP profusely [0].

[0] https://www.privateinternetaccess.com/forum/discussion/24191...


OpenVPN is very complicated and for that reason I use sshuttle[1] which is very simple.

It does require that you have sshd running on an endpoint somewhere (like a VPS or an EC2 instance or your own server somewhere), but if you can clear that hurdle, you end up with a very elegant and simple solution.

[1] https://github.com/sshuttle/sshuttle
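For a sense of how simple: the basic invocation is a single command (hostname is a placeholder; the remote side needs nothing but a normal SSH login).

```shell
# Tunnel all IPv4 traffic through the remote sshd:
sshuttle -r user@my-vps 0.0.0.0/0
```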


For Mac users, Sidestep [1] offers a similar VPN over SSH experience. It's the only VPN product I've ever used that I've never had an issue with...it just works (no doubt due to the bulletproof nature of SSH).

[1] http://chetansurpur.com/projects/sidestep/


This interested me, but it appears to be abandoned: the last commit was on 2013-10-25, and comments like the one at the end of this issue thread:

https://github.com/chetan51/sidestep/issues/62

say as much.


tinc is also a popular lightweight VPN, although it's targeted at embedded use. It does nevertheless support the LibreSSL fork mentioned upthread. https://www.tinc-vpn.org/


+1 for tinc


The first line on the sshuttle GitHub page says 'does not require admin'. Why even have installation instructions via 'sudo' then?


A benefit of installing using your system package manager is that you can rely on your distro to manage the security of the package. If you just `pip install` it, you need to personally watch for new security bugs and upgrade (or backport fixes to your current version, which is what e.g. Debian will do for you). You'd also need to do that for all of the dependencies.

Up to you if you trust your distro's security team, of course :). I trust mine.


I did not say anything about package managers. I trust my distro and its supplied packages just like you trust yours, and aim to use official packages as much as I can.

The instructions in README.md say "sudo pip install" and "sudo ./setup.py". To me, that is a bad idea. Things that are manually installed should be kept in completely separate directories. My preference is to install such packages in my home directory, which should in no way require sudo.


I agree. Looks like I only noticed the apt-get install line and incorrectly assumed that's what you were referring to.


I was thinking of switching to L2TP instead. Would there be any downsides?

I tried out this script once and it worked well:

https://github.com/hwdsl2/setup-ipsec-vpn


L2TP doesn't traverse NATs and other network firewalls very well. Which isn't relevant if you yourself control all of the hardware between endpoints (e.g. point-to-point), but if you plan on using it on public networks (e.g. Starbucks WiFi) then expect to run into it being blocked sometimes.

OpenVPN was designed for the "Starbucks WiFi" scenario, it can traverse a NAT, and uses standard TLS. It therefore appears like any other HTTPS traffic and will be very infrequently blocked.

There's nothing inherently wrong with L2TP/IPSec; it is secure and fast. Just nicer to blend in with other TLS traffic for reliability reasons.

For a specific example, GoGo Internet (in-flight WiFi), doesn't allow L2TP/IPsec unless you use UDP encapsulation. OpenVPN works out of the box.
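As a sketch, the "blend in on port 443" setup described above boils down to a client config along these lines (hostname is a placeholder; the server must also be set up for TCP on that port):

```
client
dev tun
proto tcp          # TCP rather than the default UDP
remote vpn.example.com 443
```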


Where does this idea that OpenVPN is TLS come from? No, it does not appear like HTTPS nor does it use TLS for the traffic.

OpenVPN has its own encapsulation protocol, and it stuffs a few TLS packets inside that to do key exchange, nothing more. (And its key exchange is odd, as they re-implement the TLS PRF ... on top of TLS.)

t. Recently implemented an OpenVPN client.


The man page says it supports TLS.


It supports TLS as an optional key-exchange payload, nothing more. It does not act in any way like "SSL VPNs" that just open a TLS connection and then tunnel data inside.


I would be interested to hear this as well. I favor L2TP/IPsec as it's built into iOS and MacOS and it was easy to configure my edgerouter-x to be a server for use on the go.


I’m gonna take the burden (someone would do that eventually anyway) and ask: how many of those issues would have been completely prevented by using a safer language such as Rust? How many would have been mitigated?

I’m not a system programmer and the article, while indeed interesting, can be a little obscure.


I really think this is the wrong way to look at issues like this.

Software has bugs, some of which are security bugs. Whenever someone goes through the effort of looking at software from a security perspective, issues might get identified and resolved and as such there is an incremental increase in security.

Switching languages requires a full rewrite, which is often not only impractical, it will introduce new bugs as the maturity of the software will plummet because of the rewrite. Then the cycle simply starts again.

Some languages have better features that may prevent some classes of security bugs, and Rust is just the new kid on the block. As such, new languages are unproven in the security area and there will be security bugs found there as well. Java and .NET don't have the same issues either but somehow nobody will ask about those :)

But then again, Rust introduces other bug types for which the future will tell whether they have a security impact or not: https://gankro.github.io/blah/only-in-rust/

My point of this rambling is that yes, C code is really hard to get right from a security perspective. But software like Apache httpd and OpenSSH demonstrates clearly it can be done. And switching languages is not the answer.


> Switching languages requires a full rewrite

Not true when those languages are ABI-compatible. E.g. I believe librsvg is introducing small amounts of Rust that can eventually become a gradual migration.

> it will introduce new bugs as the maturity of the software will plummet because of the rewrite.

Citation needed. I would expect a rewrite guided by an existing implementation (not the same thing as a rewrite aimed at achieving a from-scratch redesign) to result in fewer bugs as it would effectively be equivalent to a full code review, and that's before we take into account the difference-of-language effects.

> As such, new languages are unproven in the security area and there will be security bugs found there as well.

No. There are specific reasons C is so awful; it's not a question of maturity.

> Java and .NET don't have the same issues either but somehow nobody will ask about those :)

People are dumb. There's a popular myth that these languages are inherently slow, which combined with some genuine licensing issues means the OSS community doesn't use them, even though they'd be a perfect fit for OpenVPN-like processes.

> But then again, Rust introduces other bug types for which the future will tell whether they have a security impact or not: https://gankro.github.io/blah/only-in-rust/

No, read your link. Those issues happen only in unsafe code, and at absolute worst mean you're as badly off as you would be in C.

> software like Apache httpd and OpenSSH demonstrates clearly it can be done

Because those programs can go years at a time without a major security vulnerability? Oh wait, no they can't.

Switching languages is absolutely the answer, and the sooner we get over ourselves and get on with it the better.


> Citation needed. I would expect a rewrite guided by an existing implementation (not the same thing as a rewrite aimed at achieving a from-scratch redesign) to result in fewer bugs as it would effectively be equivalent to a full code review, and that's before we take into account the difference-of-language effects.

Humans are fallible. Rewriting the code introduces the possibility that the human coding it might make a mistake. This is true even when using existing code as a guide.


True as far as it goes, but I would still expect the bugs caught by the rewrite to outweigh the newly-introduced ones most of the time.


This has not been my experience in the last couple of decades.


How many rewrites-that-involved-no-redesign have you done?


I fully agree with you. Just a small correction on the rust bugs article that was linked: They're Rust bugs because they violate safety of code the compiler thinks is safe, so they could bite normal users.

The effect of such bugs is that the programmer operates under a false feeling of security, which is arguably worse than knowing you have to be mindful of security. Which is also why the Rust developers take these bugs very seriously.


Sorry yes, what I meant was: they happen only in the presence of unsafe code. Of course the symptom can show up outside and that's one reason they're taken very seriously.


> Switching languages requires a full rewrite ...

It's not ready for use in the wild, but I'll just mention that some good progress is being made on a "C to SaferCPlusPlus[1]" auto-translation assistant. (SaferCPlusPlus is essentially a memory-safe subset of C++.) At this point it can recognize arrays and "pointers used as array iterators" in C code and replace them with memory-safe substitutes. Hopefully an (incomplete but) usable version will be available before long and we can gauge the interest of developers/maintainers.

[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus


How do Apache and OpenSSH demonstrate that C code can be done right from a security perspective?


By the lack of any serious vulnerabilities over the past years.

If you're willing to give it a go, an Apache bug would've fetched you $200K at the latest Pwn2Own.



Agreed. OTOH, last RCE in 2006.


Over the past years Apache httpd has had 187 vulnerabilities, 24 of which had potential remote code execution consequences.

I'm not saying anyone is ever going to find another RCE in Apache. I'll even go as far as to say it's not very likely.

But I don't think "develop a server and then put it in the world's most hostile environments for 20 years until we're certain it's safe" is a particularly efficient development strategy.

People should stop building stuff in C, it's that simple.


OK, you're not wrong here, but let's qualify your answer a little bit for the people that are less "into this" and to introduce some intellectual honesty in the discussion.

Apache has had 187 known vulnerabilities _since 1999_. The full list can be found at http://www.cvedetails.com/product/66/Apache-Http-Server.html...

Of the 24 remote code execution bugs you mention, only 4 of them were found in the last 8 years. The one before dates from 2007. Of those 4, only 1 of them was considered critical, and even that one is Windows only. The others don't even breach a CVSS score of 7.

So yeah, I stand by my comment that they have a pretty good track records over the past years.

And as to stop building stuff in C, I agree that if you start a new project today that depends heavily on network facing code, parsing of untrusted input and does not have massive speed requirements, there are other language choices one could make. But rebuilding Apache in a memory safe language? Oh well, the beauty of open source is that anyone can take a stab at it.


>People should stop building stuff in C, it's that simple.

I would attenuate this in one sense and amplify it in another: it's (almost always) irresponsible to write network-facing code in C.


Which is why, although I dislike some decisions regarding Go's type system, I advocate its use for devops tools and UNIX userspace apps; the less C the better, regardless of whether or not Go has generics.


Using a safer systems programming language like Modula-2 (1978) would, compared to C, prevent:

- Out-of-bounds array and string access

- Implicit type conversions

- Accessing null pointers

- Using pointers for output parameters

- Casting integers to invalid enumeration values

- Incompatible casts between data types

- Allocating less memory than actually required

In Modula-2, anything that requires C-like low-level tricks requires the explicit use of the pseudo-module SYSTEM.

But I can go further back, to 1961 and ESPOL and NEWP, the Algol dialects for Burroughs B5500 systems programming, which already supported what were then known as UNSAFE attributes for such low-level coding outside Algol's strong typing.

They would still allow memory leaks or the error of freeing unallocated memory, but they would panic on the latter, instead of the undefined behavior you get in C.
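To make one of the bullets concrete: C accepts the implicit conversions below silently, with no cast and no diagnostic required (the signed-to-unsigned wrap here is actually well-defined, but rarely what the author meant), where Modula-2 would demand an explicit conversion:

```c
#include <limits.h>

/* implicit signed -> unsigned conversion */
unsigned int to_unsigned(int x)
{
    return x;   /* -1 silently wraps to UINT_MAX */
}

/* implicit narrowing conversion */
unsigned char narrow(int x)
{
    return x;   /* 0x1234 is silently truncated to 0x34 */
}
```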


What would the performance penalty be?


Depends on the compiler.

What many of the younger generation seem to miss is that in the '80s C compilers were pretty lame; most junior Assembly programmers could easily outperform them.

The current state of C compiler performance is the outcome of almost 40 years of research into compiler optimizations for C, not anything inherent to the language itself.

Also for those that prefer "Performance trumps correctness" way of thinking, some compilers offered ways to disable those checks.

But for anyone that thinks "performance trumps correctness" is worthwhile, I advise reading C.A.R. Hoare's 1981 Turing Award speech about the responsibility of software engineers and the use of Algol by industry and law.


All of them. But the remaining question is: how many more would've been introduced by rewriting OpenSSL and OpenVPN in a safe language?

I think in the long run it's probably a good idea to do it, once things have settled a bit and there's a clear "winner" in the safe language front. In the meantime it'll be hard to beat the ubiquity and compatibility of a C library like OpenSSL. OpenVPN on the other hand, as a standalone application, could probably be rewritten right now in Rust, Go or Malbolge if somebody cared enough to do it.


Many of these bugs are specific to memory-unsafe languages. Writing in a memory-safe language would prevent many of these vulnerabilities.

It would not stop all vulnerabilities.


...and even introduce new issues: https://gankro.github.io/blah/only-in-rust/


OpenSSL's security is hard to beat. Constant code audits plus massive usage equals pretty secure software.

The vulns in the blog post are denial-of-service bugs: memory exhaustion and an assertion crash. While these are good to fix, they have little impact on security. You're much more likely to introduce hundreds of bugs attempting to port it to Rust than by just maintaining OpenSSL's C code.

Of course, you could try porting the codebase line-by-line to Rust. It'd certainly be an interesting challenge. But once you're done, you'll need to convince everybody to abandon a hardened, audited codebase for an untested, unaudited codebase written in an unfamiliar language, with a nascent open source ecosystem (mailing lists, CVE processes, etc).


Eh hmm OpenSSL is infamously bad at security?

There have been numerous critical bugs, and numerous criticism of its codebase, and numerous rewrites and migrations away from it!

This is why the Wikipedia page for OpenSSL https://en.wikipedia.org/wiki/OpenSSL lists notable vulnerabilities and even a list of forks away from it!


Well, yes. And now those bugs are fixed. Meanwhile, it's one of the most popular security libraries on the planet. The fact that everybody uses it means you're unlikely to be burned by any given vulnerability since every vulnerability impacts a huge number of people.

From a security perspective, the worst situation to be in is where you're using some obscure library that has a critical flaw that nobody notices because it's obscure. At that point, a reasonably competent pentester will make your app sing and dance, and you won't know until too late.

If you stick with OpenSSL, you leverage the massive investment of resources being poured into it.


> And now those bugs are fixed

"Those bugs" are bad enough that they should have never been there. Moreover they exist only because the codebase was for all intents and purposes unmaintainable.

Given the above, I wouldn't be so sure there are no more bugs.


> It'd certainly be an interesting challenge.

See https://crates.io/crates/ring, which is a port of BoringSSL -> Rust + asm. It still has a lot of C in it, but it's slowly melting away.


.. Or in a language designed for proof of correctness? See e.g. https://mitls.org/ -- I think this is very important work. Imagine only dealing with bugs in the spec?

Maybe one day, when all the hearts are finally bled, something like this will become the default unless and until you have a proven^1 unmet performance requirement, and only then after you've spoken to your lawyers.

1. Pun intended.


I expected to be downvoted («the burden»), but before you do, please keep in mind I'm honestly trying to understand if those issues were preventable by using a different language. Just an honest, curious question.


We can answer that question when (if?) people actually start writing programs in Rust instead of speculating.


Man, I'm eager for WireGuard to hit 1.0. It's very elegant, and I could build a policy layer on it in a couple days which doesn't suck.


Indeed, wireguard looks like it's going to be the bee's knees. It still has a lot of development ahead of it, though.


Shouldn't take too long, though. It's still about 10 kSLoC and it's been more or less feature-frozen for over a year. I get the feeling that it's already probably safer than most deployed VPN systems. Most of the recent work seems to be forward porting, reworking the ratelimiter, and tweaks to the treatment of randomness.


If by 10kLoC you actually mean less than 4kLoC, then I agree with you. :)


There's more than 10k lines of C in the repository, 8200 lines of assembly, and almost three thousand lines of headers. Your kernel module is definitely smaller than the whole repo. I feel like I've just called you fat. ;-)


A lot of such stuff can be avoided by simply using Rust.



