MacOS command line version of the WireGuard VPN is now available for testing (latacora.singles)
333 points by zx2c4 on May 17, 2018 | 193 comments



> It’s a little hard to overstate how big a deal this is. [...] Nobody trusts either codebase, or, for that matter, either crypto protocol. Both are nightmares to configure and manage.

Those seem quite the overstatements. I don't remember having any particular distrust of OpenVPN's codebase or protocol. I also didn't have much trouble configuring either the server or the clients. I particularly liked how OpenVPN's configuration is basically putting command line arguments in a file. I could test configurations by simply changing invocations of openvpn, and when I found something I liked, I just put the options in a file and enabled the service to use that configuration file. Compared to the configuration of other kinds of services, OpenVPN is pretty simple.
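
For example (a sketch; the endpoint and file paths are placeholders, and a real client config also needs a cert/key or other auth):

    # trying options directly on the command line:
    openvpn --client --dev tun --proto udp --remote vpn.example.com 1194 --ca ca.crt

    # the same options, minus the leading dashes, saved to /etc/openvpn/client.conf:
    client
    dev tun
    proto udp
    remote vpn.example.com 1194
    ca ca.crt

    # then:
    openvpn --config /etc/openvpn/client.conf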

EDIT: This is my first time hearing of WireGuard. I can't say how it compares, but the way this reads like a sales pitch and is so dismissive of SSH and OpenVPN, both of which I've had nothing but good experiences with, doesn't leave a good impression on me. I mean:

> Death to SSH over the public Internet. Death to OpenVPN. Death to IPSEC. Long live WireGuard!

Really?

This page would've been more effective on me if it objectively described how WireGuard was better than these technologies.


Here's the OpenVPN security announcements page:

https://community.openvpn.net/openvpn/wiki/SecurityAnnouncem...

There was a server-side remote code execution vulnerability in OpenVPN less than a year ago.

OpenVPN is large and gnarly. It's not one of the better-understood codebases. The assumption that software security assessors have closely inspected it would not be well founded. It has had "formal" audits, but audits are point-in-time snapshots done under timeboxed conditions.

The point of WireGuard, from an assurance perspective, is that it's designed to mitigate these concerns. It is a VPN with most of the mechanism of legacy VPNs stripped out. Less mechanism = less attack surface. It's two orders of magnitude smaller than other VPNs in terms of lines of code. Less code = easier to review = more reviews. It's not just small: it's thoughtfully small, as the post we're commenting on spells out.

OpenVPN is built on TLS. TLS is a complicated protocol. It needs to support a variety of different users and the complexity of that task shows in the protocol; that complexity (and legacy support) is also a big part of the vulnerability history of TLS. Ordinarily, it would be worth assuming the risks of TLS's complexity rather than building a custom transport protocol, because secure transport protocols can be tricky to get right. But WireGuard has had formal academic analysis (in addition to a Tamarin model) and inherits cryptographic design from work Trevor Perrin did. WireGuard's protocol only has to serve WireGuard's use case, and is significantly less likely to have cryptographic vulnerabilities than TLS or the OpenVPN system as a whole.

The right way to think about it is that OpenVPN is like Sendmail, and WireGuard is something closer to qmail.


[flagged]


Please take this weird decades-long argument you've had with Bernstein to a thread where it makes sense. My comparison has nothing to do with the attributes of qmail you're talking about.

(https://cr.yp.to/qmail/knowles.html)


[flagged]


The first post you wrote like this was flagged off the site. If enough people actually see this one, it will be too. What you're doing here is abusive and says more about you than about anything you're trying to argue. Please stop.


Its underlying library, OpenSSL, had all kinds of issues. OpenVPN itself allowed many configurations of varying strength and weakness. OpenVPN users could set their own security up to fail while getting false assurances because they were "using a VPN." It was also a large, complex codebase, which simultaneously increases the number of incidental vulnerabilities and the work necessary for defenders to prevent/find all of them. When one government adopted it, the security firm had to do a lot of hardening (OpenVPN-NL) to reduce its risks. That shouldn't be necessary in a VPN.

On the contrary, WireGuard started with the strongest crypto it could and made it the default. There's also the protocol and implementation correctness to consider. WireGuard was formally verified in Tamarin, kept small on purpose to reduce errors, carefully coded by its project author with a focus on security over feature bloat, and reviewed with positive results by other security-focused coders.

It's better assurance than OpenVPN across the board. Even if I didn't have those details, the fact that WireGuard is usable out of the box in Linux while OpenVPN needs to be hardened and extra-carefully configured first would already tell me what I need to know. Use WireGuard, not OpenVPN, if you're able to.


Just throwing it out there, but WireGuard does more than just make strong crypto the "default". Instead of separating, say, transport from encryption (transformation), they tightly couple them. It intentionally limits this flexibility for the sake of performance and administrative simplicity... in their own words:

> It intentionally lacks cipher and protocol agility. If holes are found in the underlying primitives, all endpoints will be required to update.

(https://www.wireguard.com/papers/wireguard.pdf)

Just skimming their whitepaper, it seems this is done to cram everything into Layer 3, which has administrative and performance benefits but comes with a significant tradeoff in flexibility.

They follow up that statement with something that slightly confuses me:

> As shown by the continuing torrent of SSL/TLS vulnerabilities, cipher agility increases complexity monumentally.

But the general sense I am getting is that WireGuard is strongly opinionated and limited... and one way of spinning that is "less vulnerable to user error". Personally, I have trouble equating that with WireGuard being more secure. For similar reasons, I have trouble equating OpenVPN configuration with "requiring hardening".

That said, WireGuard's approach and lower layer give it, like IPsec, some serious performance advantages over OpenVPN. Their whitepaper also seems to indicate that it performs better than IPsec, but I would have to see more benchmarks from other sources before drawing a definitive conclusion on that.

Judging from this article and the comments in this overall thread, though, it appears that WireGuard has some zealous (perhaps a bit over-zealous) evangelists.


There's a growing consensus among cryptography engineers (by no means all of them, but a pretty significant set of them) that cryptographic agility in protocols is a design mistake.

That doesn't mean that we should assume modern primitives and constructions are unbreakable (though the trend has been for protocol designers to seriously overestimate the likelihood of _any_ primitive being broken).

What it means is that instead of designing and implementing elaborate handshake mechanisms, which inevitably succumb to downgrade attack bugs or, often, worse things, we should version entire protocols.

So obviously, if the ChaPoly stack is broken at some point, people will need to update WireGuard (just like they update OpenVPN when TLS bugs are discovered). But the protocol doesn't need to account for anything more than the fact that it might someday be replaced with a different protocol using different primitives.

The "agility" protocol designers are looking for has probably, historically, been implemented at the wrong layer. As usual, the closer something is to policy or prognostication, the further up the stack it should go. TLS bakes this stuff into the transport layer, which has hamstrung us and, ironically, not actually done much to protect us against vulnerabilities. WireGuard makes a different, better decision.

There's no tradeoff here; this decision is by itself a reason to adopt WireGuard.


"There's a growing consensus among cryptography engineers (by no means all of them, but a pretty significant set of them) that cryptographic agility in protocols is a design mistake."

And yet in 2018 we have proposals like this: https://tools.ietf.org/html/draft-ietf-ntp-using-nts-for-ntp...

To me it looks enormously over-engineered, with options, versions and flexibility everywhere.


There's an even growinger consensus among cryptographers that IETF cryptography standardization is a disaster, even with smart people chairing the WGs. It's just a bad process.


I'm on a team using Wireguard as a primitive in a complex system: http://altheamesh.com

We use Wireguard as an all-purpose tunnel to encrypt bulk traffic in two places in the system. OpenVPN's performance is unacceptable, and IPSec is impossible to work with. Wireguard is very fast, elegant, easy to understand, and well encapsulated. The zealotry is warranted, and its uses go far beyond your basic desktop VPN use case.


Your project looks interesting, but what are your plans to solve the "exit node problem", e.g. one end user downloads child pornography and two weeks later the Uplink node is swatted by the FBI?


We are running exit nodes which are no more risky than running a regular ISP. Other companies are welcome to run exit nodes as well once there is more demand.

EDIT: just to clarify, the exit nodes are in data centers near the networks; they are distinct from "gateways", which are the nodes selling internet bandwidth into the mesh network.


Ok, so the uplink gateways don't actually sink/source packets to the cleartext Internet, they have one more (encrypted) hop to your datacenters first?


How do Wireguard and ZeroTier compare?


My list shouldn't be seen as exhaustive. You're right that the project has even more beneficial traits.

"But the general sense I am getting is WireGuard is strongly opinionated and limited...and one way of spinning that is less vulnerable to user error. Personally, I have trouble equating that to meaning WireGuard is more secure."

The principles behind which systems are going to be more secure have been consistent from people like Paul Karger doing 1970's designs/pentests to folks like Bernstein later on. The number of vulnerabilities goes up with complexity: handling state, potential interactions that are visible, covert/side channels that might not be, and too many options for users increasing the odds of bad configs. We've seen all of these happen to complex VPNs, too.

So, these aren't mere zealous recommendations: many of us have watched decades' worth of software get broken by ignoring proven principles, WireGuard is one of the few pieces of software following them, empirical evidence says it will be more secure in most (if not all) cases, and so we're promoting it.

A better metaphor would be security engineers and researchers promoting a security product because it has strong evidence of actually being secure. Which is exactly what you're seeing from at least four of us here. ;)


It does give extensive rationale: not only is it easy to configure (much easier than OpenVPN, let alone anything IPsec), the trusted codebase is tiny, and the cryptographic protocol is significantly more trustworthy.

Respectfully: I suggest you check out the research and design that went into wireguard first; the two protocols are not equally trustworthy just because you're unfamiliar with one of them.


> Those seem quite the overstatements.

In addition to other comments, OpenVPN's default cipher is Blowfish. Since that sounds unbelievable, here's the manpage:

https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage

"The default is BF-CBC, an abbreviation for Blowfish in Cipher Block Chaining mode."

If you read just a little further, you'll see that RC2, 56-bit DES, and other obsolete ciphers are supported as well. To put this in perspective, DES is from the same time period as Pink Floyd's Dark Side of the Moon.
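
(If you're stuck on a version where BF-CBC would apply, explicitly pinning the cipher in both server and client configs sidesteps the default; a minimal sketch:)

    # in both server and client configs:
    cipher AES-256-CBC   # or AES-256-GCM on 2.4+
    auth SHA256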

I'm very excited for WireGuard to be ported to other platforms.


When both sides are OpenVPN 2.4+, AES-256-GCM will be negotiated by default.


Emphasis on "negotiated".

The entire negotiation and key agreement stack is a larger codebase than all of Wireguard.


The difference between “it can be made to work securely” and “it works securely” is much more important than most people realize.


OpenVPN isn't that hard to configure; I have also done this on a few occasions. However, I do believe it's true that to approach WireGuard's "standard" attack surface profile with OpenVPN, you have to go beyond a basic, naive OpenVPN configuration. Hardening OpenVPN requires additional effort, and some amount of expertise and investigation, to evaluate and apply configuration choices; the configuration you end up with must also be matched by the clients, which is a complication when you have limited access to or authority over them.

I agree that the page is a bit hyperbolic, but don't let that dissuade you from investigating WireGuard; the people behind WireGuard are hard-headed engineers and the work is worthy of your attention.


Wireguard is easier to configure and use, and in my informal tests it performed at close to line speed, around 870 Mbit/s on a gigabit network.

This is extremely fast for an encrypted network, given that most other solutions tend to take a severe toll on performance, often operating at 140-180 Mbit/s on gigabit networks.

The only drawback currently is that it's a kernel module and needs to be compiled, which makes setup across systems a bit involved; however, there are ongoing efforts to merge it into the kernel.

It's unfortunate that innovative open source tools like Wireguard, which add a lot of value to networking and clustering, are not better known.
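
(The compile step is mostly hidden behind packaging; on Ubuntu, for instance, the out-of-tree module ships via the project's PPA and DKMS rebuilds it per kernel. A sketch, assuming that packaging:)

    sudo add-apt-repository ppa:wireguard/wireguard
    sudo apt-get update
    sudo apt-get install wireguard   # pulls in wireguard-dkms and the wg(8) tools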


> The only drawback currently is that it's a kernel module and needs to be compiled, which makes setup across systems a bit involved; however, there are ongoing efforts to merge it into the kernel.

Do these things have to be kernel modules? Is this a kernel module on OS X?

I ask because 90%+ of the time my OS X system goes unstable the moment I add a .kext.


There is a Go implementation which runs in userspace; that's what the OSX version is. No .kext here.


OpenVPN can also be configured to operate at near line speed. It all depends on the underlying hardware, software, and configuration you're using.

Without being more specific in those areas, benchmarking the best reasonable configuration of one tool against the best reasonable configuration of the other, and holding the rest of the factors constant, you've drawn a completely implausible and unsupportable conclusion.

I'm not saying that WireGuard isn't faster. I'm saying that you have done a poor job of describing how it is faster and of pointing to benchmark references to back up your claims.


> This page would've been more effective on me if it objectively described how WireGuard was better than these technologies.

Not to mention telling me where to find the code.


It's just pure marketing - create a problem where it doesn't exist, and sell the solution.


Would you care to rebut the arguments the blog post puts forth, such as by arguing that wireguard is actually harder to configure than IPSec or that the protocol is actually less trustworthy?

IPsec’s design era, implementation size... really nothing about the thing is even vaguely comparable.

Also: wireguard is free and open source software. So what precisely are we (people advocating for it) selling? We paid a hell of a lot more than the cost of an OpenVPN AS license so that everyone could get a macOS client, for free. I don’t make a dime off of you using wireguard.

So far 100% of the deployed OpenVPN I’ve seen has had bad crypto. Wireguard just categorically doesn’t have that problem.


Oh, the problem definitely exists. And it shows in the fact that only more sophisticated teams use VPNs.

And when they have to use a VPN, too many teams use OpenVPN because it's seemingly the easiest to use. There are also scripts like the amazing Algo that make it easier to run your own VPN server.

But this has historically been a totally unsolved problem from the perspective of usability.

I look at it as the difference between PGP and keybase. PGP was always too complex to use, and that's a bad security design. Keybase is a lot easier to use (and Signal even easier).

Wireguard is like that. Although I haven't used it yet, the configurations seem like "really? that's it? there's got to be more to it"
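
(For the curious: a complete client config is roughly the following, which wg-quick(8) reads from /etc/wireguard/<name>.conf; the keys, addresses, and endpoint here are placeholders.)

    [Interface]
    PrivateKey = <output of `wg genkey`>
    Address = 10.0.0.2/32

    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 0.0.0.0/0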


Stated with such confidence... Clearly you've never dug into the internals of OpenVPN.


Yes, as an everyday user of OpenVPN for years, I find the above "death chants" extremely off-putting and amateurish. It implies the author of the post lacks the elementary common sense to resist bad-mouthing existing projects that are serving a useful purpose.

Speaking of WireGuard... FWIW, we made a QEMU Advent Calendar [1] disk image [2] for it in December 2016. Check that out if you want to give it a spin in a virtual machine.

Reading this thread reminds me to give it a try again now.

(I recall listening to the Wireguard talk at FOSDEM 2017, and a long-time Linux kernel developer, who works on memory management, was sitting next to me and remarked that he was not convinced of some details -- but unfortunately I don't remember the technical reason he mentioned as to why he thought so. Sorry :-( )

[1] http://www.qemu-advent-calendar.org/2016/#about

[2] http://www.qemu-advent-calendar.org/2016/#day-21


Hm, guess the downvotes are for the useless remark in brackets at the end, which I already hesitated about before hitting "reply"; I was genuinely trying to recall. (And probably for the chiding of the post author in the first paragraph.)

Live and learn.


What's the worry about SSH over the public internet? Possibly influenced by my college's network setup, I've always been on team BeyondCorp even before BeyondCorp was a thing. Just use some auth mechanism like SSH keys that prevents brute-force attacks, and disable password auth.

Do pre-authentication OpenSSH vulnerabilities happen a lot, or something?
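
(The "disable password auth" part is just a couple of sshd_config lines, for the record:)

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PermitRootLogin prohibit-password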


Can't speak about security aspects, but:

If you ever use SSH to tunnel/forward TCP traffic, you should be aware of certain performance issues [1], due to TCP's re-transmit mechanism misbehaving (stacking) in a TCP-in-TCP scenario. Expect seemingly random delays, followed by bursts of transmission, even well below the nominal capacity of the connection. This also causes an overall increase in the amount of data sent.

The same caveat applies to any other TCP-in-TCP tunnel (as opposed to the usual TCP-in-UDP setup), unless one or both tunnel endpoints specifically handle re-transmits properly.

[1] http://sites.inka.de/bigred/devel/tcp-tcp.html


SSH tunnels do not do TCP-in-TCP. They just take the contents of a TCP stream (not the packets) and forward them over an SSH channel.


You are partially wrong.

OpenSSH, and the SSH standard in general, support "multiplexing SSH connections", where several remote shell sessions share a single TCP connection. This is unaffected, as there's only one TCP layer in play. For OpenSSH, this is the "-M" option and possibly "-J" (not sure).

Another, unrelated functionality is straight-up TCP tunneling over the SSH connection. This is an actual TCP tunnel, with all the advantages and warts of the setup. With OpenSSH, it's the "-L" and "-R" options. Those options take host:port arguments and do proper TCP-level forwarding, encrypted and compressed like any other SSH stream, layered over any transport your SSH happens to be using, which typically is also TCP, thus creating a TCP-in-TCP scenario.

There's also "-w" which tunnels tun(4) connection, over which you could end up sending TCP traffic.

Note that neither 'authentication agent forwarding' nor 'X11 forwarding' has anything to do with any network tunneling; those just pass small chunks of metadata out-of-band.


> With OpenSSH, it's options "-L" and "-R". Those options take host:port arguments, and do proper TCP-level forwarding, encrypted and compressed like any other SSH stream, layered over any transport your SSH happens to be using. Which typically is also TCP, thus creating a TCP-in-TCP scenario.

I don't believe this is right - the thing tunneled over SSH is the data flowing within the TCP connection, not the TCP connection itself. If I do `ssh -L8080:foo:8080 bar.example.com` and then connect to http://localhost:8080/, then my browser's TCP connection terminates at my local SSH process, which decapsulates the TCP stream and sends it over SSH. Then the SSH server on bar creates a new TCP connection to foo.

Therefore there isn't TCP inside TCP. There are three TCP connections connected in series: my browser to ssh on localhost (HTTP inside TCP), ssh on localhost to bar (HTTP inside SSH inside TCP), and sshd on bar to the web server on foo (HTTP inside TCP).

Therefore the "TCP-in-TCP" problem, where a delayed/dropped packet creates backoff in both the inner and outer TCP connections, doesn't apply. When my browser sends a packet, it is immediately and reliably ACKed by the ssh process running on my local machine. That ssh process might fail to get the resulting encrypted packet to foo, but that only affects a single TCP connection, the one from my laptop to foo:22. The browser sees a slow connection, but it sees it being slow at the application layer, not at the TCP layer.

So I think 'sneak is mostly right with the exception of ssh -w, which is a relatively new feature (and not what I was asking about at the top of the thread, in any case).


Yes, thank you for typing that out so that I don’t have to.

Also, -w is some new nonsense that nobody should really be using except in some super dumb and hacky situations where there is no other option.


As noted below, -L and -R do not have access to raw TCP packet data, only TCP stream contents. Those options do not do TCP-in-TCP. It wouldn't work anyway, as the networks you're forwarding into usually have entirely different numbering and the bastion host isn't usually the gateway (did you think sshd has a userspace NAT implementation?!).

This stuff isn’t that hard.

ackshually.jpg


Read the SSH documentation again, especially the -w option.


I wasn't asking about -w - I'm specifically asking about ssh'ing directly to target servers, not ssh'ing to bastion boxes and making an ad-hoc VPN. The BeyondCorp ethos is not that everyone should put their ad-hoc VPNs on the internet, it's that you should ditch your VPN and put your actual services on the internet and secure them because your internal network isn't meaningfully more secure than the internet (especially if you have personal devices connecting to the VPN).


Where can I read more about this ethos? I'm interested.


I'll try to blog about it this weekend! It is sort of like the BeyondCorp model (see the paper https://storage.googleapis.com/pub-tools-public-publication-... if you haven't read it) but taken to a bit more of an extreme - instead of having servers on a trusted network and clients, either on the office network or elsewhere, have some proxies before hitting internal services, everything is on an untrusted network.

This was the traditional model of the internet, from before NATs and VPNs were invented. This was the model under which Kerberos was developed - remote processes should only talk to each other over secured connections after authenticating each other, because their transport is the public internet, and there's no distinction between human-to-server and server-to-server in this regard. It was the model under which SSH (trust-on-first-use with no authorities), the HTTPS PKI (fully-qualified hostnames only), etc. were designed.


It also sounds like a model I've recently advocated for and implemented in software I'm working on. And AFAICT it also lines up with the IPv6 model of the internet. It's great to know there's support for this style of identity management, and I hope it proliferates. Looking forward to the blog!


Public internet SSH isn't the worst thing in the world; but for some people that doesn't mean a reasonable bastion story but rather "every host has a public IP" — and that's just an extra opportunity to put an unauthenticated Redis or Elasticsearch on the Internet by accident. I don't think Tom's "death to public internet SSH" was a reference to preauth ssh vulns or whatever. (But there's a ton less code to trust in wireguard, so it's still better :))


That's fair! I've still been on team BeyondCorp there - every host should either be configured correctly or have a local firewall, to the point where a public IP is safe, because an unauthenticated Elasticsearch on a VPN is still exposed to CSRF attacks against someone on the VPN, malware, etc. But I can buy that not giving them public IPs is an important defense-in-depth mechanism.


Yeah, I think we’re in violent agreement. (I worked at one of the impacted organizations during Project Aurora.)

Having a VPN is a pretty great control for the vast majority of organizations that don't have the operational maturity to pull the public-IP part of BeyondCorp off. FWIW: we are helping customers with differential access controls (and I love Chromebooks despite the license purchasing experience). But even if you go full public-IP BeyondCorp, you're going to have some machines you're not exposing (though it should be fine to expose them, as you mention), and occasionally you need to reach them, and VPNs remain great for that. VPNs being unnecessary (and giving a false sense of security) for human-facing endpoints in a company with a multi-billion-dollar security org? Sure, it's hard to disagree: to your point, Google is working on the proof by construction :-D


Aside from stuff like CVE-2016-0777, some people do password auth SSH over the public Internet, which isn't great.

(See also: people migrating from FTP to SFTP who haven't heard the good word of rsync with key auth.)
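
(A sketch of that pattern; the host and paths are placeholders:)

    # push a site over SSH with key auth, no FTP anywhere in sight:
    rsync -avz -e "ssh -i ~/.ssh/deploy_key" ./public/ deploy@host.example.com:/var/www/site/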


FYI AlgoVPN has Wireguard support in a PR right now. It should get merged within a week once it gets more testing: https://github.com/trailofbits/algo/pull/910


This is super cool. So if I understand right, I can:

1. Use Algo CLI to spin up a server on DO, AWS, etc specifically to pass through traffic.

2. Use `wg` CLI to connect to that server and pass my MacBook traffic to that new server

3. Dispose of the server when I'm done, using Algo.


Yep! Usually all you need is `wg-quick up myserver` and `wg-quick down myserver` -- the wg-quick(8) command wraps all the others for the most common trivial use cases.

By the way, I believe https://github.com/StreisandEffect/streisand has support for WireGuard (in addition to a billion other things...).


Just set up, this is amazing. Thanks for sharing.


In my experience, people use VPNs as an excuse to push weak host security through review. Eventually, everyone dismisses all security concerns with "it's behind the VPN." And then they connect to the VPN with their malware-ridden computers. People are also surprised that whatever they run inside a supposedly sandboxed VM actually gets full access to the VPN. And then there are configuration errors that can lead to surprising effects, like forwarding all traffic from one's home computer through the company's network filter/sniffer.

For these reasons, I always recommend running everything with a public IP address. No more excuses. It's public, so security is a must. No more surprising network flows. Cloud-based monitoring tools can access the servers. Limited access can be granted to non-core team members and computer-illiterate staff.


For organizations with extremely sophisticated security teams, pretty far beyond the industry norm for "unicorn" startups (which themselves tend to have security teams with 10+ people on them, meaning millions of dollars of annual spend on security), this makes sense.

But for other companies, when they haven't been architected from the beginning to carefully support this access model, this is terrible advice. Just awful. Converting a company with a VPN/private deployment environment model directly to a "YOLO BeyondCorp" model will get that company owned up.

Further: if the primary things people need to get into private IP space for are (1) developer access and (2) access for non-technical staff to admin interfaces, the win just isn't there, even if done correctly, for most companies with the BeyondCorp model. It costs more to keep that model secure than it does to set up a system of VPNs for private access.


You and others here are assuming that a VPN provides you with at least some security. In reality, VPNs grow large and open, and someone will inevitably bring malware in. Furthermore, people run untrusted software in browsers and VMs that is capable of scanning the VPN and exploiting vulnerabilities. There is no security in a VPN. It's as naked as public IPs.

As for the cost of host security, you have to consider the cost of VPNs too:

- VPNs complicate everything, because all computers inside them are effectively on a crippled, partitioned Internet. Staff that could normally just log in via a web interface have to fiddle with network settings. High-bandwidth apps need custom configuration to bypass the VPN in select cases.

- VPNs block cheap cloud services. They encourage deployment of poorly secured self-hosted services with ongoing administration costs.

- And then there's the latency. People will set up a star topology for VPNs, because it's simple and easy to control, but then everyone spends time compensating for the increased latency.

Sure, migration between security models is tricky, but migration to anything is always tricky.


I'm certainly comfortable with WireGuard for machine-to-machine connections, but I don't see how it would replace a traditional VPN w/ 2FA (e.g. OVPN w/ Duo) for non-technical staff accessing internal apps (at least without something similar to PAM).


When non-technical people get VPN access, it's almost always to access a small selection of specific applications. Lock their VPN connections down and implement 2FA (or, more realistically, 2FA-enabled SSO) for the applications rather than the VPN connection.

The 2FA VPN thing is, I think, a consequence of how hard it is to set up VPNs. Companies share VPN infrastructure between developers (who need relatively unfettered access) and non-technical staff, because maintaining multiple VPN configurations is so painful. WireGuard fixes that problem.

This is what I'm talking about when I say that WireGuard is a big deal. The current situation, with 2FA logins to very powerful VPN connections, is deeply suboptimal, and is what BeyondCorp was a response to in the first place.

If I was asked by a client today to design a remote access solution for customer support people to access an internal admin app, I might try to devise a site-to-site system (in which case I'd happily use WireGuard) --- deploying host-based VPN for support staff seems like a nightmare. But even if I couldn't, what I could do now that WireGuard is available is retain OpenVPN but drastically ratchet down what it has access to, SAML-ize the admin applications, and migrate developers to WireGuard environments.


Indeed. There's no way "OlderCorp", where I work, would be able to move to this "YOLO BeyondCorp" public-IP-everything model; we would be owned in seconds.


Secure hosts would be nice. I always say that VPNs and even firewalls are a band-aid. The problem is that the world is overrun with insecure legacy software and insecure-by-design new software that assumes it is always running on a "secure network." This includes tons of database and orchestration software that has little or no authentication, or where authentication is an obvious afterthought.


Saying that a VPN is an incomplete control is not the same thing as saying VPNs are pointless. And the real argument (VPNs used as an excuse to not secure hosts) is even weaker. It's often true! But that doesn't mean a VPN isn't an entirely reasonable control.


The one thing I like about a well-designed VPN is that it's work done once, on a relatively simple component, that can pay dividends protecting many, many network-facing apps. Those will usually be highly complex, not made by security experts, coded partly/wholly in unsafe languages, and could even change a lot, increasing misconfigurations. A good VPN often reduces the hard problem of defending all that software from network attack to the simpler problem of installing the good VPN with a configuration forcing everything through it. This is, of course, no excuse to ignore other vectors. It just does a lot for the user with minimal effort. Excellent ROI.

That's why tunneling proxies were used in high-security environments to protect messaging, voice, and video before apps were developed that had built-in protection. Just reuse the trusted link encryptors or VPNs, watching out for covert channels like traffic patterns. Fixed-rate, fixed-size traffic with non-leaky error messages covered that. Basically no effort to leverage them vs developing per-app solutions needing arbitrary protocols.


There's also a reason IPsec is in the IPv6 spec, and a reason Google (reportedly) turned on encryption/VPN of their dark fiber links after the Snowden revelations: there is no such thing as a "secure" cable, or a "physically private" network.

You either encrypt your traffic, or you're vulnerable.

That doesn't mean a VPN is sufficient for security; it just means it's a prerequisite.

People are pushing towards securing transport at the protocol layer, but that leaves issues like unencrypted DNS, ICMP, etc.


What stops you from fronting the legacy or insecure-by-design software with authenticated HTTPS? You just run nginx/Apache on the same host and block direct access with a port filter in the host's firewall. Set up automated updates to keep the HTTPS front secure. You could even front the app with 2FA, but that's usually overkill. I don't see how this is any less secure than a VPN.
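
(A minimal sketch of the firewall half of that, assuming the legacy app listens on TCP 8080 and nginx proxies to it from 443:)

    # drop direct access to the app from anything but loopback;
    # nginx on the same host still reaches it via 127.0.0.1:8080
    iptables -A INPUT -p tcp --dport 8080 ! -i lo -j DROP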


> I always recommend running everything with a public IP address. No more excuses. It's public, so security is a must.

Fully agree, but depending on the culture of the company, you'll be treated as a fool if you make such a claim. One of my clients is a Fortune 500; VPN access is done without 2FA, and you will find all sorts of company credentials available from a GitHub search. I was surprised to see that one of those credentials belonged to a Thales contractor working in the security field... Security is only as good as the weakest link; you can't win against people's madness.


> TODO: this should use scutil and be slightly more clever. But for now we simply overwrite any _manually set_ DNS servers for all network services. This means we get into trouble if the user doesn't actually want DNS via DHCP when setting this back to "empty". Because macOS is so horrible to deal with here, we'll simply wait for irate users to provide a patch themselves. [1]

So the setup script completely deletes all your manual DNS settings without warning you.

[1] https://git.zx2c4.com/WireGuard/tree/src/tools/wg-quick/darw...


Pointed out on the mailing list, too.

Did you make a concerted effort to link to the file revision that doesn't contain the fix for that?

Anyway, this is taken care of now, and will be released with the next snapshot.

https://git.zx2c4.com/WireGuard/commit/?id=b39298dfb540ad9c7...


Sorry I wasn’t trying to be a dick, all software has bugs, just thought people might want to know before they try it. Thanks for fixing it.


> “now available for testing”

In other words, it's alpha software; use at your own risk. Thus, yours seems like an unreasonable criticism.

Definitely good to know though.


Destroying user data without warning is acceptable as long as you say "this is for testing"?


Maybe I’m misreading it but I think it only deletes the DNS data while the wireguard connection is active?


Before I fixed it, it wouldn't restore the previous non-DHCP-set DNS servers when you took the interface down. Now this bug is fixed.


Is this a kernel implementation of WireGuard for MacOS, or is it one of the two ongoing userspace implementations? I probably missed it, but I didn't see a separate repo for it at https://git.zx2c4.com/ .

On that note, can anybody say how far along the userspace Rust implementation is? And how hard would it be to port the MacOS version to FreeBSD?


The macOS client mentioned in the article uses the Go implementation:

https://git.zx2c4.com/wireguard-go/about/

It should be very easy to port to FreeBSD, and I'd like to make this happen. Care to help?


Hey! I was just sitting here grumbling to myself: "I wish they'd prioritise a solid portable user-space version" - and it turns out you did!

How realistic is it to get this running on Windows 10?


Maybe try running it under the Windows Subsystem for Linux!


Not to diminish this important work at all, just a question for HN: do I have a minority viewpoint in thinking key-based SSH to a bastion host and forwarding traffic via SOCKS (ssh -D) or ProxyCommands serves 99% of the use cases for a sysadmin’s need for a VPN? (Corp networks and L2 privacy needs notwithstanding.)

Edit: If so, why?
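
(For concreteness, the pattern I have in mind; hostnames are placeholders:)

    # local SOCKS5 proxy on port 1080, everything tunneled through the bastion:
    ssh -D 1080 -N user@bastion.example.com

    # or per-host jumping via ~/.ssh/config (ProxyJump needs OpenSSH 7.3+):
    Host internal-*
        ProxyJump user@bastion.example.com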


I used such a setup at my last job for all my work (which had the advantage that I could use a Chromebook; I definitely couldn't install some random vendor VPN client on it). It was great. We primarily had the SSH bastion box for emergencies when the $network_vendor VPN acted up, but you were allowed to use it as your day-to-day VPN if you wanted, and I definitely did.

The big downside in my eyes is that it's TCP-based, so your latency properties with non-TCP connections aren't what you want (e.g., you can't meaningfully use Mosh) and if the connection drops / your IP changes, your single connection goes down and you have to restart it manually. Wireguard is UDP-based and handles roaming, so even if you switch networks, your tunneled TCP connections are fine.

Also, it's a real network connection, so non-TCP/UDP things like ping work, you get an actual failed connection when the remote server doesn't respond instead of a successful one that drops immediately, etc. If you have root on both sides of the connection, ssh -w will also get you a real network device. (But also if you have root on both sides, you might as well set up an actual VPN.)


I run both Cisco and OpenVPN VPNs on my Chromebook and it works great.


It's a fine way to get started, sure. You probably need ssh anyway, and getting forwarding up for free is neat. It also has the historical advantage of existing; wireguard is new. Perf is sometimes not great, but at least you're likely to get better crypto than most default OpenVPN installations I've seen. But it's not a great long-term solution, whereas wireguard will last you for a long time :-)


Why isn’t it a great long-term solution in your view? I have been using it effectively for 10-15 years.


Sorry, I meant scale over number of boxes and services, not so much scale over time. Other people in this thread have given some rationale, but: perf, better baseline crypto (you always know what you’re getting), it’s a real network connection so you can talk to hosts with stuff like ICMP, smaller trusted codebase, more confidence in the underlying protocol...


I think the answer is mostly that it's hard to route most traffic over SOCKS, most applications don't obey the system-wide proxy settings, you might be leaking DNS queries, things like that.


Still not as thorough as a full-fledged "real" VPN, but for most ad-hoc purposes, something like sshuttle [0] works well and doesn't require application-specific configuration.

[0] https://github.com/sshuttle/sshuttle
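
(Basic usage is a one-liner; the host and subnet here are placeholders:)

    # route traffic for 10.0.0.0/8 through "bastion" over ssh; no server-side install needed
    sshuttle -r user@bastion.example.com 10.0.0.0/8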


I am very excited about sshuttle (and in fact, rsync.net has sponsored work on it).

The most compelling aspect of sshuttle, in my opinion, is that any host running sshd can be an endpoint - no extra software is required. As long as the host in question has python available, you can use it as an endpoint.


If you are willing to let a VPN client abuse your system, you might also want to set up a network interface that routes all traffic to the SSH tunnel. That might be trivial to add as a module, if it's not available already.


In most configurations, it's trivial to use the OS-native VPN client, with some minor caveats. This creates a tunnel device and does what 99% of end-users working remotely need.


The OS VPN still has the same problems. For example, you can run an application that asks the OS not to use the VPN. That is why most VPN clients will break your ability to set proxies (try enabling the Cisco enterprise VPN client on OSX and then open Charles Proxy and see how sad you will be).

My point is: on all setups you have traffic that might go around the VPN tunnel; that is an application defect, not evidence that SSH tunnels are worse.


You aren't wrong. Our UNIX admins have been operating in this fashion for over 10 years.


Quite likely.


@lvh

What are your thoughts on the wireguard homepage effectively saying it's not ready for production use?

I've looked at using wireguard on a few occasions, but reading through the site, it has always sounded not ready for production usage. Looking forward to the point where I can use it, though.


We're just conservative. At some point we'll change the webpage.


(Since GP asked me specifically: I completely agree with Jason here.)


I have not tried WireGuard yet. Is it more reliable than existing methods over "bad Wi-Fi"?

I find that one of the best ways to make bad hotel or plane Wi-Fi completely unusable is to use a VPN, and having something more reliable on bad links would be great.


WireGuard uses UDP and is effectively "connectionless", meaning there is no persistent connection to be maintained. It also allows for seamless IP address roaming (your peer doesn't care if two messages come from two different IPs, so long as they are authenticated with the right key).

In short, I expect it to have no meaningful degradation on a bad network (so long as the network doesn't block the traffic or have overly hostile NAT).


That is exactly what I was hoping for.

Too bad it'll take forever before iOS can use it, because the current state of VPN on Wi-Fi is terrible (I understand that is not the first thing WireGuard is looking to solve, but it'll be great when it gets there).


ZeroTier, which also uses UDP, has an iOS app. So it is possible if the dev team wants to prioritize it.


It wasn't a whole lot of fun trying to connect code designed to operate at L2 (ZeroTier) with the L3 tun adapter they give you in iOS and Android. Fortunately that code doesn't have to be touched too often though :)


Fortunately WireGuard only encapsulates at L3, if I’m not mistaken :)


Is it open source?


I seriously recommend everyone try ZeroTier. The fine-grained permissions control in particular is fantastic. It is the only one that has gone so far as to be fool-proof.


The way WireGuard is designed should actually result in a more stable connection for the end user when the device is hopping between LTE and WiFi constantly.


There are apps that can run as third-party VPN providers on iOS. This is often used by corporate networks that use non-standard VPN protocols.

So there is hope. :-)


It's great at roaming, but it is UDP; if your bad WiFi doesn't let you use egress UDP, that won't work. It's pretty speedy though :)


I've found OpenVPN over UDP to work fairly well even on bad networks. It's well-supported on iOS too, with the OpenVPN Connect app.


It does until it doesn't. Then you have to wait X seconds for a keepalive (one of the 100+ configurables) to notice you're not passing traffic, and for your client to tear down the dead tunnel and reconnect. Meanwhile, many other applications have also lost their connections to endpoints and need to do the same type of reconnection.

Stateless is the best feature of WireGuard IMO. I'm really looking forward to the mobile clients stabilizing.


The real competitor, which the article doesn't talk about, is https://www.zerotier.com/ . It is also open source and has been available on macOS, iPhone, Android, and Windows, on top of Linux, for many months now. It is easy to install and use, deals with punching through NAT, and does smart routing when both devices are on the same network.

I don't know how well it compares in terms of performance and crypto, it would be nice to know.


Where is ZeroTier's protocol documented? All I could find was a document that said it was custom, and "like IPSEC". What could ZeroTier do as a VPN protocol that would make it better than WireGuard, which has a pedigreed protocol that, in its WireGuard incarnation, has had multiple rounds of formal validation?

I wrote the article we're commenting on. Why would I take the time to talk about something like ZeroTier? WireGuard solves the problem I'm talking about decisively.

I'm prepared to hear something about ZeroTier that makes it interesting in the WireGuard setting, but I am not interested in a long list of non-VPN non-access-tunnel features that ZeroTier might have that WireGuard lacks, because lacking random features is exactly the point of WireGuard.


No RFC-type document yet but quite a bit of commenting here:

https://github.com/zerotier/ZeroTierOne/blob/master/node/Pac...

There is also a higher-level description of the entire system here:

https://www.zerotier.com/manual.shtml

ZeroTier solves a broader set of problems than WireGuard and has SDN-type network flow rules, security tap capability, and is a full L2 emulation layer with support for multicast.

From what I've seen the WireGuard protocols are excellent and the implementation is very nice and clean. A core ZeroTier implementation to support a simple L3 VPN use case could be as short as WireGuard but that would be without all the rules engine, multicast, etc. stuff.


Just the .cpp code, not counting the .hpp code (which is full of code, including lots of memcpy's), in the node/ subdirectory of this repo is something like 4x the entire size of Linux WireGuard. The crypto protocol is undocumented, and the maintainer is playing games with things like the Salsa round count (which is fine, but, I mean, it speaks to a mindset). I might poke at it for sport at some point, but I don't see how I could ever recommend it.


I am the primary maintainer. :)

A minimal implementation could be much shorter, but the ref implementation is not minimal and includes a lot of features that go far beyond the scope of WireGuard.

Salsa20/12 is standard and, according to the docs, is considered secure. I recall that DJB initially specified the cipher with 12 rounds but added 8 more for margin. There are Salsa20/12 implementations in many crypto libraries. The best public cryptanalysis on Salsa breaks 8 of its 20 rounds with a time complexity of 2^251.

Personally I think concern over algorithm strength is usually misplaced as long as you're using an algorithm that is not known to be very weak (e.g. DES, RC4). Halfway decent cryptographic algorithms are almost never broken. Implementations and protocols are broken all the time.

One thing we did to guard against protocol issues was to implement what DJB calls "boring crypto." The cryptographic encapsulation is dirt simple Salsa20+Poly1305 constructed exactly as in the NaCl secret box functions, and nothing more. We've resisted adding more advanced features to the cryptographic layer because so far they haven't really been necessary and the risk is not worth the small benefit they may have.

At the time this was designed (started in 2010) Salsa20/12 was the best option to achieve high performance across a variety of devices including ARM phones and slow VMs with a clean and small code base. If it were written today it would probably use ChaCha20 (faster) or AES-GCM since almost everything worth mentioning has hardware AES now. We will likely rev the crypto at some point but would also like to upgrade the asymmetric cipher as well. Waiting to see if the Goldilocks curves (curve448) get added to the NIST standard. They are proposed but not yet approved. If they are we'd probably go to curve448+AES256-GCM with a longer IV.

That being said, we're not solely oriented toward crypto, and we generally tell our users not to rely on network crypto as their only line of defense. Network crypto is sort of analogous to hard drive encryption: it protects you from people who don't have access to anything, but it won't save you once someone gets any access beyond the perimeter. We advocate the use of encryption and authentication within networks as well if you care about security. This also guards against the discovery of a bug in any layer: if someone breaks one layer, they are confronted by another layer.

An RFC-style write-up has been on the to-do list forever, but the to-do list is very long.


None of this is wrong (avoid GCM).

I've been poking around for about 30 minutes and I'm still not sure I understand what the key agreement protocol is. I'm trying to follow starting from _doHELLO and you're losing me somewhere amidst the moons and worlds and stuff.


An identity is a public key. An address is a hash of that public key. Key agreement is just SHA512(Curve25519(my secret + your public)). Public keys are looked up from addresses via "upstream" nodes by the sender. An upstream node is typically one of our root servers unless someone has added their own.

Crypto here is fairly dirt simple as I said. It's just ECDH with two Curve25519 keys and then go.

The moons/worlds stuff is just some odd terminology for how to define upstream nodes. We are dumping that in favor of something more straightforward in the future. Most users don't need to care about it.

I'd like to avoid GCM but we also may have a need for FIPS compliance in the future. Yes I know that FIPS basically mandates weaker crypto... or at least crypto that is easier to implement wrong... but if it's not FIPS it isn't "enterprise" to some (clueless) people. Then you have organizations mandated to use FIPS crypto by forces beyond their control.


There are no ephemeral keys at all? It's just static-static DH?


Not in 1.2. This is planned for 1.4. It's fairly prominent in the manual.

It was left out of the original design since ephemeral negotiation means state, and therefore latency and stalls if packets are dropped. The present design leaves it out as part of prioritizing instantaneous connectivity.

When we do add it, we will probably add a network-level config option to select whether forward secrecy is required. If it's off, connections will work instantly and then lazily upgrade. If the option is on, they will wait.

As I said, though, I don't see network-level (L2/L3) encryption as being worth much more than whole-disk encryption. Each really secure thing should have its own secure authenticated session that would be secure enough over the open Internet. That way a network compromise or trojan is not instantly fatal. We pretty much tell everyone this.


Have you considered just, you know, adopting the WireGuard protocol and then building something on top of it to coordinate connections?

I built something like this (in C++, no less!) back in 2000. After the company failed, I always felt like we had made a mistake not just shipping the absolute simplest possible message forwarder, and then implementing the control plane for it in a reasonable high-level language. (In fact: an early team member pushed us to do that, and I shot them down.)


Yes, I have considered that. It didn't exist in 2010, so it's worth taking a look now. It already does some of what we want, but the devil is in the details.

Noise also did not exist in 2010. The cryptographic world is so much richer today than it was even 8 years ago. When I think back to the crypto dark ages I shudder.


This may come off as rude... but tptacek, there is a fine line between being enthusiastic and aggressively dickish. All over this thread you have been PUSHING hard... way harder than you should have to if your product is as good as you are touting. It's one thing to promote, but what I've read all over this post (not just this thread) is someone whose personality REFUSES to allow them to give even a small inch in a debate -- someone who feels a drive to be right no matter the cost. That makes me question how likely you are to take input from people who can / want to help...

Friendly advice -- roll yourself back down to an 8 or 9.


WireGuard isn't "my" project. I have no formal relationship with the developer, other than that we like his work so much we've contributed a bit to its development (less than others).

You're not being rude. But I have no plans to give an inch on this discussion. If that's problematic for you, you're welcome to use the HN votey-buttons as you see fit.


Just letting you know that your comments/attitude/etc. are likely hurting the cause you are championing. Your volume of replies, the tone you take in them, and the way you react make it seem like you ARE deeply part of the project... Thus I (and likely many others) associate you with it. I don't care what you comment - be a dick to the fullest if that's your thing - I was simply letting you know the impression you are leaving out there.


It may not be interesting to you. To me, and probably a lot of people, the ability to communicate directly between two hosts each behind a NAT (using UDP hole punching) is a big advantage for a VPN, since, compared to forwarding through a central server, it can be faster and lower latency, and it also means you don’t have to pay for that server.

Edit: As to why it’s relevant to the “WireGuard setting”, that depends on your definition of the WireGuard setting, but: (1) it’s easy to find examples of people trying to use WireGuard for peer-to-peer communication, the use case ZeroTier solves better; and (2) ZeroTier shares WireGuard’s property of being stateless/connectionless, a significant convenience advantage over traditional VPN protocols.


I think I could have worded that better. By "in the WireGuard setting" I mean "doing the job most startups will use WireGuard to do", which is managing remote access to internal resources in the cloud.

I don't think overlay networks are pointless; I am enthusiastic about them. But for the remote access problem, what I want and what I'm writing about is streamlined VPN software, not a new overlay.


Fair enough, although… you know more about this use case than me, but wouldn’t you generally have a large number of servers in the cloud you want to provide remote access to? So if you’re using WireGuard, either you have to configure each client to talk to each server it needs access to – which could be good for security, but sounds like a pain to set up – or you have a central server that serves as a hub for all the VPN traffic. In your blog post you said you just had to “point the client at the server”, so I’m guessing you’re envisioning the latter. But then wouldn’t an overlay network work better in many cases? For example, if you have cloud servers in different geographical locations, and as the developer you’re near one of those locations but far from the VPN server, you’d have lower latency connecting directly…


I'm imagining simply replacing OpenVPN servers with WireGuard servers, retaining the VPN gateway model; one WireGuard instance acts as a routing gateway for any number of servers.
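
(Concretely, on each client that means a single [Peer] section for the gateway; a sketch, with placeholder key, endpoint, and subnet. The gateway itself needs IP forwarding enabled.)

    [Peer]
    PublicKey = <gateway public key>
    Endpoint = gw.example.com:51820
    # route the whole cloud-internal range through this one peer:
    AllowedIPs = 10.20.0.0/16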


The only real complaint I would have about WireGuard is that with ZeroTier, I only need to connect to one server and can then talk to any servers on the same "network", as long as the ACL permits it. The other servers are discovered automatically.

With WireGuard, I'm pretty sure I would have to set up multiple peers and custom routes. Although I'm assuming you could run BGP on top of the wg interface and that would solve some of it.

That said, I'd much rather have something with the simplicity of wireguard or tinc, because it doesn't require a server whose only purpose is to manage clients.

I know it's not specifically related to the protocol either uses, but I think people would consider that a drawback when comparing the two. If you're coming from openvpn/ipsec, then setting up any sort of mesh/routed network would be a similar process. I don't think wireguard needs to implement functionality like that either; I think it would be better if something like tinc used wireguard for the plumbing, as someone else mentioned in the comments.


Is there any info on how many concurrent clients a WireGuard server can handle? I'd be interested in hearing if anyone has used this for management traffic to a large (>100) number of devices.

Edit: ..and if so, how they handled IP assignment and key distribution.


Linux kernel module supports 2^20 (1048576) peers per interface. Go userspace version supports 2^16 (65536) peers per interface. Both limits can quite easily be raised if necessary.


Edited my question above for clarity. Any info on performance, key management, and managing peer address assignment in larger deployments?


> But the most important use case for VPNs for startups is to get developers access to cloud deployment environments, and developers use MacOS, which made it hard to recommend.

What’s that?

> … developers use MacOS …

Not all developers! I'll believe that many developers use macOS. It's even possible that most developers doing certain kinds of development use macOS. But not all developers do. I don't. No one on my last team did (we were Linux-only). Several folks on my current team don't.


If your developers use Linux, they've already had a mature WireGuard implementation for months. It may even land in the mainline Linux kernel (I hope it does).

I don't have advice for developers on Windows right now, but we haven't run into any of those at Latacora; you can assume we're broadly writing for an audience of the kinds of startups we serve.


We'll have a Windows client in due time.


Is support for this in the Linux kernel yet? The lack of acceptance there is a major reason why I haven't tried this out yet.


Not yet, but I'm working on readying the v1 [PATCH] submission.


But why? It's still very easy to install. Is it just a matter of not trusting kernel extensions?


> But why?

Principle. I strongly believe that software which runs in kernel space must adhere to the same standards (for better or worse) as the kernel. For Linux kernel modules, this means being accepted into the upstream kernel, and demonstrating a history of maintaining compatibility with the mainline kernel.

I've been burned a lot in the past by companies releasing their own modules and then quickly failing to keep them compatible with mainline kernel versions. Not going to make that mistake again.


That's a fine and fair principle. I understand where you're coming from, and getting this mainlined is important to me too.

But for the record, we're maintaining meticulous compatibility with all kernel.org kernels back to 3.10 and up to whatever the most recent RCs are (at the moment, 4.17-rc5). We also support the frankenkernels from ubuntu, redhat, and suse. We have a build and run-in-a-vm CI box testing all these kernels for every commit and on a bunch of architectures: https://www.wireguard.com/build-status/ And because we're actively working on upstreaming this, you can be sure that this is going to continue working with upstream.

Also, this isn't a "company releasing their own modules" situation, providing some kind of throw-it-over-the-fence corporate crap code. I'm a kernel developer with a decent amount of upstream code, and the work I do is nearly always for upstream kernels.


Sure. But I am still looking forward to seeing it in the kernel tree before trying it out.


If you are using secure boot and distro-provided kernels, building and signing your own kernel modules is a PITA.


Out-of-tree modules are a huge PITA on enterprise deployments.

They also taint the kernel...


> Death to OpenVPN.

> ...nightmares to configure and manage.

This opinion seems a bit overdone. OpenVPN is one of the simplest and most robust VPN protocols to set up and manage, in my experience.

If you've had issues with OpenVPN, there are simpler options like Angristan's OpenVPN script [0] on GitHub to get you started.

I'm not arguing against WireGuard's use case; it's a nifty tool and a viable option. While its codebase is indeed smaller, denigrating a proven tool like OpenVPN, or assuming that everyone is incapable of managing it because it's "complex", is both unnecessary and incorrect.

[0] https://github.com/Angristan/OpenVPN-install


> This opinion seems a bit overdone. OpenVPN is one of the simplest and most robust VPN protocols to set up and manage, in my experience.

I burnt 2 days earlier this month because `comp-lzo` is apparently distinct from `compress lz4` and fixing that required an OS upgrade (because it didn't exist before 2.4 and this VM isn't on the bleeding edge), which killed my VM's desktop environment, so I ended up needing to rebuild the entire environment. Even after that, the GNOME network manager doesn't respect the existence of this flag, so I have to work around the OS.

Changing the VPN server's configuration to disable compression wasn't an option either. (Compression oracles are bad, m'kay?)
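For anyone who hits the same wall: the two directives negotiate different on-the-wire framing, and the newer one simply doesn't exist before 2.4 (from memory, so double-check against your versions):

    # OpenVPN 2.3 and earlier: the legacy compression option
    comp-lzo

    # OpenVPN 2.4+: new option, new framing, not interchangeable with comp-lzo
    compress lz4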

If they were using WireGuard, this problem wouldn't have existed in the first place. Maybe compare/contrast them for yourself? You may find WireGuard even easier to get running.

I for one welcome our modern crypto overlords.


WireGuard is simpler to set up (it's hard to imagine something being simpler to set up than WireGuard, since its configuration is literally the minimum amount of information you'd need to build an encrypted tunnel), should be more robust in practice because of its transport model, is much faster, and is optimally secure by default.
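To make "minimum amount of information" concrete, the entire Linux setup is roughly this (per the quickstart; the addresses and peer endpoint are examples):

    # generate a keypair
    wg genkey | tee privatekey | wg pubkey > publickey

    # create the interface, assign an address, add one peer, bring it up
    ip link add wg0 type wireguard
    ip addr add 10.0.0.1/24 dev wg0
    wg set wg0 private-key ./privatekey
    wg set wg0 peer <peer public key> allowed-ips 10.0.0.2/32 endpoint 203.0.113.5:51820
    ip link set wg0 up

That's it: a keypair, a peer's public key, its allowed IPs, and an endpoint.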

I understand why people like OpenVPN --- their alternative before OpenVPN was IPSEC, which is a nightmare. What we're telling you is that there's something even simpler than OpenVPN, and it has the virtue of being materially more secure.

People should stop using OpenVPN as soon as they can.


WireGuard is easy to set up for particular use cases: a small number of dedicated links, or a small number of mobile users.

WireGuard provides the foundation for a complete VPN service, but not the service itself. It doesn't provide features such as user management and authentication, key management, IP address assignment, client configuration distribution, end-user-friendly VPN clients, or support for easily implementing complex split-routing and DNS configurations for VPN end users. Most of those things aren't even in progress. It's also likely there will be multiple solutions competing to provide them on top of WireGuard, so there may never be one true flavor of WireGuard solution. And once these are in place, the overall attack surface expands to be closer to that of OpenVPN and IPsec.

Other issues include the lack of multicast support: IPv6 requires multicast, so IPv6 is mostly but not completely supported, which can cause issues in some circumstances.

Accomplishing some simple tasks, like using a floating route to provide failover across two WireGuard links, isn't possible. WireGuard interfaces don't go down unless you manually tear them down, so you either have to run another tunneling protocol over the WireGuard links and assign your routes to those interfaces (which can go down), or use a dynamic routing protocol. And WireGuard has a few caveats when used with certain dynamic routing protocols, which starts to make it as complicated as OpenVPN or IPsec at scale.

There is currently an issue being discussed on the WireGuard mailing list about the requirement that a device's clock be set before it establishes a first connection to a peer. This comes from an anti-replay requirement WireGuard imposes on initial packets that IPsec and OpenVPN don't. It is impacting a group that was hoping to use WireGuard to encrypt the traffic across a community wireless mesh network; the devices used in such networks often don't have a real-time clock. There are potential workarounds, but again, the simplicity advantage suddenly starts to slip away.

I think Jason has done great work; I like WireGuard and I use WireGuard. But it is nowhere near ready to replace OpenVPN or IPsec in most circumstances.


I was careful when I wrote this to talk about who this is important to: startups that use VPN technology as part of their remote access strategy. Community wireless mesh networks might need to keep using OpenVPN, or whatever. But none of the problems you're discussing impact the core use case startups have for VPNs, which is getting support people to admin apps and developers to deployment environments.


IPSec is a nightmare on Linux. On OpenBSD it's a one-liner. On macOS, Windows, and Android it's a couple of checkboxes.

With the amount of effort put into Wireguard, IPSec could also be easy on Linux. Of course, Wireguard is a simpler, newer protocol. But if simpler and newer were all that mattered we'd never settle on any kind of a standard. Anyhow, if backward compatibility didn't matter we could make the IPSec stack substantially simpler, if not as simple, as well. But it does matter.

Wireguard may be easier on Linux, but that says more about Linux than the usefulness and viability of IPSec more generally.


You're talking about clients, and I'm talking about the total effort to set up clients and servers. You are not a "couple of checkboxes" away from setting up an IPSEC VPN. The simplicity of setting up a WireGuard client is nice. Watch the quick-start video on Wireguard.com and look at what the server setup looks like.


Well, I guess I didn't count the effort of pasting the secret.

But I am indeed talking about the total effort to set up clients and servers. I have two IPSec gateways running OpenBSD, one IKEv1 and one IKEv2. And I've set up macOS, Windows, Android, and OpenBSD clients to use them. Others have connected to the former with Linux (including a MikroTik router), and that was by far the biggest source of headaches.

L2TP is more complex on the OpenBSD side: another couple of lines in /etc/npppd/npppd.conf. Allocating addresses over IKEv2 is much easier, as it's baked into the same layer, but last time I tried, a couple of years ago, macOS's IKEv2 support was experimental and Android was (and is?) still stuck with IKEv1. But WireGuard doesn't have equivalent functionality anyhow, so it's all irrelevant.

If we're talking about a static configuration that tunnels private IPs over public IPs, on OpenBSD it's the same one-liner. You can also tag packets coming out an IPSec flow and manipulate them using PF, but that's just as easy if not easier than with WireGuard because comparing PF with iptables isn't even a fair fight.

If you compare apples to apples--e.g. focus on the few encryption suites commonly supported by most IPSec implementations, and ignore things like X.509 certificate authentication, which WireGuard doesn't support--then IPSec doesn't have to be difficult to set up.[1] The biggest headaches from a usage standpoint, IME, are (1) the horrible configuration and management story on Linux, and (2) all the shoddy firewalls that cause network hiccups, which WireGuard will also enjoy once it sees the same kind of widespread deployment.

I won't dispute WireGuard has its elegance. But it's most prominent on Linux. And it explicitly avoids addressing other complex problems that are commingled, for better or worse, in IPSec stacks; problems that often can't be avoided anyhow.

If I needed to set up a VPN for a cluster of Linux servers, then WireGuard would be a no-brainer. But to support non-Linux clients, why would I have them install some third-party application when there's a perfectly good and useful IPSec stack that comes natively? I hate SSL VPNs and OpenVPN for precisely the same reasons. At a previous job I refused to install the ridiculous SSL VPN client and instead dropped a tiny OpenBSD box on the corporate network that established a reverse IPSec tunnel to my own gateway. A single line in /etc/ipsec.conf on each end to set up the IPSec flow, and some simple PF rules for LAN routing. Easy-peasy.

[1] X.509 certificate authentication doesn't have to be hard, either. It's also rather trivial on OpenBSD, though the story on macOS and Windows is mixed.
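(For readers who haven't seen OpenBSD's ipsec.conf(5), the kind of one-liner being described looks roughly like this; networks and peer address are made up:)

    # /etc/ipsec.conf -- IKEv1 tunnel between two private nets via a peer gateway
    ike esp from 10.1.0.0/24 to 10.2.0.0/24 peer 192.0.2.1

    # load it
    ipsecctl -f /etc/ipsec.conf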


Here's a quick question for you: by default, without changing the out-of-the-box configuration, what is the weakest supported set of crypto primitives OpenBSD IPSEC will negotiate?

(I don't know the answer.)


Some people I share a Slack with looked it up. It looks like the defaults include the 64-bit 3DES cipher in CBC mode, the SHA1 hash, and the 1024-bit DH Group 2. Does that sound wrong? They were working from man pages; maybe the real default configuration from the actual config files isn't 1990s crypto.


It depends on the server. On OpenBSD the default for IKEv1 is HMAC-SHA1 and AES. This is easy enough to change in the one-liner.

IIRC, all the modern IPSec stacks support at least AES and SHA256. However, the problem is that on macOS and (I think) Windows you specify the suites as fixed 3-tuples: MAC-CIPHER-DHKE, even if they could be independent (i.e. not mixed encryption/mac mode). So even though they all share strong cipher and MAC modes, the key exchange modes might not match as they only support a few combinations. I haven't tried changing my IKEv1 setup in the past several years (I'm not really using it much anymore except when traveling), but I have it configured as "auth hmac-sha1 enc aes group modp1024". I have a commented out block using "auth hmac-sha2-256 enc aes group modp2048" that says "OS X 10.11 supports SHA2-512 but only with group modp2048". IIRC that also worked on Windows but at the time I still needed to support macOS 10.10.

macOS and (I think) Windows support uploading specialized IPSec profiles, but like with security tokens I don't want to rely on a scheme that requires maintaining such things and never cared to dig too deeply. As far as I'm concerned it's not secure (in a larger, practical sense) if it requires complex manual configuration and software installation.

IPSec can be made simpler, and it has gotten much simpler and stronger in many respects compared to the experience many of us had years ago. With only a tiny fraction of the effort needed to get native, vendor-shipped WireGuard support on macOS and Windows, we could standardize a new, modern cipher suite (as with TLS). For all I know that's already happened as a de facto matter.

WireGuard and IPSec don't have to be mutually exclusive. The notion that we can abandon IPSec is fanciful as a practical matter (nearly as fanciful as abandoning TLS), so being fatalistic about IPSec is not constructive, not to mention a little unfair. Look at IPv6: some of the complexity of IPv6 has been shed as adoption has grown. IPv6 is an easier proposition today than it was 10 years ago, as the way forward has become clearer.


I think the fact that you're still configuring IPSEC with 1024-bit DH modp groups pretty much makes the case I'm trying to make for me, but we can agree to disagree about this, too.

IKEv1 is a mess, by the way. Here's a good Cas Cremers survey of issues, circa 2011:

https://www.cs.ox.ac.uk/people/cas.cremers/downloads/papers/...

This stuff can't die off fast enough for me. Just so we know where I'm coming from here.


Yes: OpenVPN was one of the most robust and easiest-to-set-up protocols, because the alternative was IPSec. Now we have WireGuard, and it's better on basically every metric.

100% of the OpenVPN deployments we have seen at startups had BF-CBC as the default cipher. 100% (I think?) of the IPsec deployments had aggressive-mode IKE.

WireGuard doesn't have bad modes. It benefits from decades of extra R&D in usable, secure cryptography.


Wanted to clarify that my parent comment was based on the adversarial way the article was written, not on the content as it related to the features of WireGuard (which are superb).

The subsequent replies are all correct: WireGuard is in fact simpler than OpenVPN to install and administer, and it has a smaller footprint and better performance as well. In that same vein, WireGuard is orders of magnitude more efficient than the older VPN protocols that OpenVPN once usurped.


The performance of OpenVPN is atrocious, though.

Good enough to load some Jira/Confluence, but that's pretty much it.


Why death to SSH though?


It's not "death to SSH", it's "death to SSH over public Internet".

Instead, what you'll want to be doing is SSH over WireGuard.


But my SSH already uses public keys and ChaCha20/Poly1305, so that's really the last protocol I worry about sending through hostile territory.


With Wireguard I believe you can go further and your servers can have only private IPs. And it's all seamless.


But the servers have to have a public IP to terminate the wireguard link, or be connectable from a machine that does. Exactly like SSH, be it via a bastion host or direct.


How is that a security issue if you use your own computer with its own SSH keys?


I think lvh's comment here is more appropriate for mature sysadmins who actually use key auth: https://news.ycombinator.com/item?id=17091990


I'm not as well versed as many here (I'm an advanced amateur) but have some understanding of networks and crypto and am trying to wrap my head around this.

Can WireGuard be described as an encrypted, stateless, roaming, keypair-based UDP tunnel?


It's a routed tunnel, which is what distinguishes VPNs from encrypted tunnels.


A benefit unmentioned in this article is full IP roaming on both ends. That is good on its own, but combined with the Android support it opens some very interesting possibilities.


Yes! Like `mosh` for your VPN. Pretty cool.


Hopefully this means it's coming to FreeBSD/pfSense also.


Does anyone have detailed instructions/guide on how to set Wireguard up to allow my laptop to VPN back to my house through an always-on host there (basically two Ubuntu machines)? I'm not great at networking and I'd rather not spend hours breaking my entire network trying to set this up.

The "getting started" guide on the website already assumes you can set up your own routing across the different subnets.


Good news: it won't take you hours to set up. More like 10 minutes, including installation.

https://www.wireguard.com/quickstart/

It's the simplest VPN setup I've ever seen.


As I mentioned in my comment, I've read that, and connecting the two points is easy, but what I don't know is how to get my traffic from the second point to the rest of the house. I imagine that if I add an interface in the same subnet as the rest of the devices, I'll confuse stuff, and if I don't, I'll have to somehow relay the traffic from one interface to the other.


On the Linux "server" you'll want to enable IP packet forwarding (essentially allowing it to function as a router). Packets destined for elsewhere on the LAN will then be forwarded according to your routing table (so it should probably Just Work if the machine can already reach the rest of the LAN).

http://www.ducea.com/2006/08/01/how-to-enable-ip-forwarding-...

On the "client" side you'll likely want a static route for your LAN that ensures those packets go over your WireGuard interface.
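A sketch of both halves, assuming the WireGuard interface is wg0 and the home LAN is 192.168.1.0/24:

    # on the home "server": let it forward packets between interfaces
    sysctl -w net.ipv4.ip_forward=1    # persist it in /etc/sysctl.conf

    # on the laptop "client": send LAN-bound traffic into the tunnel
    ip route add 192.168.1.0/24 dev wg0

    # depending on your LAN, you may also need NAT on the server so
    # replies find their way back (eth0 here is an assumption):
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

If you use wg-quick and include 192.168.1.0/24 in the peer's AllowedIPs, the client-side route is added for you.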


Ah, that sounds good, thank you! I should be able to figure it out with that.


Just follow the instructions in the video. That's all there is to getting it running.

The Arch Linux wiki also gives some tips, but it is really simple.


The video merely connects two hosts (IIRC). If you want what you expect from a normal client-server VPN acting as a gateway to a private network, you'll need to enable IP forwarding on the "server" and ensure routes are correctly configured on the "client".


Here is a blog post on this topic:

https://vincent.bernat.im/en/blog/2018-route-based-vpn-wireg...

Haven't tried it myself, but it looks solid.


Ah, I skipped the video in favor of the text guide but I'll watch it, thank you.


ISTM that enough people always "skip the video" that no information should be confined strictly to video.


Note that this has not hit a stable version yet, and the WireGuard developers themselves recommend not relying on it yet: https://www.wireguard.com/#work-in-progress


Is there any hope for an implementation that can be run on a Windows/macOS machine as an unprivileged user, and exposed as a SOCKS5 proxy? I often find myself wishing for an isolated VPN experience which applies only to a particular browser, but to all traffic in that browser. Something like Tor browser, but with my VPN instead of Tor.


https://en.wikipedia.org/wiki/Back_to_My_Mac

For comment/discussion.

Not a "recommendation", nor a positive/negative opinion.

Believe it has never been discussed on HN before.


And for a VPN with multipath support, check out Glorytun: https://github.com/angt/glorytun

Used by OVH routers to efficiently aggregate multiple physical links.


Most of the time I do not need my whole IP stack forwarded over a secure channel, but simply HTTP(S). That's where SSH SOCKS proxying suits me well, in hotels etc.
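(For reference, that whole setup is one flag plus a browser setting; the hostname is a placeholder:)

    # open a local SOCKS5 proxy on port 1080, tunneled over SSH
    ssh -N -D 1080 user@server.example.com

    # then point the browser at SOCKS5 proxy localhost:1080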


Have you tried both? Particularly on bad hotel WiFi and when roaming, WireGuard's performance is not comparable to SSH proxying.


Ah, gotcha. OK, I'll set up WireGuard tonight; now that I think about it, there's really no need to route everything via a "tun0" if I don't need to.


It'll be very useful if they build user-facing enterprise security in from the get-go.

How do you build SAML, OAuth/Google Auth, and two-factor support into a VPN from the ground up?


Elsewhere in the Latacora blog, Threat Modeling is said to be bad. I'm interested in a succinct explanation of why it's not a useful activity.


Any experience with this in an enterprise/DoD domain? The ChaCha20-Poly1305 algorithm is not FIPS 140-2 validated, from what I understand.


Great news! I wish more smart people would put more effort into technically stopping repressive regimes from blocking VPNs like WireGuard.


So can we get WireGuard as a commercial service? Will anyone roll it out that way?



macOS


If WireGuard is so great (a rhetorical position, I'm not actually doubting it), why isn't there a Kickstarter effort or some angel funding to speed up its development on all platforms?


Development of security software is slow and careful, so we're not trying to rush anything.

But, you can indeed donate to the efforts, which makes a big difference: https://www.wireguard.com/donations/ https://www.patreon.com/bePatron?c=1100957


WireGuard has taken third-party funds to expedite development, but as far as I know the author is pretty adamant that all the implementations remain freely available and open source, so maybe angel funds aren't the best direct way forward.

Disclaimer: I’m a principal at Latacora and we paid the author some money to make native Mac clients happen. There’s also a VPN provider who paid money toward development.

I’d rather they did it right than rush it to meet a kickstarter date.


Private Internet Access was one of those VPN providers. https://www.privateinternetaccess.com/blog/2018/01/private-i...



