MsQuic – QUIC Implementation from Microsoft (github.com/microsoft)
153 points by mjsabby on April 29, 2020 | 84 comments



Please fire away any questions you may have! I lead the team that built this library.

This blog has details on current development status and adoption within Microsoft: https://techcommunity.microsoft.com/t5/networking-blog/msqui...


Have you considered implementing any parts of this in F* (so they can be verified) and extracting back to C, as is being done for TLS?

https://project-everest.github.io/


Some work on verifying QUIC packet encryption using F* is happening at Microsoft Research: https://github.com/project-everest/everquic-crypto


Just to build on Catalin's answer. We are actively working on an implementation of QUIC's transport layer (i.e. packet encryption and decryption), along with a proof of cryptographic security. This is what Catalin linked to (https://github.com/project-everest/everquic-crypto). EverQuic-Crypto builds upon two previous projects: EverParse, a library of verified low-level parsers and serializers which we apply to the QUIC network formats, and EverCrypt, a cryptographic provider with agility and multiplexing, which we use for all the cryptography, e.g. packet number encryption, AEAD, etc.

This is not yet a full QUIC implementation, but we have plans for extending this codebase to cover more of the QUIC protocol.


We do work with the Everest team. We have unofficial support on top of miTLS (which they produce). We haven't looked into actually using F* for any of the QUIC code though.


Sorry, this might be a bit off topic but it's something I've been excited about for a while. From what I've heard Microsoft is sort of getting behind gRPC. Have you tested using QUIC as a transport layer for the gRPC client/server libraries that MS maintains? Are you seeing notable performance benefits with QUIC as the underlying channel?


I'm from the .NET Core team and we've looked at it a little from the HTTP/3 angle but not from a pure QUIC angle. This is because the gRPC RPC protocol is described in terms of HTTP/2 frames today.


gRPC-encoded data is actually not tightly coupled to HTTP/2 frames. It's described as a stream of gRPC chunk-encoded data on top of an HTTP stream, and the gRPC frames do not necessarily have to align with HTTP/2 DATA frame boundaries.

See https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2....

The specification makes gRPC sound rather tightly bound to HTTP/2-level details, but I don't think it really is. You should be able to speak gRPC just fine over any kind of HTTP. One main pain point for browser support, however, has been the lack of APIs to exchange trailers, which are necessary for gRPC.
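For reference, the chunk encoding in question is just a 5-byte prefix on every gRPC message (a 1-byte compressed flag followed by a 4-byte big-endian length), applied to the HTTP body as a plain byte stream. A rough sketch of parsing that prefix in C; the struct and helper names here are illustrative, not from any gRPC library:

    #include <stddef.h>
    #include <stdint.h>

    /* gRPC length-prefixed message framing, per the gRPC-over-HTTP/2 spec:
     *   1 byte  compressed flag (0 or 1)
     *   4 bytes message length, big-endian
     *   N bytes message payload (typically a serialized protobuf)
     * The prefix is defined on the HTTP body as a byte stream, so messages
     * need not line up with HTTP/2 DATA frame boundaries. */
    typedef struct {
        uint8_t compressed;      /* 1 if the payload is compressed */
        uint32_t length;         /* payload length in bytes */
        const uint8_t *payload;  /* points into the caller's buffer */
    } grpc_message;

    /* Parse one message from a contiguous body buffer. Returns the number
     * of bytes consumed, or 0 if more data must be buffered first. */
    static size_t parse_grpc_message(const uint8_t *buf, size_t len,
                                     grpc_message *out)
    {
        if (len < 5)
            return 0;                       /* need the full 5-byte prefix */
        uint32_t msg_len = ((uint32_t)buf[1] << 24) |
                           ((uint32_t)buf[2] << 16) |
                           ((uint32_t)buf[3] << 8)  |
                            (uint32_t)buf[4];
        if (len < 5 + (size_t)msg_len)
            return 0;                       /* payload not fully received */
        out->compressed = buf[0];
        out->length = msg_len;
        out->payload = buf + 5;
        return 5 + (size_t)msg_len;
    }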


> gRPC encoded data is actually not tightly coupled to HTTP/2 frames.

The headers frame, and the capability to have headers both before AND after content, is a requirement. gRPC requires trailing headers for the status code of the call, and the trailing headers frame is a new concept in HTTP/2.

gRPC-Web supports HTTP/1.1 and browsers. It is able to do that by encoding the status into the end of the response body. However, gRPC-Web is a different spec.

If HTTP/3+QUIC supports the same features as HTTP/2, then gRPC should work on it. There might be an HTTP/3-specific spec for details around managing an HTTP/3 connection, but gRPC headers, message content, and proto contracts shouldn't need to change. Take what I say with a grain of salt, because I haven't looked closely at HTTP/3+QUIC yet.


A headers frame before content is just equivalent to "sending headers", since HTTP/2 also only allows headers to be sent once per stream unless they are informational headers. In the same fashion, a headers frame after content is equivalent to "sending trailing headers", which are allowed at most once after the body (which may be empty).

Therefore the fact that there is a frame involved doesn’t really matter.

HTTP/3 doesn't change the HTTP semantics: peers send 0-N informational headers, 1 set of request headers, a stream of body data, and 0-1 sets of trailing headers. Therefore gRPC should run fine over it as long as the underlying HTTP library exposes all those necessary features.
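Spelled out as a sketch (illustrative only, not any particular library's types), that per-stream sequence is:

    /* The HTTP message sequence described above, independent of whether
     * HTTP/1.1, HTTP/2, or HTTP/3 carries it. Illustrative only. */
    typedef enum {
        MSG_INFORMATIONAL_HEADERS, /* 0..N (1xx responses)                 */
        MSG_HEADERS,               /* exactly 1 set                        */
        MSG_BODY_DATA,             /* a stream of 0..N data chunks         */
        MSG_TRAILING_HEADERS       /* 0..1 set; gRPC puts grpc-status here */
    } http_message_part;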


RPC/gRPC are certainly possible use cases for the QUIC transport protocol. But no, we have not yet explored the use of QUIC in this context. For now we are focused on workloads that will benefit the most from the tail latency performance and security improvements that QUIC brings.


Any strong reasons you chose C?

Maybe I skimmed too quickly but I didn't see this mentioned in that blog post.

Was there a requirement from other teams?


Since this had to run in kernel mode on Windows to power our HTTP stack, C was the language of choice. There exist other open source implementations of QUIC in C++ and Rust etc.


So what measures are in place to avoid being part of the 70%?

https://msrc-blog.microsoft.com/2019/07/18/we-need-a-safer-s...


Just to pile on here, running in kernel mode was the primary reason for using C. The Windows kernel does support a limited set of C++ features, but we decided to go with pure C instead because of the confusion over which C++ features are available, especially in an open source environment where not everyone is familiar with the Windows kernel.

As far as what we do to keep quality high, we have a large number of automated tests (> 4000 cases per CI run) running on Azure Pipelines. Our code is deployed on several interop servers used to test against all the other QUIC implementations out there, and we do additional testing and fuzzing internally at Microsoft.


Since this is going in the kernel and is exposed to the network, what kinds of things are you doing to prevent security or reliability bugs due to undefined behavior?

Love the username, by the way :)


We do extensive testing including stress testing and make use of tooling that can catch bugs early. We also partner with internal security teams to do fuzz testing and security reviews for all networking code. That said, none of the networking stacks deployed widely today are completely immune to security vulnerabilities. Responsible disclosure also plays an important role.


Any plans to integrate or collaborate with Project Everest?


You can see some of the tooling they're using: the .azure directory outlines the CI, and /tools has scripts like https://github.com/microsoft/msquic/blob/6fa51a42f69c59748dd...

There'll also be static analysis thrown at it.


I thought Rust was being considered for these use-cases. Is it (or was, at the time you started working on it) too early for that?


Isn’t the Windows kernel C++?


The Windows kernel is mostly C, with parts in assembly and C++. [1] It also helps to keep in mind that back in the late 80s, when development work on the NT kernel began, C++ was still the new kid on the block. NT kernel work began even before ANSI C was finalized.

--

[1] https://www.reddit.com/r/cpp/comments/4oruo1/windows_10_code...


Just as a historical note.

Microsoft C/C++ 7.0 was released in 1992 alongside MFC 1.0, which was a bit late to the race.

Microsoft was the last C compiler vendor in the MS-DOS space to integrate a C++ compiler into their tooling.


Do you think you've adequately built something that can be used as a transport layer first, compared to Google's attempt, which seems more like an HTTP snowflake first and a transport layer second?

QUIC has the potential to be helpful in game development but suffers from an overly specialized approach.


Yes, msquic should be a good general purpose transport. We already have usage from SMB (file sharing) and HTTP in Windows. Both are very different and provided good test cases for msquic.


Will BBR congestion control be supported? Without it QUIC performance cannot match that of TCP, in my environment at least.


It's definitely on the TODO list. We're looking into it.


Does QUIC or your implementation of it support application control of keying or is it always based on x.509 certs and CAs?

edit: the spec (https://tools.ietf.org/html/draft-ietf-quic-tls-27) seems agnostic on this. Also simple APIs especially in security are important, so supporting certs only is no flaw in my book, just curious about the edges of how the OS QUIC could be used in the future.


QUIC outsources this part of the solution to TLS (specifically TLS 1.3), so you just need a way to meet your needs in TLS 1.3 and it'll work in QUIC.


Though it has been discussed that future versions of QUIC might allow other authentication/encryption protocols. Noise would be an interesting candidate.


Note that TLS doesn't necessarily imply certs either. TLS-PSK, TLS-SRP, anon DH, etc.


Sure, but, it's important to caveat that QUIC requires specifically TLS 1.3 (or potentially subsequent versions in the future) and so features which require older TLS versions aren't useful.

Pre-shared keys are a thing in TLS 1.3, though there are subtle differences you ought to be aware of before implementing; as I understand it, SRP is not (at the time of writing) and neither is anonymous DH.

It isn't possible to "just" take an extension to TLS 1.2 that altered the handshake mechanism and have it work in TLS 1.3, because the handshake is very different, even though it was camouflaged so that rusted-in-place TLS 1.2 middleboxes think it's just TLS 1.2 and don't freak out.


I know it's not your team, but now that it's going in Windows kernel, maybe you could ask around... Any plans to support QUIC for data transfers from/to Azure Blob Storage?


I recently had a project in my college course to implement MP-QUIC, but there was a severe lack of resources on it. What do you guys think of MP-QUIC vs QUIC?


Wasn't QUIC standardized as HTTP/3? What makes it called QUIC and not HTTP/3?


No. HTTP/3 is HTTP using QUIC as the lower-layer transport. QUIC itself allows for different protocols to be built on top of it, and it was standardized on its own.


To expand on this:

• QUIC’s original implementation bound the transport protocol and app protocol to HTTP/2. This effort was led by Google and is commonly referred to as “gQUIC”; it's what Chromium implemented, and it could be used via the Cronet library in other contexts.

• A desire to use QUIC as a stand-alone transport layer (above UDP) grew, and that is now standards-tracked as “IETF QUIC”.

• HTTP/3 is an evolution (rather than a revolution) of HTTP/2 that requires IETF QUIC as a transport.


Will it be possible to use the "sendfile" system call to do zero-copy file transfers on a QUIC connection?


Will you add support for boringssl?


Not likely, unless we get a customer ask for it. But when we start accepting external contributions it shouldn't be too hard for someone else to add the support. We already (unofficially) support 3 different TLS libraries (schannel, openssl, mitls).


Will this be integrated into some RPC framework like gRPC that we all can use?


Is this using the BBR congestion control algorithm?


Not yet. BBR is on the TODO list though.


MIT licensed cross-platform C. Ten years ago I wouldn't have believed it. Today it's not even surprising.

I really like this version of Microsoft.


I'm with you, and I'm happy that it seems like they are trying to become a good steward in the software industry. I am impressed by the quality of Windows 10. Until about a year ago I used Linux daily, because it was a better experience than Windows 98 through 8.1. Lately, though, I find it's way easier to run Windows 10 with WSL than to screw around with Linux trying to get things to work correctly that just plug in and work on Windows.

I hope they start giving us more control, though, because I'm tired of having to firewall-block telemetry and being forced to have Cortana installed, or whatever other garbage. If Microsoft allowed me to install Windows like I do Debian, where I can pick my packages and leave out what I don't want, and also allowed for replaceable APIs, so I could swap explorer.exe for my own version for example, I'd never use Linux again. But that'll never happen, so I'll just use tools to block that stuff for now and hope the Linux experience catches up.


Even during the evil days Microsoft occasionally produced great software like Microsoft Money.

Like any large corporation, MS is not a single cohesive entity and I suspect the Win10 group pushing metrics and Cortana is not the same folks writing nifty quic implementations.


Exactly! I loved Microsoft money, BTW, I wish it was still a thing.


While not quite the same, they're releasing an Excel feature/template called "Money in Excel" soon[1], that uses a Plaid integration to pull live financial data into Excel to work with.

[1] https://support.office.com/en-us/article/what-is-money-in-ex...


While not in active development, you can still download the Sunset Edition for free.


>I am impressed by the quality of Windows 10

Weird, I must have some different edition of Windows. Totally inconsistent settings/control panel interfaces, updates taking ages, updates failing when you look at it wrong (and then stuck in update-revert loop every boot), driver setup taking minutes, and I constantly discover some new disk-hogging background process.


You're forgetting about the comparison to a Linux desktop, though.

I personally would really love to switch to Ubuntu full time, but I'm not going to forgive it soon for bricking my machine after a software update.

Unfortunately for Linux, the automated recovery tools just aren't there like on Windows - if a Windows update breaks the system, it will be able to recover itself 90% of the time.

Yeah, the UI is shitty and inconsistent and there's lots of nonsense in the background, but ultimately those don't matter as much as baseline reliability. No one hunts for WiFi drivers on Windows, at least not since Vista.


Compared to the last Linux GUI distros I used Windows 10 is a massive step up. Compared to Windows 7 it's a massive step down. Can't speak to Linux distros bricking my or my Customers' machines. Windows 10 update-induced issues have caused me a lot more headaches than Windows 7 ever did, though.


NixOS's rollbacks make the updating safer than Windows or regular Linux distros.


I'm with you on the licensing side.

But C? Come on, Microsoft.


Yeah, they are also using C in Azure Sphere, which for me kind of blows away the whole security sales story of the platform.

What use is it to have Fort Knox-level security if the foundations are built on quicksand?


People cheering on Microsoft embracing things. Ten years ago I wouldn't have believed it. Today it's not even surprising. /s


I'm just waiting for the sine curve to come down again. Companies that can change from that to this in 10 years because FLOSS became hip and popular can change the other way around if it increases their profits.


I don't think that will happen as long as Satya is driving.


Maybe, maybe not. CEOs are replaceable as well. But if society is dependent on Microsoft, we will remain dependent even after Satya. Look, I'm not trying to preach anything; I love this MS as much as the next guy. It's just useful to keep in mind that companies can change both ways, because ultimately what a company wants is to maximize profits; it's naive to think MSFT is trying to accomplish anything other than that. It's useful to think about this for the long-term picture.


Nice. For comparison, here is Mozilla's implementation in Rust, which is integrated into Firefox: https://github.com/mozilla/neqo


Great to see that SMB-over-QUIC is being trialled. I'd love to see more applications using QUIC as a transport - particularly if they're going to be on mobile - or switching back and forth between WiFi and mobile signals (which, on TCP, means dropping the connection and creating a new one).


Why is everyone making their own QUIC implementation? There are so many already: https://en.wikipedia.org/wiki/QUIC#Source_code


Likely for the same reason everyone tries making their own web browser, even though the other guys' are all gratis (and many of them libre): When something is a platform, you are either a landlord or a tenant.

Google and Apple have seen what happens to Microsoft's tenants, so they decided to be landlords. Microsoft knows how awful a landlord it had been (and, after decades of landlord-only status, suffered abuse as a tenant at Google's Gmail and YouTube platforms), so it also tries to be a landlord in every way it can; it couldn't attract its own tenants to Windows Phone, IE11 and its own Edge, so it has to offer subleases on Android, iOS and Blink (=Edgium) these days.

QUIC looks more "behind the scenes" as a platform right now, but building your own is a very cheap hedge against ceding complete control of what could become a potentially fundamental platform to your competitors. So everyone does that.

I'm no fan of Microsoft, and I believe that Microsoft has been "beaten into submission" rather than "left the dark side", so to speak. But regardless of the overall technical quality or the morals/values one assigns to Microsoft, they are a smart, politically and business savvy corporation. This is a "staying relevant and in control" move.


Possibly the result of an industry scarred by the experience of depending on a TLS library that everyone thought was secure because “hey everyone is using it”. Now they want to make sure they understand and control critical security infrastructure.

Multiple implementations of a protocol are by no means a bad thing. The opposite is a problem - too few implementations means that the implementations dictate the spec and creating something compatible requires implementing bugs of those implementations too.

That said, I’d be surprised if we still had this many implementations 5 years from now. I think at least some of those will become unmaintained and instead use what Google/Microsoft/Mozilla/Cloudflare have developed.


See here for other QUIC implementations: https://en.wikipedia.org/wiki/QUIC#Source_code

Maybe this one can be added to the list.


I'm glad they support Linux straight out of the box.


> MsQuic is shipped in-box in the Windows kernel in the form of the msquic.sys driver

Does that mean that an HTTP.SYS web server will also support QUIC?


We are currently testing HTTP/3 support in IIS/http.sys internally. Cannot comment on any external product release timelines.


You mean IIS? HTTP.SYS isn't a webserver.


HTTP.SYS is the core of a Windows NT native web server: the async listen/accept loop, IOCP-based data transfer, etc. IIS is built on top of HTTP.SYS. I presume that it ships with Windows for ease of servicing; it is after all a kernel-mode driver (of sorts).


The kernel module HTTP.SYS contains an HTTP server. It's used by IIS, but you can also use its API directly.


Wow, didn't know that.

API for the curious: https://docs.microsoft.com/en-us/windows/win32/http/http-api...
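A rough sketch of what "using it directly" looks like with that API, with error handling and the receive/send loop omitted (claiming a URL needs admin rights or a prior URL reservation; see the docs above for the real usage):

    #include <windows.h>
    #include <http.h>              /* link with httpapi.lib */

    int main(void)
    {
        /* Initialize the HTTP Server API for server use. */
        HTTPAPI_VERSION version = HTTPAPI_VERSION_1;
        if (HttpInitialize(version, HTTP_INITIALIZE_SERVER, NULL) != NO_ERROR)
            return 1;

        /* Create a request queue and claim a URL; http.sys does the
         * listening, parsing, and connection management in the kernel. */
        HANDLE queue = NULL;
        if (HttpCreateHttpHandle(version, &queue, 0) == NO_ERROR &&
            HttpAddUrl(queue, L"http://+:8080/sample/", NULL) == NO_ERROR) {
            /* ...loop on HttpReceiveHttpRequest / HttpSendHttpResponse... */
            HttpRemoveUrl(queue, L"http://+:8080/sample/");
        }

        if (queue != NULL)
            CloseHandle(queue);
        HttpTerminate(HTTP_INITIALIZE_SERVER, NULL);
        return 0;
    }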


In the FAQ it says this is going into the Windows kernel. Is there a sockets-style API emerging for QUIC, or will each implementation have its own API?


There is currently no standardization for QUIC APIs. You can check out the MsQuic API here: https://github.com/microsoft/msquic/blob/master/docs/API.md
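For a rough sense of its shape: MsQuic hands back a table of function pointers and is handle- and callback-based rather than socket-like. A minimal open/close sketch, assuming the v1-era MsQuicOpen entry point (names and signatures vary between releases, so check API.md for the version you build against):

    #include <msquic.h>            /* from the msquic repo */

    int main(void)
    {
        const QUIC_API_TABLE *MsQuic = NULL;

        /* Everything else (RegistrationOpen, ConnectionOpen/ConnectionStart,
         * StreamOpen/StreamSend, ...) is reached through this table, and
         * events arrive via app-supplied callbacks rather than a recv loop. */
        if (QUIC_FAILED(MsQuicOpen(&MsQuic)))
            return 1;

        /* ...create a registration, load TLS credentials and ALPN config,
         *    then open connections and streams via the MsQuic table... */

        MsQuicClose(MsQuic);
        return 0;
    }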


The feature I most want in a new network protocol is multipath to enable seamless and automatic connection migration between wifi and cellular. I haven't been following QUIC standardization, is that feature in now or postponed to the future?


Connection migration is part of the current Internet-Drafts. The generalized support for multi-path (i.e. usage of more than one path at the same time) is postponed to a future version of the protocol. You can follow the standards work here: https://quicwg.org/


Good to hear, but it seems like using both paths at the same time would be necessary for connection migration to work well, as you're often not sure which connection is actually better. If you have to wait until you're completely sure one connection is gone before switching wholesale to the other, that removes a lot of the benefit of connection migration.


Wow, I noticed it was Microsoft code. I think I would have recognized it even without being told. Is there a Microsoft style guideline for writing C?


Will there be a Python binding?


Networking code written in C! I wonder what can go wrong?


Just like the whole Linux, *BSD, and Windows network stacks, and all these drivers for embedded systems... smh


"cross-platform" software that only runs on Linux and Windows.


Cross-platform means it runs on more than one platform, and you mentioned two. Those two are conveniently fairly popular as well.


It's MIT licensed, you can port it instead of complaining.



