Hacker News
Show HN: A minimal and idiomatic WebSocket library for Go (github.com/nhooyr)
99 points by _j7tr on April 27, 2019 | 22 comments



Somewhat unrelated to the actual package, but I find that a properly written/considered comparison section (why package X instead of Y?) is a strong indicator of library/package quality, and this library passes that test:

https://github.com/nhooyr/websocket#comparison


I agree with your point, but I don't see any performance comparisons. At the end of the day, I'm looking at performance and stability as the leading factors in why I choose a particular library over another. Obviously there are other factors: documentation, how well a library is maintained (i.e. any check-ins in the last few weeks, because there are bound to be bugs), popularity (i.e. GitHub stars), and generally the interface of the library: is it methodical or cumbersome?


I updated the comparison, I think you'll find it more compelling in terms of stability and performance.

To summarize, they're about equal in terms of performance, but I'd argue my library wins on stability and maintainability, as the minimal API enables a tiny codebase, which means fewer docs, fewer tests and, most importantly, fewer bugs.

Furthermore, the future of gorilla/websocket is uncertain. https://github.com/gorilla/websocket/issues/370

https://github.com/nhooyr/websocket#gorillawebsocket


What are the practical advantages of websockets over long-polling? As someone who has implemented both, I've found much more success with long polling, but maybe that's because I haven't used a library as solidly written as this one.

Seriously, at first glance this seems like a well-written library: well documented, solid-looking table-driven testing, and a thoughtful write-up of comparable libraries. My only recommendation would be a link/badge indicating test coverage through a UI like Coveralls.


Long polling can work great as a bidirectional communication layer (send and receive messages over 1 connection). A few things it handles worse:

* Request headers are processed on each request. Even with keep-alive connections, the request headers are still sent and processed

* Message latency can be much higher in poor conditions, because multiple requests have to complete to close the loop (I think it's 3 requests, and each request can potentially be re-sent, which means it can take several round trips)

* Not a full-duplex connection: you can't send and receive data at the same time on 1 connection

* Domain limits imposed by browsers apply

* edit: a binary message protocol can't be used

It's better for some use cases:

* Easier to implement (standard HTTP). WS is definitely much more complex to implement

* Easier to deploy (standard load balancers / proxies should work well)

* Takes advantage of keep-alive


If I understand long polling right, it's one request which does not end (not one connection) and keeps receiving or sending information. So the processing of multiple headers does not apply.


Keep-alive connections still require processing a full HTTP request for every request. This is documented in RFC6202 https://tools.ietf.org/html/rfc6202#section-2.2

The connection overhead is removed but that is only one of the various overheads that exist.

edit: I understand your question a bit differently after reflecting on it more. I think you may be saying that 1 connection never ends in long polling. This is not the case: each request completes, and the underlying connection only stays open between requests if keep-alive is used.

Each server->client data exchange happens by "completing" the request and the client is configured to immediately make a new request. This means every message (or groups of messages depending on your tolerances) is a completed HTTP request.


As sb8244 explained, long-polling typically refers to the server waiting before responding, and then responding with a complete response in one shot.

What you're thinking of, an HTTP response that the server keeps appending to and the client is expected to read and process incrementally, is also a real thing though. This is typically referred to as HTTP streaming. Examples of this would be SSE (text events) or MP3 streaming (audio).

(Not to be confused with HTTP Live Streaming (HLS) which is typically not based on any long lived requests at all!)


To send data using long polling, a new http request has to be made, since long polling is basically just a regular http request with a (possibly) very slow server reply.

The client can either terminate the connection and make a new one, or make a second parallel request for sending and leave the first waiting for a new server response. I'm actually not sure which is more common (despite implementing apps that can do long polling, I've never paid attention at that level), nor am I entirely sure how this works with keep-alive connections (eg, can you stop waiting and issue a new http request without closing the TCP session?).

In all cases, each request from the client involves sending http headers, and if the client is a web browser, there's going to always be a few hundred bytes at least (cookie, referer, accept, user-agent, etc).


Long polling doesn't allow for multiple polls on one connection. What it gives you is a reply only when there is something to reply with. Otherwise it just keeps the connection open and waiting. It's a mechanism that mimics push from the point of view of the client.

More info here: https://stackoverflow.com/questions/12555043/my-understandin...


I'm waiting for a lib like Phoenix Channels with a JS client that starts with WebSockets and falls back to long polling if things like firewalls get in the way. I actually found Phoenix when trying to solve that use case in Go.

I feel like because of Go's concurrency model, a standalone version of Phoenix Channels (without all the MVC stuff) would be possible to implement.

Phoenix: https://hexdocs.pm/phoenix/channels.html


Phoenix channels don't really depend on MVC. It's common to make use of them in MVC apps, but they can be used standalone.

I'm using Phoenix channels as a PubSub implementation in production, entirely separate from any web framework or HTTP request handling.

As for the transport itself, Phoenix channels do have native support for both websockets and long polling, but I'm not sure offhand if it automatically falls back when websockets aren't available.


Consider https://github.com/igm/sockjs-go/ or https://github.com/googollee/go-socket.io . I have minimal experience with the first one, which has Just Worked, but it's always been in a situation where websockets worked, so I can't say I gave it a good workout. No experience with the second.


I have had good success with Server-Sent Events in Go. It’s a pretty clean alternative to long polling and it didn’t need a library. In my case I was streaming JSON objects to the browser, very quick to implement.


I converted a program today from Gorilla websocket to this library.

The code is much simpler although I still have to clean up my error messages after the websocket shuts down and the context is Done.


Glad to hear it :)


Back when http was part of the standard library (I think it was eventually pulled out?), I was building a long polling server with Go and had to fix some stuff to make sure it worked properly. I can hardly believe I wrote some of these patches nearly 10 years ago[1].

With that said, I'm glad to see the Go ecosystem continue to grow. Unfortunately, I haven't used the language for much (corporate jobs were always Java, startup jobs were always JS -- and recently, it's been Python everywhere).

PS: I still brag about Rob Pike & Russ Cox approving my patches ;)

[1] https://github.com/golang/go/issues/93


http is still a part of the standard library, and having used it recently, I find the API good enough.


It was, though, moved from "http" (as in the linked example) to "net/http" in commit de03d502 in Nov 2011, four months before Go 1.0 and its compatibility promise.




any benchmarks?


Not yet, but compared with gobwas/ws, this library is definitely slower, though not by much; you won't notice until you're serving on the order of a million connections.

Compared to gorilla/websocket, they're about the same.

I opened https://github.com/nhooyr/websocket/issues/75



