Anyone want to take a stab at exactly what happened to make this test crash it? 4k requests/second would mean that with 100 people connected, the server would fan out 400,000 outbound messages per second, or 4,000 to each user. That wouldn't overwhelm an http connection limit, but I guess it would overwhelm the processors on the server side. Plus it might make some browsers crash from JS trying to process that much incoming information without disconnecting.
It doesn't seem to be doing a great job. User "hello" keeps sending bursts of several thousand messages. On the flip side, the rest of the backend seems to be doing a fantastic job handling the bursts.
I'm curious what the bounds are on the client-side use case. For instance, machines can handle tons of simultaneous http connections, but it's easier to reach that limit if you hold SSE connections open for a while.
I've used SSE in a context where we sent chunked pages of one output stream (where the stream was generated server-side all at once), but not where the connection is opened and then actually waits for additional server information. Is it common for SSE connections to be held open for 30 seconds or even several minutes in order to stream information?
Both Chrome and Firefox limit you to 6 simultaneous http connections to the same host. Try opening this app in 7 separate tabs, and you'll see the 7th one just spin and not even load the page's HTML.
At least in Firefox, the limit for websockets is much more generous (200, but shared across the whole browser, IIRC).
Both the Chrome and Firefox dev teams WONTFIXed my suggestion to increase the limit for Server Sent Event connections.
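For anyone who wants to see the 6-per-host cap without opening seven tabs, here's a rough console sketch. Note the stream URL is my guess, since only the /room-post/hn endpoint appears elsewhere in this thread:

```
var sources = [];
for (var i = 0; i < 7; i++) {
  // NOTE: "/room-stream/hn" is an assumed SSE endpoint, not confirmed by the thread.
  var es = new EventSource("http://sse.getgin.io/room-stream/hn");
  es.onopen = (function (n) {
    return function () { console.log("connection " + n + " open"); };
  })(i);
  sources.push(es);
}
// With a 6-per-host cap, the 7th onopen never fires until another source is closed:
// sources[0].close();
```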
SSE lets you keep the connection open as long as you want. I'm not sure what you mean by client-side. If you mean the web browser, then I believe there is a certain limit on the number of parallel HTTP connections you can have, depending on the browser. http://stackoverflow.com/questions/985431/max-parallel-http-....
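To illustrate the "keep it open as long as you want" part, here's a minimal Node.js sketch of an SSE endpoint. This has nothing to do with Gin's implementation, it's just the general shape:

```
var http = require("http");

// Minimal sketch: hold the response open and keep writing SSE frames for as
// long as you like; the connection stays up until one side closes it.
http.createServer(function (req, res) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive"
  });

  var timer = setInterval(function () {
    // Each event is just "data: ...\n\n"; nothing forces you to ever end the response.
    res.write("data: " + JSON.stringify({ ts: Date.now() }) + "\n\n");
  }, 10000);

  req.on("close", function () { clearInterval(timer); });
}).listen(8080);
```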
I'm more wondering about the number of open HTTP connections a server can support. Googling around, I see mentions of 64,000 for vanilla servers, and perhaps millions depending on the server (affected by the number of processors...?)
So if your SSE connection is 10 minutes long and you might have more than 64,000 simultaneous connections to your vanilla box in those 10 minutes, you just need to start load-balancing I guess.
There's a common misconception that the number of TCP connections a server can handle is limited by the number of local ports (~64K). The real limit is the number of unique LocalIP+LocalPort+RemoteIP+RemotePort tuples. If your traffic is coming from arbitrary Internet addresses, then you are not limited by ports.
Of course you'll be limited by other factors such as kernel fd limits (which are adjustable) and the CPU needed to monitor lots of sockets.
The issue with the maximum number of open connections will go away with HTTP/2, and you can get around it today using SPDY, since both multiplex many requests over a single connection to a domain.
SSE is awesome, and Gin seems to have a nice implementation. I definitely like the usage for realtime streaming stats since that's a one-way stream.
Chat, however, would be better served with websockets. If it's bidirectional, use websockets and avoid the connection-setup cost in both the upstream and downstream directions.
SSE works, but you should consider it tech debt if you decide to go that way.
WebSockets will definitely provide the lowest latency for client->server transmission, but calling SSE "technical debt" for a chat service is a bit harsh. Many chat services get by just fine with HTTP POST for message submission. It's a case of trading low latency for simpler tooling.
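For what it's worth, the "SSE for downstream, plain POST for upstream" pattern looks roughly like this on the client. The stream URL here is an assumption; the POST URL is the one used elsewhere in this thread:

```
// Stream URL is a guess; only /room-post/hn is confirmed by the thread.
var stream = new EventSource("http://sse.getgin.io/room-stream/hn");
stream.onmessage = function (e) {
  console.log("incoming:", e.data);
};

function send(text) {
  // One short-lived request per outgoing message: simpler tooling,
  // at the cost of HTTP request overhead that a websocket would avoid.
  $.ajax({
    type: "POST",
    url: "http://sse.getgin.io/room-post/hn?nick=rl",
    data: { message: text, nick: "rl" }
  });
}
```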
It's not so much latency to me. It's the waste. There's both extra latency and extra byte overhead. It's a trivial implementation difference (in all the frameworks I'm familiar with) and as developers, I think we have a responsibility to be efficient and not waste energy and bandwidth.
At scale, you might be talking hundreds or even thousands more clients per server (depending on the server size). On mobile, you use less data which fires up the radio less which directly translates to battery life.
Assuming your application gets any sort of traction, the waste introduced by the 30 minutes of dev time you saved can have a much larger net impact on the world, purely through that multiplicative effect. When your product affects that many people, you should remember that you have a responsibility to be sustainable.
Really, you should have a queue anyway to account for those on less reliable connections (mobile, satellite, Comcast).
EventSource natively supports this via the "Last-Event-ID" header (and can also be used by a shim to ensure it gets messages it missed). And in the case of a room style app you probably already have a room history that can be used to provide this data.
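Rough sketch of what Last-Event-ID replay can look like on the server, assuming the room history is just an in-memory array. This is not Gin's implementation, just the general idea in Node.js:

```
var http = require("http");

// In-memory room history; entries look like { id: 42, data: "..." } and are
// appended elsewhere as messages arrive. Purely illustrative.
var history = [];

http.createServer(function (req, res) {
  res.writeHead(200, { "Content-Type": "text/event-stream", "Cache-Control": "no-cache" });

  // On reconnect the browser automatically resends the last event id it saw.
  var lastId = parseInt(req.headers["last-event-id"], 10) || 0;

  // Replay whatever the client missed while it was disconnected...
  history.filter(function (m) { return m.id > lastId; }).forEach(function (m) {
    res.write("id: " + m.id + "\ndata: " + m.data + "\n\n");
  });

  // ...then keep the connection open and write new events with "id:" lines the same way.
}).listen(8081);
```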
EventSource sends all events over a single connection which it keeps open indefinitely.
This shim receives a single event and then reconnects. During that reconnect there is a window where no connection exists to send events over, so they will be lost without a queue.
Edit: actually this looks to be a lot more sophisticated than the other shims I have looked at and does indeed behave more like EventSource. Good job!
Ah, you're suggesting that the shims use long-polling. I don't think that's the case though. If I understand the library correctly, it really does keep a connection open, so it's very similar to the way EventSource works.
Addition: Verified. It doesn't disconnect between events (not long-polling).
```
// Fires a POST to the room endpoint every 5 ms (~200 requests/second per tab).
var count = 1;
window.setInterval(function () {
  count = count + 1;
  var data = { message: "test " + count, nick: "rl" };
  $.ajax({ type: "POST", url: "http://sse.getgin.io/room-post/hn?nick=rl", data: data });
}, 5);
```