All these "X requests per unit time" posts are starting to make me want to break out some of my experimental code... I have some services that can process several million events per second. This includes: compressing the event batch, persisting to disk, validation of business logic, execution of all view updates (state tracked server-side), aggregation and distribution of client update events, etc. These implementations are easily capable of saturating NVMe flash.

If you want to see where the theoretical limits lie, check out some of the fringe work around the LMAX Disruptor and .NET/C#:

https://medium.com/@ocoanet/improving-net-disruptor-performa...

You will find the upper bound of serialized processing to be somewhere around 500 million events per second.
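For a sense of the moving parts, here is a minimal Disruptor-net sketch: one pre-allocated event type, one handler, one consumer thread. The AppEvent/AppEventHandler types are made up for illustration, and exact API names can vary between library versions.

    using System;
    using Disruptor;
    using Disruptor.Dsl;

    var disruptor = new Disruptor<AppEvent>(() => new AppEvent(), 1 << 16);
    disruptor.HandleEventsWith(new AppEventHandler());
    disruptor.Start();
    var ringBuffer = disruptor.RingBuffer; // producers publish into this

    // Illustrative event type. The ring buffer pre-allocates one instance per
    // slot up front, so steady-state publishing allocates nothing.
    public class AppEvent
    {
        public long ClientId;
        public int Kind; // e.g. 0 = client input, 1 = ClientRedraw (illustrative)
    }

    public class AppEventHandler : IEventHandler<AppEvent>
    {
        // Invoked for every event, in sequence order, on the consumer thread.
        public void OnEvent(AppEvent data, long sequence, bool endOfBatch)
        {
            // validate business logic, update server-side view state, etc.
        }
    }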

Personally, I have not pushed much beyond 7 million per second, but I also use reference types, non-ideal allocation strategies, etc.

For making this a web-friendly thing: The trick I have found is to establish a websocket with your clients, and then pipe all of their events down with DOM updates coming up the other way. These 2 streams are entirely decoupled by way of the ringbuffer and a novel update/event strategy. This is how you can chew through insane numbers of events per unit time. All client events get thrown into a gigantic bucket which gets dumped into the CPU furnace in perfectly-sized chunks. The latency added by this approach is measured in hundreds of microseconds to maybe a millisecond. The more complex the client interactions (i.e. more events per unit time), the better this works. Blazor was the original inspiration for this. I may share my implementation at some point in the near future.
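Not the poster's code, but a rough sketch of that shape, assuming plain System.Net.WebSockets on the server: one loop per direction, sharing nothing but the socket. PublishEvent (the ring-buffer insert) is sketched in the reply further down; DecodeKind is a toy stand-in for whatever wire decoding is used, and the Channel is an assumed stand-in for the per-client update stream.

    using System;
    using System.Net.WebSockets;
    using System.Threading;
    using System.Threading.Channels;
    using System.Threading.Tasks;
    using Disruptor;

    // Inbound: every client event goes straight into the ring buffer, with no
    // per-event response to wait on.
    async Task ReceiveLoop(WebSocket socket, RingBuffer<AppEvent> ringBuffer,
                           long clientId, CancellationToken ct)
    {
        var buffer = new byte[4096];
        while (!ct.IsCancellationRequested)
        {
            var result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), ct);
            if (result.MessageType == WebSocketMessageType.Close) break;
            PublishEvent(ringBuffer, clientId, DecodeKind(buffer, result.Count));
        }
    }

    // Outbound: a completely independent loop draining this client's update
    // channel. Neither loop ever waits on the other.
    async Task SendLoop(WebSocket socket, ChannelReader<byte[]> updates, CancellationToken ct)
    {
        await foreach (var frame in updates.ReadAllAsync(ct))
            await socket.SendAsync(new ArraySegment<byte>(frame),
                                   WebSocketMessageType.Binary, true, ct);
    }

    int DecodeKind(byte[] buf, int count) => count >= 1 ? buf[0] : 0; // toy decoder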




> The trick I have found is to establish a websocket with your clients, and then pipe all of their events down with DOM updates coming up the other way. These 2 streams are entirely decoupled by way of the ringbuffer and a novel update/event strategy.

Could you detail this, please? I don't get it. What is the flow?

1. Browser is sending events to web server via web socket, instantly as the event is occurring (?)

2. ? (what exactly does the server do?)


You got 1 correct. Everything that happens gets sent immediately as an event to the server (e.g. KeyDownEvent). These are pushed without blocking for a response to each; the websocket guarantees delivery and ordering.

When an event arrives from the client socket, it is immediately inserted into the LMAX ring buffer for processing.
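That insert might look roughly like this, using the standard Disruptor claim/commit pair and the illustrative AppEvent type from the earlier sketch:

    using Disruptor;

    // Claim a slot, copy the fields in, publish. Entries are pre-allocated,
    // so the hot path allocates nothing.
    static void PublishEvent(RingBuffer<AppEvent> ringBuffer, long clientId, int kind)
    {
        long seq = ringBuffer.Next();   // may wait if the buffer is full
        try
        {
            var e = ringBuffer[seq];    // mutate the pre-allocated slot in place
            e.ClientId = clientId;
            e.Kind = kind;
        }
        finally
        {
            ringBuffer.Publish(seq);    // make the slot visible to the consumer
        }
    }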

Updates to the client are triggered when events plus current state determine that a redraw is required, at which point a special "ClientRedraw" event is issued into the same queue. These events are grouped by client so that multiple potential updates can be aggregated into a single actual redraw, and they result in view updates being pushed back down to the relevant clients. One performance trick here is that the client redraw is dispatched asynchronously from the server, so processing of subsequent batches is never blocked.
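The poster's grouping runs through ClientRedraw events in the same queue; a simpler stand-in for the same aggregation idea uses the endOfBatch flag the Disruptor already hands the handler. ApplyToDomainState and DispatchRedrawAsync are hypothetical placeholders.

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Disruptor;

    public class RedrawCoalescingHandler : IEventHandler<AppEvent>
    {
        private readonly HashSet<long> _dirtyClients = new();

        public void OnEvent(AppEvent data, long sequence, bool endOfBatch)
        {
            // Apply the event to domain state; remember whose view changed.
            if (ApplyToDomainState(data))
                _dirtyClients.Add(data.ClientId);

            // One redraw per dirty client per batch, not one per event, fired
            // without blocking the handler so the next batch starts immediately.
            if (endOfBatch)
            {
                foreach (var clientId in _dirtyClients)
                    _ = DispatchRedrawAsync(clientId); // fire-and-forget (hypothetical)
                _dirtyClients.Clear();
            }
        }

        private bool ApplyToDomainState(AppEvent e) => true;                   // placeholder
        private Task DispatchRedrawAsync(long clientId) => Task.CompletedTask; // placeholder
    }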

You can think of an E2E client view update as always requiring two events: the client event that triggers the change to domain state, and the actual redraw event(s) that result. For applications where the client should update at a fixed interval (e.g. a game), a high-performance timer implementation injects periodic redraw events. Because the upper bound of the ring buffer latency is around a millisecond, this allows for incredibly low jitter on real-time events, making it feasible to schedule client draws as simple domain events.
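A fixed-interval injector can be as small as the sketch below, reusing the PublishEvent helper from above. PeriodicTimer is the stock .NET 6+ timer; the poster's "high performance timer implementation" is presumably tighter than this.

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Disruptor;

    const int ClientRedraw = 1; // illustrative Kind value from the sketches above

    // Each tick, a redraw request enters the queue as just another domain event.
    async Task RedrawTicker(RingBuffer<AppEvent> ringBuffer, long clientId,
                            TimeSpan interval, CancellationToken ct)
    {
        using var timer = new PeriodicTimer(interval);
        while (await timer.WaitForNextTickAsync(ct))
            PublishEvent(ringBuffer, clientId, ClientRedraw);
    }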


Thank you!



