
The main source of latency will be the network. The real problem is synchronous GET requests, because then performance == latency. Better to go async instead of reducing latency with hardware accel.
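As a minimal sketch of the "go async" point (hypothetical hosts, one thread per request purely for illustration): overlapping the GETs means total wall-clock time is roughly one round trip instead of one per request.

  /* Sketch only: three blocking GETs overlapped with one thread each.
     The synchronous version would call do_get() three times in a row
     and pay three full round trips. Hosts are placeholders. */
  #include <netdb.h>
  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static const char *hosts[] = { "example.com", "example.org", "example.net" };

  static void *do_get(void *arg) {
      const char *host = arg;
      struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
      if (getaddrinfo(host, "80", &hints, &res) != 0) return NULL;

      int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
          char req[256], buf[4096];
          snprintf(req, sizeof req,
                   "GET / HTTP/1.0\r\nHost: %s\r\nConnection: close\r\n\r\n", host);
          send(fd, req, strlen(req), 0);
          while (recv(fd, buf, sizeof buf, 0) > 0) { /* drain the response */ }
      }
      if (fd >= 0) close(fd);
      freeaddrinfo(res);
      return NULL;
  }

  int main(void) {
      pthread_t tid[3];
      for (int i = 0; i < 3; i++)
          pthread_create(&tid[i], NULL, do_get, (void *)hosts[i]);  /* requests overlap */
      for (int i = 0; i < 3; i++)
          pthread_join(tid[i], NULL);
      return 0;
  }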



Not necessarily if the request is coming from within the same data centre. Then the network can introduce less latency than disk access does.


I have to agree with moru here. The latency of memory access will be negligible with respect to the latency of any I/O operation, even within the same data center. In my experience anything that involves the OS is >> 1us. Also beware of anything that claims a 10x performance improvement.


You are assuming that a high-performance, latency-tuned system is using the OS network stack. That would be a pretty naive implementation. Offerings from Solarflare (OpenOnload) and Exablaze (ExaSock) transparently offload and bypass the kernel stack. Quoted performance numbers are around 1us, of which 500ns is spent getting up and down the PCIe bus. Offloading to the NIC makes a whole lot of sense in this space, except that it has already been done, and with more compelling performance. The authors completely failed to take existing work into account.
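For context, a sketch of the kind of probe such numbers come from, written against plain BSD sockets. The point of OpenOnload/ExaSock is that, assuming the usual LD_PRELOAD-style deployment (running the binary under the vendor's wrapper), the same code takes the kernel-bypass path with no changes; the echo peer and iteration count below are placeholders.

  /* Hypothetical UDP ping-pong RTT probe over ordinary sockets.
     Assumption: an echo server is listening on the given host:port.
     With a kernel-bypass stack (Solarflare OpenOnload, Exablaze ExaSock)
     the same binary is typically run under the vendor's wrapper so the
     socket calls skip the kernel -- deployment detail assumed, not shown. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <time.h>
  #include <unistd.h>

  int main(int argc, char **argv) {
      if (argc != 3) { fprintf(stderr, "usage: %s host port\n", argv[0]); return 1; }

      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in peer = { 0 };
      peer.sin_family = AF_INET;
      peer.sin_port = htons((uint16_t)atoi(argv[2]));
      inet_pton(AF_INET, argv[1], &peer.sin_addr);

      char buf[64] = "ping";
      for (int i = 0; i < 1000; i++) {
          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          sendto(fd, buf, sizeof buf, 0, (struct sockaddr *)&peer, sizeof peer);
          recv(fd, buf, sizeof buf, 0);   /* blocks until the echo comes back */
          clock_gettime(CLOCK_MONOTONIC, &t1);
          long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                  + (t1.tv_nsec - t0.tv_nsec);
          printf("rtt %ld ns\n", ns);
      }
      close(fd);
      return 0;
  }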


Nope, RTT even for this kind of network hardware is >= 10 microseconds (I deal with such stuff professionally). Still a big gain going async :-)


Surprising. >10us sounds pretty slow to me.


Round trip. One way is 5 to 7 micros. Some special cards go down to 3 one way; however, for the RTT, software usually adds some 2 micros overall.


Less than 1us RTT to software (http://exablaze.com/exanic-x4).

Less than 400ns per switch hop (http://www.arista.com/en/products/7150-series) excluding congestion.
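Putting those figures together, back of the envelope, assuming a hypothetical two-hop path, taking the quoted vendor numbers at face value and splitting the ~1us NIC RTT evenly between tx and rx:

  /* Illustrative latency budget only; not a measurement. */
  #include <stdio.h>

  int main(void) {
      double nic_tx_ns     = 500.0;  /* host -> wire, assumed half of the quoted ~1us ExaNIC RTT */
      double switch_hop_ns = 400.0;  /* per 7150-class hop, excluding congestion                 */
      int    hops          = 2;      /* hypothetical topology                                    */
      double nic_rx_ns     = 500.0;  /* wire -> host on the receiving side                       */

      double one_way = nic_tx_ns + hops * switch_hop_ns + nic_rx_ns;
      printf("one-way ~ %.0f ns, RTT ~ %.0f ns plus application time\n",
             one_way, 2.0 * one_way);
      return 0;
  }

That lands around 1.8us one way and ~3.6us round trip before any application work, which is in the same ballpark as the single-digit-microsecond figures quoted upthread.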



