Money is no object to HFT firms when it comes to reducing latency. A recent technique involves laying optical fiber with a hollow core, so that the light spends most of its journey in air (or vacuum) rather than being slowed down by the pesky glass medium.
Which is why the regulators really need to get tougher on them, given what they are doing to the economy. They are front-running, destroying value, and spending the money on a whole load of tat - a massive case of misallocation of capital. They need to be shuttered.
Also, radio towers: fiber (or whatever) cables usually zigzag along roads or utility poles, whereas two radio towers can exchange radio waves traveling in a straight line.
There's another key issue: peering. The minimum RTT between nearby ISPs can be huge when the nearest shared peering point is a few hundred km away. That's the case for parts of Hong Kong vs mainland China, and also for Salt Lake City and Zurich.
There is now fibre with a hollow core, instead of the light travelling through glass. Assuming we could do that over long distances, light travels at close to 0.98c, so we should get around a 40% latency improvement, right?
i.e. someday, if we needed more capacity and this tech is available, we could lay the new fibre alongside the current fibre on the same route, and enjoy some improved latency?
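Quick back-of-envelope on that 40% figure (the distance and velocity factors below are illustrative assumptions, not measured values): the *speed* goes up by roughly 0.98/0.67 ≈ 46%, but the *latency* drops by about a third.

```python
# Rough propagation-delay comparison: conventional glass-core fiber
# (light at ~0.67c) vs hollow-core fiber (~0.997c in air).
# Distance and velocity factors are illustrative assumptions.

C = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km: float, velocity_factor: float) -> float:
    """Propagation delay only; ignores switching and serialization."""
    return distance_km / (C * velocity_factor) * 1000

LONDON_NY_KM = 5_570  # approximate great-circle distance

glass = one_way_delay_ms(LONDON_NY_KM, 0.67)    # ~27.7 ms
hollow = one_way_delay_ms(LONDON_NY_KM, 0.997)  # ~18.6 ms

print(f"glass core:  {glass:.1f} ms")
print(f"hollow core: {hollow:.1f} ms")
print(f"latency reduction: {(glass - hollow) / glass:.0%}")  # ~33%
```

So "+40%" is roughly right if you mean the speed-up, a bit optimistic if you mean the reduction in delay.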
How much latency does each network hop add? 0.05 ms?
How much latency does our own machine add? Is the 70 ms actually all spent in the network? Or would there be, say, 0.5 ms just coming from the Ethernet controller, buffers, or whatever?
Serious question: for a transatlantic network, would it ever make sense to do something along the lines of buoys + helium balloons + laser or microwave? Or in a similar vein, would it make sense to attach microwave 'towers' to shipping vessels?
I could see something like that working really well for the HFT world, where some clever hacks could make use of an unstable but extremely low-latency connection.
For inter-landmass traffic, my gut feel is bandwidth is the bigger concern.
The latencies we see now (as noted in TFA) are not that far from theoretical limits. I'd suggest responsiveness is best solved further up the stack -- and it's what most organisations have done / are doing.
Basically you don't want to run chatty protocols over high-RTT networks, and people are slowly getting this ... but many orgs are stuck with HTTP, MAPI/MoH, and SMB/CIFS consuming most of their WAN bandwidth.
I doubt that would be lower latency. Even setting aside the physics and curvature issues, you'll end up with many more hops, which is going to slow it down due to processing and retransmission delays.
Depending on how your Bond-villain vacuum tunnels and repeaters are set up, it's possible, I surmise, to create a tunnel that repeats the signal without adding an interminable delay at each node. Imagine the one-way valve Nikola Tesla patented, except with microwave repeaters shelled and pocketed into the sides of the tunnel passage instead of water-holding empty pockets.
Although I think that this is very nice, I always like to go to the "ultimate level" and consider what we would do if there was a way to send informative electrical signals using the earth as a resonator, and that idea chiefly brings me back to Wardenclyffe.
Did you know that the magnetic field of Mars is weirdly shaped? It is only magnetic over the southern half of the planet, with a slight gradient. So, assuming we could create some sort of information channel that relies on the dynamic magnetic field of the Earth, we'd have to rethink it for Mars, or for any planet whose magnetic field is non-uniform from pole to pole.
Thanks a lot for your creativity. Keep writing, together we got this!
That's an interesting one. Maybe complicated instructions would take too long to encode, but making it a simple buy/sell trigger for one particular security could work? Then again, going over too long a distance means the stock markets are closed in that part of the world.
There is also a satellite company, LeoSat, that was trying to build a low-latency long-distance network of low-flying satellites with laser links between them. Guessing that's a "tad" :-) (a few billion) more expensive than a pair of shortwave transceivers. But if money is not an issue, they say they can do 1.5x faster than terrestrial speeds.
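That 1.5x figure falls straight out of velocity factors, if we assume (a simplification) that the free-space laser path is about the same length as the fiber path:

```python
# Why "1.5x faster than terrestrial" is plausible for inter-satellite
# laser links: vacuum propagation at c vs fiber at ~0.67c.
# Assumes equal path lengths, which understates the real satellite path.

fiber_vf = 0.67           # typical velocity factor of glass fiber
speedup = 1.0 / fiber_vf  # vacuum runs at full c

print(f"speedup over fiber: {speedup:.2f}x")  # ~1.49x
```

The real path up to orbit and back is longer, so the hops eat some of that margin, but over intercontinental distances the velocity advantage dominates.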
Those are microwaves, which basically need line of sight? I was thinking of lower frequencies, which can sometimes travel hundreds of kilometers without repeaters.
The lower you go in frequency, the less bandwidth is available.
The entire "HF" spectrum is ~30 MHz. My Wi-Fi at home uses a wider channel than that.
The lower frequencies -- that will travel across the ocean, unaided -- don't have anywhere near the bandwidth necessary (to support the throughput that they need).
PSK31 is 31 bps, but that's with ~100 Hz between channels. DXing is common in the 14 MHz band, so you could probably get to the megabits-per-second range before Physics (or the FCC) stops you.
What about all the hops your signal has to take to make the round trip? Couldn’t those be optimized or sped up? Is copper latency really equivalent to fiber?
Notwithstanding differences between specific types of cables and transceivers, yes, the rule of thumb is that they're roughly equivalent. The big advantages with fiber mostly come with being physically smaller for a given bandwidth (e.g. more bandwidth per square inch of conduit cross-section, more bandwidth per cable weight for vertical or suspended runs) and needing less active equipment between any two points in the network (e.g. PON, long-haul single-mode runs).
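The "roughly equivalent" point can be made concrete with typical velocity factors (the values below are common datasheet figures, not from any specific cable):

```python
# Propagation delay per 100 km for typical velocity factors.
# Velocity factors are typical datasheet values, assumed for illustration.

C_KM_S = 299_792.458  # speed of light in vacuum, km/s

for medium, vf in [("single-mode fiber", 0.67),
                   ("solid-dielectric coax", 0.66),
                   ("foam-dielectric coax", 0.83)]:
    delay_us = 100 / (C_KM_S * vf) * 1e6
    print(f"{medium:>22}: {delay_us:.0f} us per 100 km")
```

Fiber and solid-dielectric coax are nearly identical per kilometer; some copper constructions are actually a bit *faster*. The practical wins for fiber are the ones above: density and fewer repeaters, not propagation speed.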
Well, we already have tech that can provide us with virtually the lowest latency physically possible without having to dig any tunnels - high-frequency radio waves :)
Of course bandwidth could be a problem, but for pings it should be enough.
The point isn't to optimize for pings. It's to optimize for real use cases - video chat, web browsing, music and video streaming, etc. Most of these applications need at least a moderate amount of bandwidth.
My favorite part: “I diffed it against the sendmail.cf in my home directory. It hadn't been altered--it was a sendmail.cf I had written. And I was fairly certain I hadn't enabled the "FAIL_MAIL_OVER_500_MILES" option.”
I'm not sure if this is what he's referring to, but light (and the passage of time) is slower in a gravity well. A geodesic in spacetime will minimize the time it takes for light to travel between two points. In the case of going through the Earth, the geodesic will be only slightly deflected from what you would consider the straight-line path through a sphere in a flat spacetime.
He mentions an overhead of about 11 ms in a cross-country network, which may seem insignificant within the 35 ms overall latency.
But much of the traffic goes to regional servers or local CDNs, not cross-country. A game server might split load across east-coast and west-coast servers to improve responsiveness. In those cases, it's not infeasible to see only 2-5 ms of speed-of-light latency. And that's where those 11 ms really start to affect user experience, since now you're looking at latencies that can cut into your 60 fps frame budget (or into local video-chat services, or even websites).
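To see how the fixed overhead dominates at short range: at 60 fps each frame is ~16.7 ms, and once the speed-of-light part is a few milliseconds, the 11 ms overhead is most of a frame. The distances below are assumed examples; the 11 ms figure is the one quoted above.

```python
# How a fixed ~11 ms overhead eats into a 60 fps frame budget
# once propagation delay is small. Distances are assumed examples.

C_FIBER_KM_S = 299_792.458 * 0.67  # light in glass fiber, ~0.67c

frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps
overhead_ms = 11             # routers, hosts, protocol overhead (from above)

for label, km in [("same metro", 100), ("regional CDN", 500)]:
    rtt_ms = 2 * km / C_FIBER_KM_S * 1000 + overhead_ms
    frames = rtt_ms / frame_budget_ms
    print(f"{label}: RTT ~{rtt_ms:.1f} ms (~{frames:.2f} of a 60 fps frame)")
```

Over short paths, in other words, nearly the entire RTT is overhead rather than physics, so shaving that overhead matters far more than laying straighter fiber.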