A deep dive into Internet infrastructure, plus a visit to a subsea cable site (arstechnica.com)
251 points by matan_a on May 29, 2016 | 32 comments



Neal Stephenson, "Mother Earth Mother Board," WIRED, Dec 1996: 66 pages. Print.

http://www.wired.com/1996/12/ffglass/

Without a doubt my favorite issue of any magazine ever. I still have my copy somewhere.


Yep. That's the first thing I thought of when I started reading this article. Twenty years ago? Wow. Time flies. And while I enjoyed this Ars Technica article, I quickly came to the conclusion that Stephenson is a better writer. Dormon didn't even mention the sharks that plague the undersea cables! If I recall, for some reason, they like to chew on the cables.


They do! It is theorized they detect the electromagnetic fields given off by the cable.


Bears like to chew on armoured Teck cable too. Thought it was the taste or smell of the rubber insulation.


That's a great article ... I was heavily involved with terrestrial fiber systems in the '90s and, while speeds and the isolation of DWDM channels have both improved, it's amazing to me how familiar it all feels. When they talk about sub-sea amplifiers (and many of the terrestrial ones), they're not talking about a device that amplifies the signal electronically. An Erbium-Doped Fiber Amplifier (EDFA) uses a laser to optically amplify a signal. This is why there's not a lot of latency added in the sub-sea cable. If you take the same optical signal, convert it to electrons, amplify it and convert it back to an optical signal you'll see the latency added as discussed in the article.

One small nit ... dark fiber represents unused capacity. There are several places where the article says something like "dark fibre signals" which is incorrect. Dark fiber has no signal, while lit fiber does.

The last thing I'll mention is that these systems are obviously single-mode fiber. The laser powers feeding each channel are probably around 12dBm in the 1550nm spectrum (per channel). If you look into the end of one of these fibers that's lit from the other end, you'll end up with burned spots on your retina. Wiggle it around a bit and you'll have squiggle shaped burns. So if you're ever around fiber equipment, don't look directly at the ends (or into the connectors). You can get laser safety glasses cheaply ... save your sight!
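
If you're curious what 12dBm per channel means in linear terms, here's a quick back-of-the-envelope conversion (a Python sketch; the 80-channel count is just an assumed DWDM figure, not from the article):

    # Convert a per-channel launch power in dBm to milliwatts.
    # dBm is decibels referenced to 1 mW: P_mW = 10 ** (P_dBm / 10)
    def dbm_to_mw(p_dbm):
        return 10 ** (p_dbm / 10)

    per_channel_mw = dbm_to_mw(12)      # ~15.8 mW per channel
    total_mw = 80 * per_channel_mw      # ~1.3 W total, assuming 80 DWDM channels

    print("per channel: %.1f mW" % per_channel_mw)
    print("80 channels: %.2f W of invisible 1550nm light" % (total_mw / 1000))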


A sticker we put on the cabinet for our long haul DWDM system gear:

"Do not look directly into fiber with remaining eyeball".

http://cdn.instructables.com/FQT/MNV9/I2KBQQ2E/FQTMNV9I2KBQQ...


And those who damaged their retinas always claimed they checked to see if there was light first. You obviously can't see either 1310nm or 1550nm which is why every one of our fiber benches had a phosphorescent "target" that you could check a fiber against.


I'd love to see a similar article on the economics of these steps. The only paying customers are in the datacenter and the end users, but I was under the impression that peering agreements were free as long as the bandwidth is balanced between the two parties. But surely someone must be paying for these massive infrastructures. Is it a system of back to back recharge of bandwidth?


Plugging into the switch where the networks come together is free, but you must build, buy or lease connectivity to the folks you're peering with.


10 terabits per second on a single strand.

Amazing. I think if people back in 2000 had realized the capacity coming along in fiber, then a site like YouTube would have been obvious to many more people. Back then, I think a lot of people thought it would be cool to have a video distribution site... but how the heck would you pay for it? I remained amazed and confused by how Google could somehow afford to embed video on every random website, becoming the video provider of the entire Internet. But I guess for them it was a simple formula of using their excess capacity for something.

Amazing.
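
For a rough sense of scale, here's what 10 terabits per second on a strand means in terms of concurrent video streams (a Python sketch; the 5Mb/s per HD stream is an assumed ballpark, not a figure from the article):

    # How many concurrent HD streams fit in 10 Tb/s of capacity on one strand?
    link_bps = 10e12            # 10 terabits per second (from the article)
    stream_bps = 5e6            # assumed ~5 Mb/s per HD stream

    print("{:,.0f} concurrent streams".format(link_bps / stream_bps))   # ~2,000,000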


About video streaming (YouTube, Netflix): aren't they based more around keeping caches "local" to the user rather than relying on intercontinental links to stream? (Honest question, as I can't tell whether those 10 terabits would scale enough to be the decisive factor in the feasibility of YouTube.)


What you are describing are so-called "edge" servers, and this is indeed a technique for content distribution, but it relies on partnerships with willing ISPs. You essentially set up "peering" with the ISP, which bypasses expensive internet uplinks.

However, I can also imagine that quite a non-negligible proportion of YouTube/Netflix traffic cannot be served from such a cache.


That's correct, and it's why CDNs exist: you don't want to backhaul this volume of traffic.


In 2000, George Gilder wrote "Telecosm: How Infinite Bandwidth Will Revolutionize Our World"

http://www.amazon.com/TELECOSM-Infinite-Bandwidth-Revolution...

One blurb from the book that stuck with me was the notion that in theory, accessing an external hard drive half-way across the world via a pure fiber connection could be quicker than accessing an external hard drive on your desk via copper. True? I don't know, but he thought so at the time.


That's the current rate; the fibre itself is nowhere near its practical limit, and each generation of transmission equipment generally increases the bandwidth the same fibre can carry.


Slightly meta comment, but this is the kind of native advertising I'd like to see in future. A genuinely interesting article that happens to be sponsored by an ISP.


And quite targeted native advertising at that: not being British and having no experience with the ISPs in the article, I actually can't tell who it's advertising for. Possibly Tata, in which case it's even more targeted (advertising to ISPs and big corps?)


Seems pretty fast:

"Talking of which, John looked up the latency of the two Atlantic cables; the shorter journey clocks up a round trip delay (RTD) of 66.5ms, while the longer route takes 66.9ms. So your data is travelling at around 437,295,816 mph. Fast enough for you?"

Too fast. Like breaking the law fast.

Edit: oops, my bad. That's mph, and c is about 186k miles per second, so this is more like 0.65c. Nothing to see here, move along!
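
For anyone who wants to check the arithmetic, a quick sketch in Python (inputs are the mph figure quoted above and the usual value of c in miles per second):

    # How fast is 437,295,816 mph as a fraction of the speed of light?
    mph = 437_295_816
    c_miles_per_sec = 186_282            # speed of light in vacuum
    c_mph = c_miles_per_sec * 3600       # ~670,615,200 mph

    print("signal speed = %.2f c" % (mph / c_mph))   # ~0.65 c, about what you'd expect in glass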


There's another interesting (at least to me) calculation there.

From the article:

6,500km cable length

148 (& 149) amplifiers

66.5 (& 66.9) ms of round trip latency

Wikipedia says the refractive index of typical optical fibres is 1.44

So the light travel time down 6,500km of fibre would be 6,500×10^3 / (3×10^8 / 1.44) = 0.0312s = 31.2ms, and twice that for the round trip = 62.4ms

From that we get that the total latency of 66.5ms is 62.4ms of light travel time plus 4.1ms of (presumably) inline amplifier and terminating equipment latency.

That means each of the 148 amplifiers is doing its thing in something less than 28 microseconds, and possibly way less, since it's easy to assume the terminating equipment at each end is doing a far more time-consuming job than just amplifying the signal and could account for the bulk of that 4.1ms of non-travel-time latency.
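
Here's that breakdown as a quick Python sketch (all inputs are the figures quoted above; the per-amplifier number just divides the residual evenly, so it's an upper bound):

    # Reproduce the latency breakdown from the figures above.
    c = 3e8                     # speed of light in vacuum, m/s
    n = 1.44                    # refractive index of typical single-mode fibre
    length_m = 6_500e3          # cable length, 6,500 km
    rtd_measured = 66.5e-3      # reported round trip delay, seconds
    amplifiers = 148

    one_way = length_m * n / c              # ~31.2 ms
    rtd_light = 2 * one_way                 # ~62.4 ms of pure propagation
    residual = rtd_measured - rtd_light     # ~4.1 ms of equipment latency
    per_amp_us = residual / amplifiers * 1e6

    print("propagation RTT: %.1f ms" % (rtd_light * 1e3))
    print("residual: %.1f ms, <= %.0f microseconds per amplifier" % (residual * 1e3, per_amp_us))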

Anyone know how those amps work?


The wikipedia article is quite good: https://en.wikipedia.org/wiki/Optical_amplifier

Basically, the repeater is another laser, but the emission of new photons is stimulated by low-energy photons exciting the partially-excited atoms in the amplifier.


Yeah, it's basically a laser without the mirrors (or more transparent mirrors)

Or, to put it the other way around, a laser is like a sound amplifier with mic feedback (high gain and a return path cause the feedback).


There are different types of amplifiers; you can look them up on Wikipedia. All of them introduce way less than 28 microseconds of latency. I would attribute those 4,100 microseconds mostly to the landing sites.


EDFA (Erbium-Doped Fiber Amplifier) is often used; the amplification is done entirely in the optical domain.


The amps are not active electronic repeaters (no retiming); they don't "speak" SONET/SDH or Ethernet, they simply amplify the optical signal.


Formula 1 is mentioned in the article as caring a lot about latency. Why is this so? Is it really essential that race information is distributed quickly?


The article just said that Formula 1 "appreciates the need for speed", which I read as more of an offhand joke than a serious statement that F1 has unusual bandwidth or latency requirements.


Probably a lot to do with getting telemetry from the cars back to their servers for real time analysis.


Looks like something of an update to Stephenson's Mother Earth Mother Board (and way overdue). Hoping the Ars Technica article is as good.


Exactly my thought. Stephenson's article is my all-time favourite Wired piece. Here it is, a great longform read: http://www.wired.com/1996/12/ffglass/


Thanks for this reference. Thoroughly enjoying reading this.


Brains in jars?!?


Not the only baity thing about the title, so we've replaced it with the subtitle.



