The Dunant subsea cable (cloud.google.com)
262 points by johannesboyne on Feb 4, 2021 | 252 comments



Old timer story - years ago, Demon Internet (my old employer) was beginning to enter super-growth and wanted to really set itself up for transatlantic connections. So they decided to buy a T1 - 45Mbps link across the Atlantic. Now it turns out that BT had only ever resold fractions of T1s - they had never actually had anyone want a whole one. And as such their sales commissions did not cap out.

So, we rang up, a sales guy picked up the phone and got a million pound pay day, and resigned that evening.

But our customers were happy so that's what counts :-)


As a Future Sound of London fan, I was so proud to be an early Demon dial-up customer when they name-checked their email address on Demon on their ground-breaking ISDN radio transmission on Radio 1. (To be read in a monotone female delivery) "For further information, please access the following code ... F S O L .. ACK ... F S O L ... DEMON ... CO ... UK"

Good times - they were a wonderful company, thank you


For further information on any aspect of this broadcast contact PO BOX 1871. London W10 5ZL. Copyright has been retained in the sound and visual.


I was wrong - initial section I meant starts "For internet connection, please access the following code ...."

https://youtu.be/_8SBdkru4IY?t=128 At 2:26


T1 maximum speed is 1.544 Mbps; T3 is about 45 Mbps.

Reference: https://en.wikipedia.org/wiki/T-carrier


My memory is very hazy - too many beers in North London pubs far too many years ago to remember clearly :-0


And it was probably an E3 (UK/Europe uses E1/E3, US/JP uses T1/T3)


E3 is ~34Mbps, so it probably wasn't. It's true that the E1/E2/E3 hierarchy is used outside the US, but links to the US can be either depending on carrier preference.

The T1/T2/T3 and E1/E2/E3 hierarchies join at the STM-1 level: An STM-1 can be subdivided as 4 x E3s or 3 x T3s.

This means that on an EU<->US SDH link, an STM-1 can be demuxed into either E3s or T3s, so you can have both standards on the same fiber.
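For reference, a tiny sketch of the nominal line rates involved (these are the standard PDH/SDH figures; the leftover STM-1 capacity goes to overhead and stuffing):

    # Nominal line rates (Mbps) for the PDH/SDH levels mentioned above.
    rates = {"T1": 1.544, "T3": 44.736, "E1": 2.048, "E3": 34.368, "STM-1": 155.52}

    # An STM-1 payload can carry 3 x T3 or 4 x E3 tributaries.
    print(3 * rates["T3"])   # ~134.2 Mbps of T3 payload
    print(4 * rates["E3"])   # ~137.5 Mbps of E3 payload
    print(rates["STM-1"])    # 155.52 Mbps line rate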


In internetworking, a Tier 1 carrier is a carrier that is so well interconnected that other parties pay to receive traffic from it.


T1 and Tier 1 are not the same thing.


That’s true. However, at least in my memory, back in the day people would call the 10-megabit directly connected university connections T1 lines, because of the Tier 1 thing.


On a slightly later time scale, I still remember the time I first saw an OC192 linecard in a router in person, at a major IX point, and how incredibly impressed I was. This was in the era when a transcontinental or submarine transatlantic OC192/STM64 circuit cost an astonishingly huge amount of money every month.


I remember seeing one of those in a UUNET data center in north Dallas in the 90s. I was amazed at the tech and also amazed at how ordinary it looked.


Great story!


I know a lot of the discussion is focusing on bandwidth. But are we making any progress on long-distance subsea cables using hollow-core fibre, getting close to the maximum speed of light for the lowest theoretical latency possible? Imagine cutting latency from the US West Coast to Hong Kong by 50ms!

Light only travels at around 2/3 of the speed of light within fibre.

The previous decades have been about bandwidth. It's time we shifted our focus to latency. 5G is already part of that, and 6G is pushing it further as a standard feature. I wish other parts of the network would start thinking about latency too.

Maybe not just the network, but everything, from our input devices to displays. We need to enter the new age of minimal-latency computing.


You gotta bore for that sweet latency win. A chord tunnel between San Francisco and Hong Kong would save roughly 850 miles (a ~12% improvement right there), and if you drill it straight enough, you won't even need a cable.


Please don't give the HFTs ideas, they'll probably do it and cause a half dozen tsunamis in the process.


If you don't like HFTs, this is an idea you'd probably like to give them. Nobody has ever drilled through the Mohorovičić discontinuity. It is unclear whether or not it is possible to do so.

The most-likely outcome is a few happy geologists/geophysicists and a number of very-sad HFT underwriters.

https://en.wikipedia.org/wiki/Mohorovi%C4%8Di%C4%87_disconti...


It says it's 10 to 20 km below the ocean floor. Have we as humans been digging into the ocean floor that deep, or even at all? That sounds like sci-fi to me; I'd love to learn more if it's actually feasible.


USSR, land, 12,262 metres: https://en.wikipedia.org/wiki/Kola_Superdeep_Borehole

US, sea, 183 m below the sea floor in 3,600 m of water: https://en.wikipedia.org/wiki/Project_Mohole


Apparently the deepest oil well at sea was 10km deep, in the Gulf of Mexico, drilled by the Deepwater Horizon rig



You know, if HFTs will be the agents of future MegaInfrastructure spend, so be it.

I long for the days when the US loved building infrastructure.


For anyone interested in the topic, this is a great movie (fiction, but very close to reality):

https://en.wikipedia.org/wiki/The_Hummingbird_Project


...and create a supervolcano.


We already have one of those on land in the US.


We have two. I suspect that you are thinking of Yellowstone, but there is also the Long Valley Caldera in Eastern California. The Mammoth Mountain area.


Heh, baby steps maybe? The existing cables aren't even short paths along great circles. The Oregon-Japan cable Google owns is 12,000 km along a 7,500-km path.


Need to have lava shields for that.


Who cares? Would be totally worth it because of unlimited geothermal energy!


The photons go through fast enough that they won't get very hot.


Remember: You want your photons crispy around the edges, not charred.


I take my photons medium-rare.


Not to make light of these puns, but they aren't very coherent...


On reflection I think we could all use a bit of light relief at the moment!


Medium-rare with a light salad


I only use recycled photons, so my light is green.


How deep would this bore tunnel be at its centre-most point if it were perfectly straight? Would it go into the mantle?


d = r(1 - cos(s/(2r))) = 3958 × (1 - cos(6906/7916)) ≈ 1413 mi; yes, well into the lower mantle, with only about 380 mi to go to the outer core (the core-mantle boundary sits roughly 1800 mi down). What I would love the internetz to explain is how to justify the h8 for HFT, yet the luv for Musk, since he is the one in the driver's seat at the moment for this stuff with Starlink and the Boring Company.
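A quick numeric check of the geometry, assuming a spherical Earth, r = 3958 mi, and the ~6906-mile great-circle distance used above:

    import math

    R = 3958.0      # Earth radius, miles
    s = 6906.0      # great-circle (surface) distance SF <-> HK, miles
    theta = s / R   # central angle subtended, radians

    chord = 2 * R * math.sin(theta / 2)     # straight-line tunnel length
    depth = R * (1 - math.cos(theta / 2))   # depth of tunnel midpoint below the surface

    print(f"chord length: {chord:.0f} mi, savings vs surface: {s - chord:.0f} mi")
    print(f"midpoint depth: {depth:.0f} mi")   # ~1400 mi, i.e. deep into the lower mantle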


One of the important components of online hate is that it requires zero justification or logical consistency.

It just needs to feel gratifying.


Because HFT helps the rich AF get richer but Starlink benefits normal people?


Satellite internet already exists; Starlink's defining feature is lower latency, both from being in a lower orbit and from its inter-satellite links. It does benefit consumers by introducing another satellite internet competitor, but how many "normal" people want to rely on satellite internet or, if they do, care about a few extra hundreds of ms of latency? (Inter)national wireless companies have a tendency to consolidate and lobby out smaller companies and municipalities, which then have less incentive to build out fiber, and landline companies, if any remain, have further justification to cut cords.

Starlink is now the leading candidate for global low-latency connections for HFT, since its links travel through near-vacuum in low orbit. The case for HFT benefiting non-professional trader Mrs. Mainstreet is that she no longer has to eat the larger spread offered by the big bank market maker every pay cycle when the 401K contribution hits, with HFTraders providing liquidity. The opportunity for smaller traders to make the market is no longer there, but the odds that they would have had a chance to begin with have been stacked against them for a long time, with the cost of the fastest connection going from marginal to now near insignificant.


We used to manage a remote branch over geostationary satellite; it was an exercise in pain. We used to check the local weather forecast to see if it was raining before doing any work on the servers. GEO internet is awful. I think you are underestimating how much of a usability difference there is between LEO and GEO latencies and bandwidth (because GEO bandwidth was awful too).


Current satellite internet is really bad. ~600ms of latency is very noticeable even just loading webpages, and the throughput isn't great either.

Also multiplayer gaming is rather popular, and that's just not possible with that much latency.

VOIP is a pretty terrible experience with that much latency too.


" care about a few extra 100s of ms of latency?"

Well, as someone who grew up in the modem era(90's) and was trying to play online fps games. I cared quite a bit about latency. Normal people also like things to be quick you know :)

Based on that experience in the 90s to this day i want my internet connectivity to be as fast as possible and i'm willing to pay.

Low latency enables video/audio chat amongst other things and just a better experience.

A quick google search gives 600 ms latency for satellites(not Starlink) thats quite a alot. Also bandwidth is a issue with existing providers i think.


The ping on satellite internet is usually around 640 ms as it's a ~45,000-mile round trip; worse, the bandwidth is terrible. That kind of latency breaks a lot of assumptions in the modern web. Dropping to ~20ms and dramatically upping the bandwidth is a huge win for rural internet users.

PS: I am on the waiting list for starlink.
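A rough back-of-envelope comparison of the two, assuming a GEO altitude of ~35,786 km, a Starlink-like LEO altitude of ~550 km, straight up-and-down paths, and no processing or routing overhead (real-world figures are higher):

    C = 299_792.458  # km/s, speed of light in vacuum

    def ping_ms(altitude_km):
        # A ping traverses ground -> sat -> ground for the request and again
        # for the reply: four legs at the given altitude (idealized best case).
        return 4 * altitude_km / C * 1000

    print(f"GEO (~35,786 km): {ping_ms(35_786):.0f} ms minimum RTT")  # ~477 ms
    print(f"LEO (~550 km):    {ping_ms(550):.1f} ms minimum RTT")     # ~7 ms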


And tell me how one would service any problems that arise either by tectonic movements or breaching an alien/breakaway civilization hollow earth chamber?


Repair it like any other tunnel that requires maintenance?


> if you drill it straight enough, you won't even need a cable.

Well, yes and no. I recall they wanted to pursue hollow cables in the early days of optical cabling, but it turned out solid fiber was the answer.

(sorry, can't find a good reference)

So FTTC (Fiber Through The Core) is what you want.


Couldn't you start experiments using the Alameda-Weehawken tunnel?


Alameda enterprises excited by rumours of new chord tunnel Thursday, low latency burrito delivery futures up 5%.


Currently posting from Alameda. If you cause a local surge in demand that means I have to outbid a Manhattanite for a good burrito, I will cut someone.


BagelRail was dual use before the fiasco


And they are heated along the way!


We have the technology!


More feasible would be transmitting neutrinos or some other signal that would not be blocked by the Earth.


> More feasible

If they don't interact with the thousands of miles of earth between the source and the destination, they probably also won't interact with the receiver! :p Imagine the retransmission rates!

https://en.wikipedia.org/wiki/Neutrino_detector


The MINERvA experiment at Fermilab already demonstrated communication with neutrinos, admittedly over short distance: "The link achieved a decoded data rate of 0.1 bits/sec with a bit error rate of 1% over a distance of 1.035 km, including 240 m of earth."

https://arxiv.org/abs/1203.2847

Anyone from an HFT firm who wants to look into a partnership researching a neutrino link to the CME data center feel free to reach out :)


There has been a lot of progress in the past 20 years on antineutrino detection. Antineutrinos are produced by fission and so there's been a fair bit of interest in detecting them to detect covert nuclear tests as well as potentially a new modality of detecting nuclear submarines.

I think it could become possible before too long to use this to transmit data. It would probably be a ~billion dollar project, but the HFT arbitrage market is essentially winner-take-all, and may be large enough to support this size investment.


And you'd also have to ignore all the insane amounts of noise coming from regular neutrinos whizzing about in the universe.


If you've built a reliable detector, you've already built something that can intercept them. You just need to make a shroud around your detector and a tube facing your transmitter out of the same material.


There are ~65,000,000,000 neutrinos from the sun passing through each square centimeter of your hands every second as you read this. There are no materials on Earth that can reliably stop any given neutrino. For that, one needs densities greater than those generally found in stellar cores.

Neutrino detectors work by maximizing dumb luck through being both very large and very, very clean (low radioactivity). The transmitter-detector systems work by sending oodles of very energetic neutrinos at a well-defined time and looking for a rare coincident flash in the detector.


Any detector useful for communication is also an interceptor. The way we detect neutrinos now is not useful for communication.


If you're sending neutrinos at a known energy from a known location and in a narrow time-coincidence window, you can hammer most backgrounds way down.

The low detection rate isn't so terrible either -- one only needs the bits that are detected to be tradably-correct almost-all the time.

The hard part is arranging to make enough money to fund the accelerator and detector.


Dear god the packet loss.


My bet is on tech like Starlink with inter-satellite communication. Starlink should have lower latency with space lasers compared to fiber.


OP is talking about photonic bandgap fiber I think, or perhaps another kind of photonic-crystal fiber. At any rate, whereas regular fiber guides light via differences in refractive index and propagates it at only about 70% of c, photonic bandgap fiber can reach something like 99.7% of c, which is close enough to c in vacuum as to essentially eliminate the difference vs a free-space EM link (particularly for space-based ones, which face an extra minimum RTT distance penalty). Last I checked, though, 3-4 years ago they needed fairly frequent repeaters, were harder to mass produce, etc.

I don't know of any being deployed long distance, though in principle they'd be really valuable for intercontinental backbones. Starlink fills a huge gap in existing infra, and there are places that won't see any sort of fiber, let alone fancy microstructured fiber, for the foreseeable future (or ever, obviously in the case of ships/aircraft). But the bandwidth isn't great. Each current sat does I think 20 Gbps, and though no doubt that'll increase over time, that's literally orders of magnitude less than this single cable alone. Having the sats support direct ground optical links for backbone usage might be interesting someday, but weather attenuation will never stop being a problem with that. Starlink is filling in the gaps for fiber infrastructure, not replacing it. They're complementary.

So I agree it would be great to see more advanced fiber deployed over long distances and start to shrink latency for everyone, and it would be interesting to know what technical obstacles remain, if any (maybe a lot remain?). A ~40% speed boost while still having massive bandwidth isn't nothing.
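To put rough numbers on the propagation-speed difference: a small sketch assuming a ~6,600 km transatlantic route, ~68% of c in conventional solid-core fiber, and the ~99.7% of c quoted for hollow-core fiber (illustrative figures only, not Dunant's actual route length):

    C_KM_PER_MS = 299.792  # speed of light in vacuum, km per millisecond

    path_km = 6600  # assumed transatlantic route length
    for name, fraction in [("solid-core fiber", 0.68), ("hollow-core fiber", 0.997)]:
        one_way_ms = path_km / (fraction * C_KM_PER_MS)
        print(f"{name}: {one_way_ms:.1f} ms one way, {2 * one_way_ms:.1f} ms RTT")

That works out to roughly 10 ms saved each way under these assumptions.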


Now...

How do you splice a hollow optic fibre?


Starlink satellites are in orbit 550km high. So any journey would add at least 1100km. Moreover not sure that a single satellite would be able to hit another one across transpacific distances and may need to go through multiple hops to get there.

Each hop will add latency since signal needs regeneration. So it’s not clear to me a swarm of satellites is a real winner from a latency POV. Furthermore, given costs to put the constellation up there, it’s extremely expensive on a $/bit basis and not sure how it could compete against fiber.

The value of Starlink is providing service in areas lacking existing broadband infrastructure where the cost to provide service exceeds the cost of Starlink.


>> Starlink satellites are in orbit 550km high. So any journey would add at least 1100km

Might want to check with Pythagoras on that one..


Meh, he said at least. There could be cases where you beam up then down nearly vertically (same city).


So, "at most" then, right?

The further you are from the other end, the less additional distance the satellite adds on.


But the correct statement is "no more than" not "at least".

Consider a right-angled triangle with base length d and height 550, corresponding to transmission from a base-station to a satellite. The hypotenuse has length sqrt(d^2 + 550^2), so the difference in length between the hypotenuse and base is sqrt(d^2 + 550^2) - d.

This has a maximum of 550 when d=0 (i.e., shooting straight up), and decreases as d increases: https://www.wolframalpha.com/input/?i=plot+sqrt%28d%5E2+%2B+...

Alternatively, consider the triangle inequality: the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This directly implies that the difference in length between the hypotenuse and base is less than or equal the height [base + height >= hypotenuse implies height >= hypotenuse - base].
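A quick numeric illustration of that (hypothetical ground distances, 550 km altitude):

    import math

    ALT = 550.0  # satellite altitude in km

    def extra_km(d):
        # Extra one-way path length (slant range minus ground distance) for a
        # ground station whose horizontal distance to the sub-satellite point is d.
        return math.hypot(d, ALT) - d

    for d in [0, 100, 500, 1000, 5000]:
        print(f"d = {d:>5} km -> extra = {extra_km(d):6.1f} km")
    # The extra distance is 550 km at d = 0 and shrinks toward 0 as d grows,
    # so the per-hop penalty is "at most" ~550 km, not "at least".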


Er, no, "the difference in length between the hypotenuse and base is sqrt(d^2 + 550^2) - d".

The hypotenuse is cos(angle)*base.

If you think about it at a minute if a sat is 500 miles up directly overhead that's the closest it ever will be, as it flies off the hypotenuse gets longer, not shorter.

So ideally you bounce off a sat overhead, (distance of 1100), any single hop will be longer, and to get across an ocean you'll likely need more than one hop.

Basically the sin(beam path) will never be less than 550 and the length of the beam will never be less than 550.


~~D'oh - yes, that formula is true only if the triangle is right-angled, which is true for only a single base length.~~

Edit: Actually, this is always true: we are considering a right-angled triangle where the base is the horizontal distance from the ground station to the point under the satellite, the vertical side is the 550 km between the point under the satellite and the satellite, and the hypotenuse is the line joining the satellite and ground station.

> if a sat is 500 miles up directly overhead that's the closest it ever will be, as it flies off the hypotenuse gets longer

Yes: as the horizontal distance d increases, then the length of the hypotenuse (sqrt(d^2 + 550^2)) increases.

However, the difference between this and the horizontal distance (sqrt(d^2 + 550^2) - d) decreases.

-----------------------------------------------

If the angle from the horizontal to the line between the satellite and base-station is theta, then:

sin(theta) = 550/hypotenuse => hypotenuse = 550/sin(theta)

tan(theta) = 550/base-length => base-length = 550/tan(theta)

difference in length = 550/sin(theta) - 550/tan(theta)

[which simplifies to 550 tan(theta/2)]

We are interested in angles between 0 degrees (horizontal - corresponding to the limiting case of infinite horizontal distance between the satellite and base station) and 90 degrees or pi/2 radians (straight up): https://www.wolframalpha.com/input/?i=plot+550%2Fsin%28x%29+...

This is always between 0 and 550. The triangle inequality holds: for a single hop from base-station to satellite, the increase in length is never more than 550.

But as you point out, there may also be multiple hops.

> So ideally you bounce off a sat overhead, (distance of 1100),

This is the shortest total ground-satellite-ground distance, but as you cover 0 horizontal distance it is the worst case: the difference between the ground-satellite-ground distance and the length of the direct ground-ground line is maximised.


Are all base stations directly underneath a satellite?

I think this is an over-simplification if we are chasing pedantics; there are cases where it will be more and others where it will be less, so the slightly more precise wording might actually be "about 1100km."

To the larger picture: it seems we often lose that order of length on the ground due to existing network topologies and geographical limitations.


Yes, this is an oversimplification: the original statement seemed to be based on getting a fact about trigonometry backwards, and I was just trying to resolve the underlying confusion.


1100km / c is 3.7ms. In free space, light is about 50% faster than in fiber. So long as the distance you are covering is more than 2200km, you'll overcome that. Of course, there's also the consideration that there can be a lot of hops in terrestrial links and the route is often very far from a straight-line path.
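A small sanity check of that ~2200 km breakeven, assuming a flat 1100 km up/down penalty at vacuum speed versus a perfectly straight fiber path at ~2/3 c (real routes add hops and detours on both sides):

    C = 299_792.458          # km/s, speed of light in vacuum
    FIBER_FRACTION = 2 / 3   # speed of light in fiber relative to vacuum

    def fiber_ms(d_km):
        return d_km / (C * FIBER_FRACTION) * 1000

    def sat_ms(d_km, penalty_km=1100):
        return (d_km + penalty_km) / C * 1000

    for d in (1000, 2200, 5000, 10000):
        print(f"{d:>6} km: fiber {fiber_ms(d):5.1f} ms vs satellite {sat_ms(d):5.1f} ms")
    # At ~2200 km the two are roughly equal; beyond that, the vacuum path wins.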


Are you sure about the necessary regeneration? Let me hand wave from the dark skies here for a moment:

1.) Think of the precision mirrors in the so often mentioned EUV-lithography equipment from ASML for latest generation chips from TSMC.

2.) Now imagine something like that on board of a satellite, maybe smaller.

3.) Have 2.) moveable with sufficient precision to bounce the rays from satellite to satellite in realtime, without having to regenerate them in any way for about 4 to 5 hops.

4.) problem solved by purely 'optical' mesh while signal is 'in orbit'.

kthxbaiiii!


Those 'precision EUV mirrors' achieve a reflectance of about 70%, i.e. they absorb ~30% of the EUV light that reaches them. :)

More seriously, those mirrors are special because they use bragg reflectors to handle 13.5nm light. They're not special for their precision, nor their reflectance.

Setting that aside, the major problem with your proposal is that lasers still have significant beam spreading. So the mirrors would need to be large enough to encompass a spread beam at every step, which adds weight and volume for both the mirror and the tracking mechanism. The tracking mechanism is particularly problematic because moving mass on a satellite affects the attitude, so you either need precision counterweights to null it out, or large reaction wheels.

Using MEMS mirrors instead would solve some of the mass issues, but MEMS mirrors have very limited tracking (typically limited to a single axis) which would probably render them impractical.

Far, far easier to just send and receive the signal at every step.


Hrm. Taken from https://www.asml.com/en/technology/lithography-principles/le... :

> Flatness is crucial. The mirrors are polished to a smoothness of less than one atom’s thickness. To put that in perspective, if the mirrors were the size of Germany, the tallest ‘mountain’ would be just 1 mm high.

edit: What I meant to say was rather something with that precision reflecting whichever wavelengths are used for laser communications. Which would be infrared, I guess? Or are we talking Maser?


While an interesting idea, I think you’ve greatly understated the problem. First, lasers and coherent light beams diverge, light cannot stay perfectly collimated and it’s not really possible to collimate well over such long distances. So the receiver, >10,000km away, will “see” only a small cross-section of beam. The efficiency of this is defined by something called the overlap integral between the areas of the beam and the detector. Think of it like the amount of light from a flashlight that gets through a pinhole in a sheet of paper. This reduces the available signal power significantly. If you introduce mirrors you have the mirror loss plus the vignetting losses for each bounce. This is likely much worse.


But the receiver won't be > 10,000km away in the configuration I mentioned. 4 to 5 'hops', remember?

edit: arrgh, forget it... one beam, reflected multiple times until 'end of the line', got it...(sigh)


So slightly concave mirrors?


Would have to be adaptive as in shapeshifting, since the distances between the involved satellites would differ, depending on the path?


Won't they be at low enough altitude that they'll need more hops than fiber to get around the globe, where each hop adds at least some delay?


Not sure what you mean by "hops" here? The current beta sats mostly act as "bent pipes", where they relay directly between user terminals and ground stations, which then go out to the regular net from there. But the final deployment sats are intended to have free-space optical links between satellites (these are currently deployed and being tested on the most recent polar-orbit ones), so a connection can go entirely through the mesh in space until it reaches the nearest physical ground station (probably with some weighting for congestion and priority, of course). The orbital RTT penalty will only be paid once, and with tens of thousands of sats the optical route will actually be much more direct for many people when crossing oceans than going through whatever undersea fiber links there are. Compared to regular fiber, final Starlink will definitely win on latency over sufficient distances.

But Starlink will never match the bandwidth and reliability that fiber can do, nor is it meant to. So it's not a replacement, just another awesome option.


Also, just to run the math on an example of "actually be much more direct for many people when crossing oceans": say someone is somewhere on the southern coast of Alaska, be it more towards King Cove or back towards Newhalen, and wants to talk to someone in Sapporo, Japan. As the bird flies that's something like a 2500-3000 mile distance. But in practice there is no undersea cable directly linking Alaska and Asia (unless that's changed in the last year or two). Instead a connection probably has to go to Anchorage, then to Seattle, then probably to Tokyo, and then out to the rest of Japan from there. This could easily turn a 2500 mile path into a 7300 mile path. Starlink satellites in the current plan AFAIK are going to be heavily in shells 214 to 350 miles high (including the Ku/Ka-band current ones and future V-band ones). At a 350 mi orbit, so maybe a 700-1000 mile up/down penalty, the total distance could still be half the cable distance in this example, even before propagation-speed advantages.
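Running rough one-way propagation numbers on that example (all distances here are the assumed figures from the paragraph above, fiber at ~2/3 c, hop and processing delays ignored):

    C_MI_PER_MS = 186.282        # speed of light in vacuum, miles per millisecond

    fiber_route_mi = 7300        # assumed Anchorage -> Seattle -> Tokyo -> Sapporo path
    sat_route_mi = 2750 + 850    # assumed great-circle distance plus up/down/hop penalty

    fiber_ms = fiber_route_mi / (C_MI_PER_MS * 2 / 3)
    sat_ms = sat_route_mi / C_MI_PER_MS

    print(f"fiber route:     ~{fiber_ms:.0f} ms one way")   # ~59 ms
    print(f"satellite route: ~{sat_ms:.0f} ms one way")     # ~19 ms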


When you're traveling at the full speed of light in vacuum, compared to 2/3rds in fiber, even a few extra hops can leave you with significantly lower latency.


Right, if they are using standard OTN framing, the hop latency should be ~3 microseconds (which is <1 km of light propagation)


I agree, but we can start on our local machines first. Most of the latency of modern computers isn't related to the network.


Yes. Keyboard, Mousepad, Display, Sound, Graphics.

I mean, input lag [1] is easily 50ms. But some of them require software changes, and anything software is expensive. The cost of this new cable is only $300M. Hardware innovation is getting faster and cheaper than software.

[1] https://danluu.com/input-lag/


Latency reduction like that would mostly be relevant for traders.


And games.


You could load a modern Javascript-powered website in less than a minute with that.


Don't worry, as hardware advances roll out, software bloat will always expand to fill the vacuum.


Especially the linked blog post, as it doesn't render unless you allow JavaScript from gweb-cloudblog-publish.appspot.com (per uMatrix).

Webdevs: is there a reason why a page would be designed so that JS being on is mandatory? Especially for something as prosaic as a couple of paragraphs of text.


Laziness, baroque toolchains, and (most probably): Shipping your org chart.


Ever heard of React, Vue or Angular?

If you meant mandatory in terms of the actual medium requiring it, I can only hint to interactive applications, aside from that I don't think it would actually be mandatory.


All of those can be rendered server-side if needs be, and in my experience, can actually lead to a superior browsing experience compared to plain server-side served HTML. But it has to be done right.

Gatsby + Netlify with a CMS-as-a-service like Contentful or Prismic will lead you to a good result. We made e.g. https://fox-it.com/ using that; its back-end is Wordpress but it's drained empty to rebuild the website. Note how it works without JS: the dropdowns don't work but they fall back to full-page navigation. Note how with JS enabled, all the content shows up instantly. This is how it's supposed to be done.


Absolutely they can, yes. I wasn't saying they couldn't; I just answered the question. And those frameworks really introduced the idea of loading JS in order to load content to the broader masses. Things have evolved, sure, and it can be done right, but nonetheless, those frameworks are a reason JavaScript gets forced on the user.


I got a blank page when I opened the web site. So as usual I looked at HN comments to see what it was about.

Here's an idea: add some HN logic to automatically move a comment that begins with "TL;DR" to the top of the thread.


That will just be abused


Perhaps not - if it required a high karma threshold


Now it’s really going to be abused…


> Webdevs: is there a reason why a page would be designed so that JS being on is mandatory?

I think in the case of Google, it's because they've been told they are the best developers, the top 1% of SWE's, they went through rigorous interviews, are paid a small fortune twice as much as they would get at a regular coding job, etc.

So it's dick shaking. They need to show to the world that they're better than plain HTML websites, that they have a massive schlong, that they out-chadded the vast majority of software devs. Plain html? Psht, we can invent our own language, gonna put those six years of uni to work! Wordpress? This is beneath us! It has to be a client-side rendered JS-pulled-through-GWT behemoth because on my system it's... wait it's slower, but nevermind that it's technologically ALPHA.

edit: actually looked at the source, looks like a Polymer / Web Components website. I've had to work in that once, it was dreadful compared to libraries used by real people.


Polymer is used by real people... even more-so if you include the spiritual successor to Polymer, LitElement. That's not to say either are incredibly popular, but still, that seems intentionally demeaning.

[0] https://www.npmjs.com/package/@polymer/polymer

[1] https://www.npmjs.com/package/lit-element


My browser loaded that page for what felt like forever, until I remembered to _allow_ JavaScript on it. Why on earth do they render everything except the content?


Yeah, it's one thing to build a page that won't render without JavaScript, but making the only part that does render be a never-ending spinner is just rude.


This is what I encounter more often than not lately.

Is this due to more and more content simply generated by javascript frameworks?


Yes, and because developers do not have time for the very few people who decided to disable JavaScript and not enable it when necessary.


But can I download a car with that speed?


if this were reddit, I'd be throwing gold at you.


That would be very kind but I'm quite glad it isn't.


I know there are many of these cables that have been around for years, but I am curious how they are physically secured. Especially where they transition from ocean to land? Is there some long underground/undersea tunnel or conduit that the cable is routed through to the basement of some building? Or if you are walking along the beach somewhere, is there just some cable running out of the ocean along the beach to some building near the shore?

I also wonder what kind of permissions and licenses you need to seek to run a cable across the ocean floor?


I'll take any chance I can get to link to this wonderfully sprawling 1996 article from novelist Neal Stephenson about the laying and landing of transoceanic cables: https://www.wired.com/1996/12/ffglass/

Yes it's long, but it's so worth it!


From back when Wired was a really great magazine. I threw out all my 1990s Wired magazines. What a shame.


A number of the first trans-atlantic cables landed in the tiny village of Heart's Content, Newfoundland. I drove through there on a road trip about a decade ago & stopped at the excellent museum in the old cable station, and was excited to make my way across the highway to the beach to see this exact thing. It turns out that the old cables are just.... left to rust on the beach. It's really amazing that these cables, originally a technological wonder & a bridge between entire continents, are just left to the elements once their useful life is over.

https://goo.gl/maps/Ku9FtfbMupApZthJA


Pulling such a cable from the Atlantic would be hard, expensive, and potentially dangerous to other cables.

And there are likely laws about not obstructing the shoreline.


You’re spot on. Most of the cables are either laid on the sea floor and up to a beach, or buried underneath beaches. The following link has some good background on what goes into laying cables and how they terminate.

https://www.cnn.com/2019/07/25/asia/internet-undersea-cables...

As for physical security, there isn’t much on the sea floor. There are various instances of nation states tapping cables due to the ease of access when it comes to actually “listening” to the data. Obviously the issue there is getting to the undersea cable.

https://www.theatlantic.com/international/archive/2013/07/th...


> how are they physically secured

Multiple nations have specialised subs to tap into them. I doubt you'd find anyone willing to make such a guarantee. They are impossible to secure in any way and you need to rely on security assurances at different layers instead.


This was done in the 70s/80s, but I doubt it's worth the effort now. It only worked because the Soviets assumed the cables were inaccessible. End-to-end encryption is a thing even for the general public now.

Now we just compromise the servers/routers. https://gizmodo.com/the-nsa-actually-intercepted-packages-to...


> I doubt it's worth the effort now

It's very much still happening. Metadata is enough for intel purposes, storage is ridiculously cheap and post-quantum breaks of key exchange is forever 20 years away like fusion.

https://www.nytimes.com/2015/10/26/world/europe/russian-pres...

https://www.theatlantic.com/international/archive/2013/07/th...

https://www.zdnet.com/article/spy-agency-taps-into-undersea-...


The first link describes attacking cables - severing internet access for entire continents.

The second is from 2013; Google and others encrypted those comms shortly afterwards after Snowden revealed those taps. https://arstechnica.com/information-technology/2013/11/googl...

> "The traffic shown in the slides is now all encrypted and the work the NSA/GCHQ staff did on understanding it, ruined."

The third link is twenty years old, and no longer very doable for the same reasons as above. Anyone still sending unencrypted stuff along these cables deserves to get stung.


You're not gonna get any useful metadata out of it since the entire pipe is encrypted/decrypted at each end. All you'd see from tapping it at the middle is an unbelievably vast stream of random ones and zeros, the encrypted version of all commingled traffic.



As for physical security, some are only protected by simple manhole covers before reaching the actual technical building, here's an example in Marseille, France where SeaMeWe-4 (and others) lands: 43.261938, 5.37276 [0] but the landing points aren't exactly secret [1]. Here are some photos [2][3] and a video [4] of Dunant's arrival in France.

[0] https://www.geoportail.gouv.fr/carte?c=5.372494831681247,43....

[1] https://www.sigcables.com/index.php/cableliste/fiche_cable/5...

[2] https://twitter.com/jlvuillemin/status/1238414261774401537

[3] https://twitter.com/jlvuillemin/status/1238433769935319042

[4] https://twitter.com/jlvuillemin/status/1238479381145751553


>> Or if you are walking along the beach somewhere is there just some cable running out of the ocean along the beach to some building near the shore?

It is generally buried either under the sand or inside concrete. But yes, there are places where you can get very close to these things if you know what you are looking at.

https://en.wikipedia.org/wiki/Cable_landing_point

Here is a pic of the landing for the US base in Cuba.

https://www.dvidshub.net/news/186633/uct-1-unit-choice-gtmo-...


I’ve wondered the same, you can image search and find some pictures. They just sort of come out of the water and go up the beach (I guess what else would happen? Hah)

https://media.wired.com/photos/59546c71be605811a2fdcfd0/191:...



What's more amazing than laying a transatlantic cable today? Doing it in 1858! https://en.wikipedia.org/wiki/Transatlantic_telegraph_cable


There's a lot of documented cold war era espionage stuff about tapping undersea cables. Eg https://nationalinterest.org/blog/buzz/intelligence-coup-how...



I'd imagine that the permissions/licensing/whatever only applies to the ends of the cables, when you're out of international waters.


Wow, talk about a barrier to entry. Google already has Curie from North America to South America and Equiano from Portugal to South Africa. They're also working on Hopper from North America to UK and Spain:

https://cloud.google.com/blog/products/infrastructure/announ...

I presume that the other trillion-dollar companies are getting in on the action too.


Welcome to the new form of Colonialism!!


Because Google owns the sea and no one else can lay cable...


This is super-cool! I found "enough to transmit the entire digitized Library of Congress three times every second" to be a really weird comparison though - I'm used to text being really small and compressible, and I doubt many people have an intuitive grasp of how much One Scanned Library of Congress is. How many hour-long Netflix/YouTube episodes per second, on the other hand...


As of March 2019, Netflix is reported [0] to have 60 petabytes of data. Google tells me that 60 petabytes / 250 terabits per second comes out to 32 minutes. I’m not sure that translates to the layperson who might not appreciate what a petabyte is, but in the space of a single show you could theoretically transfer the contents of Netflix’s entire library over this pipe. So basically almost enough bandwidth to transfer the average user’s porn stash in a single day!

[0]: https://zeenea.com/metacat-netflix-makes-their-big-data-acce...
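The arithmetic behind that 32-minute figure, assuming decimal petabytes, the full 250 Tbps, and zero protocol overhead:

    library_bits = 60e15 * 8   # 60 petabytes expressed in bits
    link_bps = 250e12          # 250 terabits per second

    seconds = library_bits / link_bps
    print(f"{seconds:.0f} s = {seconds / 60:.0f} minutes")   # ~1920 s, ~32 minutes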


I agree, but somehow a LoC became a well-established unit of measuring data transfer[1]. So maybe less-technical readers are used to hearing that comparison.

1 - https://blogs.loc.gov/thesignal/2012/04/a-library-of-congres...


This can join the others - Olympic swimming pool, football fields, London bus, Empire State Building.


War and Peace is a few MB. A “YouTuber” gossiping in HD is more bits. Luckily video services rely heavily on on-net caches.


How long to transfer all of Youtube? ;)


The article mentions that the number of fibre pairs in this cable is 12, and that new technology was used to increase that number.

What is the limit on how many fibres can go in a cable? Should we expect future cables to have 50 fibres, or 100, or 1000, or more?


The problem is powering the repeaters. More fibers are not going to help if you cannot power them. They mention in the article the improvements in repeater design that cut down power draw to allow more fibers to be used.


I think the limitation is based on repeater and laser-pump equipment to repeat the signal along the length of the cable run.

I suspect that the repeaters and associated power equipment along the line is pretty big stuff. So the fact that this cable is able to "share" that equipment across the 12 fibers is a breakthrough in technology.


I know nothing of this type of engineering. How do you even start a project like this? Map the bottom of the ocean, figure out all the danger zones? What is the cost of doing something like this?


"What is the cost of doing something like this?"

Their Oregon to Japan cable, 9000km and laid in 2016, cost $300M.

https://www.computerworld.com/article/2939316/googles-60tbps...


That is at least one order of magnitude cheaper than I would have guessed. Mind-boggling that it's cheaper to do that than to buy, like, the 4th-best meal delivery app in Canada or whatever.


It's probably cheaper than people would expect because the long run across the deep ocean is a lot more straightforward than most people would expect.

1. For the deep ocean parts of the route, cables and associated equipment (such as repeaters) are simply spooled out from the back of the cable laying ship, to settle on the ocean floor.

2. For shallow waters, the cable is buried. This is done by dragging a plow along the bottom which cuts a furrow and puts the cable into it. The plow has an altitude control and a camera so that an operator on the ship can control it, and a magnetometer to check if the cable is properly buried behind it.

3. For areas where burying isn't practical but they anticipate ships will anchor, they use armored cable.

For #1, the costs are going to be the cost to operate the ship while it slowly spools out the cable and the cost of the cable. For #3, same thing, but with more expensive cable. For #2 I'd expect it is similar, except the ship goes a lot slower (about 0.5 knots when using the plow, compared to about 5 knots when laying surface cable).

Finally, there is this.

#4. At the shores, they need to avoid damaging reefs and other habitats, not wreck the beach, and things like that. The cable needs to be in conduits that are buried or anchored. And building those conduits needs to be done in a way that does not mess up the environment.

So what you've got then for a long cable project is two ends that present underwater construction projects, the shallow waters near the two ends where you have to bury the cable, and then the long deep ocean stretch where you are just spooling the cable out.

This suggests the costs are going to have a component that doesn't really depend on how long the thing is (the two ends and the shallow waters near the ends where burial is needed) and a component that is proportional to length (the long run between the two shallow waters near the ends).

At 5 knots, it would take about 1000 hours to lay the deep sea part of the cable. If the ship costs $50k/hour to operate, that would be about $50 million. (I have no idea what it costs to operate these ships, but Google tells me that big cruise ships cost about that much to operate, and I'd guess that a cable laying ship is cheaper).

Assuming the underwater cable itself is 10 times as expensive as regular cable, it's about $150 million for 9000 km.

That brings us to about $200 million for the deep ocean part.
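Here is the same back-of-envelope estimate as code, so the assumptions are explicit (every input below is a guess from the comment above, not a real quote):

    # Rough deep-ocean cost model for a ~9,000 km lay (assumed inputs).
    route_km = 9000
    ship_speed_knots = 5
    ship_cost_per_hour = 50_000              # guessed operating cost
    cable_cost_per_km = 150_000_000 / 9000   # the "10x regular cable" guess, ~$16.7k/km

    hours = route_km / (ship_speed_knots * 1.852)   # 1 knot = 1.852 km/h
    ship_cost = hours * ship_cost_per_hour
    cable_cost = cable_cost_per_km * route_km

    print(f"lay time:  ~{hours:.0f} hours")
    print(f"ship cost: ~${ship_cost / 1e6:.0f}M")
    print(f"cable:     ~${cable_cost / 1e6:.0f}M")
    print(f"total:     ~${(ship_cost + cable_cost) / 1e6:.0f}M")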


> Assuming the underwater cable itself is 10 times as expensive as regular cable, its about $150 million for 9000 km.

Still sounds really inexpensive when I consider it contains a large number of repeaters and is meant to stay at the bottom of the ocean.

Edit: Forgot to write, I haven't run the numbers myself but I enjoyed your reasoning here, you put a smile on my face :

> At 5 knots, it would take about 1000 hours to lay the deep sea part of the cable. If the ship costs $50k/hour to operate, that would be about $40 million. (I have no idea what it costs to operate these ships, but Google tells me that big cruise ships cost about that much to operate, and I'd guess that a cable laying ship is cheaper).


Fortunately you only need repeaters every 80 km or so, so you'd only need a bit over a hundred repeaters across the 9000 km span.

Repeaters aren't terribly expensive, so they only add a few million to the total cost.


Checked your profile now, I believe it :-)


And how are potential repeater unit failures accounted for?


Repeaters are designed to last for the lifetime of the cable plant. Design lifetimes are 25 years or so.

Repeater design is inherently very, very conservative because if the repeater fails, the cable fails. This results in an outage lasting days, if not weeks, as a cable ship is dispatched to the failure location.

The cable ship has to trawl for the cable and pull it up to the surface. Then the cable is cut and replaced with a new section that includes a new repeater to replace the failed one. Expensive.


I figure the hard engineering challenge is the repeaters. How do you build repeaters and power them, considering you can't really service or replace them ever over the lifespan of the cable (the deep ocean bits anyway)? A repeater every 80km is a whole lotta repeaters.



> Assuming the underwater cable itself is 10 times as expensive as regular cable, its about $150 million for 9000 km.

Looking at what I can find, it looks like way more than 10 times the cost.

https://i.imgur.com/7Dm7EEp.jpg


My estimate came to around $22k/kilometer for the cable itself plus laying it in deep ocean. I didn't estimate the costs of repeaters.

The Google project was $33k/kilometer, so I don't think I could have been too far off on the cable itself. Looking at other undersea fiber projects, that seems about typical. For example, this one estimated $27k/kilometer [1].

Here's an Alibaba seller with submarine fiber for $2000-9000/kilometer [2].

The submarine cables have an aluminum or copper tube around the fiber optics, an aluminum water barrier, a sheath of stranded steel wires, and an outer polyethylene layer, with various other layers of mylar, polycarbonate, and petroleum jelly in between.

I'd expect the metal layers to be the most expensive parts. Looking at the cost of tubes or cables of those materials, it looks like each of those would be in the $1000-2000/kilometer range.

[1] http://infrastructureafrica.opendataforafrica.org/ettzplb/co...

[2] https://www.alibaba.com/product-detail/Submarine-Fiber-Optic...


This is so freaking cool!


10x is a fair estimate of cost vs regular armored fiber cable.

Source: I've laid subsea cable.


Just to be sure, you mean 10x between subsea cable and regular armored fiber cable, not the Cat6 I can get from Best Buy.


Yes, not that there's a large difference. Best Buy has pretty large markups, especially on short CAT6 cables.

You can buy subsea cable for $10-$20 per meter.

EDIT: the cost depends on how many layers of armoring you require. Deep sea cable requires less, shallow sea cable more.


That is what I thought. Years ago, back when I worked for a big telco, we actually had a small fleet of our own cable-laying ships.

The fun thing was the company handbook had a whole other section of T&Cs, allowances, etc. if you worked on a ship.


Yeah that's what I was thinking too. It doesn't sound like money well spent, it sounds like a bargain to me, like "you'd be stupid not to do it" cheap for something the size of Google.


Indeed, California spent 20 times as much for a 3.5km bridge.


That sounds like money well spent, and a good deal considering what it enables. It would be incredible to see the multiple levels of govt around the world collaborate to create a publicly funded (bond sales) project for laying fibre optic across the planet which could not be sold to a private corp, and that guaranteed access to it based on population proportion, not GDP.


I have no idea how this stuff works, but this Wired article from 1996 written by Neal Stephenson about undersea cables is a fantastic read.

https://www.wired.com/1996/12/ffglass/


The article is now almost a quarter century old and the cables have gotten better. In fact, even that cable probably got a lot faster after optical coherent detection was introduced, i.e. much more capable modems. But the way the cables are actually laid and especially the details of the shore landings and the issues of terrestrial runs, are as current as ever.


Came here to recommend the same. I reread it every 5 years or so for inspiration.


Pretty much: you do surveys, probably based on existing ocean floor sonography, and then contract out a cable to someone like NEC, TE SubCom, Huawei, etc… Load it up on a cable laying vessel, and use software like Makai Lay to optimally place the cable on the ocean floor. [This is the basic idea, I wouldn’t treat this as an authoritative answer, I’m just loosely adjacent to this industry]


While not super technical, there is an interesting miniature, "The First Word Across the Ocean", in the "Decisive Moments in History" book by Stefan Zweig. It tells the story and circumstances of how the first transatlantic cable (back then for telegraphs) was laid in the mid-19th century.


The funny thing is that when you realise they just lay it down on the sea floor, and you start to think through all the potential issues with throwing a very thick special cable into the ocean, you realize that it has already just worked as it is for quite a while.


The article says this cable uses SDM (space-division multiplexing), which, for fiber optics, means that you have multiple fibers. Of course they HAVE TO put many wavelengths on each fiber, each wavelength carrying a signal.

The state of the art AFAIK is to put many wavelengths on each fiber, up to ~192 wavelengths per fiber, each wavelength transporting up to 100Gbps (this is known as DWDM).

So with SDM, you just have more fibers? So what? It seems like I am missing something here. Why is "SDM" the key concept rather than "DWDM"? Why not just say DWDM with 12 fiber pairs?
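Rough numbers on how the two multiplexing factors stack up. The 12 fiber pairs and 250 Tbps come from the article; the ~100 Gbps per wavelength is an illustrative assumption, not Dunant's actual line design:

    total_tbps = 250    # headline cable capacity from the article
    fiber_pairs = 12    # SDM: more parallel fiber pairs sharing pump power

    per_pair_tbps = total_tbps / fiber_pairs
    print(f"~{per_pair_tbps:.1f} Tbps per fiber pair")   # ~20.8 Tbps

    # With an assumed ~100 Gbps per DWDM wavelength, that implies roughly:
    assumed_gbps_per_wavelength = 100
    wavelengths = per_pair_tbps * 1000 / assumed_gbps_per_wavelength
    print(f"~{wavelengths:.0f} wavelengths per pair at 100 Gbps each")  # ~208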


I thought the same thing, but they really are sending N completely separate signals spatially separated at the transmitters, then deconvolving them (sort of) at the other end. Relies on very complicated structure inside the glass of the fiber.


You can send spatially-separated signals down a single multi-mode fiber.

https://www.nature.com/articles/s41598-019-53530-6


It's not the case here. On their website, Google states: ... Dunant is the first long-haul subsea cable to feature a 12 fiber pair space-division multiplexing (SDM) design ...

Multi-mode fibers are not feasible for long-distance transmission. For long-distance communications using the suggested approach, it may be better to use multi-core fibers.


That's interesting! But multimode fiber isn't feasible for thousands of kilometers? This is transatlantic. Wouldn't that have to be singlemode just for the distances involved?


Even single-mode needs repeaters along the length of the cable to get across an ocean. I guess you could use multimode and a lot more repeaters, but that seems more expensive and more failure-prone.


When I was first learning about fiber, graded-index multimode was the "hot new thing", with Corning promising the modal dispersion of single-mode fiber with the light-carrying capacity of multimode, which should reduce repeaters compared to either. Since these are single-mode fibers, I assume those promises were overstated?


Yes, and for the SDM as described in the Nature article in parent^2, it would require something far more complex than a repeater (which in most cases is actually just a purely optical amplifier).

Current practice is to use erbium-doped fiber amplifiers or Raman amps for boosting the optical signal at long intervals for transoceanic runs. Given the complexity of a spatial signal, I don't think a regular optical amplifier will work? I could be wrong; this tech is changing, but submarine fiber-optics tech is necessarily conservative and slow moving.


Do those come with pre attached NSA listening devices [1] ?

[1] https://siliconangle.com/2013/07/19/how-the-nsa-taps-underse...


Most certainly. You don’t land a cable in either the US or France without a classified annex to the license that provides for interconnection to their intelligence services.


Let's say they do tap the cables (I reckon they do too, but in case they don't): what can they actually see if the traffic is encrypted?


I recall seeing a story in the past about how the NSA planted a misleadingly weak encryption library.

https://golem.ph.utexas.edu/category/2014/10/new_evidence_of...

If the encryption used is flawed, they could see whatever they want.


> what can they actually see if the traffic is encrypted?

They can see who is talking to who, and when:

* https://en.wikipedia.org/wiki/Traffic_analysis


Source, destination, message length, etc. Lots of interesting meta data to play with.


I’d love to see wireshark running on a saturated 250Tbps link


assuming there isn't a point-to-point encryption layer for the whole cable.


Sure, but the NSA tap probably sits just after that point.


Considering they do intercept, as widely documented, the question is: how do they tell whether traffic is encrypted?


Pretty trivial. If it looks like random noise, and doesn't have a compression header then it's probably encrypted.
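As a toy illustration of that heuristic, a sketch that estimates byte entropy: encrypted or well-compressed data comes out near 8 bits/byte, while plaintext protocols come out much lower. Purely illustrative; nothing like what a real intercept system does:

    import math, os
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    print(entropy_bits_per_byte(os.urandom(4096)))             # ~8.0 (looks encrypted/random)
    print(entropy_bits_per_byte(b"GET / HTTP/1.1\r\n" * 256))  # much lower (plaintext protocol)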


And with 250 Tb passing by every second?


Nobody knows. Probably not.

But Snowden showed us that a lot of it is scooped up and warehoused. Maybe they can see your traffic in a decade or two?


Forget encryption: with so much data passing through at a time, can they even isolate particular data, let alone encrypted data? Can they process all that data in real time? If not, how are they storing it to process later? This is just one cable from one company; there are now dozens of cables from different companies.


> so much data passes through at a time can they even isolate particular data let alone encrypted data.

They can't. That's why they call it "bulk collection."

https://en.wikipedia.org/wiki/MUSCULAR_(surveillance_program...


There's a reason the vast majority of undersea cables have at least one end in a Five Eyes country. No need to tap it in the middle of the ocean then!


Even if this is true, I think the simple reason might be that one of the Five Eyes countries is the US, which is probably the global hub for data and services used throughout the world. Also, with Britain being near the entrance to Europe from the Atlantic, and Australia being near Asia, it would make economic sense for the cables to take that path.


False. Snowden revelations clearly indicated that tapping undersea cables is (unsurprisingly) difficult to detect.

A lot of surveillance is done both *illegally* and secretly.

Forcing carriers to install black boxes next to their routers is not always the preferred choice.


You are missing the fact that most undersea cables get tapped multiple times. Five Eyes normally inspects the data on land, but enemies will do undersea taps.

While a cable is being tapped, there will be a suspicious change in signal strength, and various signal reflections will tell the cable operators where the tap is. That's bad for a spy agency that wants to remain undetected.

Instead, they break the cable in three points deliberately. The middle point is where they put the tap, and the spy agency will repair it. The points either side are simply so that the cable operators don't know where the tap has been inserted, and have to be repaired by the cable operator. That gets expensive, since it will typically happen 3 or 4 times for a new cable install (3 or 4 countries want access to the data).

Cable repair operations are typically public knowledge (they require specialized ships), so anyone who fancies can crunch the data and see how often a cable breaks in multiple places before being repaired to know how often it's tapped... Mediterranean cables seem to see the most taps.


> You are missing the fact

Please don't make guesses. I'm aware of the tapping process.

> Thats bad for a spy agency who want to remain undetected.

Yes, this is inevitable, and it's still far stealthier than plugging network taps into somebody else's NOC. Especially if the tapping is done illegally.


The cable itself? Almost certainly not. They don't need to. It terminates in the US.


Excellent Ars Technica article about deep-sea cables if you want to learn more: https://arstechnica.com/information-technology/2016/05/how-t...


Mother Earth Mother Board is one of my all time favorite articles, which chronicles the laying of a cable.

https://www.wired.com/1996/12/ffglass/


Another recommendation in this vein is Arthur C. Clarke's "How the World Was One", it provides some fascinating historical context for how we got to where we are today (or rather, where we were in 1992).


Ah the old "entire digitized Library of Congress" per second metric


I always find comparisons using text data incredibly worthless.

I'm sure a Shakespeare play or The Great Gatsby are barely a few megabytes.

But if you asked Joe Shmoe on the street "In Great Gatsbys, how big was the last picture your iPhone took", they would rightly have zero idea.

It's so useless.


Agreed, number of books stacked end to end to reach the moon is much more intuitive.


Easy! It's just three olympic sized swimming pools worth of dollar bills stacked to the moon in bits.


I think all of Shakespeare was like 450,000 LOC.

I used to use that metric when folks ask why it took so long to debug. Like, our project is 600,000 LOC and more complicated than any of his works. He didn't have it all memorized and neither do I. It's a metric PMs can understand.


I think this says more about the minuscule (on a Google scale) ~10 TB size of the digitized Library of Congress.


If anyone enjoys this topic, I would recommend reading "A Thread Across the Ocean: The Heroic Story of the Transatlantic Cable" by John Steele Gordon.


“enough to transmit the entire digitized Library of Congress three times every second.” The engineer in me asks: compressed? With images? Or just raw text?


From 2016: "THE LOC’S DIGITAL COLLECTION currently comprises over 7 petabytes (7 million gigs) with more than 15 million items, including 150,000 print books. In an ideal future scenario, the LOC estimates that it could digitize a further 3-5 PB a year" [1]

So only the raw texts, probably. 10TB sounds about right for that.

[1] https://nplusonemag.com/online-only/online-only/the-library-...
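As a rough sanity check on the marketing figure (my own back-of-the-envelope arithmetic, not from the article): 250 Tbps is about 31 TB/s, so "three Libraries of Congress per second" only works if a "Library of Congress" is taken to be roughly 10 TB, i.e. the text-only estimate rather than the multi-petabyte digital collection.

    # Back-of-the-envelope check of the "3 Libraries of Congress per second" claim.
    # Assumed: 250 Tbps cable capacity, 3 copies per second (figures from the post).
    cable_tbps = 250
    cable_tb_per_s = cable_tbps / 8          # terabytes per second, ~31.25 TB/s

    copies_per_second = 3
    implied_loc_size_tb = cable_tb_per_s / copies_per_second
    print(f"Implied 'Library of Congress' size: ~{implied_loc_size_tb:.1f} TB")
    # ~10.4 TB: consistent with text-only, not with the ~7 PB digital archive.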


This is a video from Google about how laying undersea cables works: https://www.youtube.com/watch?v=H9R4tznCNB0 I've always wanted to know! Super cool!


No mention of latency improvements?

Seems crazy, since overseas transit (TCP & single-channel) is usually latency (or loss) bound.

I would expect it's better than going over public transit and legacy subsea fiber, but it would have been useful to see some comparison tests between POPs.


Google invests a lot in TCP congestion control, mainly through BBR. I believe they do bulk transfers with centrally scheduled, fixed-rate UDP transmission. I also assume they have better control of buffers, loss, and queueing algorithms to prevent or control loss and drops.
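To make "fixed-rate UDP transmission" concrete, here is a minimal, hypothetical pacing sketch (the rate, destination, and packet size are made-up parameters, and this is not Google's actual bulk-transfer system): the sender spaces packets so it never exceeds the rate a central scheduler has assigned to it.

    import socket
    import time

    # Minimal sketch of fixed-rate (paced) UDP sending under an assigned budget.
    # Illustrative only; real systems add sequencing, retransmission, and feedback.
    def send_paced(data: bytes, dest: tuple, rate_mbps: float = 100.0,
                   packet_size: int = 1200) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        interval = packet_size * 8 / (rate_mbps * 1_000_000)  # seconds per packet
        next_send = time.monotonic()
        for offset in range(0, len(data), packet_size):
            sock.sendto(data[offset:offset + packet_size], dest)
            next_send += interval
            delay = next_send - time.monotonic()
            if delay > 0:
                time.sleep(delay)   # pace packets; never burst above the budget

    # e.g. send_paced(b"x" * 10_000_000, ("198.51.100.7", 9000), rate_mbps=50)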


I'm not sure how easy it is to increase the speed of light in glass without some sort of new breakthrough.


Not travelling through 20 routers in the process tends to help. Again, it would be good to get a tangible idea of how much better this is, rather than just stating the obvious about peak theoretical throughput.
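For a tangible baseline (rough numbers of my own, not from the article): the Virginia Beach to French coast route is on the order of 6,400 km, and light in fibre travels at roughly c/1.47, so the glass alone imposes about 31 ms one way (~63 ms round trip) before any routers, queues, or detours are added.

    # Propagation-delay floor for a ~6,400 km transatlantic fibre route.
    # Assumed: group index ~1.47 for silica fibre; route length is approximate.
    C_KM_S = 299_792
    GROUP_INDEX = 1.47
    route_km = 6_400

    one_way_ms = route_km / (C_KM_S / GROUP_INDEX) * 1_000
    print(f"one-way ~{one_way_ms:.0f} ms, RTT ~{2 * one_way_ms:.0f} ms")
    # ~31 ms one way, ~63 ms RTT; routers and indirect paths only add on top.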


Is Google using this for consumer services (Gmail/Search/YouTube/Stadia) that don't run on GCP or is this only for GCP? If it's only for GCP, they are betting big, which is good.


Google uses GCP.


For everything? I was under the impression they still ran all the big stuff on their internal cloud with Borg and all the other infrastructure tooling they built.


Yeah, you should think of it more like GCP runs on Borg, not the other way around, although the description is not perfect. Also Google's cloud services like Cloud Spanner and Cloud Bigtable run directly on Borg.

What's terrifying is that Google described each of their B4 sites as having 60 Tbps uplinks in 2017, growing at 100x per 5 years. So a 250 Tbps undersea cable is nice, but when you think about it, it's probably not enough to make intercontinental transfer too cheap to meter.
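To put that growth figure in perspective (a straight extrapolation of the numbers in the comment above, not anything measured): 100x per 5 years is roughly 2.5x per year, so a 60 Tbps site in 2017 would pass the whole cable's 250 Tbps around 2019 and be in the petabit range by 2022.

    # Extrapolating "60 Tbps per B4 site in 2017, growing 100x per 5 years".
    base_tbps = 60
    annual_growth = 100 ** (1 / 5)          # ~2.51x per year

    for year in range(2017, 2023):
        capacity = base_tbps * annual_growth ** (year - 2017)
        print(f"{year}: ~{capacity:,.0f} Tbps per site")
    # On this trend a single site passes the cable's 250 Tbps around 2019.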


My understanding is that GCP is essentially selling off extra capacity in those data centers, so for example your VM running in GCP is scheduled by Borg under the hood. So it's more like GCP runs on Google, rather than Google running on GCP.


I see a lot of these fiber lines pop up on tiny islands throughout the Pacific. What’s happening at these places? Are there people who work there, and if so, what are they doing?


Tax evasion, err 'financial optimisation'.


For those of you who haven’t read Neal Stephenson’s Wired article on submarine cables from 25 years ago:

https://www.wired.com/1996/12/ffglass/


> will deliver record-breaking capacity of 250 terabits per second (Tbps) across the ocean—enough to transmit the entire digitized Library of Congress three times every second.

Damn. Anyone else just agog at this figure?


It's not that much in the grand scheme of things. A couple of data centres will saturate the link easily.


TechCrunch story about this posted yesterday:

https://news.ycombinator.com/item?id=26017592


My Firefox Developer Edition (86) doesn't load the page completely; one of the resources (https://gweb-cloudblog-publish.appspot.com/api/w_v2/pagesByU...) has an untrusted certificate (SEC_ERROR_UNKNOWN_ISSUER), issued by "Cisco Umbrella Secondary SubCA".


It's issued by GTS CA 1O1 for me. Umbrella sounds like a security thing on your network: https://umbrella.cisco.com/


Turn off your work VPN!


Oh dear! How is the NSA going to wiretap that?


Is this only for Google traffic?


Is this a private cable that only connects Google datacenters? If so, too bad for open, neutral Internet.


Would anyone with some clue like to take a shot at what the investment cost for such a cable could be?


This cable is not just to be used by Google, right? Or am I misunderstanding something? Fundamentally, infrastructure should be publicly owned and then rented out to companies to use; in this case it seems like Google physically owns the cables and infrastructure, which would be a massive waste.


Feel free to convince your government and fellow citizens to use tax money to pay for such infrastructure. Google laying down their own cable isn't stopping anybody from doing so.


Why is it a waste? If Google has enough demand to fill the cable, then how is it waste?

(And I assume that Google has enough demand: If it didn't, why would they build such a large cable?)


> Fundamentally, infrastructure should be publicly owned

No. A good market-socialist solution in situations where network investment (electric grid, railway, telecom) creates natural monopolies is to force separation of the network and the content.

For example, an electric grid owner must allow others to sell and buy electricity through the network. The owner may only collect a maintenance fee, set so that it can't be used to distort energy markets in favor of the company that owns the grid.

In telecom this usually applies only to the last mile.


I'm delighted by all the speculation in this thread about whether the cable laid by the global surveillance company is somehow being spied on.


Well, we assume any important Internet choke-point is used for surveillance. If I were surveilling anything sent en clair, my first stop would be Internet backbone connections.


Sigh... Removed because people don't seem to want to see it.

Not a big deal, but...sheesh. It's not like it was a troll comment; just a relatively lighthearted poke.


I don't get it.

What's the issue?


Check the second link.

On another note: the third link captures the back button and doesn’t let you get back to Hacker News (at least on mobile). What a shitty site.


Here's a trick from the old days (works in all my mobile browsers[1]):

Long-press the back button; a popup will show your navigation history, and you can pick the last link before you entered the broken site.

That said, the behavior is absolutely unacceptable.

[1]: You can also do this in desktop Firefox, but I can't remember whether it's long-click or right-click.


Yeah... remember when Slashdot was Hacker News?

How the great have fallen...


Is this why they're losing billions?


Your comment is both snide and wrong. Lovely combination. I will respond anyway.

They are losing billions because they are paying for growth. It is the proper strategy.


Only on Hacker News could you get downvoted for asking legit questions and get dumb answers from apologists...

This is what happens when a marketing company starts a cloud, right? It turns it into a loss leader, and everyone who buys it becomes an apologist at all costs.

I don't get it.


Ah, indignant as well.

GCP is not a loss leader; the unit economics are fine.



