Google Fiber: 151Mbps down / 92Mbps up (geek.com)
235 points by 11031a on Aug 23, 2011 | 150 comments



The obvious question now is: if Google can do this, why aren't network operators banging down the company's door to get involved and roll this out everywhere? Yes, there's big investment involved, but these companies are kidding themselves if they think waiting and dealing with a lack of bandwidth in the future is going to work for them. Whoever jumps first and starts investing in these fiber-to-the-home initiatives is going to be rewarded with a lot of new customers in years to come.

They don't need Google to do it. FTTH/G-PON solutions are available to any ISP in the country and have been for several years. They haven't done it yet (besides Verizon FiOS, which provisions at much lower speeds) because the demand really isn't there yet. Most customers do not pay more for faster speeds even when they are available. You've got a lot more customers who downgrade to slower speeds to save money than customers who pay more for faster speeds. More competition would certainly help, but until you can convince normal people they actually need faster Internet connections, it's not much of a competitive advantage (as long as you can match or slightly exceed your competitors).

Once you hit about 20Mbit/5Mbit, faster speeds are almost completely unnoticeable by most people who are just doing some normal browsing, IP video, e-mail, Facebook, etc. That's what most people are using their Internet connections for these days. We really need a killer app that pushes the limits of most Internet connections and you'll see more companies betting on (expensive) upgrades to deliver those speeds. I don't see much on the horizon. You can already do pretty decent 1080p video at 10-20Mbit/sec. What type of apps are going to push the demand?


> Once you hit about 20Mbit/5Mbit, faster speeds are almost completely unnoticeable by most people who are just doing some normal browsing, IP video, e-mail, Facebook, etc.

People always say this; some years ago it was 6Mbit, before that it was 2Mbit, etc. The services change, and with them the demand for higher bandwidth. In a few years, when everything (TV, phone, internet) runs in super high quality through your broadband, you will need more again. That said, I am currently happy with my 64/5Mbit at home; I still have the option for 128/10 but don't need it right now.

Only have 32/2 in the office though, and that's a bit on the low end when 6-7 people use it all the time.


How about just convincing Verizon to sell the higher speeds to those who want them, at a higher price? The hardware's there, but their most expensive FiOS business connections are far short of Google's numbers.


Verizon does offer up to 150Mbit/35Mbit for their business packages. They probably don't see a market for it on the residential side. Cablevision, Comcast, TWC, etc. do offer 100Mbit+ connections for residential use in some limited markets. From what I've heard they are not big sellers (less than 1% of customers upgrade). That has probably slowed down wider deployment.


Is this so different from 10 years ago, when no one realized that they wanted high-definition TV? I can't imagine that the demand for HD television was very high then either. Once it was available, manufacturers began making all of their equipment HD capable.

The fundamental cause of poor broadband network performance in this country is actually not a lack of demand, it's a lack of competition among ISPs -- otherwise, these speeds would come to market naturally.


Over here in Helsinki, Finland, there's a cable operator that offers connections of up to 200/10Mbit (54,90€/month), and one of the biggest ISPs is putting together a 1000Mbit residential connection trial (they already offer 100/10Mbit fibre connections).

I think the demand is there if the pricing is decent.


I don't know if I'd call $80 reasonable. I pay $35 for my 25/5 with FiOS and am very happy with what I get; I wouldn't pay more just for faster speeds.


To be honest, I have their business service at home to avoid trouble when I'm running VPN servers, etc. and the occasional Web service from home -- not to mention the usage cap issue.

And it's nice to know that they're up to 35Mbit upload (much faster than what I have), but their hardware is capable of much more (assuming it's on par with Google Fiber's).


Dunno about others, but the reason I'm not yet on Cablevision's 101/15Mbps plan is that they're charging a non-refundable $300 fee to get you hooked up. I'm a software developer who works from home. If I won't do it, what are the chances your average Joe would?


Comcast 100Mbit+ is $200/month in SF. Steep.


Why are people comfortable with paying for tiered connection speeds, but not comfortable with a monthly data cap (that can be removed by paying more for a business connection)?


I think it's because paying for a tiered connection speed is set-and-forget. Paying with a monthly data cap effectively overlays an economic decision on each interaction you have with the internet: "Do I click this link? How much bandwidth will that use?" Even though I think most people probably would end up not thinking about it on a link-by-link basis, they do have to devote some mental power to monitoring how much of their data cap they've used and how much they have left in the current billing cycle (especially under billing regimes that charge much higher marginal costs when you exceed the initial cap, e.g. most US mobile phone voice-minute plans).

It's just annoying. Even though it seems like tying the price of something to your use of that thing is the most efficient way to bill, it just sucks because then you have to think too much about every use. Text messages, AOL hours, phone calls: all were in that category until these things got so cheap that they weren't worth metering.

Personally, I suspect there's some behavioral-economics-ish effect at work here that is measurable. You know how some hotels and cruise ships have all-in prices? I bet people are actually willing to pay more for the same amount of consumption with the 'unlimited' plans because they enjoy not having to worry about it. This would also be pretty easy to test with cruise-ship or all-inclusive-resort data where they also sell a la carte packages.

I have data-cap plans on my phone and iPad, and I don't really mind them, but it's only because I know that they are effectively unlimited: I never come close to using up the cap level of data. I don't watch Slingbox on my phone, etc.

I'll tell you what really burns me up, though: the fact that even though I am paying per-gigabyte transfer fees on my iDevices, AT&T wants me to pay more money to tether to my laptop. Why the fuck should they care? The data is the data. I understood when the idea was that a tethered laptop would use assloads of data on the nominal "unlimited" plan, exceeding their modelled costs, but... dude, I bought data transfer. Why do you give a shit which device the packets end up on? I HATE THIS. It honestly annoys me more than if AT&T just built tethering into the price and I had to pay it.


There are two flavors of monthly data cap.

I have a Linode with a data cap of 300GB per month; if I use more than that, I pay a reasonable amount ($0.15/GB) for the excess. So if I use a lot of data one month, I get a charge on my credit card, and life goes on. I'm completely comfortable with this.
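That pricing model is simple enough to sketch. A toy version in Python (the cap and rate are the figures above; the function name is mine):

    # Toy model of Linode-style overage billing: flat cap, then pay per GB over.
    def overage_charge(used_gb, cap_gb=300, rate_per_gb=0.15):
        return max(0, used_gb - cap_gb) * rate_per_gb

    print(overage_charge(350))  # 50 GB over the cap -> $7.50 on the card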

A residential Comcast connection has a data cap of 250GB per month; if I use more than that, they turn off my connection and ban me for a year. So if I use a lot of data one month, I find myself abruptly and semi-permanently screwed. I am absolutely not comfortable with this since the consequences are so severe and long-lasting.


I don't think this is the case at all. The problem people have with monthly caps is that they're frequently imposed on plans that advertised "UNLIMITED X!". There are also people who don't mind paying per GB. The problem most people have is when ISPs react according to content, such as capping your torrenting but leaving YouTube untouched, or capping Netflix but not Facebook, etc.


I thought there was a law against doing that (net neutrality).

I've heard that ISPs discriminate between "types" of content (video vs text). But I wasn't aware they were discriminating based on the company (Netflix vs YouTube).


> I thought there was a law against doing that (net neutrality).

There is no law enforcing Net Neutrality.

And really, the concern isn't that a company like Comcast will meter Netflix, but not Facebook. It's that they will meter Netflix, but not "Comcast Video Streaming" if they were to offer such a service.


Because ISPs are keeping prices the same when introducing caps, effectively providing less service for the same money.


Because when you pay for speed X you want speed X, you don't want speed X-n after you download Y GB of data.


The first productive applications to utilize these speeds will be grid-based computing projects. Initial iterations of artificial intelligence will require large computing resources (GPGPU) in addition to high bandwidth and low latency.

Projects like 'Test4Theory' that utilize Monte Carlo simulations will help us accurately visualize and better predict the fundamental nature of our universe. The best contributors to these kinds of projects will have high-bandwidth fiber connections.

The first unproductive applications will be the next generation of first-person games.


"Grid computing" is going to need more than speeds similar to 100mbit Ethernet. "AI" has been around practically since computers were invented, so I am not sure how the "first iteration" of AI will be created thanks to these higher speeds.

Test4Theory is a typical distributed-computing setup with "small" datasets that require a lot of processing time per simulation to get results. It's not like they are streaming live data from the LHC to your computer; they rely on an actual distributed/grid computer to do that.

Also, next-gen FPSes will certainly not require or take advantage of 100Mbit connections, because most people don't have them. No one is going to make a game that comes near needing 100Mbit/sec.


I think a lot of the passion behind this comes from Googlers having this kind of connection at work :)

http://www.speedtest.net/result/894208160.png


Almost makes me want to apply at Google.

I have a[n up to] 25Mbps line; Shaw is rolling out 50Mbps, but upstream isn't really improving, so big deal. Downstream is not my bottleneck for most things, but online backups and such are painful. I'd blame the technology, but DOCSIS 3 allows for more than that, and Telus has fibre in Canada with similarly castrated upstream bandwidth.

sigh

http://www.speedtest.net/result/1447275168.png


Which makes me wonder what kind of connection Speedtest.net has.


Perhaps this measures capability of SpeedTest, not capability of Google :-)



It says a distance of 1200 miles but a ping of just 3ms, which means they're breaking the law. Speed of light in a vacuum: 299.8km per millisecond.
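Quick back-of-the-envelope check (ping is a round trip; 1 mile = 1.609344 km):

    # Even at vacuum light speed, a 1200-mile round trip takes far more than 3 ms.
    km_each_way = 1200 * 1.609344         # miles to km, ~1931 km
    c_km_per_ms = 299.792                 # speed of light in vacuum
    print(2 * km_each_way / c_km_per_ms)  # ~12.9 ms minimum round trip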


No, the location is taken from the RWHOIS data which just happened to not list the IP's actual location.


Sorry, that 3ms ping suggests you are inside their network. Try getting that over the internet.


Incorrect; I've gotten pings of less than 1 millisecond over the internet.

    PING a152.g.akamai.net (204.0.5.50) 56(84) bytes of data.
    64 bytes from 204.0.5.50: icmp_seq=1 ttl=59 time=0.852 ms

    PING yahoo.com (209.191.122.70) 56(84) bytes of data.
    64 bytes from ir1.fp.vip.mud.yahoo.com (209.191.122.70): icmp_seq=1 ttl=57 time=3.06 ms

    PING google.com (74.125.73.104) 56(84) bytes of data.
    64 bytes from tul01m01-in-f104.1e100.net (74.125.73.104): icmp_seq=1 ttl=55 time=8.38 ms

    PING e3191.c.akamaiedge.net (184.86.157.15) 56(84) bytes of data.
    64 bytes from a184-86-157-15.deploy.akamaitechnologies.com (184.86.157.15): icmp_seq=1 ttl=60 time=1.15 ms


In the coming 2-5 years, most 3D games will be ray-tracing-based, and streaming technology would be not only very suitable for that but very beneficial, allowing huge game worlds with unlimited detail. Highly detailed or scanned 3D objects easily take terabytes of storage, and streaming everything from the servers is really the only viable option.


If that is true, then most games in production and pre-production now are using ray-tracing-based engines.

Can you cite examples of major studios using ray tracing engines today?


In that case you will still end up with a latency problem.


There are local caches too, to help with latency. Basically, you have to keep your GPU memory full, cache in RAM, cache on SSD, cache on HDD, and continue all the way down to the cloud servers. Check out this demo, for example, which streams data from disk: http://www.youtube.com/watch?v=HScYuRhgEJw


Game devs are not interested in making games that no one will be able to run. Regardless of the benefits of ray tracing, it is currently inefficient compared with the raster tech currently on GPUs. Hardware tessellation makes inefficient voxel tech moot except for scientific purposes. We can already stream data from the hard disk to system RAM and into GPU memory.

I'd put mainstream voxel/ray-tracing tech at more like 10-20 years in the future.


> We really need a killer app that pushes the limits of most Internet connections and you'll see more companies betting on (expensive) upgrades to deliver those speeds. I don't see much on the horizon.

If Bitcoin gets very popular, running a node will require very fast download and upload speeds to receive and rebroadcast the transactions.


P2P live video.


That's already a real thing, and it doesn't require any more bandwidth than even conventional cable/DSL provides.


What if you want to stream high-quality P2P video (≥720p)?


x264 at 10Mbps or less; there's no need for 100Mbps or more for such things.


What if you have a video conference with 8 participants, each at 720p?


For some reason this comment put thoughts into my head of people having multiple large screens positioned around their house instead of windows.

You could have beautiful HD panoramic views from around the world in your cheap wall-facing apartment. Of course the technology would need to come down in price but we all know how quickly that tends to happen.


veetle.com does a really good job at this. I have a six-year-old media center box that doesn't run much of anything well, yet it handles Veetle, and at 1080p at that.


As per the comments below, the 5ms ping [rather than 1Gbit down] seems to be the central focus of this kind of fiber social venture.

It provides some insight into where BigG sees the future here.

These are apparently times when synchronicity vs. asynchronicity is starting to become a much hotter topic than definition vs. pixelation.

There is plenty of room to ponder this as a social index: who would have said that only 3 years ago?

I wouldn't have.


To me, the 5ms ping is the most impressive part. I might not use 100Mbps on a daily basis, but the low latency is nice all the time.

Edited for clarity.


A lower bound for latency to the 'other side of the earth':

(20 000 km) / the speed of light = 66.712819 milliseconds

http://www.google.dk/search?sourceid=chrome&ie=UTF-8&...


Ping measures the round-trip time, so it would really be twice that.


The 20k figure is an approximate circumference of the Earth.


24,000 miles, not km.


It's 40k, not 20k.


The circumference of the earth is ~40k km, which would make "the other side of the earth" ~20k.


Yeah, that's the point.


Since we're talking hypothetically, the lower boundary would really be the diameter of Earth.

    12,756.2 km / c = 42.5501031 ms
http://www.google.com/search?q=diameter+of+earth+%2F+the+spe...


> 12,756.2 km / c = 42.5501031 ms

It's worth pointing out that we're looking at more like 0.65c through optical fibre. This also ignores routing infrastructure/processing time and network prioritisation.


Not with a molten core, unless I misunderstood you...


Fiber optic cables may not be able to penetrate the molten core of the Earth, but there is no law of physics that prevents information from traveling through the core of the Earth. Compare this to the speed of light, which is an absolute upper bound on how quickly information can propagate. There is simply no way, in this Universe, to do better than the straight-line distance through the center of the Earth.


You could transmit magnetic waves at about the speed of light.


Around the same latency as I see on FiOS.

Which residential fiber products have huge latency?


DSL has up to 110ms latency.


There are no upper boundaries when it comes to latency, but at some point your browser (or whatever) will start to resend the information, thinking it's lost. Typically you should see around 20-30ms within the same country, depending on peering and such.


"Small"-country standpoint. In the States, ping could be upwards of, or over 100. I imagine China has a similar issue.


Your country may be big, but it's not that big.

Physical coast to coast distance is 15 light-milliseconds.


15ms, one way, in a vacuum, going directly.

When you start to factor in things like the fact that light travels only about 2/3 as fast in a glass fiber as it does in a vacuum, routing overhead, and the fact that ping is a round-trip measurement, 60-100ms is about normal.


15 milliseconds * 2 (ping = round-trip time) * 1.52 (~index of refraction in fiber) = 45.6ms straight line. Add in routing delays and less-than-optimal paths and you'll quickly hit 60-100ms.
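The same arithmetic as a snippet, for anyone who wants to play with the assumptions (the 15 light-ms and 1.52 figures are from above):

    # Lower bound on a US coast-to-coast ping through fiber.
    one_way_vacuum_ms = 15     # straight-line coast to coast, at vacuum light speed
    fiber_index = 1.52         # light in fiber travels at ~c/1.52
    print(2 * one_way_vacuum_ms * fiber_index)  # 45.6 ms, before routing delays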


Those are the latencies I expect, though. And literal coast to coast distance does not take into account the structure of the network.


My DSL in SF is pretty good then:

    --- google.com ping statistics ---
    26 packets transmitted, 26 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 9.077/15.996/34.578/7.464 ms


The high latency with DSL usually comes hand in hand with interleaving enabled.


> The high latency with DSL usually comes hand in hand with interleaving enabled.

Interleaving doesn't add huge amounts of latency. To the LNS with the ISP I'm on, I usually see about 11-16ms to the first hop, and maybe 5ms to a game server on their network at most. Without interleaving, I see about 5-7ms less on a good day.

Now, saying that, there are different levels of interleaving that can be applied on modern DSLAMs, but we're not really talking tens of milliseconds here.

It's also worth keeping in mind that your pings won't be truly representative of latency. Plenty of ISPs de-prioritise ICMP traffic.


My pings are really similar to my latency measured with different sources.

My ISP (Qwest) uses interleaving, and it adds quite a bit to my first hop (usually about 60ms). From what I've seen, for people living in more populated areas the added interleaving latency is about the same (>30ms on the first hop, usually closer to 60). So of course it depends on the ISP; Qwest is notoriously bad for this but is my only option, and the same occurs with plenty of other ISPs that use interleaving.


Well, ping is relative. In Norway we generally have around 6-8ms to the Norwegian Internet Exchange, and typically end up at around 30-40ms to most parts of Europe. At some point distance will get you. Regular copper is not actually slower, but the equipment connected to it is, which is why you usually see at least 30ms just leaving your ISP's network.

For instance, if you're using WLAN you might end up doubling your response time. Something to keep in mind if you're a latency freak like me!


All that means is that Google's Stanford fiber is topologically close to where speedtest.net's SF server is hosted, which is apparently an ISP called Unwired. The fact that it didn't pick speedtest's Palo Alto server probably means that Google and Unwired share the same upstream fiber loop. Also interesting from the image is that speedtest identified the ISP as Tata Communications.


These 151/92 Mbps numbers are almost certainly flawed results because speedtest.net is a very poor tool to measure high-bandwidth connections.

Let me show you why.

I was in Tokyo in August 2009 at my sister's house. She had 100Mbit/s to the internet. I could easily reach close to the maximum theoretical bandwidth by downloading multiple kernel images from jp.kernel.org, at 95+ Mbps. However, speedtest.net would only give me download speeds of 50-60 Mbps. My system was running Linux, so I ran top and saw speedtest.net's Flash app (npviewer.bin) at 100% CPU... it was being bottlenecked by the CPU! It was an old 1.2GHz Pentium 3, but still. Even a modern 2-3GHz CPU core is only about 3x-4x faster.

Bottom line: speedtest.net, due to the high overhead of the Flash VM, is a very poor choice for benchmarking high network speeds, as it is CPU-constrained at about 150-200Mbps even on a modern CPU, which is exactly the number reported by this Google Fiber user. I am shocked that no one knows this.


Sorry, but I'm having a hard time believing that any modern CPU would be only 3-4x faster than a 1.2GHz Pentium 3. Just looking at the differences between levels of modern processors in something like 3DMark Vantage http://www.tomshardware.co.uk/charts/desktop-cpu-charts-2010... shows massive performance differences.

If I'm missing something obvious here, let me know.


The Flash app is probably only running in one thread, and therefore on only one core with no hyperthreading, and it doesn't take advantage of modern vectorized instructions. Moving from 1.2GHz to 3.6GHz will give you a 3x speedup, so that sounds about right.


Exactly. Speedtest.net does not take advantage of the multiple CPU cores.


Yeah. In my experience, speedtest doesn't even run properly on netbooks. It's one of the worst pieces of software ever written.

I use iperf: http://iperf.sourceforge.net/
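A minimal run looks something like this, assuming you control a machine on the far end (the hostname is just a placeholder):

    iperf -s                           # on the remote box: start the server
    iperf -c server.example.com -t 30  # locally: 30-second TCP throughput test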


Cologne, one of the biggest German cities, has a provider that is trying to bring fiber to every household. You can get a 100MBit plan from them (not sure about the upstream right now; probably much less, maybe 10?).

They call it the most modern network in Europe (is it? No clue) and market it as CitynetCologne. It's affordable and available to a good part of 1,000,000 people.

Is Google Fiber really big news? And, tin foil hat on, would you want Google to be your ISP..?


If I had a choice between Google and Comcast?

Yeah I'd take Google.


You're arguing service (I guess. I've no clue about Comcast).

I'm arguing that your ISP knows _everything_ about you, unless you are really making sure that you're using encryption everywhere.

Now - you'd opt in for a plan that gives you great access to the network where all traffic is handled by the company that stores everything forever and ever and already owns most of your online activities? Really?

Maybe I'm _seriously_ paranoid, but I'd never fall for that. Free, like in the article? Maybe. With precautions. But that won't last forever, so they will be announcing a plan. And at that point I cannot imagine supporting G anymore. You're feeding the beast..


Google is not going to look at your traffic, because if they did they'd be hit with something ten times worse than the street view fiasco.


Your ISP likely looks at your traffic. What makes you think Google will not?


Unless you live in a country with basically no privacy laws your ISP will almost certainly not.


Yeah, you're seriously paranoid.


The two largest cable providers in the Netherlands offer 120/10Mbit internet[1]. I think they offer it in most areas, so they have a reach of millions.

So it's not really that big news, except for the larger upstream. However, for most consumers upstream is probably not that important.

[1] 120Mbit is what they advertise; I usually get 130Mbit downstream.


Amsterdam also has a city-wide open fiber network: http://arstechnica.com/tech-policy/news/2010/03/how-amsterda...


Sweden has had 100/100Mbit connections for ages.


Why is the up speed so much slower than the down speed? I thought that fiber was symmetric by design. Is google throttling?


I'm wary of speedtest.net's results. Comcast operates a speedtest server nearby, and it consistently reports exactly 20Mbps/4Mbps, which (no surprise here) is exactly the service I pay for. Another nearby server, not operated by Comcast, hovers around 12-13Mbps/1-2Mbps, which I suspect is the reality of my connection.


That may not be completely inaccurate, if the bottleneck is on the link leaving Comcast. You get your full bandwidth for internal network stuff, but less once you leave the internal network. So, technically, you are getting what you pay for, even if that is meaningless.

That said, I've often wondered if Qwest (my provider) prioritizes packets for speed tests.


That's funny; I've usually had the opposite experience with Qwest (in Seattle): inconsistent results in speed tests, but very consistently good download times for large files. 'Tis strange.


Your internal network will get you that speed... random parts of the internet are always a crapshoot. Comcast may route you halfway around the country before you get to that other close-by server; who knows. Try downloading http://cachefly.cachefly.net/100mb.test and see what you get, or try a few other speedtests in different parts of the country.
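If you'd rather get a single number than eyeball a browser progress bar, curl's progress meter reports the average download rate (the file itself is discarded):

    curl -o /dev/null http://cachefly.cachefly.net/100mb.test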


Do you know if PowerBoost factors into the speed?


I wonder what kind of router they put in at the home. I doubt your standard Linksys WiFi router will be able to cope at that speed.


http://routerboard.com/RB750GL (current retail $59.95). I am sure they would make Google a deal...

Actually, there are many single-chip 4x gigabit Ethernet solutions on the market. These do wire-speed gigabit "in hardware", as it were.


According to the very page you linked, its maximum routing throughput is 631Mbps, which is respectable but hardly wire-speed.


Indeed. I have a Soekris router with a gigabit card, but the best it can do is around 400Mbps. Packet filtering and NAT require some computation, and 1Gbps is a lot of data.

Then again, I can actually use Bittorrent without having to reboot the router :)


Specifically, here's a list of home routers sorted by performance: http://www.smallnetbuilder.com/lanwan/router-charts/view It looks like there are only about 20 routers that can handle 150Mbps or more.


I can't tell from that chart.. is that Megabits per second or Megabytes?


Megabits


Dual-band, double-width-channel 802.11n might be able to do that speed. But even if not, the gigabit wired ports ought to be able to.


Some routers' CPUs simply aren't fast enough to forward traffic from the WAN to the LAN at full line rate. Before I replaced my router with a higher-end model, my 50Mbit/s cable connection ran at 30. Now it peaks at 60.


Any gigabit router should be able to handle that, even Linksys.


It's not enough that they 'support' gigabit ethernet; most cheap routers will fall over long before they hit that speed.


Right... especially on the WAN port. Most cheap routers aren't expecting to have to route gigabit traffic, hence the initial question.


The 151Mbps down might be a limitation of speedtest.net and not Google's fiber.

Speedtest's minimum requirement for a test server is 100Mbps.


What I would like to know is how much of that speed is due to being 'on-net'. I know that speed is what the connection is capable of, but if the server is on-net, then it is nowhere near realistic.

Speedtest tells me I get 75Mbps up / 80Mbps down and a ping of 5ms, but that's with the server it automatically gives me... This is using my uni's on-campus internet, which is connected to AARNET, a massive network. How much of that speed is due to me never actually 'entering' the 'internet'?

Is there some sort of way to test all of them automatically and get the average?

Edit: My speed test results over 15 tests and multiple servers: http://cl.ly/9aNO
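Something like this is the sort of thing I mean; a rough sketch (the mirror list is hypothetical apart from the CacheFly file mentioned elsewhere in this thread, and it measures plain HTTP download speed rather than a proper speedtest):

    import time, urllib.request

    # Test files on servers in different places; swap in ones near and far from you.
    MIRRORS = [
        "http://cachefly.cachefly.net/100mb.test",
        "http://mirror.example.org/100mb.test",   # hypothetical second mirror
    ]

    def mbps(url):
        start = time.time()
        size = len(urllib.request.urlopen(url).read())  # bytes downloaded
        return size * 8 / 1e6 / (time.time() - start)   # megabits per second

    speeds = [mbps(u) for u in MIRRORS]
    print("average: %.1f Mbps" % (sum(speeds) / len(speeds)))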


No; I can get 400 from speedtest.net occasionally. I get inconsistent results sometimes, though (300Mbps/200Mbps on the first one):

http://www.speedtest.net/result/1447015910.png

http://www.speedtest.net/result/1447014970.png

http://www.speedtest.net/result/1447013001.png


Nice, that refutes my point. I was basing it on this link http://speedtest.net/support.php under "What are the requirements to be a testing host".

Edit: But now I notice that they follow it up with "[in some countries] we are now requiring gigabit connectivity."


It depends on which server you choose. I haven't been able to reach even 100Mbps. If in doubt, it's better to just try downloading something from fast servers.


I've gotten speeds of over 1Gbps on Speedtest, although once you reach 1Gbps the image doesn't save and you don't get a link to share it.

http://www.speedtest.net/result/1285238984.png


It seems like giving all of the people in the test area Chromebooks would be a really great move by Google, because people would associate Chromebooks with "lightning-fast" once they advertised the two products together.

Thoughts?


Based on my month with a Chromebook, the only thing lightning-fast about it was its bootup. It was pretty choppy at rendering pages, while Chrome on my desktop was in fact lightning-fast.


You have to wonder at what point the speed test server itself becomes the bottleneck. I bet the connection is actually faster and I'd try running a few of these simultaneously to different locations.


The server is probably not a bottleneck; this test is actually quite slow compared to the Internet at the 'plex: http://thenextweb.com/shareables/2011/03/29/how-fast-is-the-...


I tried to read this article on a few occasions today, but all I get is an empty (what may be) OnSwipe page from both my iPhone and iPad. All I see is the TOC graphic in the corner and the gear at the bottom, neither of which go anywhere.


This is not accurate. He is testing a server which is 50 miles away.

I use a 3G internet connection, location: Tunisia

Server: San Francisco (6450 miles): latency 264ms, down 0.37Mbps, up 0.32Mbps

Server: Tunisia (50 miles): latency 133ms, down 0.93Mbps, up 0.39Mbps


It does make a difference, but there's a very good chance a lot of traffic goes to nearby servers when you live in the bay area.


Unfortunately Google Fiber at Stanford does not allow torrenting.


No offense, but [citation needed]. After all their net-neutrality lobbying, this would be hard to imagine.


I have a friend who lives at Stanford, where his mother is a professor. After he torrented some legitimate software (but used a Pirate Bay tracker), he was contacted by Stanford authorities. That was the first warning he received; he did not try to appeal the decision or do it again.


Stanford gives out automated warnings if you have a lot of torrent-y network activity (I got a few back in the day after downloading some Linux distros), but as I recall they are only to notify students that if they are pirating music/movies/etc., they will be held accountable if someone from the RIAA or some other group contacts Stanford about it. If you're downloading legitimate software, you shouldn't have anything to worry about (except for some occasional automated spam).


I'm not sure what my uni's stance on torrenting is, but it definitely has not stopped me from downloading over 1TB of... "legitimately vague" content http://i.imgur.com/oQE4J.png


That sounds like the campus network, which should be separate from Google Fiber.

It's also disappointing to me to see our bastions of intellectual freedom embracing such policies.


No, it was on the Google Fiber network at their house (in Stanford). To speak of one is to speak of the other, currently.


Can someone please explain why the ISP shows up as Tata Communications in a US city? It is an Indian company. Is it because it owns some broadband company in the US?


They are a global Tier 1 carrier that provides connectivity to other ISPs (such as Google, Comcast, etc.) around the world. Not sure why it shows Tata, though.


I've got a friend who has had to deal with Tata Communications in a UK datacenter, and he said they were the absolute worst company to deal with.

Comcast has a link with Tata Communications that it apparently runs at 100% capacity, day in, day out... there was a story about it not too long ago.


> Comcast has a link with Tata Communications that it apparently runs at 100% capacity, day in, day out

http://www.merit.edu/mail.archives/nanog/msg15911.html

To be clear, that is Comcast's fault because they're too cheap to buy more bandwidth.


Tata's running the ISP for Google, I guess.


A French fibre provider: http://www.universfreebox.com/article13240.html

Eat that, Google :P


Ahh Speedtest.

Running a speedtest and reporting the results is akin to running ApacheBench on your index.html, recording how fast it goes, and then telling everyone your PHP server can serve pages that fast.

Speedtest by default finds a local server, which is almost always delivered over local uncontended peering. Of course you're going to see great results, you might as well be testing to a Speedtest server plugged into your LAN.

Run speedtests to ~10 sites around the country and a few around the globe and report those figures.


But that's not what they want to measure. All they want to know is the speed of the local link, and speedtest measures that.

It's not everything you need to know, but it does provide the desired measurement.


People want to measure the bottlenecks they can control. Speedtest shows what speed your first few hops get. In most cases, "The Internet" is much faster than their DSL line. In this case, that's not true, but there's not much that can be done by the end user.


Not Google Fiber

Local ISP with fiber

http://www.speedtest.net/result/1447021540.png


The speed isn't there but that latency is to die for!


Well, according to my ISP I have 8Mbps down and 4Mbps up... or at least that is what they bill me for, and yes, the latency on my connection is good. I'm not sure how accurate those numbers are.


http://www.speedtest.net/result/1447236364.png

5ms? Apparently I have the fastest internet in Australia.


And people say that Google is competing with Facebook. Give it 20 years and see...


Who do I have to kill for this?


Will this extend to the greater Palo Alto area?


No.


Can anybody explain how Google is able to do this, or point to an explanation of it?


Gigabit FTTH is not technically difficult; it's just expensive to install. Google is supposed to be working on some secret sauce to make it cheaper, but they haven't revealed anything yet.


How is Gigabit FTTH different from ordinary FTTH? Is it the fiber itself or the equipment on the endpoints?


I guess these days FTTH is all gigabit, but I remember a few years back when some companies were talking about 100Mbps FTTH; that's nearly pointless now that cable can deliver that speed.


Cross site poll: What would you pay to get something like Google Fiber?

http://www.wepolls.com/p/2028308/How-much-would-you-pay-for-...


I need to know more.

Right now (though without any contractual guarantees I'm aware of), Time Warner hands me via DHCP a publicly routable IPv4 address that doesn't change except during extended outages, which don't happen very often (the address stays the same through brief outages of less than an hour every month or so). Effectively, I can initiate a connection to my home machine. There are no explicit data caps, though if TWC were to start slowing things down after more than 100GB per month, I wouldn't know. Tomorrow I could find out that TWC has decided to use NATed private addresses and quenching at 40GB, and I'd lose all that, with the only recourse being AT&T ADSL.

I haven't adopted any dependencies on high bandwidth - no internet backups, no Tor participation, and I yell at the kids when they torrent anything already available to them on Netflix.

TWC business class effectively guarantees the features which I'm getting but not paying for - for an extra $200/month. I would pay that if I were running a server for customers, but I'm not.


I find it curious that most users in this poll would not pay more than $100 for Google Fiber speeds.

I already pay about $100 for my OK-but-not-great 7Mbit DSL service.

The DSL service itself is only $50, but I also have to pay $50 for a land phone line I never use.

Not that I want to give the cable company any money, but they don't serve my apartment building, so DSL is my only option.

I'd happily pay twice this for Google's service.


Google owns all your surfing.


speedtest.net in the WWDC download area was about 700Mbps down, so I am waiting for Apple Fiber :)


Because Apple supplies fiber to Moscone Center, right? ;)



