A long-standing complaint of mine is that cloud egress pricing severely limits the usefulness of compute. If I want to, say, process some visual effects on a large (1TB) ProRes video, I might spend $1 on the compute but $100 on the egress getting it back.
Unfortunately these changes don't really resolve that problem. "Standard" pricing is a paltry 20% less. That 1TB video egress still costs $80 and for that price I can rent a beefy server with a dedicated gigabit pipe for a month.
Why is "Cloud" bandwidth so damned expensive?
I'd love a "best effort" or "off peak" tier. I imagine Google's pipes are pretty empty when NA is asleep and my batch jobs aren't really going to care.
I was going to back up my photos to AWS Glacier before noticing that retrieval costs are multiple times the storage cost. I guess that is possibly OK for a backup but scared me into a physical alternative.
AWS has greatly simplified their Glacier pricing, from the inscrutable transfer-rate pricing to more common-sense latency-based pricing.
If you want your data back in under 5 minutes, it's $0.03/GB; within 3-5 hours, $0.01/GB; within 5-12 hours, $0.0025/GB.
AWS Glacier is cheaper per month than Google Coldline, but Google doesn't charge more for faster access to your data. So maybe you still need that calculus to figure out which is a better deal for your data access pattern.
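A minimal sketch of that calculus, using the retrieval tiers quoted above. The $0.004/GB/month Glacier storage rate is my assumption from the pricing page at the time, and this ignores the separate internet transfer-out charge:

    GLACIER_STORAGE_PER_GB = 0.004  # $/GB/month -- assumed, check the pricing page
    RETRIEVAL_PER_GB = {"expedited": 0.03, "standard": 0.01, "bulk": 0.0025}

    def glacier_cost(gb, months, tier):
        """Storage for the whole period plus one full retrieval (transfer out not included)."""
        return gb * months * GLACIER_STORAGE_PER_GB + gb * RETRIEVAL_PER_GB[tier]

    # 1 TB of photos kept for three years, pulled back once in an emergency:
    for tier in RETRIEVAL_PER_GB:
        print(tier, glacier_cost(1000, 36, tier))  # expedited 174.0, standard 154.0, bulk 146.5

Swap in Coldline's flat storage and retrieval rates on the same formula to see which wins for your access pattern.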
Also nice to have sub-millisecond latency to first byte (versus multi-hour) and the performance characteristics of normal GCS, so you could run a rare MapReduce job on it just the same as regular GCS :)
99% availability doesn't seem so "highly available" as they call it. It's just an SLA, but that means the service can be down for about 7 hours every month and still meet the availability SLA.
I know it may not matter that much for backup; I'm just wondering why they would set the bar so low - i.e., how is the storage organized such that they had to set it so low? I'm guessing MAIDs (massive arrays of idle disks), but I still don't get it.
You just said that you were looking at Glacier as a "backup", so isn't that a good use case for Glacier?
One advantage of Amazon versus many other cloud backup providers is that (for a fee), they'll ship you your data on a hard disk if you need it in a hurry. Having cheap access to your 4TB of data may not be such a great deal if it takes 3 months to download it.
I'm fairly certain that the calculator you linked is incorrect. Unless I'm misunderstanding the Glacier pricing page, it should cost at most $1,240 to store 10TB for 30 days and then retrieve it over the internet (even at the expedited retrieval rate).
The calculator shows the same setup (albeit with some of the rate information hidden) as costing $3,934.91
Backblaze B2 has no retrieval cost, so the same setup (storing 10TB for a month, then downloading) would cost only $250 (less than 1/3 of the cheapest Glacier option). Glacier is really only a viable option if you're sticking within the same region (where transfer is free) or you plan to literally never retrieve your backups.
Otherwise, B2 and other alternatives are much cheaper.
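To make the arithmetic explicit, here's a back-of-envelope sketch that reproduces the $1,240 and $250 figures above. The per-GB rates are the ones I believe were published at the time (Glacier $0.004 storage, $0.03 expedited retrieval, ~$0.09 transfer out; B2 $0.005 storage, $0.02 download) - double-check current pricing pages before relying on them:

    GB = 10_000  # 10 TB

    # Glacier: storage + expedited retrieval + internet transfer out
    glacier = GB * 0.004 + GB * 0.03 + GB * 0.09   # 40 + 300 + 900
    # B2: storage + download (no separate retrieval fee)
    b2 = GB * 0.005 + GB * 0.02                    # 50 + 200

    print(f"Glacier: ${glacier:,.0f}  B2: ${b2:,.0f}")  # Glacier: $1,240  B2: $250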
I'm trying to use the nomenclature from the Glacier pricing page, where "transfer" refers to the cost per GB of downloading the data from the remote provider. Glacier has an additional "retrieval" cost per GB that B2 and others do not (most likely because they have to physically retrieve a tape or HDD with your data on it).
Sorry, I don't get it. With B2 I have to pay if I want to, say, download 100GB of data from B2. From what I read, you say this isn't so, but the pricing page states otherwise.
As a suggestion, I use Glacier (with Arq) as a tertiary backup. If I have to go to that it means a lot of other things failed and I'll be willing to pay the price.
FWIW, Amazon Drive will store any number of photos (or anything it thinks is a photo - I haven't looked to see whether it checks the file name or verifies that it's a valid jpeg/png) for free. So if you're really backing up your photos, that's an option.
Yeah, I organized my backup around it. Then they decided that maybe they won't offer unlimited storage after all, and I better pay up or they are going to delete my files. Not falling for that again.
I just checked the terms of service and they don't at all. They did ban rclone a while back but they said it was because of poor security in rclone itself, and haven't banned anything else.
Interestingly they say you can't use the storage for commercial purposes (whatever that means).
rclone was sharing an API key publicly. Not really a security flaw but understandable to block. Except that they shut down the ability for anyone to get API keys a while ago, so users can't get their own.
I wonder what that same 1 TB of data costs on the "open market" (or how it is even priced).
Say I'm a smallish ISP on the east coast and want to send 1 TB to another smallish ISP on the west coast. I probably have a contract with, I don't know, AT&T or whoever owns the cables (I'm from Europe so I have no clue; in Germany it would be Telekom). I'm guessing it would probably cost cents (and that would bring my data across the continent, not just out of Google's door)? Definitely nowhere near $80.
In the open market you either get transit (you buy bandwidth from a carrier) or peer (you exchange traffic with a provider) with other networks.
There are also IXs, where you can peer with a lot of different networks in one place (DE-CIX is the biggest for example, located in Frankfurt).
Transit is billed at the 95th percentile: if you commit to 1 GbE on a 10 GbE uplink, you pay for the 1 GbE outright and can burst up to the 10 GbE the uplink provides. If your 95th-percentile usage at the end of the month is over 1 GbE, you are billed for the excess at a per-mbps rate.
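A minimal sketch of how 95th-percentile billing works; the sampling interval and numbers are illustrative, not any carrier's actual method:

    def billable_mbps(samples_mbps, commit_mbps):
        ranked = sorted(samples_mbps)
        p95 = ranked[int(len(ranked) * 0.95) - 1]  # top 5% of samples are ignored
        return max(p95, commit_mbps)               # you pay at least the commit

    # Mostly idle with bursts to 10 Gbps for ~4% of the month's samples:
    quiet = [200] * 8000 + [10_000] * 300
    print(billable_mbps(quiet, commit_mbps=1000))   # 1000: bursts fall in the free top 5%

    # Bursting for ~10% of the month pushes the 95th percentile up:
    bursty = [200] * 7500 + [10_000] * 800
    print(billable_mbps(bursty, commit_mbps=1000))  # 10000: the excess is billed per mbps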
Now, to answer your question, it basically works like that:
You are present in NYC and you want to reach a customer on the west coast who uses Comcast. You would buy transit in your NYC facility from Comcast to get a direct path to them; they take care of everything else. You don't have to lease/rent/buy the dark fibre (cables) from the east coast to the west coast - the provider (in this case Comcast) is doing that already. If you don't buy transit from Comcast, but instead from some cheap carrier (say Cogent), they will do the same BUT not guarantee a specific bandwidth to Comcast: you only get 10 GbE to Cogent, and you don't know whether you can push the full 10 GbE through to Comcast.
Transit providers also vary vastly on price. DTAG (Telekom) for example is ridiculously expensive (5x the prices of regular transit providers).
Let's say you want a decent pipe (transit) with a 1 GbE commit on 10 GbE; you would pay something like $500-$600 for the 1 GbE commit and after that around $0.50 per additional mbps. Additionally you need a router, preferably two (redundancy), plus fiber cables, transceivers, etc. All those things are very expensive.
You'll also notice that you can't buy transit with per-TB billing, yet most hosting providers offer you per-GB/TB billing: they commit to a certain amount of bandwidth and hope they've calculated their commitments against actual usage well. Anyway, the $80-120 per TB the cloud providers are charging is a bad joke.
With enough competition, a smallish ISP would pay as low as $0.25-$0.40 per mbps per month, which is on the order of $1 per TB. More expensive offers are around $1-$2 per mbps per month, which is still single digits per TB.
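Sanity-checking that conversion: one mbps sustained flat for a 30-day month moves roughly a third of a terabyte, so per-mbps pricing divides out to per-TB pricing like this:

    SECONDS_PER_MONTH = 30 * 24 * 3600                       # 2,592,000
    TB_PER_MBPS_MONTH = 1e6 / 8 * SECONDS_PER_MONTH / 1e12   # ~0.324 TB per mbps-month

    for dollars_per_mbps in (0.25, 0.40, 2.00):
        print(f"${dollars_per_mbps:.2f}/mbps -> ${dollars_per_mbps / TB_PER_MBPS_MONTH:.2f}/TB")
    # $0.25/mbps -> $0.77/TB, $0.40/mbps -> $1.23/TB, $2.00/mbps -> $6.17/TB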
Intentionally not linking to any of the companies I found with just a quick Google search (I've mislaid the PDF for carrier-to-carrier pricing from a major provider): €40/month for 30TB of traffic, a 1G uplink (1Gb/s guaranteed), and 2x500GB SSD on an i7.
This is one of the easiest ways for cloud providers to make money. I very often see people doing lots of advanced calculations for compute and choosing providers based on that without really thinking about bandwidth usage.
Famous last words, "Oh, it's a web service so bandwidth shouldn't be a big deal."
They could simply ban adult websites in their ToS; I doubt that's the reason. Also, seedboxes are nowhere near useful if you'd still pay $10/TB while you can get traffic for free at OVH or Online/Scaleway.
It's most likely vendor lock-in and people are dumb enough to pay for it.
The reason is that bandwidth is a solid profit center for the cloud majors. Amazon initiated it, and the rest are going along in cartel fashion; nobody wants to disrupt one of the lucrative angles by driving that profit center toward zero.
There's no other rational explanation for why Azure or Google aren't hammering AWS on bandwidth cost as a competitive angle. They're basically conspiring in the industrial sense, no different from airlines that somehow magically agree to simultaneously raise prices, or department stores that for decades all agreed not to discount the perfume section of their stores, etc.
No need for conspiracy theories. When one market participant anchors a price at a high premium, you don't need to collude to know that undercutting them is a scorched earth strategy that causes both of you to lose. Part of this is because financial markets value your revenue more than market share, and undercutting bandwidth pricing hurts the former more than it helps the latter.
High bandwidth costs also make it more costly to move some of the data processing to companies which offer very cheap dedicated servers.
--
But it's not fair to say that the price from Google is high just because some random hosting company gives you unmetered 500mbps for $99/month. As we have seen with many unlimited cloud storage offerings, some companies make unsustainable offers to attract customers. There are different strategies behind this: some just count on the majority of customers not using everything they could; others know a price hike is inevitable but believe customers will stick around anyway.
Save your outputs in the cloud. There is zero cost to access S3 object storage from EC2. It's only if you insist on pulling down 1TB files to your local workstation that you run into this. This seems like a self-inflicted problem.
The problem is often I need that data locally. With video editing in particular, I can't work with data that's not local and I can't run my editing software in the cloud because I'm not aware of any kind of remote workstation that offers colour management and whatnot.
I think part of the reason is people look at Linode/Vultr/DO and presume that 1/2TB for $5 is the market price (and then quote HE.Net/Cogent/Hibernia offering 10G to certain colos at $2-5k pm as backing proof), or dedicated servers with 10TB for $50 pm, etc. Linode/Vultr/DO mainly work off an average usage of 1%-2%, although at least they don't stop you deploying more boxes as you reach the limit; you can get clever, and one would presume they don't invoke some ToS clause. I imagine GCP, AWS, and Azure bandwidth cost is closer to the truth of running a network that is resilient and can scale beyond 100G - the transit or fiber costs are probably only a tiny fraction when pushing beyond 100G per region, versus your low-end VPS provider. Let's not forget the epic fail of Linode's DDoS handling. I do agree, though, that in comparison it sounds expensive.
No, I can assure you that Linode, OVH, and DO are making money per GB even at 1TB per $5.
Cloud providers are expensive because they want vendor lock in. They want to make it artificially cheaper to use their own services (elasticsearch, etc) over something else. And they want a high switchover cost when you want to transfer all of your data to another provider.
- An unmentioned alternative to this pricing is that GCP has a deal with Cloudflare that gives you a 50% discount to what is now called Premium pricing for traffic that egresses GCP through Cloudflare. This is cheaper for Google because GCP and Cloudflare have a peering arrangement. Of course, you also have to pay Cloudflare for bandwidth.
- This announcement is actually a small price cut compared to existing network egress prices for the 1-10 TiB/month and 150+ TiB/month buckets.
- The biggest advantage of using private networks is often client latency, since packets avoid points of congestion on the open internet. They don't really highlight this, instead showing a chart of throughput to a single client, which only matters for a subset of GCP customers. The throughput chart is also a little bit deceptive because of the y-axis they've chosen.
- Other important things to consider if you're optimizing a website for latency are the CDN and where SSL negotiation takes place. For a single small HTTPS request, doing SSL negotiation on the network edge can make a pretty big latency difference.
- Interesting number: Google capex (excluding other Alphabet capex) in both 2015 and 2016 was around $10B, at least part of that going to the networking tech discussed in the post. I expect they're continuing to invest in this space.
- A common trend with GCP products is moving away from flat-rate pricing models to models which incentivize users in ways that reflect underlying costs. For example, BigQuery users are priced per-query, which is uncommon for analytical databases. It's possible that network pricing could reflect that in the future. For example, there is probably more slack network capacity at 3am than 8am.
I like your thinking, but one minor clarification: BigQuery actually introduced Flat Rate [0] (a year ago) and Committed Use Discounts [1] (Amazon RIs are similar), since that's what some enterprises want. These are optional and flexible.
I personally still hold that pay-per-use pricing is the cloud native approach [2], the most cost-efficient, and the most customer-friendly. However, it's unfamiliar and hard to predict, so starting out on Flat Rate pricing as a first step makes sense.
(I work at Google and was part of the team that introduced BQ Flat Rate.)
The problem with bundling is that it stops reflecting underlying costs and creates incentives for customers that skew the customer population.
Contrived example of this: since most HDD workloads are IOPS-bound, you decide to sell IOPS bundles and give space away for free. Before long, all your customers are backup companies with low IOPS and high space usage. Your service runs at a loss while customers do nice price arbitrage on top of it.
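A toy numeric version of that contrived example - every number here is invented purely for illustration:

    iops_price = 0.10   # $/provisioned IOPS/month; space given away "free"
    drive_cost = 25.0   # $/month to run one 10 TB drive doing ~200 IOPS (invented)

    customers = {
        "web app":     {"iops": 200, "tb": 0.5},
        "backup shop": {"iops": 5,   "tb": 10.0},
    }

    for name, c in customers.items():
        revenue  = c["iops"] * iops_price
        hardware = c["tb"] / 10.0 * drive_cost  # share of the drives they fill up
        print(f"{name}: revenue ${revenue:.2f}, hardware cost ${hardware:.2f}")
    # web app: revenue $20.00, hardware cost $1.25
    # backup shop: revenue $0.50, hardware cost $25.00  <- the arbitrage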
Same goes for all aspects of computing platforms for sale: CPUs, RAM, Networking, HDDs, SSDs, GPUs.
Two additional problems are bin packing and provisioning: you need to sell things in such quantities and ratios that you can actually effectively utilize your hardware configurations. You need to order and design your hardware in a flexible manner to be able to adapt for changing ratios of component needs due to changing customer demand.
So it's easier to run "pay for what you use plus profit" pricing, but some customers don't like it due to perceived complexity and potential unpredictability.
Back when I was in Google Netops we had a healthy (unofficial) fear of having the backbone connect all the way around the world in a circle, especially with high-enough bandwidth links that both directions would start to look attractive to traffic management.
I imagine TheSockStealer was wondering about India-to-Iran ping time through Google's private network (the one in the diagram upthread), which doesn't appear to have a connection from Chennai to Amsterdam.
The packets in the trace you posted were presumably routed by someone besides Google. Or did you run that traceroute from inside GCP?
Or I could be misunderstanding the diagram and maybe Google has some connections that aren't shown. I don't really get what the ">100" in "edge points of presence >100" means in the legend.
You know, when I saw that map it made me wonder which nation (or nations) those two large points of presence in the Gulf represent. It's Iran, isn't it?
Egress pricing for Google and AWS (sans Lightsail) continues to be one of the biggest price differences between them and smaller hosts such as Linode and DigitalOcean.
I think Google missed an opportunity here. They should have cut the prices more significantly for standard tier (sacrificing performance) to make this more competitive.
Right now Linode's and DO's smallest $5 plan offers 1TB of transfer, which would cost $85.00 on Google's new standard plan.
The extremely high egress prices for any cloud don't seem to have hurt their popularity much so far. So I suspect they all don't want to give up their cash cow.
Compared to a VPS or renting a dedicated server, the egress costs can be enormous if you come even close to using the traffic contingent you get with many VPS or dedicated hosts.
Just as a comparison, a dedicated server with Hetzner for ~50-70 EUR per month includes 30 TB of traffic, which would be at least 2,400 EUR on the Google Cloud.
Huh? VPS and dedicated hosts usually charge below a penny per gigabyte over your traffic limit, and if you open a support ticket they will usually work with you towards something in the sub-$4 per TB range.
Are major clouds actually facing any real competition because of bandwidth pricing?
On another note, Softlayer used to have generous multiple TB allocations for their dedicated servers but they took them away. It's likely that anyone needing high bandwidth, especially for static content, will have other edge networks and CDNs in place for that use case.
> biggest price differences between them and smaller hosts such as Linode and DigitalOcean.
I suspect that the traffic quota on the smaller hosts' plans is sold below their own cost - assuming (correctly) that the vast majority of customers won't use anywhere close to the limit.
The traffic quota also scales much slower than the price as you move to bigger, more expensive plans.
I guess that it's fine if you reach the limit on a few servers, but if you rented 1000 x $5 droplets from Linode/DigitalOcean, and maxed out all of their traffic quota, you would get kicked out. Has someone tried to use these hosts just for cheap file servers?
I don't know about DO Droplets but you can get a reasonably priced dedicated server* at OVH and knock yourself out piping 4-8 TB to the world for ~$40/mo.
* It's hard to push a lot of useful data with a tiny VPS.
In late 90s, we configured AboveNet as the preferred backbone for the world’s largest Windows Media streaming network thanks to “cold potato routing” providing such a competitive advantage for broadband video. This allowed us to build out video distribution hubs rather than “edges”, giving a cost of service a fraction of competitors’.
I’m surprised it took two decades to see this network policy choice productized for mass tech market.
Seeing the map of Google's network makes me appreciate more the impact of undersea cables.
If you're interested in the history of earth-scale networks I recommend this free documentary on Cyrus Field and the heroic struggle to lay the first transatlantic cable: https://www.youtube.com/watch?v=cFKONUBBHQw
Related, one of my favorite Wired Magazine stories -- and favorite pieces of long form journalism, period -- Neal Stephenson goes out on a boat to string some cable: https://www.wired.com/1996/12/ffglass/
How is this different from paying more for a fast lane, which net neutrality is supposed to prevent?
Edit: there seems to be a bit of confusion what I'm referring to. I'm referring to the Open Internet Order of 2015 [1] which states:
18. No Paid Prioritization. Paid prioritization occurs when a broadband provider accepts payment (monetary or otherwise) to manage its network in a way that benefits particular content, applications, services, or devices. To protect against "fast lanes," this Order adopts a rule that establishes that: A person engaged in the provision of broadband Internet access service, insofar as such person is so engaged, shall not engage in paid prioritization.
Since nobody has so far managed to hit the target:
This is Google acting as your network provider. You first choose Google, and then take your pick amongst their offerings. It's pretty much the same situation as it has always been for residential ISPs, where you have always had providers offering tiers at various speed/price points.
Net neutrality becomes relevant when you don't have a say in the choice of network. Practically, this will usually mean your audience's ISPs (called a "broadband provider" in that order). You get your servers set up at AWS, or GCE, or Linode or wherever you like the terms. But then, no matter what, you get an e-mail from some residential ISP in Florida, telling you that it'll cost you XX$ if you want to stream to your customers on their network at 1080p.
The problem is pretty obvious: the consumer choosing the ISP is not going to take your welfare into consideration, especially if your company doesn't even exist yet. The ISP won't be able to squeeze google, facebook, or amazon. But anything not used by 50%+ of the population simply has no power in these negotiations.
The result will be ISPs capturing almost every last cent you can earn amongst their customer base. If your service competes with any existing offering by either the ISP or a company with deeper pockets than yours, you will never have a chance to compete on quality.
Oh, and you won't just get an email from that ISP in Florida. You'll get an email from every single ISP in the world. Part of the startup decision making process will be like MadTV (if anybody remembers): should we invest these $2,000,000 into our product, or is it better spent on getting access to the markets in Iowa and eastern Michigan? Should we continue haemorrhaging money in southern Europe, or is time to cut our losses and go dark on that continent?
Google is not a broadband provider in this context.
This simply reflects the reality that different network connections have different costs and different performance. It costs Google more to carry your traffic over their own network than to hand it off to another network near your server, and the same in reverse: accepting traffic for your server only from nearby networks saves costs, and they pass the savings on to you.
The problem with "network neutrality" is that the network was never neutral. Using a different connection that is shorter, or otherwise has less latency, and is less congested provides better performance, but that's rarely been exposed to end users. In Europe, it used to be common for customers to pay a different rate for EU traffic and out of region traffic, due to cost of transatlantic bandwidth, but I think that's mostly fallen out of favor as global bandwidth increased.
It's not differentiating by content, you're paying for a different service tier. It's like saying getting faster internet by paying for Comcast Business violates net neutrality.
One of the primary consumer arguments in favor of net neutrality was that ISPs advertised bandwidth at certain speeds, and therefore were obligated to provide that speed to any point on the Internet.
Using that same argument, Google should be obligated to provide maximum interface speed from any of their compute instances to any point on the internet, and should not be allowed to sell a tier that gets closer to that ideal.
> One of the primary consumer arguments in favor of net neutrality was that ISPs advertised bandwidth at certain speeds, and therefore were obligated to provide that speed to any point on the Internet.
Really? Even a good ten years ago, when I was looking for apartments with good DSL speeds, it was always well explained that the listed bandwidth was an ideal number that didn't apply to things outside the ISP's direct control.
Nope. The core tenets of net neutrality have always been about access and the priority of content going between networks. The only case where net neutrality would regulate the physical pipe like this is edge providers and backbone service access: e.g., a fair price would have to be consistent with what everyone was being charged. You couldn't charge Netflix 100 million dollars for access to inter-network backbones while simultaneously charging Disney only 1 million dollars for the same thing. This matters because most large-scale network providers in the USA are also content companies.
To quote a nice layman summary from the EFF/Save The Internet:
Net Neutrality is the internet’s guiding principle: It preserves our right to communicate freely online.
Net Neutrality means an internet that enables and protects free speech. It means that ISPs should provide us with open networks — and shouldn’t block or discriminate against any applications or content that ride over those networks. Just as your phone company shouldn’t decide who you call and what you say on that call, your ISP shouldn’t interfere with the content you view or post online.
Without Net Neutrality, cable and phone companies could carve the internet into fast and slow lanes. An ISP could slow down its competitors’ content or block political opinions it disagreed with. ISPs could charge extra fees to the few content companies that could afford to pay for preferential treatment — relegating everyone else to a slower tier of service. This would destroy the open internet.
No real mention of advertised speeds being adhered to. Speed in the context you were using it also wouldn't be correct: speed in the context of net neutrality means no prioritized slowdowns or speedups in exchange for cash, essentially. Again, I think that has more to do with access and not actual per-megabit speeds.
How are advertised speeds and maximum speeds equivalent? Google is advertising speeds to their customers, and the customers are selecting a speed; Google's obligation is to provide service at that speed, similar to how ISPs should provide service at the selected speed. This isn't an ISP choosing to throttle certain services - this is a service choosing to run at a certain speed.
Nothing is being discriminated or artificially slowed down. You pay to use their private network. Standard tier means you leave their private network and move onto the public internet, which would be more open to ISP net neutrality issues if anything.
You are aware that the Internet is a collection of different companies running different networks, right? Google's network that is serving public traffic is part of the Internet. The inter-network routing decisions that are made are literally the crux of the net neutrality arguments made so far.
Yes, networks comprise the internet, but it's not free to run on any of them. Some are better than others and cost more.
As a typical consumer, this negotiation is out of your hands with ISPs balancing performance and profit (although tilted towards the latter) but as a Google Cloud customer, you now have some control over how much of their network you use - all the way to the edge or just regional - without any discrimination of the traffic itself.
The concern about fast lanes was mostly about unfair prioritization of traffic over a single link. In this case, Google is offering different links that you can choose to send your traffic over, and they're letting you choose rather than choosing for you. Probably both the standard and premium networks are uncongested on Google's side, although there may be congestion elsewhere in the Internet (that Google can't be blamed for) that slows down standard traffic. The standard path also probably has a higher RTT which may lower TCP throughput.
IMO net neutrality was never about equality of outcomes (there's probably not really any way to equalize performance short of socializing the Internet), but that seems to be a common misconception.
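To illustrate the RTT point above, here's a rough sketch using the Mathis et al. TCP throughput model (rate ≈ MSS / (RTT · √loss)); the MSS, RTT, and loss numbers are made up for illustration, not measurements of either tier:

    import math

    def tcp_throughput_mbps(mss_bytes, rtt_s, loss):
        # Mathis model: throughput scales with MSS and inversely with RTT * sqrt(loss)
        return mss_bytes * 8 / (rtt_s * math.sqrt(loss)) / 1e6

    # Same loss rate, but the standard-tier path has 3x the RTT:
    print(tcp_throughput_mbps(1460, 0.030, 1e-4))  # ~38.9 Mbps at 30 ms RTT
    print(tcp_throughput_mbps(1460, 0.090, 1e-4))  # ~13.0 Mbps at 90 ms RTT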
Google's network (especially w/ BBR[1]) is amazing and this makes the price point more approachable for other use cases (like running your own CDN[2]).
Are they exclusive owner/builders of any of those? I know I've seen a number of cables which Google was a part investor/owner in, but sole ownership would be amazing.
On the other hand, if anyone could use up an entire cable, it would be Google, and there are no particular financial reasons to try to diversify - they are big enough to self-insure any cable losses.
and each dedicated fiber can run multiple "colors" of light, so multiple customers can have their own segment of the traffic fully isolated from each other.
There are quite a few multinationals that have undersea cables as part of their network infrastructure (think finance). It's sometimes worth the expense just to lay cable rather than tunnel through the public internet, for peace of mind. On a smaller scale, there are plenty of people running long-distance microwave networks and paying for direct fiber links outside the public internet.
Great idea, but it's still way too expensive. For Neocities I pay between $0.01/GB (the fancy CDN stuff) and $0.0024/GB (an IP transit provider). That's the market rate. I would have given them a pass at $0.02-$0.03, but not at $0.085.
If you pay this "public internet" rate, you're paying essentially 2007 transit prices. I hope you don't need to ship a lot of traffic. I hope you don't need to compete with someone that's paying market rate.
I would love to use GCS for our infrastructure, but with rates like this, it's hard to imagine us ever switching.
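Putting those per-GB rates side by side as cost per TB shipped (the GCP figure is the standard-tier rate discussed elsewhere in the thread):

    # $/TB for 1 TB of egress at each quoted per-GB rate:
    for label, per_gb in [("fancy CDN", 0.01), ("IP transit", 0.0024), ("GCP standard", 0.085)]:
        print(f"{label}: ${per_gb * 1000:.2f}/TB")
    # fancy CDN: $10.00/TB, IP transit: $2.40/TB, GCP standard: $85.00/TB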
> There are at least three independent paths (N+2 redundancy) between any two locations on the Google network, helping ensure that traffic continues to flow between these two locations even in the event of a disruption. As a result, with Premium Tier, your traffic is unaffected by a single fiber cut. In many situations, traffic can flow to and from your application without interruption even with two simultaneous fiber cuts.
What does this mean? N+2 redundancy should mean that even if two paths go down, service will not be affected at all, no?
N+2 usually means one spare for normal operations (deploys, maintenance) and one for disruptions. If a disruption happens while there is ongoing maintenance, you're resilient to one event only.
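For intuition on why three paths is usually enough, a naive back-of-envelope (assuming path failures are independent, which real-world fiber cuts often aren't):

    # If each of three independent paths is up with probability p, the chance
    # that all of them are down at the same moment is (1 - p) ** 3:
    for p in (0.99, 0.999):
        print(p, f"{(1 - p) ** 3:.1e}")
    # 0.99 -> 1.0e-06, 0.999 -> 1.0e-09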
The most interesting thing to me here is that they can actually deliver a cheaper service by going over the public internet. I would think their private net would be cheaper because they don't have to pay for transit.
I guess transit is still cheaper than maintaining one's own lines...
Seems like a shaky inference? Hypothetically, two tiers could have the same cost to provide, but have different profit margins. Market segmentation tends to be more about value to the customers and what they're willing to pay.
Do you really think Google didn't study it before building their network? You think they hired Vint Cerf to cook omelettes at the cafeteria, or something?
There was a leak saying that Google had no internal cost accounting until ~2011, although most of the network has probably been built after that point.
I thought I read an article about an online game company who was doing something similar with their users; trying to get their users on their private network as soon as possible. Does anyone else remember that article on HN?
Reading this, I'm just stunned by how many stacks, layers, pieces of hardware, technologies, and bodies of knowledge these bytes had to travel through so I could read them on a laptop across the globe.
This is in fact the "original" case of net neutrality. Recently, the discussion was mostly about your ISP charging - either you or some site - to prioritize certain content, like streaming.
But in this context, it is about the providers of the internet's "backbone" providing the same service to everybody. Usually, different companies and organisations had their public nets and allowed each other's data to flow through them. They made so-called "peering" agreements, which are a bit opaque; ISPs have to pay certain amounts depending on the traffic they cause. Then there are private connections that institutions use for themselves.
This is Google renting out their "private" net. What is different is that before, not many organisations had such massive private nets, and while you could buy traffic on them, this was something typically negotiated by large organisations and companies (I know certain research institutes pay to have their data routed over a "direct cable" instead of the regular network). Now everybody can do so, and can choose to pay for the faster route.
What they could have done instead would have been to sign peering agreements and add their connections to the "public" net.
Now, is this illegal or immoral? Well, they certainly have the right to rent out their private net. People have been doing that before. But I think net neutrality is not a binary question, but a continuum. If being able to afford a faster route means your site is faster, that violates net neutrality IMO.
Suppose you are an ISP with a new streaming video service as a customer -- ComFlix. You sell a 1Gb/s pipe to ComFlix. You buy a 10Gb/s pipe from Google.
If Google limits, deprioritizes or drops traffic from ComFlix, that's Google committing a NN violation.
If you limit ComFlix's 1 Gb/s pipe to 1 Gb/s, that's not a NN problem.
If Google limits your 10Gb/s pipe to 10Gb/s, that's not a NN problem.
If Google offers to replace your 10Gb/s pipe with a 20Gb/s pipe at the same price on the condition that video streams will be intercepted and limited so that they cannot support more than 480P, and you accept, both you and Google are violating NN.
If you ask Google for a 10Gb/s pipe but they refuse to sell it to you solely because ComFlix is a competitor for YouTube, you have an interesting court case.
Basically, NN violations occur when someone drops packets that they otherwise would have carried, based on the content, source or destination of those packets. But paying more for a bigger or more direct pipe by itself is not an NN problem.
This has nothing to do with net neutrality. Google is giving you the option to use their private network (which is the default) all the way to the edge, or to stop using it after the local region. They're not discriminating traffic based on its content or who you are.
It's the same as installing a direct line from your house to Google's DC. It's a private network that you pay more for, or you can just use the internet.
I worked on this product release, at Google. Can you guide my eye please where we don't specify TB? I'll get that fixed. I took a look and it says TB. @nealmueller if that's easier for you.
> Can you guide my eye please where we don't specify TB?
I would be extremely surprised to find that the new pricing is 1/1000th what the old price was. It's not really $0.105/TB is it?
I think the key confusing factor on the linked page is there is no "per" anything detailed. To answer your request for guiding your eye: http://imgur.com/a/9je4T
Thanks for taking the time. I figured someone at Google would read my comment and review the pricing table; I did not expect to engage with someone directly involved in the product release.
You are right, it does say TB everywhere, and does not mention GB anywhere. But, if this is correct and all prices are per TB, that would be a ~1000x price cut compared to your current pricing, which is per GB.
A naive question: when we say private networking, in Google or Amazon terms, does it actually mean Google buying or laying down fibre from DC to DC, much like OVH does? Or are they renting/buying dedicated links in multiple exchanges?
I just think the funny thing is that instead of taking the hit on price for the current level of service, their shiny new feature is "standard" networking! Woot!!
https://cloud.google.com/terms/ has explicit language around termination, disagreements around billing, breaking the law, etc. Sections 4 and 9 are probably the most relevant. Again, I'm not a lawyer, but read the indemnification and so on. Unlike other services at Google, we have real, paid, 24x7 support. I understand the general concern, but I think it's misplaced in this case.
[Edit: whoops, missed your comment downthread. I'll circulate internally with legal]
Here are Codero's terms.[1] There's a procedure for dealing with breaches of the agreement, with 7 day and 30 day notices to cure. It's not one-sided, like Google's: "Google may terminate this Agreement for its convenience at any time without liability to Customer."[2] Do you want to bet your business on that?
Codero, although they now offer cloud services, started as a raw rackmount PC service, which is what I use them for. That business is generally run as equipment leasing - you lease for a term, and can renew. Those businesses generally have reasonable terms of service, because they're almost entirely business to business and the other side has lawyers.
Not at all. Offering different bandwidths for different prices has nothing to do with net neutrality. Net neutrality is the principle of not discriminating traffic based on destination or content by blocking or throttling.
> not discriminating traffic based on destination or content by blocking or throttling
This assumes all traffic is of the same kind, such as web traffic. For other protocols or use-cases, things like latency have a big impact on the content.
And Comcast Business charges more than regular internet, that's not a NN violation either. Charging more for more bandwidth is what ISPs do all the time, that doesn't have anything to do with net neutrality. Net neutrality has to do with packet discrimination: blocking or throttling traffic based on its content or destination. Google here does neither.
Right now (in Alpha), you have to put your own Regional Load Balancer in front of GCS [1]:
> Standard tier for Google Cloud Storage can be configured by adding the bucket as the backend of cloud load balancer and then setting the network tier on the forwarding rule. For this release, you cannot directly configure the tier on the Google Cloud Storage bucket.
Disclosure: I work on Google Cloud (but not on this)
This precedent (including CloudFlare's new private routing) doesn't bode well for the public internet.
Imagine the day when everyone has to use private routing and the public internet barely even gets maintained anymore.
Of course, the public internet also suffers from the tragedy of the commons, and not much is happening on that front - like how most people are still behind ISPs that let their customers spoof IP addresses, and nobody has a reason to give a shit. We're getting stuck with the worst of both worlds. It's a shame.
Net neutrality is violated, IMO, when only those who can afford it get faster routing [1]. The only form of price discrimination allowed is at the ends: you pay a certain amount for bandwidth and volume to your ISP or hoster. In between, if you are an end customer, there is just this amorphous peering-net internet blob, where you pay market price to have stuff routed. You shouldn't care what route it takes, and the net shouldn't care what data you are sending. There are no pricing tiers.
Sure, Google is allowed to route stuff on their private net, or on the peered net. They can let other people use their private network, it happens all the time. They are not violating the letter of anything. But it is a slippery slope, and it can lead to a two-class internet, where some people can afford the "good" internet and some people can't.
[1] Edit: and I realize, in parts that is already the way it is. If you are doing high-frequency-trading, or sending huge amounts of research/big-data data, then you can pay somebody beyond paying your ISP for volume and throughput, and they will give you a custom connection. Net neutrality is not a black-white question.
This isn't about the last mile so there is presumably lots of competition. And it seems like some companies being large enough to build their own is another form of that?
But I'm just responding with another armchair argument. It would be good to see some actual numbers about Internet health.
I suppose this was inevitable; the costs of cold potato routing must be prohibitive, especially if we consider more exotic places - for example, riding the GCP network from just a few milliseconds away in Bangkok all the way to a tiny GCP compute instance in London, on GCP's network practically the whole way (excluding the first three hops locally). The GCP network is awesome. I am surprised we're only seeing a small price reduction for the standard offering; perhaps the idea is to eventually make Premium 2-3x the price, a premium worth it IMO considering one would push most bandwidth-heavy assets onto edge CDNs anyway.
You know what? If you aren't ready to invest in infra, you're just out for a quick buck and not worth a damn. I won't even mention cloud in a meeting anymore. It is plain fraud.
So is this like the app engine price hike debacle a few years ago but with "better" messaging? So "Try Network Service Tiers Today" means "Migrate to Standard Tier today to avoid the massive price increases coming soon"?
But fundamentally they just massively underestimated costs and need to find a way to adjust pricing. With app engine it was very conveniently beta, so they used the end of beta for the price hike. For this, they're having to invent a "Premium" and a "Standard" Tier, and hey guess what, everyone has been using "Premium".
My experience so far with Google has been "Use this now, and we'll have a massive price hike later, if we keep it around at all."
When will the AppEngine pricing change from six years ago stop being relevant? Has there been another hike since? Pricing AE is very different from pricing bandwidth. Even more concretely, there weren't historical data or industry trends to draw from.
They announced the new prices [1]. There doesn't seem to be a price increase for most scenarios, and the price is slightly lower for some. There appears to be a $0.01 increase if you currently push more than 1 TB but less than 10 TB of traffic across continents or within regions like Asia or Oceania.
So you're saying that the fact that they have new prices now refutes my suggestion that there will be price hikes in the future? I'm saying Google has a habit of pricing something, and then hiking the price later. I said specifically, above, that this announcement is early warning of price hikes: "migrate to standard tier now before the price hikes". Your rebuttal is "but the prices right now haven't gone up"?
One price increase for a beta product hardly makes for a habit of increasing prices. If anything, prices in cloud computing have continuously been getting lower across the entire industry, and with margins being fairly large and the market getting more and more competitive, I imagine that trend will continue.
If they'd wanted to increase prices for the premium tier, this would've been the time to do so. Instead, they kept the prices roughly the same, with small savings for some scenarios and a tiny increase for edge cases, while introducing a cheaper tier that has feature parity with the competition and is quite a bit cheaper. Whatever your past issues with Google, I think you're way off on this.