New EC2 M3 Instance Sizes and Lower Prices for S3 and EBS (amazon.com)
103 points by pentium10 on Jan 21, 2014 | 47 comments



One of the things I hate most about AWS is their refusal to give monthly pricing for EC2 instances the way DigitalOcean does. Instead I have to use a web form to estimate my monthly/quarterly costs. One of the reasons I might move to DO is that I can do back-of-the-envelope calculations so easily when projecting my costs. The same goes for bandwidth costs.


Their billing dashboard (https://console.aws.amazon.com/billing/home?#/) provides a nice breakdown and compares it to your previous month.

You are in a dynamic environment, a relatively static environment, or a mix of the two.

For static environments you should be buying reserved instances in year-long blocks to save yourself the premium of on-demand pricing.

For dynamic environments, are you really using services by the month? We scale with load and wouldn't care about monthly costs shown on their pricing page.

In our situation, a mix, we have year-long reserved instances (some three-year), and our scaling is handled by on-demand or spot instances, which varies and is never monthly. Some months we have only a couple of days of on-demand use; other months it could be a week or two. If we get to the point where there is a lot of dynamic use, we add another reserved instance to the base cluster.

So I understand your frustration about wanting monthly pricing for a quick glance; however, 'monthly' hosting is the real-world use case least applicable in Amazon's eyes.

On bandwidth costs, I hear you. We would love to not charge clients for bandwidth, and we typically don't unless it's over, coincidentally, 1 TB.

For my personal DO servers for side projects/moonlighting, I wish they offered year-long reserved pricing like Amazon's.


Check out http://ec2instances.info

It will be even better when this PR is merged (soon) https://github.com/powdahound/ec2instances.info/pull/37


It'd be cool if the 'annually' version could also take reserved pricing into account, since the 1-year reserved option would be the sensible choice if you wanted to use a machine as an always-on VPS-style box. E.g. the m1.small instance is $797.16 annually (what the page currently reports), but with a 1-year "heavy utilization" reservation it ends up being a more reasonable $291.64 (including both the cost of the reservation and the instance cost).
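For anyone wanting to reproduce the comparison, here's a minimal sketch. The $169 upfront / $0.014-per-hour split for the heavy-utilization reservation is my reading of the price list at the time and may differ by region; the on-demand rate is just backed out of the $797.16 annual figure above.

```python
HOURS_PER_YEAR = 8760

# On-demand hourly rate implied by the $797.16 annual figure
on_demand_hourly = 797.16 / HOURS_PER_YEAR   # ~ $0.091/hr

# 1-year heavy-utilization reservation: upfront fee plus a discounted
# hourly rate (assumed split; check your region's price list)
upfront = 169.00
reserved_hourly = 0.014

reserved_annual = upfront + reserved_hourly * HOURS_PER_YEAR
print(f"on-demand ${on_demand_hourly * HOURS_PER_YEAR:.2f}/yr "
      f"vs reserved ${reserved_annual:.2f}/yr")
```

Those assumed numbers happen to sum to the $291.64 quoted above, which is why the reservation looks so much cheaper for an always-on box.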


The calculator provided by Amazon does have the option of factoring in reserved pricing, and it will give you the monthly and annual charges as separate line items.

http://calculator.s3.amazonaws.com/calc5.html


We posted a calculator that does exactly this: https://scalyr.com/cloud. It will let you compute a "normalized" monthly cost for the various reserved and non-reserved options, customized for your usage and cost of capital, so that you can make an apples-to-apples comparison. You can also compare with other providers.

Always happy to get feedback on this tool.


Really, if you are worrying about ongoing monthly costs with AWS, you might be better served with a different host. If you want to be billed at a monthly rate, you should think more about dedicated hardware. Unless you are spinning things up and down with some regularity (in response to demand), AWS is usually more expensive than a dedicated server.

I think that only giving the hourly costs helps to reinforce this.

(I say this not knowing anything about your particular setup...)



720 hours in a month is always a good back of the envelope number to remember.
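The 720-hour rule makes the estimate a one-liner. A quick sketch (the $0.113/hr rate is the m3.medium on-demand price mentioned elsewhere in this thread):

```python
HOURS_PER_MONTH = 720  # 30 days * 24 hours; close enough for estimates

def monthly_cost(hourly_rate, instance_count=1):
    """Back-of-the-envelope monthly cost for always-on instances."""
    return hourly_rate * HOURS_PER_MONTH * instance_count

print(f"${monthly_cost(0.113):.2f}/month")  # ~ $81.36 for one m3.medium
```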


I don't know why you were downvoted since it is a frustration I share. It is always interesting trying to decipher our monthly charges to see where the changes occurred.


Are outgoing bandwidth costs ever going to drop? It just seems crazy that DigitalOcean can provide 1TB of bandwidth in their $5 plan while 1TB out of AWS ($0.12/GB, first GB free) is $122.76.


Transit costs an ISP (one buying 40+ gigabit circuits) about $1/megabit at the 95th percentile. 1 terabyte is about 3 megabits/second sustained over a month, so, presuming DO has to pay for their pipe, that 1 TB costs them about $3.
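The arithmetic checks out, assuming decimal terabytes and a 30-day month:

```python
# Sustained rate needed to push 1 TB (decimal) over a 30-day month
tb_bits = 1e12 * 8                   # 1 TB in bits
seconds = 30 * 24 * 3600             # 2,592,000 seconds
mbps = tb_bits / seconds / 1e6       # ~ 3.09 Mbps sustained

cost_per_mbps = 1.00                 # $1/megabit at 95th percentile, as above
transit_cost = mbps * cost_per_mbps  # ~ $3/TB

# Contrast with AWS's retail rate: (1024 - 1) GB * $0.12/GB
aws_retail = (1024 - 1) * 0.12       # the $122.76 quoted upthread
print(f"transit ~${transit_cost:.2f}/TB vs AWS ~${aws_retail:.2f}/TB")
```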

Amazon is probably charging a bit much, but DO's costs are probably unsustainable if people actually used their 1TB (much like any service that offers a "huge amount" in the hope that nobody actually uses it).


Or they plan on overages. They charge '$0.02 per GB thereafter', which works out to $20 per TB, so they break even on bandwidth at ~1.2TB. Sure, some people can get very close to 1TB without going over, but for most people it's hard to manage bandwidth that precisely.


> Transit costs an ISP (one buying 40+ gigabit circuits) about $1/megabit at the 95th percentile. 1 terabyte is about 3 megabits/second sustained over a month, so, presuming DO has to pay for their pipe, that 1 TB costs them about $3.

Except that nobody ever has an average bandwidth utilization equal to their 95th percentile utilization. And Amazon probably has a lower mean-to-95th-percentile ratio than most ISPs, since a lot of their customers are "peaky" to begin with.


I have about 15 droplets on DO right now and none of them are close to using a TB even over a year (but we don't use them for production). I'd bet they are banking on this for most cases.

I checked out Verizon's new cloud beta yesterday, and while they haven't announced pricing maybe they can do a good deal on bandwidth given that they own so much of the network.


Exactly how Dreamhost/Hostgator can offer 'unlimited diskspace/bandwidth/puppies' for $6 a month or whatever.


Two ways:

1) They oversell their capacity.

2) Their terms of service agreement prohibit anything that would actually let you use unlimited amounts.

http://webmasterfaqs.org/is-unlimited-web-hosting-a-scam/


Yeah. Dreamhost kindly asked me to switch to their VPS service a few years ago when I was using too much CPU.


Not saying it's the best bandwidth, but I'm pretty sure you can get $1/Mbps from Cogent on 1Gbps commits (or you could about 2-3 years ago when I was talking to their sales people). I would expect Amazon's and DO's costs to be significantly cheaper.


I'm almost certain that while DO currently offer 1TB for $5, they actually can't offer that in the long term. Or rather, they are relying on less usage than this, and if all of their customers were actually utilising it, they'd have to jack up prices.


Dreamhost had a blog article about overselling, or oversubscribing - it's very common, from ISPs to gyms.

http://www.dreamhost.com/dreamscape/2006/05/18/the-truth-abo...


Not everyone will use all 1TB, so DO will be able to save some bandwidth.


But will you use 1TB on the 16GB RAM plan? If so, then you've just saved 40% of the bill.

They will also eventually make all your nodes' bandwidth cumulative (like Linode does).



> They will also eventually make all your nodes' bandwidth cumulative (like Linode does)

Source? I've been told that that isn't the case by their support.


So this brings them in line with, or lower than, Google Cloud Storage: https://cloud.google.com/products/cloud-storage/

Great to see this competition. Does anyone know if S3 and GCS are comparable to Azure's locally redundant or geographically redundant storage? The new pricing is basically in the middle of the two Azure tiers.


S3 is locally redundant; buckets live in a specific region you put them in. You could roll your own geographically redundant storage by mirroring the same data into buckets in two or more regions, though. With the new pricing, two-region mirroring would run you between $0.136/GB/mo and $0.17/GB/mo, depending on whether you also wanted local redundancy within each region or were using "reduced redundancy storage" for each copy.
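The quoted range is just the per-GB rates doubled. A sketch, assuming the new first-tier prices of $0.085/GB/mo for standard storage and $0.068/GB/mo for reduced redundancy (my reading of the announcement; check your region):

```python
# Assumed per-GB/month rates under the new first-tier pricing
standard = 0.085   # locally redundant standard storage
rrs      = 0.068   # reduced redundancy storage

# Mirroring the same data into buckets in two regions
low  = 2 * rrs        # $0.136/GB/mo with RRS copies
high = 2 * standard   # $0.170/GB/mo with standard copies
print(f"${low:.3f} - ${high:.3f} per GB/month")
```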

Joyent's pricing is slightly better if you want to roll your own multi-region storage: http://www.joyent.com/products/manta/pricing (Also comes with an interesting Unix-compute service where you can submit jobs to run over the data where it lives, rather than having to download it into a VPS to process, which I find more interesting than the storage itself.)


US Standard S3 region is geographically redundant. It routes to both Northern Virginia and "facilities in the Pacific Northwest." This redundancy is why it does not provide read-after-write consistency like the other regions.


That's what I thought as well, but recently found out otherwise. While it is true that, for buckets in the US Standard region, requests are routed to the closest endpoint, the data still only lives in one location.

To clarify by way of an example, say you're putting data into S3 from an EC2 instance in us-east-1. Those requests will end up being handled by the NVA US Standard S3 endpoint, and the data will only be stored in NVA. If you then try to retrieve that key from an EC2 instance in us-west-2, that request will indeed be routed to the PNW S3 endpoint, which will see that it doesn't have a local copy of the object and will need to retrieve it from NVA before serving it to you. That object will be cached locally at the PNW endpoint for an unknown amount of time, after which it will need to be retrieved from NVA again.

All of this information was from a recent support ticket I had open with AWS while trying to troubleshoot poor S3 performance.

The verbiage they use to describe the US Standard region is quite confusing, leading a lot of people to assume that it provides geographic redundancy when it actually does not.


Ah interesting, thanks! I didn't see that meta-region mentioned on the Pricing page (http://aws.amazon.com/s3/pricing/). Is it priced the same as the two lower-cost U.S. regions (N. Virginia / Oregon) that it apparently overlays?


Oregon (us-west-2) is a separate region. I don't know if the US Standard "Pacific Northwest" location actually is in Oregon, but even if it is, it's classified separately.

us-east-1 is the S3 "US Standard" meta-region. For all other services, us-east-1 is just good old "US East (N. Virginia)". For S3, the two names refer to the same thing. It uses the us-east-1 pricing, no matter where your data is physically located, since it is us-east-1.


This also makes Amazon EBS ($0.05 per provisioned GB/month) more competitive with Google Compute Engine's Persistent Disk ($0.04 per provisioned GB/month).

https://developers.google.com/compute/pricing#persistentdisk

Although for my applications, GCE persistent disk > 500 GB has more than twice the sustained IOPS of EBS standard volumes.

With GCE, IOPS now scales with the size of the volume, no need to pay extra for provisioned IOPS.

https://developers.google.com/compute/docs/disks#pdperforman...


m3.medium only has 4GB of storage? Is that a typo? Why not the standard 8GB? That'll make it unusable for my AMIs. The m3.medium is $0.113/hr for 3 ECU and 3.75 GB RAM, while the m3.large is $0.225/hr for 6.5 ECUs and 7.5 GB RAM but 32 GB of SSD storage.


As with all instance types, what you see in the Instance Storage column is the ephemeral storage available to you as /dev/xvdb, etc. The root volume remains the same at 7 GB, and for this instance type it must be EBS-backed.


According to the AWS blog[1] you can now launch these instances from S3-backed AMIs. It's not clear if this still uses an EBS volume or if it can now use an instance volume.

[1]: http://aws.typepad.com/aws/2014/01/aws-update-new-m3-feature...


That's the correct number. The complete list of instance types and sizes lives at http://aws.amazon.com/ec2/instance-types/


Jeff, when are the m3.medium and m3.large instances going to be available for use in Elastic Beanstalk in us-east-1?


Is that the size of the root volume?


They have a concept of an "ephemeral" volume, which is a disk attached to the physical host your instance runs on. Its contents go away when the instance is stopped or terminated, but it is useful for caching or file transfers, because it's the fastest disk you have access to. In day-to-day usage it's not actually that important.

You can have as much EBS/SAN data as you're willing to pay for, so everything important gets stored on that.


After spinning up an instance-store m3.medium, the 4GB volume is in addition to the root volume.


There is no 'small' size in any of the current-generation instance types anymore.


Cool - signed up for AWS S3 last night and woke up to this news :)


I would love to have SSD-based EBS.


Provisioned IOPS EBS volumes are SSD based. From the product page (http://aws.amazon.com/ebs/details/):

""Backed by Solid-State Drives (SSDs), Provisioned IOPS volumes support up to 30 IOPS per GB which enables you to provision 4000 IOPS on a volume as small as 134 GB.""


How would that be different from provisioned IOPS – simply a limit greater than 4K IOPS?


I guess it is kind of the same, but with a higher upper limit?

Rackspace Block Storage vs EBS (provisioned) - Page 73: http://c1776742.cdn.cloudfiles.rackspacecloud.com/downloads/...


Remember, this all starts February 1st :D



