Announcing Micro Instances for Amazon EC2 (amazon.com)
290 points by bpuvanathasan on Sept 9, 2010 | 160 comments



$0.02/hr == approximately $14/mo if you leave it on.

Is this really $6/mo cheaper than a linode 512? That might be nice for personal projects.

I'm trying to teach myself some things that really need more than my local dev machine (Puppet, backup strategies/more resilient code, learning Cassandra). I've been running a bunch of VMs on my laptop, but my dev machine is a weakling and can hardly handle it.

It almost seems like I could just spin up a dozen of these little instances for $2.88 per waking day and teach myself under substantially more "real" conditions on the cheap. That's something I'd love to have as an option on linode, given that teaching myself is a large part of what I use it for.

Is there any reason that wouldn't work? Is this too complicated in practice?


One thing to note is that even EC2's "small" instance pales in comparison CPU-wise to the low-end Linodes. When playing with EC2, it blew me away how ridiculously slow they are until you get to the higher tiers (oddly, memory speed also seemed very poor -- I had to wonder if even the memory was networked somehow).


I took a few minutes to google a few terms, like "cassandra EC2" and so forth.

I get the general impression that doing anything architecture-y or deployment-y requires amazon-specific steps, steps that I don't have to take on a barebones ubuntu install. (of the 'Do this to get Thrift working on EC2' variety)

That sucks some activation energy away.

VMs on my dev machine, even if performance is a bit of a bear, are still more attractive to me than doing anything Amazon-specific, because it's still all just Unix.

I don't want to learn "Amazon", I want to learn [puppet, Cassandra, et al].


Hmm. I run Cassandra on EC2 and I can't think of any EC2-specific setup that I need to do. It's just plain Ubuntu. I do install my own custom Cassandra build to stay current, but I would do that on any Ubuntu install.


I think he's talking about making an AMI for his particular packages.

This is an issue for me too. It looks like a hassle, the instructions look vague, and all I want to do is set up a VM on my machine and then run it on Amazon.

Turnkey Linux seems to make this better.

I'm guessing you're just using an off the shelf Ubuntu AMI, right?



These instances are different though, they are able to burst CPU usage, just like Linode. AFAIK, none of the other EC2 instance sizes are able to do this.


Linode doesn't "burst" CPU usage. Processor time is shared fairly among Linodes on a host, and you can use any time that isn't used by others. It's worth noting that each Linode has access to four cores, so you can go up to 400% CPU utilization. For a good idea of how Linode's CPU performance routinely exceeds that of competitors, try this review: http://journal.uggedal.com/vps-performance-comparison


Semantics. The end result is the same: there is almost always a large amount of spare CPU cycles on each physical box, which can be utilized by instances to "burst" above their allocated capacity.


It's more than semantics. The term "burst" carries the implication that you only get something for a short time. That's really not a good reflection of reality in this situation.


Who cares what term they use or what implication it has? In reality, it is the same concept. That's why it is SEMANTICS.


Words have meanings and carry implications. By your logic, we might as well use the term "banana" instead. The point is that the concept doesn't reflect reality, and thus your point has no merit.


You fuck goats. Way to take this way the fuck off topic.


You've been here a year and a half and should know better than to post comments like this one.


I haven't seen anyone mention the cost of IO requests associated with EBS. Quoted from http://aws.amazon.com/ebs/ :

As an example, a medium sized website database might be 100 GB in size and expect to average 100 I/Os per second over the course of a month. This would translate to $10 per month in storage costs (100 GB x $0.10/month), and approximately $26 per month in request costs (~2.6 million seconds/month x 100 I/O per second x $0.10 per million I/O).
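That example's arithmetic checks out; here's a quick sketch reproducing it, using only the prices quoted in the excerpt:

```python
# Reproduce the EBS cost example quoted above (prices as given in the excerpt).
SECONDS_PER_MONTH = 2.6e6   # ~30 days, as approximated in the quote
IOPS = 100                  # average I/O operations per second
STORAGE_GB = 100
STORAGE_PRICE = 0.10        # $ per GB-month
IO_PRICE = 0.10             # $ per million I/O requests

storage_cost = STORAGE_GB * STORAGE_PRICE
io_cost = SECONDS_PER_MONTH * IOPS / 1e6 * IO_PRICE

print(f"storage: ${storage_cost:.0f}/mo, I/O requests: ${io_cost:.0f}/mo")
# storage: $10/mo, I/O requests: $26/mo
```

Note how the request charges dominate the storage charges for this workload.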


I'm running 4 m1.small EBS-backed LAMP servers in all 4 regions. I pay about $1/mo for EBS IO (8-11 million IO requests per instance per month). The servers get a few thousand hits per day. The cost of EBS IO should be minimal in most cases. The benefit of EBS versus the local storage you'll get with Rackspace or Linode is greater durability, because it is off-instance and AWS automatically duplicates the data. If the VM host system fails (e.g. a power supply goes bad), you can bring your instance back online on another host within a very short period of time, in the exact state it was in at the time of failure.


One word of caution: sometimes when an instance becomes unavailable (think network partitioning), all its EBS volumes will become unavailable as well. You can't snapshot, you can't detach. My most frustrating experience on EC2 yet.


You can force-detach using the console tools. I had this problem too.


I can remember at least one case where force-detach didn't work either. In that case there is absolutely nothing you can do to access your data.


If you use a 1-year reserved instance, it'll cost you about $115/year, which is substantially cheaper than the lowest-priced Linode. I'll be switching http://mbusreloaded.com/ after its Linode subscription runs out, as there's absolutely no way I use more than 10 GB of bandwidth per year.


It quoted me at $54/year.


That's the one-time fee. You still have to pay $0.007 per hour.


Oh, I didn't catch that part. Thanks for the heads up!


Yup, you could. This is a fantastic move by Amazon -- it's the one thing that's been missing from my experimentation world. Linode is fantastic for the servers I run permanently there, and I have no plans to remove them, but the commitment to an instance for a certain period of time is what keeps me from using it to work on deployment scenarios and whatnot. Amazon is fantastic for this (and has different constraints, of course).

This just brought AWS down into the realm of competition with the likes of Linode and others, and we can now figure out how to spin up a dozen nodes and work with all the innards of AWS without worrying about getting accidentally hosed on forgotten instances... works for me.


Remember, though, you still need to pay for EBS and bandwidth separately.


Which should be pretty low if I'm the only user.

Linode doesn't charge you for bandwidth between nodes in the same datacenter -- is there something comparable in the EC2 world?


Yes, but only if the nodes are in the same region and availability zone. From amazon's pricing page [http://aws.amazon.com/ec2/pricing/]:

  There is no Data Transfer charge between Amazon EC2 and
  other Amazon Web Services within the same region (i.e.
  between Amazon EC2 US West and Amazon S3 in US West). Data
  transferred between Amazon EC2  instances located in 
  different Availability Zones in the same Region will be
  charged Regional Data Transfer. Data transferred between 
  AWS services in different regions will be charged as
  Internet Data Transfer on both sides of the transfer.


None that I can see. I'm going to buy one reserved one, even though I have 7 hardware servers. I need to get better with the platform, and for $54/year the price is right.


One thing that is useful: if you like to keep several instances configured and 'ready to go' for your learning experiments, when you are done, don't terminate them, just stop them. You don't pay for stopped instances (but then you do pay 24 cents a day per unattached Elastic IP address).


I'm in the exact same situation with Riak + node.js development. Micro instances seem like a good fit.


Can Linode instances occasionally randomly be killed like EC2 instances, or do they offer stronger guarantees about uptime?


Linode is a VPS provider, not a cloud provider like EC2. Instances are always on, not on-demand. If they are ever killed it would be because of some kind of outage but I've found that to be extremely rare in my experience. Slicehost has also been good for VPS.


AWS instances don't have persistent storage... if your server crashes you lose your data, the changes to your OS, and even your IP.

AWS is great, but you can't compare the AWS instances with other VPS or Dedicated offerings.


Micro instances don't have any local storage -- EBS only.


Aside: How far off are we from renting our PCs in the cloud, and just having a local terminal? I know it's an old Failed Dream (mainframe-terminal, client-server, settop box etc), but maybe we're getting closer...

It seems a bit ridiculous, because you still need a bit of local power for display and fast reactions, and current iPhones/netbooks could do with more power. But desktop PCs have been fast enough in that respect for a while. An advantage of the cloud is that as RAM, cycle prices etc drop, you get it (more of them or cheaper) without the hassle of physically upgrading. And bursty usage is available too, eg. when compiling.

There's solid economics here: it's a sort of timesharing idea, instead of cycles being wasted while you type, someone else uses them to compile. Even more compelling globally - someone else uses them while you sleep. The same argument works for sharing your desktop's own cycles, p2p, but a centralized cloud has admin advantages and other economies of scale.


I regularly use a cheap netbook as a terminal to an m1.large instance for development work. My scripts use spot instances to keep the price low--typically around $0.13/hr. Not for everyone, but it's saved me maybe $500 over buying a fast laptop. Unlike a laptop I can easily let others log into the system, leave it on for long compute jobs when I'm out of town, and don't have to worry as much about losing important data.


The "failed dream" has been successful, but never for long. As disk bandwidth and network bandwidth increase irregularly and leapfrog one another, it goes in and out of fashion.


Yes, but network bandwidth (and latency) may become cheaper than local maintenance one day.


The user experience is a very sensitive variable. "Cheaper" matters to Enterprise folks but lag will make the rest of us pay a little more.

In fact local maintenance is the driving force behind web apps already. They generally lag like a mother, have lame controls and update in whole-page flashes, but I see more every day.


If you take a look at the sad state of this country's broadband infrastructure, we are a long way off. I live in a "tech" city, I have two internet connections (Time Warner and ClearWire) patched together on a high-end router, and I still wait for things I shouldn't have to.


How much of a speed increase do you see because of this setup?

Are these actually redundant carriers, or is the last mile owned by Qwest and just leased to Time Warner and ClearWire?


I'd also be interested in hearing how and why you do this.


OnLive has gotten pretty close with their gaming platform. They've shown demos of Autocad software and I believe Photoshop running with it.


OnLive sounds cool, but it's a little hard to believe, since latency is a common issue in regular gaming already. Demos are invariably carefully orchestrated. The other thing that gives me pause is that it is such a compelling vision for publishers. It's an absolutely magic idea. However, this enthusiasm is not technology-driven: it's not about what has become possible, but what would be really amazingly cool if it was possible.

But latency isn't such an issue for compiling etc. Someone was saying in the Google Instant thread that 250ms is imperceptible for search (and I've heard 200ms for the command line, from Rob Pike on Go). That's much lower-hanging fruit than gaming, where 250ms latency isn't just far from instant, it's unplayable.


I thought there was no way it would work, got a beta invite a few months back, and was blown away. My brain still insists that the game has to be installed on my local system to be working so well.


I had a similar reaction on Slashdot a few years ago when it was first announced. I thought it was completely made up.

Yet, it works. Shockingly well. Better for FPS games than strategy games actually (you notice lag on a mouse cursor more than anywhere else).


How is it hard to believe? They are essentially doing what Akamai did and locating server farms very "close" to end users on the network. I have a 20ms ping from the closest Akamai CDN server.


We already do.

My pictures are all stored "in the cloud" (on facebook, or flickr, or photobucket). "My" music doesn't even really exist anymore, it is playlists on grooveshark, or stations on pandora.

Documents? Google Documents.

I do all of my web development in Vim running on VPSs.

The only stuff that doesn't happen "in the cloud" is specialized, media-creation type things, an activity that I would be surprised if more than 5% of the population participates in.


so you don't have a backup of any of your data?


What do you mean?

There isn't really any data to be backed up here, except for the web dev things. For that, there are three servers that all exchange rsyncs every night. One is prod, one dev, and one that is just using some spare disk space on one of the arrays at work.

For the photos, the online stuff is the backup. The originals still live either on their negatives, or on SD cards.


You fit your photos on SD cards? wow. my 8GB card fills up pretty frequently.


8GB SD cards are cheap, much cheaper than it would cost me to buy 1000 frames of film.


oh sure. But TB hard-drives are cheaper still.


online is not a backup. there is a reason we back up databases to magnetic tape, store them in remote locations, rotate tapes and test their integrity.


You can do it right now, as long as you aren't after video and gaming.



Here is a Geekbench benchmark report for one of these new micro instances: http://browse.geekbench.ca/geekbench2/view?id=287891 . It uses the same E5430 processors as the other m1 instances. CPU performance is about 2x m1.small based on these m1.small Geekbench results: http://browse.geekbench.ca/geekbench2/view?id=241412 and comparable to a rackspace 4GB server I also benchmarked: http://browse.geekbench.ca/geekbench2/view?id=243138.


That CPU performance is consistent with what they give as the burst performance: up to 2 EC2 Compute Units, which should be 2x as much as the small instances' 1 ECU. Would be interesting to know how often/long you can burst, and what the non-burst baseline is, though.


Lower total score (higher fp, lower everything else) if your instance has an older Opteron 2218HE (us-east-1b) http://browse.geekbench.ca/geekbench2/view?id=287900


To my knowledge (I've spun up many EC2 m1.small instances in all 4 regions), Opteron 2218s are only in use in us-east. The other 3 regions use E5430s.


Aside: I looked into AWS a couple weeks ago, to play with a simple webapp idea, but the myriad choices, acronyms, and signups confused me, and there seemed to be no free option for getting started and gaining initial traction. It seems focused on sophisticated enterprise users (nothing wrong with that). So I went with Google's App Engine, which was much simpler and has been great. These micro instances seem the same.

Did I give up too soon?


If you can make your app fit within the significant limitations of App Engine, then it is a great service. I've used it for several projects from basic CMS to AJAX chat.

That said, sooner or later you'll want to do something that should seem possible on App Engine (e.g., image transformations with BufferedImage) and you'll hit a brick wall.

That's when I turn to a generic Ubuntu image running on EC2. It's not free, but with spot pricing it's awfully cheap. I expect spot pricing for this newest micro size to stabilize at around $10 a month.


Maybe not as huge a deal for Linux instances, but this is HUGE for Windows users. There's nothing comparable elsewhere. The cheapest Rackspace Cloud instance is $0.08/hour for 1GB. There is no faster, cheaper way to spin up a Windows server than AWS now.


Yes, but 613MB on a Windows server doesn't give you a lot of room to play with. I'm struggling to think of a use case, but I'm sure there are many.


Yeah, they're not going to run SQL Server. It's plenty for just serving up some simple stuff. I use the small instances mostly for testing stuff out with different configurations and running small short-term side projects. These will work perfectly for that.


Do those have VNC support, though?

Could you pay $14 and have a remote IE instance that the whole team can access no matter where they are, instead of using up your precious local memory on every machine?


They support Remote Desktop, but since they're Server 2008 the only IE instance you could run on it is IE8.


You could probably use one of the hacky solutions like IETester (http://www.my-debugbar.com/wiki/IETester/HomePage) for older versions.

I'd much rather run IE locally in a VM though.


Local memory is precious in these days of mid-range machines with 6GB+ RAM (Win XP in a VM doesn't take that much RAM)?


This seems like an excellent opportunity for hosting Server Core, particularly as a front-end web server.


Why is it not possible to get Small Standard instances with 64-bit architectures?

This is the only instance that is missing 64-bit.

Even Micro instances offer 64-bit.


c1.medium is also available only in 32-bit.


I put together a google spreadsheet of the EC2 instance pricing. US East / West only, so far.

https://spreadsheets.google.com/ccc?key=0AtNTMtkGNKnfdGJoajF...

$10/mo for 1 year reserved is pretty amazing.


Don't forget the cost of the EBS volume. That'd be an extra $5/mo for 50GB of storage.


Yes, you're correct. But it's the same $5 no matter what instance size you use, which is what this spreadsheet is for.

Also, on my small slicehost, I'm paying $20/mo for 1/3 the RAM, and using 6GB of disk, which would be $0.60 on EBS.


Well, you don't need the EBS volume on other instance sizes, because they have instance storage.


Nothing durable, though.


That's a lot cheaper than an extra 50GB on Linode.


The spot prices are very low for the new Micro Instances. For Linux servers they are currently at $0.007/hour (about $5/month).


A good strategy I've used with EC2 for the past 6 months is to always purchase spot instances using a bid price that is just slightly above the on-demand price. Generally spot pricing stays around the much lower reserved pricing but will occasionally spike. By doing this, you basically get reserved pricing without having to pay the upfront reserved fee, and can keep your spot instance online long term. This site provides some useful historical data on spot pricing: http://cloudexchange.org
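As a sketch of how that strategy plays out: you pay the going spot price each hour (not your bid), and the instance only dies if the spot price rises above your bid. The $0.02 on-demand and ~$0.007 spot figures come from elsewhere in this thread; the hourly price series below is invented.

```python
# Toy simulation of the "bid slightly above on-demand" spot strategy.
ON_DEMAND = 0.02                 # $/hr, Linux micro on-demand (from the thread)
bid = ON_DEMAND * 1.05           # bid just above on-demand

# Invented month of hourly spot prices: mostly low, with one brief spike.
spot_prices = [0.007] * 700 + [0.012] * 15 + [0.007] * 5

total, hours, terminated = 0.0, 0, False
for price in spot_prices:
    if price > bid:              # outbid -> instance is killed
        terminated = True
        break
    total += price               # you pay the spot price, not your bid
    hours += 1

print(f"survived {hours}/{len(spot_prices)} hrs, paid ${total:.2f} "
      f"(on-demand: ${ON_DEMAND * len(spot_prices):.2f})")
```

Here the spike stays under the bid, so the instance survives the whole month at roughly a third of the on-demand cost; a spike above $0.021 would have terminated it mid-month, which is the risk the replies below describe.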


I have been horribly burned and disfigured following this advice. It's important to keep in mind that just because it's unlikely the spot price will go above the reserved price, there's nothing preventing that. It has in the past, and it will again.

The important takeaway: don't use spot instances unless you are 100% okay with the idea that your machines will disappear without warning at some point. Let that sink in.


What happens if everyone did this? Wouldn't the spot price then always reflect the on-demand price?


Presumably, there are people out there using it for the things it was (likely) made for -- short computational tasks, temporary processing, or jobs that require lots of computation but aren't time-sensitive -- meaning there will always be people who aren't using the instance 24x7.

Still, I wonder the same thing -- specifically, whether or not it's possible for the spot price to go higher than the on-demand price.


Can you keep your spot instance up 24/7?


No, spot instances can be shut down anytime you are outbid.


However, they can be great to help fill out your server cluster with things that don't always need to stay up (like Mongrel servers). Buy some reserved instances, then put out several spot bids as well for things you can load-balance to.


That said, keep in mind that if you need to keep at least one instance up, you need to put them in multiple availability zones, or vary the max spot prices - if there's a load spike that leads to your spot instances being terminated, it'll likely hit all of the spot instances at a given price in the same AZ.


If you keep your bid high enough, you can keep spot instances running 24/7, unless there are no offers at all.


There was a bug a while ago that it would occasionally kill them, even if you were well above all spot prices.


I've had 2 spot instances up for over 6 months. My bid price for those instances is the on-demand price.


This might just make me replace the Slicehost instance I use for Mercurial and a build server. Elastic IP + EBS + micro instance makes a pretty nice low-end machine.

It always bothered me that for a development server you are basically overpaying for bandwidth. Who cares that I have 450GB of bandwidth when I use maybe 30GB per month?


Try Rackspace Cloud Servers (Rackspace owns Slicehost too). Pretty much the same configs as Slicehost, except no bundled bandwidth, so you only pay $11 a month.

No affiliation to them but I do have VPSs at both (slicehost for things that require lots of bandwidth).


Yeah, I thought about that too, but apparently there is no way to migrate data (a slice) from Slicehost to Rackspace automatically.

And if I have to do it myself I might as well use Amazon, I use it for everything else anyhow.


It seems there is no local storage included in the price.

I did not try it, but that probably means complicated setup, which is a pity: the price could appeal to people launching side projects at minimal cost, like me, but a side project also means that not much time can be devoted to sysadminery.


Go to the AWS console and launch one. It's pretty much as simple as it can be. They automatically use EBS so they're persistent. No special configuration required.


Then it is really interesting. I have a few possible use cases in mind:

- we are hosting a little software load balancer for web services, and it definitely does not need more than that

- thanks to Amazon's web service API, it is relatively easy to set up automated recovery plans, and the idea is very attractive, but until now I was deterred by the price.

- for small web applications with low bandwidth, the price is good. For a reserved instance, for one year, you pay $54 up front, then $87.60 for usage over the whole year, for a total of $141.60. That's a lot less than renting a server at linode.com for a year (~$220).


About time...at $15/mo these are now a viable competitor to generic VPSes.


Don't forget that static IP is extra. All bandwidth is extra. Memory is at a fixed limit (some VPSes will let you burst above your assigned memory). No software like Plesk to help configure the box. Those things can add up.

Edit 1: I mistakenly said that static IP was extra when it's not while in use.

Edit 2: Storage is also extra.


Static IPs only cost money if you're not using them. For most small VPSes utilization is practically nothing. I'm not saying it will replace VPSes, but it's a valid competitor.


Except that you have to pay separately for storage, as you have to use EBS: http://aws.amazon.com/ec2/#instance


A comparable generic VPS might have 10-20 GB of storage, which only adds $1-2 to the monthly cost.


Yeah, the storage doesn't seem like a huge deal, but the bandwidth might be. A $12/mo VPS from prgmr.com gives you 80 gigs/month free transfer, which Amazon would charge you another $12 for.


If you try to construct a standard VPS or dedicated server plan out of EC2, you're always going to find that the bandwidth makes EC2 more expensive -- but that ignores the fact that most people don't even come close to using all their allocated bandwidth. The fact that AWS only charges for actual bandwidth used makes a big difference.

(The same applies with Tarsnap's $0.30/GB storage cost vs. fixed-plan backup pricing -- $10 for 50 GB sounds cheaper, but if people only use 5 GB of that on average, it turns out to be far more expensive.)
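The effective-price argument in concrete numbers (the $10-for-50GB flat plan is a hypothetical; the $0.30/GB metered rate is Tarsnap's, as stated above):

```python
# Effective price per GB: a hypothetical $10-for-50GB flat backup plan
# vs. metered $0.30/GB, for someone who actually stores only 5 GB.
FLAT_FEE = 10.00      # $/month, hypothetical fixed plan
METERED_RATE = 0.30   # $/GB/month
used_gb = 5

flat_effective = FLAT_FEE / used_gb   # $/GB actually paid on the flat plan
metered_cost = METERED_RATE * used_gb

print(f"flat plan: $10 = ${flat_effective:.2f}/GB used; "
      f"metered: ${metered_cost:.2f} total")
```

The flat plan "sounds cheaper" per GB of quota, but at typical utilization the metered service ends up costing less in absolute terms.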


Perhaps you don't understand. This is EC2 we're talking about, not prgmr. People run massive production websites on EC2 because it is reliable, predictable, and secure. It provides features that allow you to recover from outages and failures. For instance, being able to periodically snapshot your EBS volume to S3 and recover from a total datacenter failure within a few minutes by reconstituting the volume in a separate datacenter. However, this just scratches the surface on what AWS offers over a traditional VPS provider.


and, a reserved instance only costs about $10/mo.


$0.03 * 24 * 30 = $21/mo?


Reserved instances (assuming you're going to use it for the full year, which of course is the big caveat) also bring the cost down to $0.007 an hour with an up-front investment of $54, so:

(($0.007 * 24 * 30)*12 + 54)/12 = $9.54 a month assuming you use it for the full year.

Edit: Also, as mentioned by _delirium, Linux instances are also $0.02 an hour. I did the Linux calculation but the same one with Windows goes at $0.012 per hour.
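The calculation above as a script, using the same figures ($54 one-time fee, $0.007/hr reserved Linux rate):

```python
# Reserved-instance arithmetic: one-time fee plus hourly usage,
# amortized over a year of full-time use.
UPFRONT = 54.00
HOURLY = 0.007
HOURS_PER_MONTH = 24 * 30

monthly = (HOURLY * HOURS_PER_MONTH * 12 + UPFRONT) / 12
print(f"effective cost: ${monthly:.2f}/mo")   # effective cost: $9.54/mo
```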


The Linux instances are $0.02/hr.


Ah, yes. Thanks.


This is a big deal for us mongodb fans, who were bitten badly by the small instance's 32 bit limitation. Sign me up.


The real interest is in seeing what the downstream value added services do with this.

For example, I'm interested to see what changes this makes to the Heroku offering. It seems to be a perfect fit for their product.


I hope EngineYard drop their prices a lot. I really want to use it but it's too expensive.


Same here. I am sure EngineYard will use this opportunity to provide services on micro instances. It's going to be a big boost for EngineYard. Heroku, not as much, I guess, since they run multiple dynos on a single machine anyway.


Now that there are very small instances with 64-bit support, these can form the basis of the ultimate incrementally scalable MongoDB cluster.


My thought also, but be a little careful: you really want enough memory on mongod servers to keep the indices in memory.


It's my understanding that the indices would be sharded as well, so wouldn't you be able to just fire up more instances if the indices started to approach some kind of 80% figure?


I think you are right, but I have never had to use sharding with MongoDB (yet)


If you book a reserved instance, the price for Linux gets as low as $0.01/hr ($54/yr up front). It's a bit premature, but for spot instances, at the moment Windows ones are around $0.0135 (Linux history is not yet available). As for other instance types, it looks like spot instances get you the usual 60% off the original price.

EDIT: $54 upfront and then $0.01/hr


$0.01/hr is $88/yr


Anyone got any ideas why the strangely specific size of 613MB RAM?


Sure. They probably followed a similar reasoning (example with fictional numbers, but probably close to reality):

* They have 64GB RAM hosts.

* They want to dedicate only up to 85% of the RAM to the Xen instances (keep 15% for the host OS, buffercache, etc).

* The Operations/Management team decides to target an overall rate of $1.82/hr per host to achieve desirable profits.

* The AWS marketing department has a requirement that instances be priced $.xx/hr (no fractional cents) to evoke "simplicity".

* At a first pricing attempt, they see they have the choice of charging $.02/hr and assigning 65536 * .85 / (1.82 / .02) = 612MB per instance

* ...or charging $.03/hr and assigning 65536 * 0.85 / (1.82 / .03) = 918MB per instance

* They select the first option (612MB/instance) because it is deemed sufficiently smaller than the existing "small" 1.7GB instance offering, whereas 918MB was not small enough.
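Running these (admittedly fictional) numbers through a script reproduces the two candidate sizes:

```python
# The parent's back-of-the-envelope, with its fictional inputs:
# 64GB hosts, 85% of RAM given to guests, $1.82/hr revenue target per host.
HOST_RAM_MB = 64 * 1024
GUEST_FRACTION = 0.85
TARGET_PER_HOST = 1.82   # $/hr

for price in (0.02, 0.03):
    instances = TARGET_PER_HOST / price          # instances needed per host
    ram_each = HOST_RAM_MB * GUEST_FRACTION / instances
    print(f"${price}/hr -> {instances:.0f} instances of {ram_each:.0f}MB each")
```

The $0.02/hr option yields ~612MB per instance, one megabyte shy of the actual 613MB -- close enough to suggest this style of reasoning, even if the specific inputs differ.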


I believe they're using chunks of RAM on each physical server to hold S3 objects. This might help explain where the "buffer/cache" is going.


That makes almost no sense. S3 is a purely network based file delivery service over HTTP, and pre-dates EC2 by a significant amount of time.

A Xen hypervisor needs a fair amount of memory for its own operations, plus it can buffer the physical disks in the machine as well as any network-attached storage. If these servers were also hosting S3 in their "spare time", it would degrade performance and expose the system to potential vulnerabilities.


IIRC Jeff Bezos hinted at this when he spoke at Startup School a couple years ago.


He did, in reply to my question about whether it would ever happen. Now that it finally has, I'll be migrating over to EC2 shortly :-)


What exactly makes you migrate to EC2? It doesn't look cheaper or more reliable than VPS from a solid provider.

To be better positioned for rapid growth in the future?


Primarily, the fact that Amazon really has its act together with respect to security. That silly HMAC canonicalization bug notwithstanding, they've made a whole lot of good design decisions. I currently use Linode. A year ago, I reported two vulnerabilities in their control panel to them, both elementary in nature, one of them with a PoC exploit. Last I checked, neither has been fixed.


So how often do EC2 instances go down? Is it at hardware failure rate, or more often? Can I use this as a VPS replacement and not have to worry about monitoring and fast restoration (for non-important projects)?


I think you'd need a pretty large sample set before you came to any reasonable conclusion.

I've been running ~10 nodes for AdGrok for the last 3 months, and we've already had one node fail (in that it wouldn't respond to any ec2 cli command to shut down or even terminate).

That hardware failure rate is about what I'd expect if it was our own colo and our own machines. Stuff always breaks.


It's difficult to say. I've worked at two companies that have used EC2 for various purposes. One had an instance that was used for dev work get "corrupted" on three separate occasions (I couldn't get the specifics, but there were definitely I/O issues with the instance storage), and the other has been using the same production box for more than a year.

The bottom line is that EC2 installations need to be designed as semi-permanent. My preferred strategy is similar to how Google talks about their hardware (when they do), that any one server can go down at any time, but the overall setup is resilient to failure.


We've been running hundreds of instances on EC2 for a couple years, and have never seen one just "go down." However, we will get notifications of "degraded instances." When an instance is degraded, you have some window (generally a couple days) to move the services running on that instance to another one. Even at the aforementioned scale, this happens maybe once every three to four months.

Can you use this as a VPS replacement? Probably. My guess is that your uptime will be no worse than some VPS provider. However, if you're storing information on the ephemeral storage, the onus is on you to get it to the new instance. I imagine that isn't generally the case on a VPS.

You may be able to mitigate this by using EBS (required in the case of these micro instances), but I've only used EBS a handful of times, and am no expert on the subject. If I understand their layout for these micro instances, it would simply be a matter of spinning up a new instance and spinning down the degraded node.


I've been using the high-cpu medium instances (c1.medium) for our rails nodes, just to avoid the slothy m1.small CPU. It seems like these are tailor-made for running either haproxy or your web tier!


I notice that, in a sense, AWS proves that the Total Cost of Ownership of Windows infrastructures is higher than the TCO of Linux infrastructures.

Amazon charges more for Windows instances across their entire offering. A Windows micro instance costs 50% more than a Linux one ($0.03/hr vs. $0.02/hr). This likely reflects Amazon's own measurements across its EC2 datacenters showing that a Windows stack (OS + apps) uses more resources on average than a Linux stack, and therefore incurs higher power and cooling costs.


I imagine the price difference also relates to licensing costs for Windows instances. Also, TCO is more than just compute efficiency, although the latter is important in its own right.


It proves the part that wasn't in dispute. That is, that free software costs less to purchase than licensed software.

The TCO debate centers around the labor required to set up & maintain one versus the other.

That, incidentally, is why I run Windows on all my EC2 boxes.


Who said Amazon charges according to what it costs them? In fact, I could just as well say that, according to the market, Windows is better than Linux, and hence Amazon charges more for Windows machines.


What mythical market are you referring to?


Amazon EC2 is sooo slow. I spun up an instance on both Amazon and Rackspace to see how long it would take to render a frame in Blender, and the difference was shocking. It wasn't an apples-to-apples comparison, but Blender 2.49b (64-bit) on Rackspace rendered the frame in 47 seconds, while Blender 2.48 on Amazon's Linux instance took 17 minutes!

http://www.jasonrowland.com/2010/09/amazon-vs-rackspace-for-...


How does the performance compare with Linode's and Slicehost's smaller offerings? I'm curious whether anyone has drawn any conclusions.


Seems that EC2 feels some competitive pressure... Still, it's a big move that kills the (superficial) cost argument against the Amazon offering.


I don't think that argument is superficial. At scale, Amazon is actually quite pricey. Currently, Amazon makes most sense if your site does have large variability in usage and if it makes use of the ability to spin up/down instances on demand. If you're an event-related site where usage goes up by a factor of 10-100 for a few hours every week, for example, Amazon makes a whole lot of sense.

However, if your usage is way up there all week long, it seems to me there are significantly cheaper alternatives, e.g. Hetzner servers.


One of the biggest advantages of EC2 is its scaling capabilities. EC2 offers 10 different instance sizes from m1.small to cc1.4xlarge (with 10 Gbps clustering capabilities), 4 different regions, auto-scaling, load balancing, high availability via off-instance storage, durability via replication, GigE uplinks, and much more. You can't get that level of features from any other IaaS cloud I am aware of. Yes, you might pay more than co-locating yourself or leasing some dedicated servers... but that isn't exactly an apples-to-apples comparison with EC2.


Do you have one or more testing environments? Do they need to run all the time? If not, you save there. Do you like being able to spin up an entire duplicate of your environment for full-environment tests? You can't do that with conventional servers without ridiculous expenditure.


Can you use Microsoft SQL Server with micro on-demand instances? The pricing page doesn't list a "Windows with SQL Server" usage rate.

http://aws.amazon.com/windows/


The best option if you wanted to do this would be to install SQL Server Express, since you're not going anywhere near the 1 GB memory limit that edition is bound by.

That way you're not going to incur any licensing cost for SQL Server provided you can live with the DB having a file size limit of 4 gig.


Tried it with SQL Server Express on Server 2008, pushed RAM usage right up to ~530/613 MB before even starting SQL Server.

Regardless, I'm glad to see the offering. I was looking for something similar, and with competition constantly lowering the bar to entry, I'm sure it'll find a niche.


For crappy PHP blog hosting, it's a pretty good deal compared to "shared" hosting where you take your chances.


You probably could, but that's not really enough RAM to do anything worthwhile with SQL Server. And you'd have to install it yourself it appears.


It doesn't seem to be up on the pricing page yet (http://aws.amazon.com/ec2/pricing/). Are 32-bit and 64-bit the same price?


http://aws.amazon.com/ec2/#pricing is where I see it now. I can't find anything that would indicate different prices for 32- or 64-bit.


This is awesome: now it's super affordable for me to run beanstalkd and a few queue workers as necessary and communicate with my Heroku app over the Amazon private network.
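For anyone curious what a queue worker on such a box looks like: here's a minimal sketch assuming a beanstalkc-style client (reserve/delete/bury semantics from the beanstalkd protocol). The job handler is split out so it can be tested without a running server.

```python
import json

def handle_job(body):
    """Process one queued job; here we just parse it and return the task name."""
    task = json.loads(body)
    return task["name"]

def work_forever(conn):
    """Worker loop. `conn` is assumed to be a beanstalkc-style connection."""
    while True:
        job = conn.reserve()   # blocks until a job is available
        try:
            handle_job(job.body)
            job.delete()       # done: remove it from the queue
        except Exception:
            job.bury()         # park bad jobs for later inspection
```

Burying failed jobs instead of deleting them is the standard beanstalkd pattern for not losing work when a handler blows up.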


Hrm I don't see the option?

http://cl.ly/0430c3a3f00c59743368

edit: only appears to be available with certain AMIs


It only works with EBS backed AMIs.
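Since micro instances require EBS-backed images, the practical step is filtering your candidate AMIs by root device type. The field name below ("RootDeviceType", with values like "ebs" vs. "instance-store") follows EC2's image metadata and is an assumption to verify against your tooling.

```python
# Keep only AMIs that can boot a micro instance, i.e. EBS-backed ones.
# "RootDeviceType" is assumed from EC2 image metadata; verify locally.

def ebs_backed(images):
    """Return the IDs of images with an EBS root device."""
    return [img["ImageId"] for img in images
            if img.get("RootDeviceType") == "ebs"]
```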


Very impressed that these are available in 64 bit. That means that one OS image can scale from micro to ginormous. WIN.


For approximately a quarter of the cost of small instances.

This may cut away at the incentive for people to start with Google App Engine.
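The "quarter of the cost" claim checks out roughly, assuming the m1.small Linux rate at the time was $0.085/hr (a figure worth double-checking against the pricing page):

```python
# Micro vs. small hourly rates; the $0.085/hr m1.small figure is an
# assumption about the 2010 price list, not taken from this thread.
MICRO = 0.02
SMALL = 0.085

ratio = MICRO / SMALL  # ~0.235, i.e. roughly a quarter
```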


That is exactly the position I'm in right now. I was going to use GAE, but now I'm rethinking that decision. A micro instance of their relational database service would be perfect for my use, but I guess the RAM would be too small.


Hm... I don't see AWS and GAE playing in the same ballpark at the moment.

I'm sure AWS will continue to increase the convenience and ease of use of their services, and GAE will continue to increase the breadth of theirs, but right now they seem to be targeted at quite different app-dev areas.


I'm an AWS user, and I also use Rackspace some, so it's interesting to find this article indicating you're better off with a small Rackspace instance than a medium AWS instance.

http://www.thebitsource.com/featured-posts/rackspace-cloud-s...


This study was sponsored by Rackspace... I think the end results are questionable. Here is a study I wrote comparing AWS, Rackspace and some other cloud providers using some more standard benchmarking methods: http://blog.cloudharmony.com/2010/05/what-is-ecu-cpu-benchma...


I pay about $18 for hosting from a provider and get 500 gigs of transfer and 60 gigs of storage space. I was hoping this would be a real deal.



