Amazon LightSail: Simple Virtual Private Servers on AWS (amazonlightsail.com)
1142 points by polmolea on Nov 30, 2016 | 614 comments



Just to be clear: This service is offered by Amazon/AWS themselves, it isn't a third party. That's a question I had when I first clicked, which is why I am answering it here.

One big "gotcha" for AWS newbies which I cannot tell if this addresses: Does this set or allow the user to set a cost ceiling?

AWS has offered billing alerts since forever. They'll also occasionally refund unexpected expenses (as a one-time thing). But they've never offered a hard "suspend my account" ceiling that a lot of people with limited budgets have asked for.

They claim this is a competitor for DigitalOcean, but with DO, what they say they charge is what they actually charge. Looking through the FAQ, I'm already seeing various ways for this to exceed the supposed monthly charges listed on the homepage (and no way to stop that).

Why even offer a service like this if you cannot GUARANTEE that the $5 they say they charge is all you'll ever get charged? How is this different from AWS if a $5 VPS can cost $50, or $500?

That's what Amazon is missing. People want ironclad guarantees about how much this can cost under any and all circumstances. I'd welcome an account suspend instead of bill shock.


This is exactly it. 5-6 years ago (I think), I signed up for an AWS account under the "free" or educational tier or something. AWS was newish at the time, and I wanted to learn about it.

Via some accidental clicking in the control panel (trying to get an IP address for the instance, I think?) I ended up getting a bill from them for over $100. Which, to me at the time, was a huge amount of money.

It put me off of AWS forever. I don't ever want something that tells me how much they're going to charge me after I have already given them my credit card information.

edit: they did credit me back when I complained, but that doesn't matter. The risk to me wasn't/isn't worth it.


100%. I can't stand it. It's unlimited liability for anyone that uses their service with no way to limit it. If you were able to set hard caps, you could have set yours at like $5 or even $0 (free tier) and never run into that.

One of my services had a Google BigQuery "budget" set at $100. One of our test machines went haywire and continuously submitted a bunch of jobs. The "budget" turned out only to be an alarm, and even that they sent us 8 hours late, after $1600 of charges had been racked up. I responded in 20 minutes and shut it down. Google insisted we pay the full bill. After I wrote up a blog post on the situation and had the "publish" button warmed up, they finally relented and refunded us for the amount of time their alarm was delayed. Absolutely ridiculous that's not their policy to begin with...


Me too. I want protection against my own stupidity, as well as sheer ignorance of the charges. This put me off AWS for years, and I was deeply shocked there was no one-click 'suspend at $X'.

For a company that supposedly puts the customer first, this is appalling.


It's difficult to come up with a good model for how a billing ceiling would work in software as a service. A good start would be to fully specify what behavior you desire when an account hits its billing limit. Are you expecting everything to keep working like normal while the cloud provider pays the bill for those resources, or are you expecting the provider to fully shut everything down in a way that prevents the accrual of further costs, or something in between?

There are a number of resource types that, simply by existing, will accrue costs. A lot of them, actually. On AWS that includes things like running EC2 instances, EBS volumes, RDS databases and backups, DynamoDB tables, data in S3 buckets, and more. The question is what should happen to these resources upon hitting a billing ceiling?

Should EC2 instances be terminated (which deletes all data on them), DynamoDB tables deleted, S3 data erased, RDS databases deleted? If that was the behavior, it would be an extremely dangerous feature to enable, and could lead to catastrophically bad customer experiences. This is a nonstarter for any serious user.

Conversely, if you expect those resources to continue to exist and continue operating, then that's basically expecting the cloud provider to pay your bill. The provider will then have to recoup those costs from other customers somehow, and so this option sets poor incentives and isn't fair to others. If you expect your account to remain open the following month, you'd have to settle the bill, and we're back to square one.

AWS gives people tools to tackle this problem, such as billing alerts. These can notify you over SMS, email, or programmatically when you hit an "$X this month" billing threshold, and then you can decide what to do. Since these events can be processed programmatically, it's possible to build a system that will automatically take whatever action you'd like AWS to take, such as shutting things down or deleting resources.

If you think all of this through, it's really hard to come up with an approach to billing limits that's fair and a good experience, so I think it's reasonable for cloud providers to give billing threshold alerts while leaving the choice of what to do in the hands of the customer.
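As a rough illustration of that programmatic route, here is a minimal sketch using boto3 (the alarm name, the $50 threshold, and the SNS topic ARN are made-up placeholders; billing metrics also have to be enabled on the account first):

    # Sketch: alarm on the account's estimated charges and notify an SNS topic.
    # Billing metrics live in us-east-1 regardless of where your resources run.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-bill-over-50-usd",      # hypothetical name
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                              # evaluated every 6 hours
        EvaluationPeriods=1,
        Threshold=50.0,                            # your "$X this month"
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
    )

Whatever subscribes to that topic - email, SMS, or a Lambda function - then decides what actually happens.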


The correct answer, as always, is to ask the customer.

Let's take a simplistic example and say you're paying per gigabyte. You decide you're willing to pay up to $X, and Amazon tells you ahead of time how much your $X will buy you, and you accept.

One type of customer will be using that storage to store priceless customer photos. Even if the customer ends up deleting the photos, it has to be your customer who makes that decision - not you, and not Amazon. You tell Amazon that you'd like an alarm at $X-$Y, but that if you hit $X, keep going, at least until you hit $X+$Z.

Another type of customer will be using it to store a cache copy (for quicker retrieval) of data backed up in a data warehouse somewhere. You tell Amazon that you'd like a policy which automatically deletes all the oldest data, to guarantee to stay under the limit.

Yet another type of customer would rather keep their old data and just return an error code to the user for stuffing too much new data into too little storage, so basically, guarantee to stay under the limit, and guarantee never to delete data.

You can't solve billing until you communicate with your customers and ask what they want.


And the "correct answer" sometimes leads you to realising "Hang on, I just asked the wrong question to the wrong people".

So let's for a moment assume you talked to a large cohort of customers, and found a bunch of "types", including the three you list and many many more (inevitably, at AWS's scale).

You then need to make some business decisions about which of those "types" are most important to you, and which are way less profitable to spend time addressing.

So of course you solve the big pain points for your customers spending tens or hundreds of thousands of dollars per month before you prioritise the customers worried about going over a budget of tens or hundreds of dollars a month.

What would that solution look like? It'd have ways for customers with hundreds or thousands of services (virtual servers, databases, storage, etc.) to make all their own decisions about alarms, alerts, and cost ceilings - and tools to let them decide how to respond to costs, how to manage their data availability, how to manage capacity, when to shut down services or limit scaling, and what can and cannot be deleted from storage.

It would also 100% need to allow for practically unbounded capacity/costs for customers who need that (think AliExpress on their "Singles' Day" event, where they processed $1 billion in sales in 5 minutes). All this would need - for the $100k+/month customers - to be machine-drivable and automatable, with extensive monitoring and reliable alerting mechanisms - and the ability to build as much reliability and availability into the alerting/reporting/monitoring system and the automated provisioning and deprovisioning systems as each customer needs.

And at least to a first approximation - we've just invented 70% of the AWS ecosystem.

You might think Amazon don't cater to people who want hard $5 or $70 per month upper limits on their spending. You're _mostly_ right. There are many other people playing in that space, and it's _clearly_ not a high priority for Amazon to compete for the pennies a month available in the race-to-the-bottom webhosting that people like GoDaddy sell for $12/year.

The thing to think about is - "who does Amazon consider to be 'their customers'?". I think you'll find for the accounts spending 7 figures a year with AWS - billing _is_ "solved". The rest of us are on the loss-leader path (quite literally for the "free tier" accounts) - because Amazon only need to turn a few tenths or hundredths of a percent of "little accounts" into "their customers" for it all to work out as spectacularly profitably as it is doing right now.


"and it's _clearly_ not a high priority for Amazon to complete for the pennies a month available in the race-to-the-bottom webhosting that people like GoDaddy sell for $12/year."

Except that that's what this announcement is.

Which makes me think this may be Amazon's fix to runaway billing - if you don't have the resources to pay for mistakes[1], stay in the per-month kiddie pool and don't play with the heavy machinery.

[1] I started to add, "or trust yourself not to make them", but that's silly, because mistakes will happen.


I'd guess it's more to scoop up mindshare and make getting started easier, which almost assuredly leads to future upsells. That developer who starts prototyping a project on AWS instead of DigitalOcean now might make them $$$$ they otherwise wouldn't have down the line when that person needs to scale and doesn't want the huge pain of switching providers.


I don't disagree with your details, but you're arguing in a circle (here and in another similar comment).

Let's assume, based on the evidence at hand, that Amazon is rolling out Amazon Lightsail, and that as such, they're willing to do work (create business plans and write software) to court the $5/month market. In that case, it's a relevant comment for people to write "I can afford $5/month, or even $20, but I can't afford unlimited liability, even with what I know about AWS customer service, so I cannot use this product." It's relevant because it suggests that there's anxiety that is preventing uptake, which can be solved by a combination of writing software and internally committing themselves to eat the loss if the software is imperfect (as others have said, stopping service exactly on time is harder than it sounds, but the provider can always just eat the loss, invisibly to the seller).

Your (probably-correct) observation that Amazon doesn't really care about the penny-ante user's money (in the short term) is beside the point.


It doesn't have to be an actual functional ceiling -- just a customer-facing cost ceiling. Things don't have to really "freeze". Each service could have some defined "suspend" mode that attempts to minimize Amazon's cost non-destructively. A "limp home" mode. And yes, it's possible that this mode for some kinds of services would be no different than the service's normal operating mode.

When a customer's ceiling is reached, their mix of services goes into limp mode. Things slow down, degrade, maybe become unavailable, depending on each service's "freeze model". Alarms ring. SMS messages are sent to emergency phone numbers. The customer is given a description of the problem and an opportunity to solve it -- raise the cap or cut services.

So wouldn't this cost Amazon money? Sure, but that's a cost of doing business. And as others in the thread have pointed out, the actual costs to Amazon are surely much lower than the "loss" they're incurring by not unquestioningly billing the customer. Especially since Amazon often refunds large surprise bills anyway.

If this were the official policy -- no dickering required -- there's a definite cohort of risk- and uncertainty-averse customers who would be willing to start using Amazon (or switch back).


> Each service could have some defined "suspend" mode that attempts to minimize Amazon's cost non-destructively.

That's what stopping instances _is_ already. You don't get charged for stopped instances, which is a defining feature of Amazon's cloud. Very few providers actually offer this. Most just charge away for the compute even if the instances are powered off, Azure being one exception.

This whole "spin up compute and get charged a minimal amount when not in usage, but keep your working environment" model was pioneered by Amazon.

> So wouldn't this cost Amazon money? Sure, but that's a cost of doing business.

Why would Amazon spend a bunch of money, so that they can charge customers _less_ money, in order to keep customers who are cheapskates, and/or won't take the time to learn the platform properly?


Because they can get more customers that way, and having a hundred cheapskates might be more profitable than having ten non-cheapskates.


Raise the price by the actual cost of keeping the resources suspended for a week multiplied by the estimated probability of it happening. If that week passes with no additional payment then delete everything. The additional cost doesn't have to be applied to unlimited liability accounts. What's so difficult about that? There's not much worse customer experience than massive unexpected debt. Outages and data loss are minor problems compared to potential starvation and homelessness.


Oh give me a break man. Starvation and homelessness. Deleting customers data is something you don't do. If they can't pay you can write off the bill. But people have committed suicides because of data loss. The parent post nailed it.


People have committed suicide over debts too. I'm not suggesting Amazon gets rid of unlimited liability accounts, only that they give customers the choice.


If you're going to commit suicide if you lose your data, perhaps you shouldn't rely on the graciousness of a third party to save your data for free.


I think financial ruin was the reason not data loss.


> But people have committed suicides because of data loss.

Citation Required


"it's reasonable for cloud providers to give billing threshold alerts while leaving the choice of what to do in the hands of the customer.".

But they don't give us the choice. I need to keep an eye every moment of every day for an alarm, as hundreds or thousands of dollars rack up. That's the ONE THING I DON'T WANT. I'd take anything else (delete my data, lock everything, whatever) over charging me money I can't afford to pay.

I think it would be reasonable to put everything into a no access / deep freeze mode, until I pay up and choose to unfreeze. Would it cost Amazon that much to just keep my data locked for a couple of weeks while I sort out my storage? I'd even be happy for a reserved $100 or so to pay for keeping the storage going.


"I need to keep an eye every moment of every day for an alarm"

You know you can make a machine do that for you - right?

In fact all the tools Amazon would use to do this are available to you right now. CloudWatch, SNS, and Lambda are 98% likely to be all you need - apart from the time to get it set up to do whatever you think is "the right thing".
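For instance, a minimal sketch of the Lambda side, assuming it's subscribed to the billing-alert SNS topic and that you tag the instances you're willing to have stopped (the "auto-stop" tag is just an illustrative convention, not anything AWS defines):

    # Sketch of a Lambda handler that reacts to a billing alarm by stopping
    # (not terminating) every running instance tagged as safe to stop.
    # Stopped instances keep their EBS volumes, so data survives.
    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "instance-state-name", "Values": ["running"]},
                {"Name": "tag:auto-stop", "Values": ["true"]},  # illustrative tag
            ]
        )["Reservations"]
        instance_ids = [i["InstanceId"]
                        for r in reservations
                        for i in r["Instances"]]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}

That still isn't a hard billing cap - storage and other standing resources keep accruing - but it's the kind of "whatever you think is the right thing" response being described here.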


Well, except if something's gone wrong and my bills are suddenly shooting up, that's exactly the kind of time when some piece of software might misbehave, and fail to freeze everything. And it's not really very easy to test either.

This seems like the kind of thing you really want to get right, and it will be (I imagine) hard to get right. If it was easy, I would expect some company to offer it (along with, of course, a guarantee that if they mess it up, they will pay my bill).


Sure - and if you need that, buy that. WHM/Cpanel and Plesk both let you have 100% guaranteed monthly costs with vendor configurable response to over-use of resources. You can get that for $5/month or less - just not from Amazon, because that's not what they sell.

Nobody rings up Caterpillar and complains about the costs of leasing/running/maintaining a D9 'dozer if they're doing jobs that only need a shovel and a wheelbarrow.

Tools for the job. AWS might not be the tool you need. Or might not be the tool you need _yet_.


I've been involved in renting heavy equipment, and it doesn't work like Amazon. No one gets unexpected massive bills, you agree before what the bill will be. I don't see the comparison you are trying to make.


If you leave it parked in a pit overnight that fills with water, you may find yourself on the hook for a big bill if your insurance finds you negligent. Likewise, if you neglect to perform required maintenance, you could find yourself on the hook for an expensive engine overhaul.

Even heavy equipment rentals can result in large unexpected bills if you don't pay attention to what you're doing.


Actually a good comparison -- if reddit users came around and smashed up the equipment, I would be OK as I would have insurance.

I need "reddit / DDos insurance"


Sure - maybe I used a poor example. Apologies.

But.

There's nothing "unexpected" or "unagreed beforehand" about Amazon's pricing or costs either. You order a medium EC2 instance and we all know exactly what the bill per hour will be.

There's nothing unexpected or unagreed-beforehand about the ordering/provisioning process. You ask AWS to start one, they'll start one. You tell them to stop it, they'll stop it. You get charged the known agreed-upon rate for the hours you run it. You ask for 10, you get 10. There are even checks in place - the first time you ask for 50, you hit a limit which you need to speak to them to get raised before you can get a larger-than-previously-seen bill.

Same with your earthmoving gear. You ring up for prices and they'll say "$200/day for a bobcat, $2500/day for a D9 - includes free delivery in The Bay Area!"

If you need one bobcat for one day at 10 Infinite Loop, Cupertino - and click their web order form and say you want 10 D9s for one day at 1 Infinite Loop, Cupertino (and happily click through all the never-read web interface confirmations) - you should 100% expect to get a bill for $25k, as well as dealing with clearing up after parking 10 'dozers in Apple's parking lot.

This is not "unexpected". From the vendor's perspective $25k is not "massive". You knew and agreed to the prices and had every opportunity to calculate what your bill was going to be.

If you were only expecting a $200 bill - that's kinda on you. The earthmoving guy has heaps of other customers who spend many times that every single week - and they all started out as some guy who ordered a $200 bobcat or $25k's worth of D9s as a one-off. You are just another sale and another prospect in the top of the MRR funnel for him.

(Note: See holidayhole.com for a contemporary example of an unbounded earthmoving bill! ;-) )


The problem isn't starting up 250 servers.

The problem is someone putting up your hobby website on reddit when it's 2 in the morning your time, and you wake up the next day with a $10,000 bill.


It seems like a hard technical problem to shut down gracefully. But it's an easy product problem. Just suspend the account. AWS must do this already for some cases.

No one running a real business on AWS wants a hard ceiling instead of billing alerts and service by service throttling. Which Amazon has.

So, this is just the nuclear option for people's pet projects. It's not a bad thing to have but I wouldn't expect it to operate any differently than what would happen if you broke the TOS and they suspended your account.


> No one running a real business on AWS wants a hard ceiling instead of billing alerts and service by service throttling

That's absurd. Of course there are businesses that want hard ceilings. Perhaps not on their production website[1], but on clusters handed over to engineers and whatnot for projects, experimentation, etc.? I've seen these things lay around for months before they were noticed.

[1] Maybe you don't consider startups 'real' enough, but I can totally imagine early stage startups wanting limits on their prod website, too. You can't save CPU cycles for later consumption.


> No one running a real business on AWS wants a hard ceiling instead of billing alerts

Are you sure? I'd imagine many startups would rather take a few hours of downtime over being billed thousands erroneously. The latter could easily mean the end of the company, but the former, when you are just starting out, is not the end of the world by far.


My CFO and I run a real business and we'd like this. Especially being able to constrain it by sub/child accounts and/or departments/tags.


> No one running a real business on AWS wants a hard ceiling instead of billing alerts and service by service throttling. Which Amazon has.

I know startups that I could bankrupt with a few lines of code and a ~$60 server somewhere long before they'd be able to react to a billing alert if it wasn't for AWS being reasonably good about forgiving unexpected costs.

I'm not so sure no one running a "real business" would like a harder ceiling to avoid being at the mercy of how charitable AWS feels in those kinds of situations, or when a developer messes up a loop condition, or similar.

Perhaps not a 100% "stop everything costing money" option that'd involve deleting everything, but yes, some risks are existential enough that you want someone to figuratively pull the power plug out of your server on a second's notice if you have the option.


I meant a business that makes significant revenue and has enough users that downtime or data loss would be unacceptable.

If you can't afford downtime you probably can afford to wait for the alert and choose your own mitigation strategy. A system that can't tolerate downtime probably has an on-call rotation and these triggers ought to be reasonably fast.

If you can't react or can't afford to react, you probably can afford some downtime / data loss.

So the system doesn't need to have granular user defined controls. Just two modes. That was my point.

I think I triggered people with the phrase "real business" and I apologize for that.


> If you can't afford downtime

Only a tiny fraction of businesses can't afford downtime. A lot of businesses claim they can't afford downtime, yet don't insure against it, and don't invest enough in high availability to be able to reasonably claim they've put in a decent effort to avoid it.

In most cases I've seen of businesses that claim they "can't afford downtime", they quickly balk if you present them with estimates of what it'd cost to even bring them to four or five nines of availability.

> A system that can't tolerate downtime probably has an on-call rotation and these triggers ought to be reasonably fast.

A lot of such systems can still run up large enough costs quickly enough that it's a major problem.

> If you can't react or can't afford to react, you probably can afford some downtime / data loss.

I'd say it is the opposite: Those who can afford to react are generally those with deep enough pockets to be able to weather an unexpected large bill best. Those who can't afford to react are often those in the worst position to handle both the unexpected bill and the downtime / data loss. But of the two, the potential magnitude of the loss caused by downtime is often far better bounded than the potential loss from a crazily high bill.


Why don't those startups use something like cloudflare? It would stink to be at the mercy of the good graces of ddos purveyors to not attack.


Consider APIs, etc. A lot of businesses have needs where putting CloudFlare in between would be just as likely to block legitimate use. I love CloudFlare, and use it a lot, but it's not a panacea.


Plenty of teams will want this for dev/test.


> Should EC2 instances be terminated (which deletes all data on them)

You know exactly how much a paused EC2 instance charges you. The ceiling implementation could say, if the total amount charged so far this month, plus the cost of pausing the instance for the rest of the month, exceeds the ceiling, pause it now. So there's no data loss; the worst case is the customer's service is offline for the remainder of the month (or until they approve adding more money). At some point less than this number, start sending angry alerts. But you still have a hard cap that doesn't lose data.
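A sketch of that check, with made-up numbers (the cap and the hourly cost of a stopped instance are illustrative assumptions; the real figure would come from the EBS and other storage rates on the price list):

    # Sketch of the "stop before the cap becomes unavoidable" check described above.
    from datetime import datetime, timezone
    import calendar

    CEILING_USD = 20.00             # customer's hard monthly cap (assumed)
    STOPPED_COST_PER_HOUR = 0.01    # EBS etc. still billed while stopped (assumed rate)

    def should_stop_now(charged_so_far_usd):
        now = datetime.now(timezone.utc)
        days_in_month = calendar.monthrange(now.year, now.month)[1]
        hours_left = (days_in_month - now.day) * 24 + (24 - now.hour)
        # Cheapest possible rest-of-month: everything stopped from this hour on.
        floor_cost = charged_so_far_usd + hours_left * STOPPED_COST_PER_HOUR
        return floor_cost >= CEILING_USD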

It's not what a serious production user wants, but it's exactly what someone experimenting with AWS wants, either a running service that's looking at a cloud migration, or a new project/startup that hasn't launched yet.


Even a serious production user would generally have some threshold above which continuing and hoping AWS forgives the bill puts the company at greater risk than suspending service.

Granted, for a big company, that amount may be so big it's unrealistic to ever hit it.


How much does data at rest really cost Amazon? And how much of that cost is simply opportunity costs?

Most companies will hold onto your data for a time, then delete it afterwards.

This doesn't smell like technical concerns to me. It smells like sneaky Amazon-wants-to-make-more-money concerns.


From the other perspective it sounds to me like a sneaky "I want Amazon to bear the costs of me failing to pay for the resources I've agreed to costs for and consumed, and give me a bunch of 'grace time' to change my mind later" concern.

(<snarky> What's a gallon of milk on the shelf really cost Walmart? And how much of it is opportunity cost? If I usually buy 2 gallons a week - why can't I keep taking home a gallon every few days for a month or so after I stop paying, then cut me off afterwards? Sounds like a sneaky Walmart-wants-to-make-more-money concern.)


In the course of using Walmart in a fairly normal way to buy a gallon or two, I make a teeny mistake and take home 15 trailers of milk and they charge me full price for it.

If only Walmart would have a process in place to notice that I was ordering a spectacular and unusual amount of milk and save us all the trouble.


(Not entirely sure if you're agreeing or disagreeing with me here... ;-) )

So my local Walmart has a Netflix guy who gets 1000 trailers of milk twice a day, and the Dropbox and Yelp guys get a few hundred trailers a week each - and I know these guys from when I see them at the other Walmart in the next town over buying the same sort of amounts there as well. There's people like the Obama campaign who we'd never seen before who fairly quickly ramped up from a gallon a day to a pallet a day, then jumped straight to 50 trailerloads a week for a six months, then stopped buying milk completely one day.

What's considered "normal", "unusual", or "spectacular" - and to whom?


What's normal isn't the problem. The problem is if said Walmart doesn't have a way for a new customer to call up and say "our staff is only ever authorized to order 1 trailer a day; if they ask for more, don't fulfil until you have written confirmation that we authorise the amount, or we won't pay".

Plenty of companies operate like that, and e.g. require purchase order ids and accompanying maximum spends issued for any expense over X, where X can be very low. I've worked for companies where it was 0 - every expense, no matter how low, needed prior approval from the CEO or finance director. Not just tiny companies either - one of the strictest such policies I've dealt with was with a company of more than a hundred employees.


Then I click the "I plan to scale this to solar-system size" button when deploying, instead of the default "I'm a fallible human being and prefer not to burn piles of money" setting.


Alternatively, if you don't want the ability and risks associated with being able to scale to solar system size - use a different vendor who isn't focussed on providing that.

Amazon AWS's "important customers" are not "fallible human beings" who plan to keep their monthly spend under $100. They'd perfectly happily inconvenience thousands of those users in favour of their customers who _do_ need solar system scalability. (And, to their credit, there's an abundance of stories around of people on typically 2-digit monthly spends who screw up and get a 4-digit bill shock - which Amazon reverse when called up and pleaded with.)

So they built their thing as "default unlimited". Because of course you would in their position - follow the money. When Netflix wants 10,000 more servers - they want it to "just work", not have them need to call support or uncheck some "cost safety" checkbox.

If you need "default cheap", AWS isn't the right tool for you. You can 100% build "default cheap" platforms on AWS if you've got the time/desire (well, down in the "I can ensure I don't go over ~$100/month - it's not real easy to configure AWS to keep costs down in the $5/month class - the monitoring and response system needs about twice that to keep running reliably).

I sometimes don't think people (especially people who "grew up" in their dev career with "the cloud") understand just what an amazing tool AWS is - and the fact that they make it available to people like me for hobby projects or half-arsed prototype ideas still amazes me.

I remember flying halfway round the world with a stack of several-hundred-meg hard drives in my carry on - catching a cab from the airport to PAIX so I could open up the servers we owned, and add in the drives with photos of 60,000 hotels and a hardened and tested OS upgrade. Buying those 4 servers and the identical local setup for dev/testing, getting them installed at PAIX, and flying from Sydney to California to upgrade them was probably $30+ thousand bucks and 3 months calendar time. Now I can do all that and more with one Ansible script from my laptop - or by pointing and clicking their web interface.

AWS is an _amazing_ tool - talk to some grey-beards about it some time if you don't remember how it used to get done. But the old saying holds: "With great power comes great responsibility." If you don't want to accept the responsibility, use a tool with less power. Don't for a minute think Amazon are going to put an "Ensure I don't spend as much money with AWS as I might otherwise" option in there - if there's _any_ chance of it meaning a deep-pocketed customer _ever_ gets a false positive denial from it. (Which, now I think about it - makes this new Lightsail thing make so much more sense...)


I completely understand that AWS is designed for scaling. But I've seen CS professors and students get burned and turned off AWS when Amazon "screwed them" for a few hundred bucks when they were planning on spending $50. This is not good customer development. The next massive AWS user is a grad student right now.


On another note it makes me sad that people are so willing to justify "costs" without showing or explaining exactly what they are. Everyone needs a costs audit, yesterday.

Also, how are our analogies alike? Milk is a consumable, data is information. Completely different usage pattern.

Finally, every internet service provider I've ever used that held data for some reason granted me a grace period, even if it was never officially stated. Sometimes you just have to ask nicely.


I do suspect that there is a set of circuit breaker actions that could mitigate runaway bills mostly non-destructively. Stop writes to stateful storage. Stop all inbound/outbound data transfers. Terminate EC2 instances (which as you say would delete data, but that would generally be ephemeral data anyway). Halt tasks like EMR.

On the other hand, based on near-universal industry practice, there doesn't seem to be a huge demand for this. I suspect it may be better for everyone concerned to have heavy-duty users control their costs in various ways and for Amazon to refund money when things go haywire without bringing someone's service down.


Do you mean shutdown EC2 instances vs terminate? I believe a shutdown EC2 instance only accrues costs for things like EBS.


If you are running a large team, and handing out resources to people you may not directly manage, it'd be nice to be able to enforce billing alerts on certain individuals. Is there a way to do that?

I've seen engineering teams hand out accounts to support teams for testing, and since the resources are not under the purview of the dev team things go unnoticed until someone gets the bill. Arguably there are better ways to handle these requirements, but it'd be nice if you could force people down the path of setting billing alerts because these individuals don't always realize that they are spending money.


I wonder if itemizing payments per service would help? You'd then only incur suspension for services you couldn't afford. Maybe this in combination with some form of prepays?

So maybe a couple of EC2 instances go down, but you pay for and keep S3, Dynamo, etc. At least enough to salvage or implement a contingency. You'd still owe Amazon the money.

It's tempting to wonder why Amazon would incur that risk, but it is a risk already inherent to their post-pay model, and it serves as good faith mitigation to the runaway cost risk that is currently borne by the customer.

Not perfect, but maybe a compromise.


They already have to make all these decisions for accounts suspended for non-payment (expired CC, etc). This would just add the option for customers that would prefer to be locked out rather than have unexpected charges.


Sounds like a good business idea. Pay $10-50 a month for us to automatically disable your services which go over your stated budget. Free tier at 1 service.


Until you get successful and Amazon copy your implementation.


It'd definitely be a gamble, but imagine pitching this idea at Amazon HQ. "We're going to roll out a fantastic new project to allow our customers to spend fewer dollars on our platform!"

Not saying Jevons' Paradox wouldn't kick in, but the friction of convincing businesses to work on tools to allow their customers to spend _less_ money is high.


That's not the pitch. The pitch is that you're making people feel safer about spending money on your platform.

This is one of the fundamental things that make any sort of market work. If it's not safe to participate, people won't.


That's what the existing resource limits are for. AWS wants the customers for whom $1000 is a rounding error first and foremost. The DO-style $5-$500 / month customer is gravy on top and probably a future upsell.


That's really not true at Amazon. It's deep in their core values to cut customer expenses whenever possible, regardless of competitive pressures. I can't explain why they don't offer this feature, but I doubt it's because anyone is scared of a feature that could potentially help customers.


If that was true for AWS, their prices would be far lower pretty much across the board.

One of the most amazing feats Amazon has pulled off is to convince people that AWS is cheap. They're cheap in the way that Apple are: only if you need a feature-set (or name recognition..) that excludes the vast majority of the competitors from consideration. If/when you truly need that, then they're the right choice. There are plenty of valid reasons to pick AWS.

But they're very rarely the cheap choice.


It is cheap compared to setting up and running your own datacenter, S3 replacement, etc. You have to keep in mind that that's where the story started and continues to be: a lot of folks (the ones with $$$) see it as "no cloud vs cloud", not "DigitalOcean vs. Amazon".


(EDIT: To be clear I agree with you that that's the reason people often think that AWS is cheap)

Yes, but that's a false comparison. It's cheaper to rent dedicated servers at any of several dozen large hosting providers than it is to use EC2 or S3, for example. For most people it's cheaper to rent racks and lease servers too (but depending on your location, renting dedicated servers somewhere else might be cheaper - e.g. racks in London are too expensive to compete with renting servers from Hetzner most of the time, for example).

It's extremely rare, and generally requires very specific needs, that AWS comes out cheap enough to even be within batting range of dedicated solutions when I price out systems.

When clients pick AWS, it's so far never been because it's been cheap, but because they sometimes value the reputation or value the feature set, and that's a perfectly fine reason to pick AWS.

The point isn't that people shouldn't use AWS, but that if people think AWS is cheap, in my experience it means they usually haven't costed out alternatives.

It's an amazing testament to the brand building and marketing department of Amazon more than anything else.


AWS cuts their prices all the time. Every couple months an email comes out announcing price cuts for S3 or various EC2 instances.


And yet they're still far above most of the alternatives.

E.g. my object storage costs are 1/3 of AWS. My bandwidth costs are 1/50th or so of AWS prices.

There are valid reasons to use AWS depending on what exactly you do, but it's extremely rare for price to be one of them.


I'm always fascinated when someone mentions a paradox so I looked up "Jevons' Paradox".

The real economic term for this is elastic demand (specifically, relatively elastic demand). For example, microprocessor cost reductions make new applications possible, and thus demand increased so much that the total amount spent on microprocessors went up for decades. An example of inelastic demand is radial tires. They last four times as long as bias-ply tires. But since this didn't cause people to drive four times further, the tire industry collapsed on the introduction of radial tires.

Does anyone know an example of an actual paradox? I've never found one, and I'm curious if they really exist.


Are you sure?

Jevons's Paradox is about demand increasing for a resource when it becomes more efficient to use, e.g., someone invents an engine which can go twice as far with the same amount of fuel but instead of halving the demand for fuel the demand actually increases.

If I recall, elasticity of demand has to do with the relationship to supply. A very inelastic demand will cause people to consume the same rate no matter how much _supply_ is available. It doesn't have to do with the efficiency at which the resource is consumed like stated above. It's a subtle difference but I think they're actually quite distinct concepts.

Actual paradoxes are common. Just consider the classic: "This sentence is false".


Yes I'm sure. When coal becomes cheaper as a fuel (efficiency being one path), if that opens up new applications or use by a broader set of customers, it's no surprise at all that total revenue could go up.

As for your example, most sentences are neither true nor false. Nothing interesting has a probability of 0.000 or 1.000.

"This sentence is false" is clever use of language, may be interesting to sophomore philosophy students while smoking weed, but its not useful and there's nothing paradoxical about it.


Regarding your question about a paradox, what would qualify to you as "an actual paradox"? What is the definition against which you try a contender? Feel free to look it up in a dictionary, but that probably won't help you generate a definition that makes "This statement is false" non-paradoxical. Note also that the etymology of paradox is "beyond strange", so historically the bar for qualifying is simply to be an idea or combination of ideas that is remarkably strange or surprising.

> most sentences are neither true nor false. Nothing interesting has a probability of 0.000 or 1.000.

I'll start by observing that surely you're talking about propositions, not sentences, nor utterances. Or at least you ought to be.

But more significantly, I'll note that most propositions are either true or false (under a given interpretive framework), but that as epistemologically-unprivileged observers, we must assign empirical propositions probabilities that are higher than 0 and lower than 1. Propositions like "I am a fish" or "You hate meat" or "If Rosa hates meat then Alexis is a fish" are either true or false, under any given set of meanings for the constituent words (objects, predicates, etc). I'm curious what probability you think applies to propositions like "2 + 2 = 4" and "All triangles have 3 sides" and "All triangles have less than 11 sides". I think there are very many interesting propositions that differ from these only in degree of complexity (e.g. propositions about whether or not certain code, run on certain hardware, under certain enumerable assumptions about the runtime, will do certain things).

Based on your very strange claim that all interesting sentences have non-zero non-unity probability, perhaps you're saying that you find theorems uninteresting, and moreover are only interested in statements of empirical belief, such as "I put the odds of the sun failing to rise tomorrow lower than one in a billion." In that case, I cannot imagine what statement of interest would qualify as a paradox, except perhaps insofar as some empirical statements of belief are "beyond strange".

"This sentence is false" is a paradox under pretty much everyone's notion of a paradox.


> I'm curious what probability you think applies to propositions like "2 + 2 = 4" and "All triangles have 3 sides" and "All triangles have less than 11 sides".

Those are great examples, thanks. All true, and there's nothing interesting about them.


"All triangles have 3 sides" might be an uninteresting triviality, but "The sum of the squares of the lengths of the catheti is equal to the square of the length of the hypotenuse in a right-angled triangle" is neither trivial nor uninteresting and yet it has a probability of 1.


I love this example the most. It's the exception that proves the rule.

If you need to dig this hard to find something interesting with a probability of 1, that's pretty good evidence that the vast majority of interesting statements are not of the true/false variety.

Although I don't find it interesting, I am open-minded enough to ... embrace.. the .. uh.. diversity of the world, that allows some people, to find that interesting.


Pythagorean Theorem is "digging hard" and "not interesting"? Mind explaining?

The language that contains all Turing Machines that halt on all inputs is not decidable.

Or

e^(iy) = cos(y) + i * sin(y)

Are those uninteresting trivialities to you?


Yes, exactly. And "this code has a mathematical error in it" is often interesting, often non-trivial, and often has probability 1 (and often probability 0).

And these things are exactly the sort of thing that "differ from [trivialities about triangles] only in degree of complexity".

Note that "all triangles have 3 sides" is probably an axiom, but "all triangles have less than 11 sides" is a trivial theorem.


An actual paradox is an apparent paradox that you don't know how to resolve.


"This statement is not true."


"This statement is true, not."


AWS actually has a "pillar" of architecture called "Cost Optimization". I found it interesting that in their GameDay at the AWS re:Invent conference they effectively punish solutions that cost too much and are inefficient.


Which is the point, right?


well, hell then you get what everyone wanted in the first place anyway :p


That would be Azure. They have hard limits and they shut down when you go over the limit, I believe.


I keep wondering how accurate that is. On one hand, implementing that precisely surely must be difficult - if it wasn't, everyone would have it. On the other hand, Azure clearly have issues calculating the bills in real time. Heck, once I kicked off a bunch of large VMs for a day. Once they were shut down I checked out the billing page - the cost estimates kept increasing every hour!

My hypothesis is that they don't really have it nailed down but given big margins they have they can afford to let you use more resources than you pay for in the end.


I can only speak for my own experience, but it has saved my wallet a couple of times in the past few years. The spending limit is advertised fairly explicitly as just that: a customizable limit on your monthly spending [1]. If it doesn't work as intended and you end up with a bill larger than the limit you set up, you'd have a rock-solid case for disputing the extra charge.

[1] https://azure.microsoft.com/en-us/pricing/spending-limits/


I've been using a $150/mo spending limit on Azure for maybe two years now, and I can go on record stating that it is extremely accurate. They're really good about showing me a breakdown of exactly where every cent I'm spending is going over time, and the second it hits $150, Azure automatically shuts down all related services and stops charging me.

There are a few Azure services to which the spending limit does not apply, but as long as you know what they are then you can choose to use them of your own volition.


Azure tried to charge me 3200 euro/month for inbound traffic


If you think that solving this problem in a consumer-friendly way is beyond the wit of Amazon then I think you are wrong.


Completely. I will say Amazon is usually very good about refunding money if you made a mistake or got hacked, so in that sense their customer service is pretty good.

One time we had literally 1 million CloudWatch metrics get created: we were monitoring MongoDB databases, a CPAN test was creating test DBs and not deleting them, and we were not ignoring DBs created with names like test_* when creating the metrics.

Another time an outside developer committed a root credential for a (basically) unused Amazon account to a public repo on GitHub.

Both times they refunded the costs. Not sure if that was because we were paying tens of thousands a month to get this service, though!


That is charity. Why not just put in the <SHUTDOWN LIMIT EXCEEDED> checkbox, which would solve the issue entirely?


What thing would they shut down first?


Does this not work as a hard limit for App Engine? [0] It also says here [1] that 'Spending limits are set for paid apps and cannot be exceeded.'

[0] https://cloud.google.com/appengine/pricing#spending_limit

[1] https://cloud.google.com/appengine/docs/quotas


I'm not sure because we don't use AppEngine, but these seem like two important caveats:

> Important: Spending limits are not supported in the App Engine flexible environment

> You may still be charged for usage of other Google Cloud Platform resources beyond the spending limit.


Is the free tier even worth trying out? I went to look into it previously, but stopped before entering my CC info.

Can it incur charges even if you've set up the server to only be a free tier?


Depends what you want to get out of it. I used it for a year to host a crappy blog that got trivial amounts of traffic. It's perfectly fine for its primary use case of getting your feet wet in the AWS ecosystem, learning what all the different features are for, and how to manage them. If you want to do any "real work" you'll blow past what the free tier offers, but that's ok with me.

Yes, you can incur charges if you exceed what's covered by the free tier. Not all AWS services even have a free tier, and those that do are severely limited (1 micro instance, 5GB of S3 storage, etc). You're not off in some sandboxed environment where they just shut you down if you go over the limits. It's more like a monthly credit of $X for the first 12 months of your account. To cover my ass, I set a really low billing alert threshold. Like "email me if my monthly bill ever projects to exceed $1".


Yes, when I was still in high school, I had set up a Munin graphing node on AWS. Well, what I didn't realize at the time was that Munin likes to use a lot of disk IO every time it writes out the graphs. AWS charges for I/O on their SAN (not on local disks, but the free tier doesn't come with local disks), and so I ended up with a $150 bill and only use them now for Route53 (DNS hosting, it is fantastic for that), and S3/Glacier for archival storage.

It is worth trying if just to gain knowledge of AWS. But for hosting, I'd say DigitalOcean.


The newer EBS types don't charge per I/O operation. (Provisioned IOPS types charge for the speed you theoretically can do, but don't charge for the ops you do do.)

https://aws.amazon.com/ebs/pricing/
https://aws.amazon.com/ebs/previous-generation/


Yes, I think Elastic IPs can incur charges in some way.


Yep.

I had a personal $400 learning experience with Amazon. They did refund it. My last company had a low-5-figure surprise a few years ago. Some of that could be considered their fault (alerts were sent to someone on vacation), but again, the refusal to allow the option of a "hit a limit, pull the plug" option is what causes this.


Of course, there's also the opposite scenario "So here we were, having the best sales day in our history, and suddenly Amazon pulled the plug on our servers because we went over our authorized limit! We lost 5-figures of sales that day. Sure, they sent us warnings, but the alerts went to someone on vacation... but why wouldn't they let us exceed the limit for a bit before they pull the plug"


Nobody says that the hard limit should be there for everybody, but I SHOULD be able to hit a checkbox that says "no matter what happens - runaway CPU, somebody DDoSing me with traffic, runaway disk - I do not want to be responsible for more than $x/month"

Personally this is the main reason why I have never considered using AWS for my small projects, but maybe this is an intentional choice by Amazon, to keep away "hobbyists" and only go after companies where an extra $1k in AWS bills this month is just a blip on the radar...


I'm pretty sure Azure had that the last time I used it.


Nope.

They have billing alerts ('beta') and used to offer a prepaid account type that they have discontinued for new customers (some may still have grandfathered accounts).

Closest thing now is the MSDN credit. It doesn't require a credit card and the account auto-suspends when you hit it. Problem with the MSDN credit is that it is for non-production only (and they reserve the right to kill anything they consider "production").

They should really offer prepaid again or bill caps. But Microsoft is too busy copying AWS to consider that they can do better than AWS.


Azure has spending limits for the subscription which is what is being discussed.



> I can't stand it. It's unlimited liability for anyone that uses their service with no way to limit it.

It's not unlimited liability, most of their services have limits imposed. If you've scaled any service to thousands of machines you'll quickly find out that they stop you at 20-30 machines or so. Then you have to contact support to get the limit increased.

Sure, you can still rack up an unpleasant bill. But there are limits :)


But even the default limits are high enough that there are plenty of companies that could at least in theory bankrupt themselves with it. Especially because there is no hard total cap, and so many services have high enough limits that you can get really nasty shocks if any one of them is maxed out. Even more so if you e.g. make use of different instance types (separate limits) in different regions (separate limits) and a wide range of services (separate limits).

And I've done work for clients that have requested really big increases because of both realistic and unrealistic expectations of handling traffic peaks. E.g. one client asked for an increase to 100 instances of 2-3 different types in a few regions to be prepared to handle a couple of days of high traffic. If said event had happened, they scaled it all up, and somehow didn't take them down again, it'd only take a few days of charges for them to be insolvent at their then-current funding level.

So you're right, there are limits, but limits or not doesn't matter if it's high enough that it can make you go out of business.

Which makes me wonder if anyone has ever gone out of business because AWS was unwilling to forgive a "surprise" bill. I'd be inclined to assume that they're willing to stretch quite far to avoid that, given that they seem to be very good about it. But I'd also not want to stake my business on hoping Amazon will be charitable about something like that.


It's not actually unlimited liability. Every AWS service has a set of default limits, and you must request AWS raise those limits before you can provision additional resources.

I agree with your larger point, but you're going to be surprised by a $500 bill, not a $500,000 bill.


Hey there, I work on Google Cloud.

Did you set a billing alert? Google BigQuery has proactive "cost controls" that won't let you go overboard, whereas billing alerts are just that - alerts.


At least last I checked, Azure offers this.


Amazon lets you set a cost limit in your account. Look it up.


I did. They don't.

They offer billing alerts, have a budget tracker thingy, but have no actual automated caps. Closest thing you can do is write one yourself using the AWS APIs.


Yeah, gotta monitor daily


That is only for automated reporting and alerts. It does not allow you to actually limit your expenses.


IIRC, it can fire off to an SNS topic; then you can burn down your stuff to your heart's content.


Billing alerts that are simply a notification you exceeded $X is not any sort of limit, particularly when they may take hours to arrive.


There was this awesome bug with AWS a while back too where you could accidentally sign up an account twice with the same email address, but with no way of knowing you had two accounts without paying really close attention to your account ID. So if you went to log in and used your email and one password, you would sign into one account, but if you used the same email and a different password, you would sign into the second account (who knows what would happen if both accounts had the same password).

Anyways, a couple years ago I signed up for an AWS account to mess around with the free tier (doing a Pluralsight course I think), and after a bunch of messing around didn't touch it again for a while. A year after, I was going to use AWS for something and forgot I had already set up an account earlier (or thought it was on a different email or whatever), and managed to sign up for a second account with the same email address (which now became the primary account).

Continue down a few months and I start getting billing emails for a few dollars a month but could not figure out why (and the invoice wouldn't show up in my AWS console for that email address). After digging I realized somehow I had two AWS accounts on the same email and the bill was on the other one, but I couldn't log into it as I didn't remember the password, and doing the password reset would just send me a reset for the second account.

It took a tonne of back and forth emails with Amazon support to get it fixed and gain access to both accounts; the charges ended up being for a few VMs I had created (but stopped) after my free tier ran out, so it was billing me a few dollars a month for storage. I haven't really touched AWS since, because the billing can be so obtuse if you aren't paying very very close attention.


> There was this awesome bug with AWS a while back too where you could accidentally sign up an account twice

Not a bug, this has been Amazon's philosophy with accounts on all systems from very early on. Some of the initial designers of Amazon knew families where multiple people shared one e-mail address, but wanted separate accounts for shopping.

Multiple accounts per e-mail address was a conscious design decision for all Amazon systems.


A poorly implemented design decision then (which they turned off on AWS back in 2012 due to exactly this happening to many people). https://forums.aws.amazon.com/thread.jspa?threadID=101218

There was no way at the time for me to a) see that I had a second account associated with my email address, b) reset the password for the second account without going through support, or c) merge the two accounts into one even with support's help.


Indeed. Multiple accounts per email is the most legacy of legacy features, and it's actually pretty easy to not realize it exists even if you work for Amazon. I'm not surprised internal systems don't handle it well. If I recall right, there was a push to get customers to move off it, using site messaging etc, because it was such a pain to maintain.


I have that problem at my current job. The average user is probably 60. Our policy is to tell people that email accounts are free. Our primary concern is that people would recover the password to someone else's account and access data that we only let the other person access.


Similar experience here. In grad school I forgot to shut down an 8 core instance I was doing data analysis on and it ended up costing $400 before I noticed it.

It would be great if when entering your CC information, they let you set a default monthly cap for all your projects, to be overridden at the project level if you suddenly need to spend more.


I think it comes down to them seeing themselves as a utility. You wouldn't want to be prompted for payment confirmation every time you plug something into an outlet, but this does mean that you'll have a huge unexpected charge if you accidentally leave on the air conditioner when you go on vacation.


I think part of the reason that utilities can get away with this is that the maximum bill you are likely to run up is generally 2x-3x your normal bill. That doesn't hold for Amazon, because your actual bill can be orders of magnitude larger than what you expect.


And every now and then a utility bill makes the news because someone's water line sprung a leak and they used $5000 worth of water in a month.


What's the quote? Something like, "At some point a quantitative difference becomes a qualitative difference."


I know two people near my parents who've ended up with $30,000+ water bills for a single month.


I'm sure they see it that way. The problem here is that they aren't a utility, unless the utility were also selling air conditioners and microwaves, and those air conditioners and microwaves had a button on them that said "Charge me 10x my normal bill this month".


Offering the option to cap expenditures doesn't affect anyone that chooses to not use that feature.


Similar thing here (except I was trying to be more paranoid).

Tested out the free tier of Amazon, but didn't realize spinning down and spinning up would ding me if they were within an hour.

Even now, when I use it for testing and I'm being fairly careful, I'll get a $3 bill at the end of the month. I was trying to set up alerts, but their alerts and dashboard, while I'm sure super capable, are a bit overwhelming as a new user.


$3! :)


I got my first bit of AWS credit in a cloud class and something like this was common. From what I've heard they will null out the bill if they can see you didn't use it.


This happened at one of the first startups I worked at also. They were burning through a ridiculous amount of capital on AWS silly charges here and there. I think they were spending somewhere in the range of 5-6x what they really should have been spending just because they were "testing" features and forgetting about them.

It's why all of my projects sit on DO and I only really use Route 53 from AWS.


That's exactly the reason I wouldn't sign up for something "free" when I was required to give away my credit card details.


The same thing happened to me when I first signed up for AWS. I contacted support and they just credited my account, then gave me a promo on top of it.


Also, if you have a static IP attached to the VPS and you first Stop and then Destroy your instance, you will need to make sure you "free" the IP as well to avoid the _small_ $0.005/hr charge.

From FAQ:

> What do Lightsail static IPs cost?

> They're free in Lightsail, as long as you are using them! You don't pay for a static IP if it is attached to an instance. Public IPs are a scarce resource and Lightsail is committed to helping to use them efficiently, so we charge a small $0.005/hour fee for static IPs not attached to an instance for more than 1 hour.


> $0.005/hour

That's $3.60/month... seems similar to mail-in rebates—many people forget, and accidentally give Amazon some (mostly) free money.

Also, from later in this thread:

> FWIW, bandwidth overages at Linode and DO are $0.02 per GB, LightSail is $0.09.

It's these seemingly-tiny (but not-so-tiny when I'm running 60-70 VPSes) costs that kill when you get your first bill after a large traffic event.


Would anyone please tell me which of Linode, DigitalOcean and Vultr have cost ceilings? I looked at their pricing pages but couldn't figure out. They all claim that they have monthly billing caps for the hourly rates, but meanwhile, both DO and Vultr have per-GB charges if the transfer quota are exceeded, and Linode is silent on this on its pricing or FAQ pages. Can the data transfer charges be capped too? If so, what happens when the quota is reached?


> Would anyone please tell me which of Linode, DigitalOcean and Vultr have cost ceilings? [...]

They don't for traffic. So you do run a small risk of something happening.

However, Linode at least pools your VPSs so if you have 100 of them and 20 of them "go over" the cap you still are often okay because of the other 80 that didn't "go over".

The truth is none of these providers provide truly hard caps. The difference is with Amazon/Google/Azure/etc you can realistically get hit with a 4 figure bill if something goes seriously wrong.

With DO/Linode/Vultr I've never seen accidental "mistakes" cause that sort of thing, and even an active DoS/DDoS attack would be unlikely to cost you more than $100 in overages before they started null routing you.


It's not exactly free money, because Amazon has to keep that IP reserved for you the whole time. By sitting on static IPs you are using AWS resources. Yes, operationally it costs them nothing, but there is an opportunity cost.


> but there is an opportunity cost

Is there?


Yes. They could sell the IP address to someone else.


But... they're selling it to you.


It's more valuable to be in use than to sell it to you. They are very limited on ipv4 space, so the charge is really a penalty for keeping that resource from another customer.


IPv4 allocation limits are still mostly a scare tactic to get people onto v6. I know dozens of people from my webhosting days with /12 and /16 allotments doing nothing that they pay peanuts for. This isn't a unique scenario.


That's not relevant, if I have 10 cars and you need a car ... then all that matters to you is that you need a car, not that I have 10 cars sitting there doing nothing. People sitting on IPv4 addresses don't care but new entrants cannot get new IPv4 addresses since they're all allocated.


> This isn't a unique scenario.

And this is exactly why we're running out of publicly available IPv4 addresses.


I think the point is that there's a reason they charge you for it instead of letting you hold onto it for free.


They aren’t, unless you have an instance attached to it.


yes.


You're capped at 20 instances as well by the looks. Plus you'll get AWS 'dog shit' support included which is hopeless.

Will stick with Linode.


They put the caps on to help with the very problem you are all bitching about: provisioning a ton of resources and getting a big bill. It's very easy to raise the cap.


I wasn't complaining about that. I've had 2-3 day turnarounds on "everything is broken" events on normal AWS VPC. Always factor their support offering in as well.


What level of support did you have? I've been on developer or first-level business support, and generally get someone knowledgeable; only the timing changes.

The only bad experience I had was with SES - we got blocked by high bounce rate, sending to a test email that did not exist (specifically because it was a test email). It took two days for the special unblock team to unblock us, even though the general support guy I was talking to had responded a couple of times in that wait period.


Zero to start with, now business. Business is "ok" - sometimes takes a couple of attempts to get someone who knows what they are talking about.


I suspect that the VPS cap is more about discouraging large AWS users from spinning up a zillion to reduce egress costs.


The main purpose of AWS's cap is to prevent abuse.


Like mail-in rebates, it creates a moral hazard on the vendor's part.

The right thing to do is to just discount the product and re-use IPs unless otherwise reserved. Mail-in rebates can be ignored or "lost in the mail", and that seems to happen often enough for me to have lost trust in them. I have little control over what the vendor does, so I would rather avoid vendors who think screwing with me is ok.

I don't buy products with mail-in rebates, and now I won't buy into Lightsail (presuming this thread is accurate and Amazon doesn't fix it).


DigitalOcean also does this:

> due to the shortage of IPv4 addresses available, we charge $0.006 per hour for addresses that have been reserved but not assigned to a Droplet. In order to keep things simple, you will not be charged unless you accrue $1 or more.


It would be nice if they offered an option of adaptive bandwidth throttling, so that once you're past 75% of your allotment it starts slowing you down such that you never reach your entire quota, and never get charged for overages.

The Zeno's paradox in action - once you reach half your limit, the speed is cut in half. "Zeno" throttling if you will. :)
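
Not offered by anyone as far as I know, but the rate calculation for that kind of throttling is tiny. A toy sketch (the 75% threshold comes from the comment above; every number is illustrative, not any provider's real policy):

  def zeno_rate(used_gb, quota_gb, full_rate_mbps):
      """Throttle so usage only asymptotically approaches the quota.

      Under 75% of quota: full speed. Past that: scale the rate by the
      fraction of the final quarter still remaining, so the closer you
      get to the cap the slower you go, and you never quite cross it.
      """
      remaining = max(quota_gb - used_gb, 0)
      if used_gb < 0.75 * quota_gb:
          return full_rate_mbps
      return full_rate_mbps * (remaining / (0.25 * quota_gb))

  # e.g. 1 TB quota at 100 Mbps: with 900 GB used you'd be down to 40 Mbps
  print(zeno_rate(900, 1000, 100))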


There must be something you can run on the server that will do this for you. I agree that Amazon should offer it in front of the server, but there are options.


Or imagine if they just bundled a flat rate in the service? You're effectively specifying a burst policy.


I don't think that actually holds up. There are certainly some people that want "ironclad guarantees", but the reality of cost here is pretty close: you pay $5/mo max for the server. The things that can cost more are either overages on bandwidth/dns (and here, 1TB of bandwidth on the low end is included, eliminating a lot of the flux from EC2), or things you choose to initiate, like snapshots.

It feels like the larger thing they're trying to solve, that I expect actually stops the majority of people who don't choose AWS, is the complexity around setting up VPCs/SecurityGroups/Subnets/etc.

Most providers in the VPS space already charge overages for bandwidth, and most of them don't support suspending the account vs just billing you.


I personally don't use AWS for personal projects because of the lack of a cap. I would rather see my system suspended rather than continue to pay beyond my budget.


This. I got excited about this until I came to HN and started reading the comments. I _have_ to have a cap. I don't want to do something stupid that puts me on the hook for $$$.


So who are you using then? I don't see anyone that has a cap. Everyone charges you if you go over your allotted bandwidth.


Not sure if this is still the case. DigitalOcean had a pricing structure for bandwidth overages, but wasn't actually charging for them because they had no way to show how much bandwidth had been used in a billing cycle. Best to ask them whether that's still true.


I agree.

I am currently using Linode, but would move to AWS if they offered a cap. 2 years ago I signed up for the AWS free tier and forgot about it (didn't use it at all). It ended up costing ~$60 before I found out, and since then I've avoided it.


Linode has bandwidth overages, just like LightSail, so I don't see how this compares. You pay more money if your Linode surpasses the quota.


Exactly. I have alerting set up on DO that notifies me if I need to address scaling issues. Let me make the decision for smaller projects. 100%.


AWS's pricing and usage related billing is orders of magnitude better than DO.

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2...


Exactly. There's certainly a market for VPSs and other cloud services that have a hard cost cap. AWS has apparently decided that they're fine with not competing for that slice of the business.

Setting cost caps on more complex applications that use a lot of different AWS services would get complex in a hurry and could easily have unintended effects.

As someone else wrote, I view this as primarily a simple VPS for people who are already using AWS for other things. I suspect that AWS isn't really interested in being a VPS-only provider for the most price-sensitive customers.


Exactly, the lack of cost ceiling is the #1 reason I can't pick AWS.


The cost ceiling is the initial resource limits. Maybe you can ask them to lower the limits?


Not for bandwidth, though. That's where AWS gets you. All the complaining about server costs applies to the AWS of yesteryear; server prices are pretty similar to competitors now. But traffic doesn't have any caps that I'm aware of.


It's like they saw all these consumers being screwed by ISP and cell provider data caps and astronomical accidental overage fees and went "how do we get in on that action?!"


Hehe, sounds pretty much like what they've been doing. I really wish they'd bring some of that customer-centricity they talk about to this aspect.


They frequently refund mistaken overages.


The AWS cost structure is one of the reasons we switched to a different provider. I feel Amazon has taken metered pricing too far with AWS, especially when the prices aren't really that low. Everything seems to cost extra with AWS, and it's quite hard to estimate beforehand how much a thing will actually cost in practice.


It's so confusing that I switched to DO. Not that DO is cheaper, but its pricing is way simpler.


CloudWatch alarms can stop running EC2 instances. We do this to prevent accidentally leaving expensive instances running, if that helps a little.


Afaik the data is delayed and the damage might already be done by the time the alarm fires


For just running instances, that's not true. You set up a metric that checks whether the instance is alive every minute, ten minutes, whatever, and set its alarm condition to trigger once the metric has been true for six samples, 600 samples, whatever you need. The alarm action: stop instance.
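
For anyone who wants to try it, a rough boto3 sketch of that setup; the "alive" check here is just CPUUtilization >= 0 (true for any running instance), the instance ID and the 8-hour window are placeholders, and the stop action uses CloudWatch's built-in EC2 stop-alarm ARN:

  import boto3

  cw = boto3.client('cloudwatch', region_name='us-east-1')
  instance_id = 'i-0123456789abcdef0'    # placeholder

  # After 8 consecutive hourly samples of the instance simply being up, stop it.
  cw.put_metric_alarm(
      AlarmName='auto-stop-' + instance_id,
      Namespace='AWS/EC2',
      MetricName='CPUUtilization',
      Dimensions=[{'Name': 'InstanceId', 'Value': instance_id}],
      Statistic='Average',
      Period=3600,
      EvaluationPeriods=8,
      Threshold=0.0,
      ComparisonOperator='GreaterThanOrEqualToThreshold',
      AlarmActions=['arn:aws:automate:us-east-1:ec2:stop'],
  )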


While it wouldn't be as easy as what you want, and this certainly wouldn't be the best solution, they say they offer an API.

So in theory (I haven't checked what the API gives you control over - so this may be worthless), you could monitor your instance (bandwidth, time up, disk usage, etc), and if things get out of hand, or approach your limit (whatever it is), you could use the API to say shut down or delete the instance, or throttle the bandwidth (maybe via firewall rules or something?).

Again - this would assume the API allows you to do this (and ideally from within the instance itself - which shouldn't be an issue, I wouldn't think). And again, it shouldn't take this much work (you're right, it should just be a simple control panel setting).

But maybe it's an option for those who have the skills to implement it?


EDIT:

I just took a quick look at the API docs - and while it doesn't look like you can mess with the firewall rules settings, everything else should be possible (get metric data, start/stop/reboot/delete instance, etc).
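
Something along these lines, say; a cron-able Python/boto3 sketch against the Lightsail API, where the instance name and the self-imposed transfer budget are made up:

  import boto3
  from datetime import datetime, timedelta

  lightsail = boto3.client('lightsail')

  def network_out_last_day(name):
      """Sum outbound bytes over the last 24 hours for one instance."""
      end = datetime.utcnow()
      resp = lightsail.get_instance_metric_data(
          instanceName=name,
          metricName='NetworkOut',
          period=3600,                     # hourly buckets
          startTime=end - timedelta(days=1),
          endTime=end,
          unit='Bytes',
          statistics=['Sum'],
      )
      return sum(p.get('sum', 0.0) for p in resp['metricData'])

  BUDGET_BYTES = 900 * 1024 ** 3           # self-imposed ~900 GB/day cap

  if network_out_last_day('my-instance') > BUDGET_BYTES:
      lightsail.stop_instance(instanceName='my-instance')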


"Why even offer a service like this if you cannot GUARANTEE that the $5 they say they charge is all you'll ever get charged?"

Exactly! We have used AWS and DO a lot this past year. DO is great for smaller sites/api's and super easy to use. Their support is also outstanding.

AWS has tons of tools but many come at a cost. We are in the process of moving a couple of sites off of AWS and onto a LiquidWeb dedicated box. We will be paying much less and the LW dedicated server is more than enough for what we need.

AWS is great for spinning up and scaling instances quickly and comes with a ton of other tools. At the end of the day however it is not always the most cost effective or even best offering for most sites/apps.


> AWS has tons of tools but many come at a cost.

Well yeah, that's how they pay for those tools -- they charge for them.


Nothing wrong with charging. There are more affordable alternatives though.


Isn't $80 for a 2-core processor a bit high especially if they are competing with Digital Ocean?


And DO is already immensely expensive compared to a VPS at OVH, or actual dedicated hardware at Hetzner, OVH, Online.net, Scaleway, Kimsufi, or whatever they're all called.


I'm in Australia so I either pay a premium (100x) for an Australian VPS, or deal with 150 (US) to 400ms (EU) latency. I'm "locked out" of Hetzner et al due to that - and yes, I know there's nothing you can do about the speed of light. I just wish Australian/New Zealand/Singaporean/Hong Kong offerings were more competitive (all of those are <200ms)


Digital ocean have servers in Singapore for the same price as all their other ones. The latency from Perth is about 40ms from memory.


167ms from Canberra, on the NBN. Terrible routing though - I wonder what it would have been pre-TPG takeover, though.


Have you checked out vultr? I host my VPS there with no issues and extremely good ping.


Other comments regarding Vultr's billing in this thread have put me off it completely.


They say on the FAQ:

>For every Lightsail plan you use, we charge you the fixed hourly price, up to the maximum monthly plan cost.

Wording implies the monthly pricing is a 'maximum' price.


Traffic is the big item that can add additional charges:

> Data transfer overages above the free allowance are charged at $0.09/GB.

On the $5 instance the second TB (at $90) is 18 times as expensive as the instance itself with the first TB included.


Wow. That traffic is almost 50 times more expensive than at hosters of dedicated servers, or when you buy it directly from Tier 1 or Tier 2 networks.

Hetzner charges 1.36€ per Terabyte of traffic, and with most servers, gives you 10-20TB included.

I’ve heard people talk about the ridiculous traffic costs of AWS, but this is an entirely new dimension of expensive.


You get 30 TB traffic inclusive with the smallest Hetzner server that has ECC (4 core Xeon with 64GB RAM). And that is for ~70 EUR per month + setup fee.

That amount of traffic is more than 2000 EUR per month at AWS. Of course this is comparing entirely different things, but still, if you have significant traffic and can't avoid it with a CDN or something like that, AWS (as well as Google and Microsoft clouds) get seriously expensive.


I get that, but that's after the first TB. For a $5 VPS I'd expect most customers wouldn't go over that.


I'd agree. But it would be nice to have an option in the account settings so you literally cannot accidentally spend $90 on a $5 account.


It's not expected... until your site get unexpectedly featured on HN, reddit or any news site.


I'm pretty sure AWS only charges on traffic out.

Edit: just did some research, there are many cases this isn't true


That's just referring to the VPS itself. If you keep reading the FAQ you'll see that there are ways to exceed that amount.

Plus there's nothing stopping someone breaking into your account and upgrading it in all kinds of evil ways (which has been a huge hassle with AWS tokens being stolen from e.g. Github).


Really? What are these evil ways?


There were lots of stories of hackers spinning up the largest ec2 instances available to mine bitcoins on anyone's account they could get access to


Not OP, but I've heard of people getting hijacked and finding a bunch of VPSs spun up sending out spam and other malware related stuff. Maybe that's what they mean.


It's not; there are additional possible outbound bandwidth overage costs, and they are not insignificant.


>I'd welcome an account suspend instead of bill shock.

The big question here is what to do with stateful data. Would you accept an immediate deletion of all of your S3 data? RDS instances and snapshots?


I said suspend, not removal.

They could easily cut off public access to those resources while charging you storage fees. Obviously with any kind of ceiling there are certain details that need to be ironed out (i.e. most people wouldn't want configuration information or data to be lost, but they likely would want VPS to be taken offline and other services to be suspended).

Ultimately for most startups, small businesses, and individual developers being able to say "My AWS bill cannot exceed $1000, period" is a powerful tool. Right now if a billing alert fires at 1am, you may not see it until 9am and by then you're already in huge trouble.


SNS topics and lambda jobs. Just use the API to add a firewall rule to block traffic when it gets a billing SNS alert. Should be pretty simple to get working.
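
For a starting point, a rough sketch of the Lambda side in Python; this version stops every running EC2 instance rather than adding a firewall rule (same wiring, different API call), and assumes you've already subscribed the function to the billing SNS topic:

  import boto3

  ec2 = boto3.client('ec2')

  def handler(event, context):
      """Triggered by the billing SNS topic; stop everything still running."""
      resp = ec2.describe_instances(
          Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
      )
      ids = [i['InstanceId']
             for r in resp['Reservations']
             for i in r['Instances']]
      if ids:
          ec2.stop_instances(InstanceIds=ids)
      return {'stopped': ids}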


That's still a very striking omission of a feature.

Lambda isn't available in all zones, and not everyone has the time/ability/knowledge to set up such a thing. I'm sure I could do it, but only if I were to spend a few hours researching it, and probably a few nights working on such a solution. I'd also have to trust that I didn't mess it up -- I'd hate to have a bunch of traffic and NOT properly prevent the traffic.

This is the kind of thing that Amazon surely could provide easily if they wanted to.

It's like if your phone company didn't give you an option to limit your spending (prepaid), but said that you could use their arcane API to tell them each month to start/stop service. That's great, but not really very nice to customers.


There is no need to delete data.

The monthly budget cap should be allocated to existing storage first. This covers the existing data for the next month. If there is any free limit left, it could be used for new data writes + 1 month of storage, and/or running services. Once the limit is reached, then writes are blocked and services stopped.

The only situation where you would need to delete data is if you want to set a new monthly budget that is lower than your existing monthly storage-only bill - but the UI could just disallow this.
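
The bookkeeping for that is simple enough to sketch; a toy function with made-up dollar figures, just to show the "existing storage gets reserved first" idea:

  def allowed_new_spend(monthly_cap, storage_cost_per_month, other_spend_so_far):
      """Budget left for new writes / running services this month.

      The cap is reserved for keeping existing data another month first;
      whatever remains can go to new activity. Zero means: block writes
      and stop services.
      """
      remaining = monthly_cap - storage_cost_per_month - other_spend_so_far
      return max(remaining, 0.0)

  # Cap of $50, $12/month of existing storage, $30 already spent -> $8 left
  print(allowed_new_spend(50.0, 12.0, 30.0))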


Per the AWS TOS, once Amazon suspends your account for non-payment your data's kept for at least 30 days. No reason they couldn't do the same here.


There is a way to hack this up, which is probably a bit complex for a single instance, but works:

- Set up an HTTPS endpoint on the server that listens for an SNS notification and performs an action (e.g. back up ephemeral data to S3 and shut down). I wrote the service in Go and the action is just a shell script, but choose your favorite language.

- Set up an SNS subscription pointing to the service endpoint.

- Set up an SNS topic for the message.

- Set up an SNS notification in AWS billing. I use "When actual costs are equal to 100% of budgeted amount".

The problem is that it's necessary to lock down the endpoint listener, as it will usually need root access in order to shut down the machine. This can be done by using authentication on the endpoint, setting up a locked-down user to run the service under, and granting that user the ability to run /sbin/shutdown in the sudoers file.

There are probably nicer ways to do it, but this does work to limit my spend on each instance.

You can also add AWS API calls to delete any other costly related resources (static IPs, load balancers etc.)

I've been thinking about writing a more modular and robust app that handles multiple instances etc but most of my servers are now in GCE so I don't really have the need.
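
For anyone who doesn't want to write the Go version, here's a very stripped-down Python sketch of the same idea; it assumes TLS is terminated in front of it, that the service user is allowed to run the shutdown command via sudoers, and it skips the SNS message-signature verification a real deployment should do:

  import json
  import subprocess
  import urllib.request
  from http.server import BaseHTTPRequestHandler, HTTPServer

  SHUTDOWN_CMD = ['sudo', '/sbin/shutdown', '-h', 'now']

  class SnsHandler(BaseHTTPRequestHandler):
      def do_POST(self):
          body = self.rfile.read(int(self.headers['Content-Length']))
          msg = json.loads(body)
          if msg.get('Type') == 'SubscriptionConfirmation':
              # Confirm the subscription by fetching the URL SNS provides.
              urllib.request.urlopen(msg['SubscribeURL'])
          elif msg.get('Type') == 'Notification':
              # Budget alert fired: back up / shut the box down.
              subprocess.run(SHUTDOWN_CMD, check=False)
          self.send_response(200)
          self.end_headers()

  if __name__ == '__main__':
      HTTPServer(('0.0.0.0', 8080), SnsHandler).serve_forever()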


These are going to be insanely expensive. Each FPGA chip costs $50k retail, and unlikely to be cheaper than $10k in volume, so I can't imagine the per hour cost being particularly cheap.

https://octopart.com/search?q=VU9P


Does this set or allow the user to set a cost ceiling? Probably not. Just based on transfer (network) costs, Amazon Lightsail includes this caveat:

Some types of data transfer in excess of data transfer included in your plan is subject to overage charges. Please see the FAQ for details.


Thanks, that's exactly what I was wondering. This makes LightSail a non-starter for little guys like me; I'll stick with Vultr for similar service that charges the actual price listed with no gotchas.


What are these "various" ways you're talking about?

The only overage charge I see is for data transfer. This isn't ideal, I'll grant you, but it's not the same as "various".


DNS, IP Address Reservation, Plan Upgrades, Bandwidth, and so on.

We've seen AWS accounts get broken into with stolen tokens, additional VPS's started, VPS upgraded, bandwidth consumed, etc. And while Amazon has been good with refunding the FIRST time, nobody wants to wake up to a 10K bill because your gitignore had a typo.

A ceiling or cap may even stop plan upgrades without an email confirmation. That would be hugely welcome, particularly in a world where bad guys are actively seeking out VPS to break into.


Genuinely curious to hear if that "gitignore had a typo" 10k bill has a story behind it!



Linode provides 2GB for $10. AWS provides 1GB for the same price. Also, AWS does not support IPv6, which means you cannot launch your apps in the iOS App Store if you use AWS servers.


> AWS does not support ipv6

Wait what? That's getting ridiculous. Seven years ago they would have been on time, perhaps even considered early by some, but three years ago when looking for hosting providers I already laughed at the ones without v6 and moved on without a second thought. They weren't even the cheap ones.

Currently enjoying a €3/mo VPS at Pcextreme.nl with the same specs as the $5 Lightsail VPS. But with IPv6 of course.

Perhaps I've been spoiled with dual stack at home since 2009 from xs4all. Other Dutch ISPs promised it in (iirc) 2013 and every year since, but there has yet to be a second big one to offer it and other countries like Belgium surpassed us by now. Even Germany's Telekom is getting there.


I use budgets in AWS to send me a text when I get to a certain limit. You could potentially use an SNS topic to take action and shut down resources, I suppose. I guess if you are always traveling or can't get to a computer this could be an issue for you. I can't imagine there are many people seriously using AWS who could suddenly run up a huge bill and yet can't respond to a text, or have a friend, coworker, employee, or automated task do so.


On the landing page they say there will be overages for excess data transfer. In terms of how it is different, I think a lot of people here have probably been using AWS for 5+ years and the jargon etc. is familiar, and we can calculate the price and understand what it is, but for people who are just getting started this will be a product that is directly comparable to i.e. Digital Ocean and can serve as a gateway drug to AWS.


That's how they get people on Glacier storage.

The storage is cheap as balls but the transfer can fuck you.


The whole point is to service customers with massive storage needs who seldom need to retrieve the data. For example, many companies are compelled by regulation to archive records for decades, but only need to access data in response to lawsuits/etc.

No one with frequent, high-volume retrieval needs would be advised to use Glacier.


They changed the way Glacier retrieval costs work a little over a week ago to help address this.

https://news.ycombinator.com/item?id=10230937


>> But they've never offered a hard "suspend my account" ceiling that a lot of people with limited budgets have asked for.

That's probably where most of their profit comes from. That option was most likely squashed from the highest authority.


Can you use one of those top-up VISA cards that you load a set amount of money onto? Or do you need to be tied to a bank account?


Amazon usually bill in arrears (no idea about Lightsail). In the general case even, it doesn't matter. What you owe Amazon is independent of whether your card allows the charge or not. You can run out of top-up VISA card credit but you may still end up owing Amazon money.

This applies to any other provider, too. What you owe the provider is what you contracted to pay the provider (eg. by consuming services, or clicking the "upgrade" button in a web interface). It is independent of them actually taking the money.


this is why people set up companies, which in the UK costs 13 pounds/year to keep operational

limited liability, can't really beat it


Intentionally doing that in order to avoid paying sounds a lot like fraud, something your limited liability won't protect you from.


shifting risk to the creditors is the entire point of limited liability; the risk of deadbeats abusing it is always there, and the ability to pay is something almost every business will take into account when arranging a line of credit

amazon could completely negate this risk by requiring pre-payment for small/unknown operators, which is something a lot of people (myself included) desperately want them to provide.

I'm sure they've done their sums here, and have figured out the increased revenue from customers not being able to set a budget is more than their potential losses from deadbeats

the variable costs are basically zero, after all (bandwidth and CPU time are worthless if not utilised)


Remarkably spot on! Everything.


That's exactly what we do: www.onekloud.com happy to chat (eric@onekloud.com)


I pre-paid for 3 years of service, but 1 year in they sent notification that the server farm would be moved and I had 30 days to relocate my servers. I was pretty busy at the time, so it was an inconvenience to move everything, but I did.

Then I got the next month's invoice and it wasn't using my pre-paid services but was instead billing for full CPU usage - no reserved instances.

After emailing support several times they said it was my own fault for not using the correct instance type, even though it's identical to the one I pre-paid for. It may well be my error, but it was caused by them, since I never asked for my servers to be moved. It's been an expensive and time-wasting experience -- will never use them again.

Am currently evaluating GKE (even more expensive) and DigitalOcean.


"Just to be clear: This service is offered by Amazon/AWS themselves, it isn't a third party. "

says Someone1234. Should we believe it?


It's an official AWS announcement, so yea...


That's exactly what we do: www.onekloud.com. You can contact me at eric@onekloud.com


Price breakdown vs DigitalOcean, VULTR and Linode.

Of course all things are not equal (e.g. CPUs, SSDs, bandwidth, etc.).

  Provider: RAM, CPU Cores, Storage, Transfer

  ----------

  $5/mo

  LightSail: 512MB, 1, 20GB SSD, 1TB
  DO:        512MB, 1, 20GB SSD, 1TB
  VULTR:     768MB, 1, 15GB SSD, 1TB

  ----------

  $10/mo

  LightSail: 1GB, 1, 30GB SSD, 2TB
  DO:        1GB, 1, 30GB SSD, 2TB
  VULTR:     1GB, 1, 20GB SSD, 2TB
  Linode:    2GB, 1, 24GB SSD, 2TB

  ----------

  $20/mo

  LightSail: 2GB, 1,  40GB SSD, 3TB
  DO:        2GB, 2,  40GB SSD, 3TB
  VULTR:     2GB, 2,  45GB SSD, 3TB
  Linode:    4GB, 2,  48GB SSD, 3TB

  ----------

  $40/mo

  LightSail: 4GB, 2,  60GB SSD, 4TB
  DO:        4GB, 2,  60GB SSD, 4TB
  VULTR:     4GB, 4,  45GB SSD, 4TB
  Linode:    8GB, 4,  96GB SSD, 4TB

  ----------

  $80/mo

  LightSail: 8GB, 2,  80GB SSD, 5TB
  DO:        8GB, 4,  80GB SSD, 5TB
  VULTR:     8GB, 6, 150GB SSD, 5TB
  Linode:   12GB, 6, 192GB SSD, 8TB
In an easier to read gist: https://gist.github.com/637693650bc8bb9baadf6293a99e1813


I closed my VULTR account after getting this email from them

----- Dear Vultr Customer,

Including pending charges, your account is carrying a $5.94 balance.

In order to cover your current balance and your estimated monthly costs, our billing system will automatically deposit $275.00 from your payment method on file in 24 hours.


That's ridiculous, regardless of your username.


How many instances did you have at the time you received the email?

Usually, when I receive an email like this, the amount is equal to the monthly bill of the instances I have active at the moment.


I had one $5 instance, which is what made this seem ridiculous


Yep, it is insane if you only had 1 $5 instance.

Maybe a bug in their billing software or something...


Yes, but it's still stupid. Customers of cloud hosting companies with hourly billing are used to spinning instances up and down all the time. Imagine your app has a busy day and you need 500% more resources than usual: with Vultr's system you will automatically be charged 500% of your regular usage just because of one day's spike (obviously only if you are near the $0 balance mark).


> you will be automatically charged 500% of your regular usage

You are not being "charged" per se. The amount is transferred to your account and is there as credit until you spend it.

Sorry for nitpicking, but it is an important point.


I'm going to nitpick in reverse.

You are being _charged_

- This is a charge against your card

- They are not a bank, so the money in your "account" with them is just an unsecured, general claim on them. If they go bust, they owe you money but you will never see it.

- If you want to withdraw that money from your "account" and they refuse, then your options are pretty limited.

Once they take it from your payment method, it becomes their money, not yours. That's a charge.


I am not disagreeing with what you say, but "charged" as it was used in the GP comment might have led someone to believe that VULTR charges for usage per month.


To be honest, I have no idea what you are talking about. Your card is being charged, and that's all I said. Sure, you have credits to spend, but cash you have no immediate need for now sits in your Vultr account because of their billing practices.


Wow. Have you contacted them about it? That's crazy, I'd have to assume it's a bug.

I've never been charged on vultr, as I use bitcoin to always pay my bills (which is push only). I think they have stopped accepting that for new customers due to abuse though, which is quite a shame (but understandable).


They do send out emails like this, however they seem fairly flexible. I have several Vultr instances running, and they're quite happy for me to fund the account as and when if required, and have never automatically charged me.

That said, it is an odd message to send out, and I had my concerns when I first received a similar email. Maybe it's something they should look into altering.


Wow, I've never had that happen; all the emails I've gotten from them have said they'll automatically deposit $10. Never anything over $10.


I've been using packet.net for a small side project, they do bare metal / single tenant servers but the provisioning is very similar to linode/digital ocean. Their ~$35/mo ($0.05/hr) option is on par with digital ocean's $80/mo offering.


Plus 5 cents per GB of outgoing transfer.


Curious, why don't you compare with well-known "non-cloud" hosting companies that provide VPS services too, like OVH, Hetzner, Leaseweb, etc.?

EDIT: Anyone care to explain the reasons behind a downvote?


Does anybody care about the answer to your question? In all likelihood, the answer is: because that information is even more work to collect.

Your post reminds me of the burden of proof fallacy, in which you create work for other people by asking questions you could easily answer yourself.

If you or anybody posts the comparison you requested, it will surely be upvoted.



Thanks for posting a note in this thread - I would have missed it otherwise!


I don't believe he directly requested a comparison. What he asked was why the commenter above him chose not to. You seem to acknowledge that in your second sentence, but then go on to state that his comment is reminiscent of asking questions one could answer themselves - asking a commenter why they chose (not) to do something doesn't strike me as answerable by a second or third party.


There are also many less known VPS providers with truly great deals. I've been using these two:

NodeServ: $1.25/month = 50GB HDD, 512MB RAM, 1 core, 1TB bandwidth, location in Jacksonville.

Host.us: $6/month = 150GB HDD, 6GB RAM, 4 cores, 5.12TB bandwidth, location in Dallas.

Both deals found on LowEndBox.


Your lesser-known VPS providers probably have lower-tier bandwidth, more crowded hypervisors, old virtualization tech, or old hardware.

That's not to say getting an 8GB OpenVZ VPS for $4 a month isn't an amazing deal, just that there are caveats.


I just had a look through their ToS for the catch.

For the most part it's reasonable, but there's a freaking litany of reasonable things you're not allowed to run, including IRC, audio/video streaming, game servers, and so on.

Why on earth do you sell me X block of resources for Y$/month if you're going to tell me what I can and can't do with them? Surely unreasonable use would be covered by resource limits already in place?


There are mainly 2 reasons:

* They oversell, so they assume that only a small fraction of web servers will consume 100% of resources, while almost every torrent client will consume 100% of the bandwidth. There is nothing wrong with overselling hosting within a reasonable margin, but most of the people here want to run more than a LAMP stack on the server.

* They get too much admin overhead replying to Tor "abuse" letters etc., so they just decided to deal with it in the simplest way possible.

I guess both factors contribute equally.


Most VPS providers already limit the options to run Tor nodes or SMTP servers in one way or another. However forbidding things like IRC and audio streaming is quite unusual and I wonder how oversubscribed their bandwidth must be on these hosts.

I doubt CPU or RAM allocation is the issue here given AWS already have a good CPU time credit system to manage it.


In the old days (90s-2000s), allowing IRC bots opened yourself to being a DoS target and general receiver of harassment complaints from perceived social abuses that happened in the chat rooms. I assume things haven't changed that much.


All major IRC networks hide your IP address when you connect now (mode +x), but I bet the perception still exists. Most of these ToS'es are thoughtless copypasta from other services.. every now and then there's a Show HN from a new hosting company that has absurd nonsense in their terms, and the creator gets suitably chastised for it.


I don't really care. My $6/month deal beats any $5/month deal from the major players, and by a huge margin. I recently tested the internet speed on it, and I got 850 real Mbps out of the promised 1Gbps channel, which is good enough for me. I can give Memcached 1GB of RAM and not worry about killing everything else.

I have a bunch of sites running on it without stepping on each other, and I doubt that would be the case on AWS / Google / DO.


You're telling me that your 150GB HDD beats a 20GB SSD? Even though they're both going to sit at 10GB free for years.


All the critical things should sit in RAM anyway. The SSD will beat the HDD if you read/write to disk heavily. But not if you need space.

If I need an SSD, there are options too, though DigitalOcean is indeed one of the best if you need a cheap US-based server. If the location doesn't matter, EU, Russia, Ukraine have some great deals.

Example: $4.6/month = 40GB SSD, 1GB RAM, 3 cores, unlimited traffic @200Mbps.

https://firstbyte.ru/vps-vds/kvm-ssd/


[flagged]


He is free to use his server how he sees fit. And are you really sure that Amazon is going after a very different market with their $5/mo offer than a low-cost VPS provider?


I think comparing an (NVMe?) SSD solution to an HDD one shows the OP's ignorance of the differing market segments each is going after. They aren't comparable solutions.


WTF? I never did anything with wordpress plugins in my life. And even if I did, so what? You have some elitism / insecurity issues.


LowEndBox and LowEndTalk are awesome, been following for years. CyberMonday got me KVM with 8GB RAM for $10.


Possibly because (except for Leaseweb), they're all European and of limited usefulness as a result. Linode, Vultr, DO and AWS all have numerous regions around the world.

They're also typically leased for at least a full month and can't be spun up/down on demand like you can with these services.

Plus they focus on large (>16GB) dedicated servers.


Leaseweb is in Amsterdam.


It's also in the US, Germany, Hong Kong and Singapore: https://www.leaseweb.com/platform/data-centers


Oh wow. They must have expanded over the years then.


That's simple. When DO came on the scene it was the only one of its kind. Now there are 4 almost identical services: DO, Linode, Vultr, and Lightsail. On the surface those seem the same because of the near linear pricing and similar allocation of resources. The ones you listed aren't even close. Each of those may have some but they don't have all of what makes the DO model so useful to some of us:

1. Mission-critical/Production ready reliability and communication (all maintenance and issues)

2. No unexpected termination of instances / Reasonable warning & mediation

3. Not overprovisioned / little concern of noisy neighbors

4. Tier 3/4 redundancies

5. Strong American coverage (each DC with Tier 3/4 level services)

6. No setup fee on new instances

7. 1-minute provisioning (simple creation of instances / no ticket needed for deleting resource)

8. Programmatic IaaS management including provisioning, DNS, and images

9. Quality resources - mostly Xeons not ARMs, local SSD not Ceph

10. Huge backing - they're not closing tomorrow & I wanted a #10

While OVH, Hetzner, Leaseweb seem like nice services, particularly for needs in Europe, I can't build an American-centric service on those, set it and forget it nearly as easily or worry-free as with DO/Linode/Vultr/Lightsail.


I expanded it with those companies:

Price breakdown vs DigitalOcean, Vultr, Linode, OVH, and Online.net / Scaleway:

$5/mo

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              | 512MB |     1 |   20GB SSD |      1TB |
    | DO                     | 512MB |     1 |   20GB SSD |      1TB |
    | VULTR                  | 768MB |     1 |   15GB SSD |      1TB |
    | Hetzner (virtual)      |   1GB |     1 |   25GB SSD |      2TB |
    | OVH                    |   2GB |     1 |   10GB SSD |      ∞TB |
    | Scaleway (virtual)     |   2GB |     2 |   50GB SSD |      ∞TB |
$10/mo

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   1GB |     1 |   30GB SSD |      2TB |
    | DO                     |   1GB |     1 |   30GB SSD |      2TB |
    | VULTR                  |   1GB |     1 |   20GB SSD |      2TB |
    | Linode                 |   2GB |     1 |   24GB SSD |      2TB |
    | Hetzner (virtual)      |   2GB |     2 |   50GB SSD |      5TB |
    | OVH                    |   4GB |     1 |   20GB SSD |      ∞TB |
    | Scaleway (virtual)     |   8GB |     8 |  200GB SSD |      ∞TB |
    | Online.net (dedicated) |   4GB |     2 |  120GB SSD |      ∞TB |

$20/mo

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   2GB |     1 |   40GB SSD |      3TB |
    | DO                     |   2GB |     2 |   40GB SSD |      3TB |
    | VULTR                  |   2GB |     2 |   45GB SSD |      3TB |
    | Linode                 |   4GB |     2 |   48GB SSD |      3TB |
    | Hetzner (virtual)      |   4GB |     2 |  100GB SSD |      8TB |
    | OVH                    |   8GB |     2 |   40GB SSD |      ∞TB |
    | Scaleway (dedicated)   |  16GB |     8 |   50GB SSD |      ∞TB |
    | Online.net (dedicated) |  16GB |     8 |  250GB SSD |      ∞TB |
$40/mo

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   4GB |     2 |   60GB SSD |      4TB |
    | DO                     |   4GB |     2 |   60GB SSD |      4TB |
    | VULTR                  |   4GB |     4 |   45GB SSD |      4TB |
    | Linode                 |   8GB |     4 |   96GB SSD |      4TB |
    | Hetzner (virtual)      |  16GB |     4 |  400GB SSD |     20TB |
    | OVH                    |   8GB |     2 |   40GB SSD |      ∞TB |
    | Scaleway (dedicated)   |  32GB |     8 |   50GB SSD |      ∞TB |
    | Online.net (dedicated) |  32GB |     8 |  750GB SSD |      ∞TB |
$80/mo

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   8GB |     2 |   80GB SSD |      5TB |
    | DO                     |   8GB |     4 |   80GB SSD |      5TB |
    | VULTR                  |   8GB |     6 |  150GB SSD |      5TB |
    | Linode                 |  12GB |     6 |  192GB SSD |      8TB |
    | Hetzner (virtual)      |  32GB |     8 |  600GB SSD |     30TB |
    | Hetzner (dedicated)    |  64GB |     8 | 1024GB SSD |     30TB |
    | OVH                    |   8GB |     2 |   40GB SSD |      ∞TB |
    | Scaleway (dedicated)   |  32GB |     8 |   50GB SSD |      ∞TB |
    | Online.net (dedicated) |  64GB |     8 | 1500GB SSD |      ∞TB |
Gist available here: https://gist.github.com/justjanne/205cc548148829078d4bf2fd39...


It must be said that Hetzner has much cheaper offers on their server auction: https://robot.your-server.de/order/market

There's also no setup cost for these dedicated servers.


They're also worryingly keen on non-ECC machines at the low end though


The OVH numbers are wrong. For example, in the public cloud for $80 you get 60GB RAM, 4 vCores and 400GB disk (or 200GB SSD, no RAID).


Could you link the offer? I couldn't find any in their VPS or Dedicated offers better than the $16 VPS (which I then used for the $40 and $80 tiers, too).



I edited the gist, but sadly can’t edit the post anymore. Thanks, btw. I completely missed those.


Thank you for this.


Well, you are comparing shitty Atom cores to E5 cores. These are just random numbers without context.


Well, I’m just extending the previous’ posters list.

But Amazon will throttle your CPU as well if you use it for too long (the t2 burst-credit system), so it's not like Amazon's own numbers really mean anything either.

The same story with storage performance – Amazon’s is horrible, but it’s network-attached.


"EDIT: Anyone cares to explain his reasons behind a downvote?"

I downvoted you because you talked about your downvotes.

Don't interrupt the discussion to meta-discuss the scoring system.

Write your post and live with the results.


My question is why does no one talk about prgmr?


Do you have experience with prgmr? It's reliable enough for a small project?


I've used it and am happy with it. It's reasonably reliable, but has more downtime than the big providers. I wouldn't run mission-critical software that needs 5 9's uptime on it, but for anything else it's fine, certainly for personal projects. They're transparent with any outages, so you can check up on the outage history on their blog: https://prgmr.com/blog/

It's a small business run by a few people (though it's been around for 10 years, so a pretty stable one), which has the pros and cons that go along with that. The tech staff is good, techies who know what they're doing and generally assume that you do also. So if you send a request or problem report, you aren't going to get a form reply that asks if you tried turning it off and back on again. But it's just a handful of people, so if there's a major issue, fixing things is pretty manual and slower than at places that have armies of 24/7 devops staff.

One specific thing I really like about it: it gives you SSH access to a proper text console, in case you want to install a custom OS, recover a broken install, etc. Most VPS providers give you console access, but most do it via VNC in the browser, which is not my favorite way to do sysadmin work.


> in case you want to install a custom OS

Do they let you do that? They don't say that in the purchases page.

Also, how is uptime relative to vultr?


Yeah, you have a choice of using their prebuilt disk images, running one of the officially supported OS installers from the console menu, or downloading your own installer. The list of prebuilt images and supported installers is here: http://wiki.prgmr.com/mediawiki/index.php/Distributions

I haven't used vultr so can't comment on that.


[Disclaimer: I work at prgmr.com]

You can install a custom OS. But it can be difficult to use an installer we don't provide right now because we only allow serial console access, not VNC. This means most installers won't work out of the box. Worst case you can dd an image to the disk using ssh from the rescue image.

FYI we don't do overage charges right now. For network, if we can't throttle your traffic down then we will shut your service off.

Our blog is a little misleading these days in that for downtime for individual servers, we started emailing customers directly rather than posting to the blog. This is because we want to make sure customers see the downtime notice. We also got confused responses sometimes to the blog wondering whether a given service was affected or not and if we email directly there is no such confusion.

I think our worst case downtime barring about 5 services this year has been the following:

* 0.75 hour network outage, unplanned - 2016-03-16 (gave proportional credit)

* ~2.5 hours unplanned downtime due to hardware failure requiring new components - 2016-04-03 (gave 15% month credit)

* 2.6 hours downtime from start of maintenance window, planned due to security upgrade - 2016-07-23 (gave proportional credit)

* 2 hours or less downtime, planned due to security upgrade - sometime around 2016-09-01 (gave proportional credit)

* 1.5 hour network outage, unplanned - 2016-09-09 (gave proportional credit)

* 1.3 hour network outage, unplanned - 2016-11-06 (gave proportional credit)

* 2.04 hours downtime from start of maintenance window, planned due to security upgrade - 2016-11-18 (gave proportional credit)

This is a total of up to 12.69 hours downtime over the year so far, assuming downtime started at the beginning of maintenance windows (it usually started after.) Of that 6.05 hours, or less than half, was unplanned.

So far this year there's been about 336 days or 8064 hours. 12.69/8064 is 99.84% uptime overall, which is significantly lower than we would like. For some servers the uptime has so far been significantly better in that there were no hardware failures, one of the security upgrades was unnecessary, and the turnaround time for the remainder of the security upgrades was much faster than for this particular server.

For this particular server, the largest downtime contributors were security upgrades and network outages in that order. For network downtime, we got around to setting up our second upstream but there's a number of single points of failure we should take care of in 2017. There is also some additional scripting we should probably do that would cut down on the network downtime a lot, such as automatically taking down BGP if connectivity beyond the first hop is lost.

For the security update downtime, I think our most realistic bet right now is to get ourselves on the latest version of Xen once it comes out. That will hopefully have a stable implementation (not a technology preview) for live patching.


It's probably since the ones listed above are the typical VPS to go to when it comes to cheap hosting. The ones you mentioned are lesser known.


Are cores really comparable between DO and Lightsail? Are we sure that 1 core isn't something less than a real core, already over-allocated on the assumption of less than 1 core of actual usage? We'd need to know the actual over-allocation ratio to really figure out whether they are comparable.


To me this is what's important. I mainly use VPSs because I'm lazy! I have a bunch of $5 droplets that I use for development, and even sometimes just to move things around the net more easily... For my particular use case, I don't need to change unless Lightsail offers me a less crowded core.

Really, it just seems like AWS is fighting DO on this one, to get a share of their profits. My impression is they're looking for DO & AWS customers to stay on an Amazon-only stack. The comparison made by the commenter above actually makes me consider Vultr and Linode :)


That's exactly what makes Lightsail attractive to someone like me. I have production services on AWS and Linode, and I have only positive things to say about Linode, but it would be very nice to manage everything in one place.


More of the same with Ramnode. Really not a lot of competition, is there? Every plan starts around $5/mo for a KVM instance, and each tier increases all of the specs, while doubling the monthly fees each time. No ability to customize to your specific requirements.

What if I need a lot of CPU power, but not much bandwidth? What if I want lots of RAM, but don't need much disk space? What if I'd rather have an HDD with more storage than a faster SSD? There's nobody offering a "configure your own VPS specs" plan.


Google now has "Custom Machine Types" https://cloud.google.com/custom-machine-types/


Well, there's Amazon Web Services (EC2 / EBS), MS Azure, Google Cloud Platform (Compute Engine) to name but three.


Dediserve offer this


www.cloudsigma.com


Since Oct 2014 I've paid $79.90 per month for 16 GB memory, 24 cores, 1 TB hard drive, 128 GB SSD, unmetered i/o, static IP, Windows O/S, on a dedicated physical machine.

I have no idea why people think Amazon pricing is worth it.


I'd love to see a comparison on core performance and SSD disk performance


I use VPSdime. $15-20 worth of VPS for $7. Already for 7 months with zero problems.

I'd be grateful if you used my link https://vpsdime.com/aff.php?aff=1272


A compelling product. The dashboard looks great. They even replaced the confusing term "user data" with "launch script", but they fall back into it later. SSH in-browser is great too and can be bookmarked/opened in a fullscreen tab. Uploading (instead of pasting) your SSH pubkey is a bit annoying.

The docs appear to say you can add these to a VPC but I don't see how to do it.

They don't say the SSD storage is local, so I'm sure it's not.

A few runs with `fio` confirms this is EBS GP2 or slower:

The bench: "fio --name=randrw --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randrw --rwmixread=75 --gtod_reduce=1"

Lightsail $5:

  read : io=3071.7MB, bw=9199.7KB/s, iops=2299, runt=341902msec
  write: io=1024.4MB, bw=3067.1KB/s, iops=766, runt=341902msec
DigitalOcean $5:

  read : io=3071.7MB, bw=242700KB/s, iops=60674, runt= 12960msec
  write: io=1024.4MB, bw=80935KB/s, iops=20233, runt= 12960msec
More than an order of magnitude difference in the storage subsystems.

These appear to just be t2.nano instances (CPU perf is good, E5-2676 v3 @ 2.40 GHz, http://browser.primatelabs.com/geekbench3/8164581).

For advanced users, there isn't much compelling here to make up for the administration overhead. It's a little cheaper than a similar-spec t2.nano (roughly $4.75 on-demand + $3 for a 30GB SSD). The real win is egress cost; you can transfer EC2->Lightsail for free. 1TB of egress would be nearly $90 on EC2, but is only $5 on Lightsail.

In other news, EC2 egress pricing is obviously ridiculous.


The terrible IOPS performance on AWS is the biggest downside for me.

All competitors seem to outstrip AWS on this. Do they have some legacy infrastructure that is just too big to upgrade to something more modern, or is this "on purpose"?


As @STRML said, the difference is that Amazon is using network attached (EBS) storage as the primary instance storage, instead of local SSD. This provides a ton of benefit to Amazon, and some benefit for the user as well: CoW backend allows for nearly instant snapshots and clones, multi server/rack redundancy with EC, ability to scale up with provisioned IOPS easily, etc.

The downside is that the access methods for blocks mean some operations are more computationally and bandwidth intensive, meaning you will get fewer IOPS and less sustained throughput without paying more money. In addition, there is always going to be a bit more latency when going over the network versus a SAS RAID card.

As with all things in life, it's a tradeoff. If you look at other large providers' (GCE, DO at least) network storage offering, you'll also see a significant performance regression from local SSD.


> CoW backend allows for nearly instant snapshots and clones

LOL => An 80GB EBS SSD snapshot takes more than 1 hour.

Subsequent snapshots are incremental and will be less slow.

> multi server/rack redundancy with EC

You can move a drive manually after you stop an instance, if that's what you call redundancy.

> ability to scale up with provisioned IOPS

Nope. You need to unmount, clone the existing EBS volume to a new EBS volume with different specs, and mount the new volume. You can't change anything on the fly.

The last time we had to change a 1TB drive from a database, we tried it on a smaller volume first... then we announced it would be a 14-hour-downtime maintenance operation (if we did it this way) :D
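
For anyone who hasn't done it, the dance looks roughly like this with the AWS CLI (a rough sketch; the IDs, size and AZ are made-up placeholders):

  # snapshot the existing volume (can take a long time for large volumes)
  aws ec2 create-snapshot --volume-id vol-aaaa1111 --description "pre-resize"

  # create a larger volume from that snapshot, in the same AZ as the instance
  aws ec2 create-volume --snapshot-id snap-bbbb2222 --size 2000 \
      --volume-type gp2 --availability-zone us-east-1a

  # stop writes / unmount, then swap the volumes on the instance
  aws ec2 detach-volume --volume-id vol-aaaa1111
  aws ec2 attach-volume --volume-id vol-cccc3333 \
      --instance-id i-dddd4444 --device /dev/xvdf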


> Subsequent snapshots are incremental and will be less slow.

Depends how much they've been written to since last snapshot. Heavy writes and it can be just as slow again.


Hence why I say "less slow" and not faster. There is nothing fast when it comes to EBS volumes :D


It doesn't appear that Lightsail actually allows hooking up data volumes. This is a surprising regression considering AWS basically invented it, and a major downside compared to DO.

So network-attached storage for Lightsail is upside for AWS, but all downside for the customer.


And they're still the same price as the others (with traffic more expensive). How come? Do they just have that much higher a margin, or are there zero economies of scale?


It's just an artifact of network-attached storage.

Many competitors just use local storage, which comes with its own serious downsides for the company and customer. DigitalOcean just recently launched its Volumes service, but it's very limited compared to EBS, and not nearly as fast as its local SSDs.

EBS is generally fine but I would really enjoy the option to have ~40GB local SSDs for caching (but I suppose you can always grab an r3/r4 and cache in memory if that's your bag).

The best cloud I/O perf I've seen, bar none, comes from Joyent's Triton containers. Beats even DO by 3-4x. Beyond that you need to go bare-metal.


That's truly terrible performance. Don't understand how they advertise SSD storage which is then not local. It's something they should mention clearly. With that performance difference I don't see a reason to prefer them over DO/Vultr/Linode. Even if the CPU is better, disk iops will likely be the limiting factor here.


Thank you for the information, most informative reply in this thread


Re: VPC peering, the blog gets into more detail. I tried it out, worked great. Simple egress. https://aws.amazon.com/blogs/aws/amazon-lightsail-the-power-...


Anyone know what their web terminal is based on? I've been on networks that are so locked down that it'd be really useful... although maybe not 5USD/month useful.


You can always run an ssh bastion server on port 443. It's indistinguishable from https traffic without deep packet inspection or great-firewall-of-china like pattern analysis.

Just have a bastion host with that and you'll have no trouble ssh -A'ing your way there and then on to the real box on whatever port it's on.
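
A minimal sketch of that setup, with made-up hostnames (sshd can listen on several ports, and ProxyCommand with -W does the jump):

  # on the bastion: /etc/ssh/sshd_config
  Port 22
  Port 443

  # on the client: ~/.ssh/config
  Host bastion
      HostName bastion.example.com
      Port 443
      ForwardAgent yes

  Host realbox
      HostName 10.0.0.12
      ProxyCommand ssh -W %h:%p bastion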


Heh. I'm often on a network where traffic that doesn't start with HTTP CONNECT on port 443 is dropped, and they're not China.


Well, for that money it's really simple to interconnect the 3 and still use all the cool AWS things.


Let me get this straight, right now AWS is billing me $270/mo for 3TB of bandwidth on my autoscaling web servers. With LightSail, I can get that same bandwidth, plus storage and instances for $15/mo?

In total, I'm spending about $15,000/yr on AWS, and someone spending $5/mo gets their bandwidth 18x cheaper than me? Shouldn't it be the other way around, and I should be the one with the discount?

I get enough headaches dealing with reserved instances, and trying to buy them at the correct time of the year to line up with price drops. Now, I need to consider dumping my autoscaling groups, EC2 web servers, and moving them to LightSail? Why not just give us a fair price on bandwidth, instead of more complications?


From https://amazonlightsail.com/docs/

> Data transfer OUT from a Lightsail instance to another Lightsail instance or AWS resource is also free while the private IP address of the instance is used.

It could even be worth it to set Lightsail up as a reverse proxy and profit off of very cheap (for AWS) traffic, e.g. for S3. I can't really believe they would allow this. Am I missing something?
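
Concretely, I'm picturing something like this minimal nginx config on the Lightsail box (bucket name is a placeholder), so S3 objects leave through Lightsail's cheap egress:

  server {
      listen 80;
      location / {
          # fetch from the bucket, serve to the Internet from Lightsail
          proxy_pass https://my-bucket.s3.amazonaws.com;
          proxy_set_header Host my-bucket.s3.amazonaws.com;
      }
  }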


EC2 to S3 is already free. That's not true for all AWS services though. Also, inter-AZ bandwidth is charged at $0.01/GB leaving AND entering, making it the same as inter-region bandwidth costs. If Lightsail is truly free for that it's a game changer.


my thought exactly.... set up a cluster of reverse proxies on lightsail in front of your web tier in the real aws account (making sure to reverse proxy over private IPs, assuming that's possible), build some automation to replace lightsail instances when they get to their bandwidth quota.... profit?


> If you delete your instance early and create another one, the free data transfer allowance is shared between the two instances. Data transfer overages above the free allowance are charged at $0.09/GB.

I think you would still be charged. I think it would just be better to upgrade.


They want to destroy DigitalOcean, so they have to match their pricing. Additional traffic is also $0.09/GB with Lightsail.


And that's incredibly expensive, considering many other cloud providers around the level of DigitalOcean (which I believe doesn't charge for using too much bandwidth atm) charge a lot less. For example, Linode charges $0.02/GB after the allocated bandwidth

And specifically for Linode, since all the servers draw from the same pool of resources, you can instead create a ton of $10 servers and grab bandwidth for $0.005/GB, which Linode is perfectly fine with (note that bandwidth is pro-rated, so you'd need to create those servers in the beginning of the month to take full advantage of it).


I do not think they can. I have been with Digital Ocean for almost 3 years, have not had any problems, and am pretty happy with it.


Exactly my thoughts. EC2 is stupidly expensive in BW. If you are spending a lot, just set up some reverse proxy :)


Someone did some quick testing and apparently the bandwidth speed is capped at 100mbps, and even less for overseas traffic: https://ayesh.me/amazon-lightsail-review


Do you use a CDN? If not, you should use a CDN.


I use a CDN for js/css/fonts/images, but the $270/mo is the bandwidth for the gzipped HTML going out from the web servers.


This made me think of a thing I've done in the past so here's a brain vomit about 'base page caching' - most of which is probably irrelevant here but perhaps someone will find it useful:

I don't know how dynamic or unique your pages are to each user or if they're largely ubiquitous across all users.

Assuming the latter I know that you can cache a base page in some CDN's through a different header. Akamai's DSA product in particular allows you to cache HTML using the Edge-Control header that has matching syntax to Cache-Control. Edge-Control is stripped out in transit, the client never sees it. This allows you to control cache TTL in Akamai's edge servers independently of the client side cache.

A quick look through the Amazon CloudFront docs seems to indicate that CloudFront caches will respect the Cache-Control header, but this can get complicated if you don't want the same TTL in the client-side browser cache. Perhaps I missed something, though?
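
For what it's worth, the generic way to split CDN TTL from browser TTL is s-maxage, which shared caches (CloudFront included) prefer over max-age. Roughly, the two approaches look like this as response headers (the values are arbitrary examples):

  # Akamai DSA style: edge TTL via Edge-Control (stripped before the client sees it)
  Edge-Control: max-age=3600
  Cache-Control: no-cache

  # CloudFront / standards-based style: browser caches for 60s, shared caches for 1h
  Cache-Control: max-age=60, s-maxage=3600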

Even if your page is dynamic but you're willing to go a route like Angular or ReactJS polymorphic client side apps you can still offload a bunch of those basepage requests assuming your app is suitable for this kind of design pattern. The assumption is that you will be relying on API's so additional complicated caching calculations may apply ;)

Depending on how you construct your cache key you can do a certain amount of multi-variate caching and still achieve pleasant cache hit rates. This applies to both base pages and fronting API's with a CDN.

Regardless, it looks like for US-only traffic (as a yardstick) CloudFront charges $0.085 per GB, so futzing with the AWS calculator I split a page I'm very familiar with (49KB) across your 3TB of US-only traffic and the price comes to $304.68. No savings.

Looking around I see that Fastly (a Varnish-based CDN priced lower than Akamai) charges a $0.12 rate; the price there in the same scenario would be something like $400 (the number of HTTP requests made factors into some of these pricing models). Akamai comes in at $500.

OTOH a CDN I've never used (Stackpath) might cost only $140, which would get you close to half your current spend. Remember that depending on your cache hit/miss ratio and how many cache flushes (cache object invalidations) you do, that price could be anywhere from slightly to extremely optimistic, as you still have to pay for the requests between your CDN edge servers and your AWS origin.

To be fair, when considering pricing Akamai is the big fish here and has something on the order of 170,000 edge servers sprinkled all over the world whereas some of these smaller CDN's have far far fewer.

Once you're into some CDN's there's all sorts of wonky things you can do with parent-child tiering models that can be leveraged to further limit the number of times a call is made back to your origin (aws) server.

Here's a pretty good CDN pricing calculator: http://www.cdncalc.com/


For $5, just try it..

At least a reverse proxy, maybe even some caching.


I'm amused by the model they have in their banner: a bearded, tattooed man wearing what appears to be a .. cape? We've come a long way since the stock photos of 'smiling super-normal people wearing ties, huddled around a computer'.


Programming while sitting on a motorcycle?

https://amazonlightsail.com/features/


Reminds me of Snow Crash.

> Because shortly after he gets into Port Sherman, the wheels on his motorcycle lock up - the spokes become rigid - and the ride gets very bumpy. A couple of seconds after that, the entire bike goes dead, becomes an inert chunk of metal. Not even the engine works. He looks down into the flat screen on top of the fuel tank, wanting to get a status report, but it's just showing snow. The bios has crashed. Asherah's possessed his bike.


I'm pretty sure Amazon is just trolling us with these photos.


He must be doing a firmware update to his ECU :-)


Not to mention the man is wearing on-ear headphones and is using a smartphone in front of his laptop, which seems more common today than ever.


Do on-ear headphones carry a negative connotation? I wear them because earbuds make my ears hurt after a while.


It's all about the over-ear, mate.. over-ear..


Haha ok...I guess technically I'm wearing "over-ear" headphones right now. Hyper-X...pretty cheap gaming headset, extremely comfortable, no flashing led lights or other weird gamer features. Decent sound too.


This is a silly question, but do they go fully around the ear?

I have bigger ears and a get a lot of discomfort from most headphones. I have a pair of Audio Technica ATH-A900X (with huge cans) at home, but I'm always hunting for a cheaper pair to carry around with me.


Yes, they fit around my ears nicely. I've always been uncomfortable with the padding touching my ears the way you describe. I'm pretty picky, but I've found the HyperX headset to be way more comfortable than some other, very expensive headsets that I've tried. I've had them for almost two years now and they've been very durable as well.


Have you tried the Logitech G930? $70, wireless USB with great microphone and great range.

My ears are huge, they fit very well within these earpads. By comparison I use the Beyerdynamic DT770 as my music headphones (vs travel) as those are the only ones I've tried so far that fit my ears and are super comfortable.


+1 My HyperX Cloud 2's are the only over-ear headset I'll ever use again. Fantastic sound, build quality and comfort.


Has he also duct taped over the brand name? 2016 San Francisco, I see you!


The duct tape is made to look decorative, but it's standard practice in Film / TV / Advertising to obscure brand names.


A good way to recognize a stock photo is to look for brand names or logos. If you see any, it's not a stock photo.


Is this Amazon's dress-code? I bet black cape + helmet means you're at the top.


I'm at re:Invent currently, and this morning's keynote by the CEO has a superhero theme! :)


s/rockstar/superhero/g


You don't wear a cape?


I put my cape away years ago. But duty may call again...


And you know for sure that the model is sporting a top knot.


Are people wearing capes these days?


Hipsters will soon be.


Capes are company policy after yesterday's jumper. https://www.bloomberg.com/news/articles/2016-11-28/amazon-wo...


This comment is off-topic and extremely uncivil. It's not OK to post on Hacker News like this.

https://news.ycombinator.com/newsguidelines.html


Wow you're dark!


This is a non-starter without IPv6. I know some of you will be tempted to jump ship from DO and Vultr, but please remember that by doing so you will continue enabling Amazon to hold back progress. While ignoring AWS is not really an option, as it offers some very unique features that are quite useful, this thing does not.

DO is not great in this regard as they butcher their allocations, but Vultr gives each VPS a proper /64. Scaleway has partial IPv6 support (not for their bare metal cloud, but their VPSs do support it).

I urge you to vote with your wallets, unless you really like paying $1/month or more per IP for the foreseeable future.


It's coming (one region now, more to come soon), not sure about Lightsail though.

https://aws.amazon.com/blogs/aws/new-ipv6-support-for-ec2-in...


Yeah, Scaleway has IPv6 on the VPSes and the AMD64 dedicated servers, but not on ARM servers.

2core/2GB/50GB is a better offering than DigitalOcean's. Sadly, it seems like they still don't have FreeBSD.


> enabling Amazon to hold back progress

what do you mean?


We're running out of IPv4 addresses and need to move to 100% IPv6 ASAP or else we're going to end up with a system where it's NAT all the way down.


IPv6 is progress over IPv4.


Here is a comparison of $20 instances across DO, Linode and LightSail

All have 3TB transfer

Linode(Best): 4GB RAM, 2 Core, 48 GB SSD

DO: 2GB RAM, 2 Core, 40 GB SSD

LightSail(Worst): 2GB RAM, 1 Core, 40 GB SSD

I use this exact instance on Linode for my site https://dictanote.co; that one extra core makes a lot of difference when you want to take a backup of the db or do something intensive like that.


Such comparisons are nearly meaningless.

  - What kind of SSD?
  - What kind of "core"?
  - How much are you over-provisioned on the physical hardware?
People are treating cloud resources like they're commodity, but they are not (cloud service providers make it very hard to compare apples to apples). You can have first-class PCIe SSDs on RAID 10 underneath that virtualized storage, or you can have consumer-grade non-RAIDed SATA, but it's all "40GB SSD" to you, the customer.

vCPUs are even worse. Buy a 20-core Xeon box and split it across 60 tenants with 2 vCPUs each, or across 40 tenants with 1 vCPU each?


And it doesn't look good for any of those.

t2 instances notoriously throttle way down to 10% if your workload is too consistent. The full performance from the 2.40 GHz E5-2676 v3 is quite good, but at 10% it's worse than a Raspberry Pi. And this will always hit you at the worst time (when under load).

As for the SSDs, they're just GP2 storage, which is an order of magnitude slower than DO and nearly 2 orders of magnitude slower than Joyent Triton.

So AWS is providing a seriously inferior product here, but it's under the AWS umbrella and peers with the rest of their services. The question is: will the target market care about any of AWS's other services enough to make the inferior $/perf worthwhile, and will they integrate easily enough for the average developer to use them?


Luckily, the prices are low enough that you can just test your workload on each.


Right but one of those 3 (linode) has been pretty brutally hacked (a number of times, at the control plane level) while DO & AWS have a different security story.


> while DO & AWS have a different security story

That we know of. For better or worse, we're relying on these companies being transparent about account security issues and compromises.

Perhaps it's just me, but I doubt we are being kept in the loop, especially over something which has such a major impact on customer confidence (as evidenced by the sibling comment & replies).


True, but that is a little exaggerated. Yes, they were hacked in the past (over a year ago); one of their small data centers went down. Their US East center has been solid for the past 8 months that I have been using, no hiccups.


But they were hacked and didn't tell anyone for 6-8 mo even though they knew. So you don't really know they aren't deeply compromised now?

Then the DDoSes and offline hosts. It's basically just amateur hour over there.


As opposed to AWS and DO which have never had comparable security issues. In this case, "never" is what I'd expect versus "yeah it's been a few months since last serious incident".

I wouldn't host an ethereum or bitcoin node on Linode. If I were a PCI auditor I would think pretty hard about compliance on the platform.

That said, blog you don't care too much about -- sure!


wow... 8 months? Not being able to get in to your servers for days at a time is not "a little exaggerated".

It was less than a year ago - 11 months. From December into January, Linode data centers, especially Atlanta, were heavily attacked.

https://blog.linode.com/2016/01/29/christmas-ddos-retrospect...

There was another moderate attack a few months ago.

Caused no end of headache for some clients of mine.

I like Linode, and have used them for some projects in the past, but there are more options now.


I wish I could say the same for US South (Atlanta). Constant DDoS attacks, outages, power failures, hardware failures, and more. I don't think I've ever gone longer than 3 months of uptime there across multiple linodes.


Exactly, Linode still has _much_ better deal. Been using Linode for 10+ years, no reason to jump based on what Lightsail has.


Except they're the only one that's gotten breached a number of times and locked out customers for days and weeks at a time. For me, that's a non-starter, AWS has a much better track record.


You are not considering the server's location. If you aren't in the USA, and your site isn't resource intensive, how near the server is to your users will make the greatest difference in access speed.


Yup. More of these providers need to have Central US datacenters instead of on the coasts. Linode's Dallas DC has been pretty good in terms of latency to both coasts.


Yeah the LightSail pricing doesn't look attractive at all. $40 if you need to get 2 cores - and dual core really should help with anything non-trivial.


All low-end VPS offerings use shared cores, so # of cores is a fairly useless comparison unless they also tell you the oversubscription rate or all of your neighbors are idle.


BuyVM offer non-shared cores at a cheaper rate than either Linode or DO.

https://buyvm.net/kvm-dedicated-server-slices

(No affiliation, happy customer with minimal downtime for ~5 years now.)


Took a look, their terms of service are awful. Here's a few:

3.1.2 - P.O. Boxes and non-residential or mail forwarding addresses are not accepted.

3.3.1 - Clients may not open multiple personal accounts under any circumstance.

3.3.2 - Clients may not give other persons access to their accounts.


Looks like pretty standard anti-fraud stuff.


Any personal account I have, at a minimum, I give access to my SO.


I don't know about the others, but on AWS with the T2.* instances it always seems like I've got the server to myself. They are either well provisioned, or well managed. The burstable CPUs are great.


You do have them to yourself, as long as you don't use them.

T2 instances work on a credit system, where you are allocated X credits per minute. If you use less than you're allocated, you will rack up hours of CPU time (there is some cap). You can see the credits in the AWS dashboard.

Problem is, that buffer is also the delay between when you start doing too much work and when you find out you're doing too much work, because the instance stops responding. It's also the delay between when you fix the problem and when your balance goes back to "normal".
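
If it helps anyone, you can also pull the balance from the CLI instead of the dashboard; a sketch (instance ID and times are placeholders), using the CPUCreditBalance CloudWatch metric:

  aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 --metric-name CPUCreditBalance \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --start-time 2016-11-30T00:00:00Z --end-time 2016-11-30T12:00:00Z \
      --period 300 --statistics Average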

I recently hosted my personal tree of Fast-Growing-Programming-Language on a T2, and "accidentally" became the #1 google search result for the entire long-tail of every stack trace and every error message. Took me awhile to piece together what was wrong.


Right of course - I assumed most all top tier providers are good at managing capacity and you do get the two cores to yourself more or less.


I was hoping their pricing would be at least competitive, if not better, given how late they are to this "I just need one VPS" market.


Isn't that what ec2 was 10 years ago?


FWIW, bandwidth overages at Linode and DO are $0.02 per GB, LightSail is $0.09.

Ow.


That makes sense to me.

Lightsail should be an entry-level product to upsell more AWS stuff.


Meanwhile, in France: https://www.scaleway.com/

Same thing with OVH, you can get a powerful dedicated server with unlimited bandwidth for the price of a medium EC2 instance. I hope they will open their datacenter in California soon.


Amazon sells at prices people are willing to buy.

Keep in mind that in France, very few people know that Amazon is more than a website selling books and random stuff. But in the US, very few people know about OVH and Scaleway.


Not to mention that Amazon is selling a large ecosystem of services and you pay for it accordingly. If someone doesn't need that, they'd be crazy to pay Amazon's premium. OVH and Scaleway are selling fundamentally different products than AWS is.


I don't understand how they can live on those prices.

I have their 3 euro ARM server and have no problem pushing more than 100 megabits a second from it.


It's twofold. Amazon bandwidth is, I assume, premium tier 1, whereas OVH and Scaleway are probably lower tier. Lower-tier bandwidth is dirt cheap, like below $0.25 per Mbps.

Edit: 3 things - it's also shared among a lot of users that don't use what they can. For everyone using 100Mbps there are probably 20 who use less than 1Mbps.


OVH has a pretty solid network and are very transparent about peering and utilization.[1] The network is definitely much smaller than Amazon, but never had any problems with their peering, the network seems to have ample capacity (whereas Hetzner had less reliable peering). I have several servers in the OVH network in Europe and I get consistently high speeds for Europe&US, independent of the time I test (don't have much traffic from Asia so can't really test that).

They will offer you better peering if you pay extra, but even then you're paying much less than at AWS for high traffic.

I suspect that AWS makes most of its profit on bandwidth while other services run with a very low margin.

[1] http://weathermap.ovh.net/


Right. We have a 3EUR a month VPS serving as a backup server, it is a MySQL slave, doing a snapshot regularly and sending the snapshot off to a storage VPS. It certainly doesn't use 1mbps except when it rsyncs to the storage.


I think a million VPS hosting companies just cried out in terror.

The major reason to use a VPS host instead of AWS is that AWS is complicated. This seems to be just as simple as DO or a million other VPS hosts, with the added benefit that it's easy to hook up to Amazon's other services if you need to.


> The major reason to use a VPS host instead of AWS is that AWS is complicated.

The major reason to use a VPS host instead of AWS is the order of magnitude price difference. That’s not going away with LightSail either.

Look at all the French/German/Dutch hosters, they’re 10-20 times cheaper than AWS, and you get far better storage performance and cheaper traffic.


Until their clients see the IOPS of Lightsail's non-local SSDs. I don't know of any other VPS provider that uses network-attached storage as its only storage. Small boxes will likely not run a memory-only job (they only have 1GB), so I doubt that Lightsail will be used for much more than reverse-proxying traffic from AWS to get huge discounts there.


Smaller hosts will make a decent margin on managed services. That's just not a game Amazon want to get into.


There's a lot of managed services on AWS and it's growing rapidly. But they're delivered through partners, not AWS directly. I didn't write down the numbers from this segment of the Re:Invent Global Partner keynote yesterday but, as I recall, "next generation managed service providers" was highlighted as a big growth opportunity.


I think the point was that Amazon's customer service is awful, compared to smaller, much more nimble companies where you can get a technical person on the line and get much more personalized service from someone in the US.


This is the first cheap bandwidth option I've seen on AWS. Transferring 1TB out of S3 or EC2 costs about $90, but is included in a $5 server with Lightsail.


Looks like you can spin up three of these instances as a proxy and pay $15 for 3 TB of bandwidth from AWS.


Where do you see that? To me it looks like bandwidth is plan-wide. For example, from the FAQ: "If you delete your instance early and create another one, the free data transfer allowance is shared between the two instances."


Full quote from FAQ : "How does my data transfer allowance work?

Beyond the free data transfer IN and between instances, every single Lightsail plan also includes a healthy amount of free data transfer OUT. For example, using the cheapest Lightsail bundle you can send up to 1 TB of data to the Internet within the month, at no extra charge. Your data transfer allowance resets every month, and you can consume it whenever you need within the month.

If you delete your instance early and create another one, the free data transfer allowance is shared between the two instances. Data transfer overages above the free allowance are charged at $0.09/GB. "

I assumed that each Lightsail instance would get the 1 TB of bandwidth. But the FAQ seems to suggest that each plan gets 1 TB. I can't tell if I can provision one VM of each plan and proxy bandwidth through. Say for $5 + $10 in VM costs, I get 1 TB + 2 TB of bandwidth.


That sounds a bit vague to me, too. Does that mean I have to upgrade to the next bigger instance (with more bandwidth included), or can I just create a new equal-sized instance before(!) deleting the previous one?


Yeah, but max 20 Lightsail servers per month. No big clients will be able to use this to export/leave.


Is the transfer from the AWS instances to the Lightsail instances not billed if in the same data center?


Transfer to and from other AWS resources is free (see their FAQ).


I'm not seeing that in the FAQ. In my reading it pointedly leaves open the possibility of incurring AWS egress charges when transferring to LightSail.


https://aws.amazon.com/de/blogs/aws/amazon-lightsail-the-pow... - VPC peering.

Which means it's free, since transferring data from one VPC to another over peering is free (at least it was with bare AWS).


Worth noting: $0.09 per GB for bandwidth overages. So your $5 server with 1TB out becomes a $95 server if you have 2TB out.


Yet, this is the same cost on EC2 for the first 10TB, except you don't get 1TB for free with EC2.

So essentially you're getting $90 worth of egress traffic for $5. It's even more obvious now that EC2 egress pricing is ridiculous.


Also worth noting their competition (DO/Linode) only cost $0.02/GB overage.


Not terribly impressed. It's like Digital Ocean and Vultr but with no IPv6 and no direct network interface.

What I love about VPSes as opposed to AWS, Azure, or Google is that you get a completely a la carte box with a direct interface right to the Internet and both IPv4 and an IPv6 /64. You can instantly provision "servers" that you can do anything you want with -- you can treat them like "pets" to run a personal blog or a legacy app, or you can herd them like "cattle" with your favorite management and provisioning tools. The pricing is great and the infrastructure is mix and match.

Many VPS providers (Vultr and I think DO as well) will even let you upload and install an ISO directly onto the KVM instance over the web. That means you can install OpenBSD or even weird OSes. I've heard of people putting wacky stuff like OS/2 in the cloud this way. Some even allow nested virtualization.

A VPS is ideal for a large number of common work loads, but not all. For things where I want to make extensive use of AWS's managed services or where I want to have something more akin to a private data center, EC2 and similar offerings from Microsoft and Google are great. But for those I want the whole enchilada. If I'm going there I want everything the EC2 management console and API gives me including full-blown VPC, etc.

This seems to occupy an uncanny valley. Without IPv6, direct networking, etc. it's a crummy VPS, but it's not as rich as EC2. The only pluses I see are direct access to AWS services (but if I want that I probably want EC2) and AWS's security and uptime "guarantees."

Problem with the latter is that it's largely marketing. I've routinely clocked 300-day-plus uptimes on Digital Ocean, and I've also had EC2 instances mysteriously die or go into a coma. They might have something to say on security, but I've never seen any real proof that AWS security is intrinsically superior to their competition. Neither DO nor Vultr has had a recent major breach AFAIK, and they all seem to use the same virtualization tech.


Noob question. How's this different from an EC2 instance in a VPC?


It isn't meant to be different, it is just "packaged" to make setting up a working AWS instance cheap and easy.

If you're familiar with AWS then you can get a similar offering directly, particularly using reserved instance pricing.


My quick 5 second take is that lightsail gives you shared vCPUs/memory and cheaper bandwidth. But I'm still confused myself.


Same thing. You get a VPS, but more non-AWS-expert friendly with less complicated pricing structure.


Just announced on AWS Reinvent Keynote.

It's their existing products (EC2, VPC, ...) nicely packaged, so you can get a Digital Ocean-like experience on AWS. You can still tune the underlying services.


It's already too expensive compared to DigitalOcean and Linode. An $80/month instance only gives you 2 CPU cores. For the same money you can get 4 @ DigitalOcean and 6 @ Linode.


Still don't understand why people use an $80 VPS. You can get at least 2 dedicated servers, each with way more performance, for that. VPSs make sense if you need low performance or quick resizing, but I doubt that's what most $80 instances are used for. Don't see how they make sense financially.


Digital Ocean was the first thing that I thought.


I suspect that it may have been the first thing that Amazon was thinking about while building this too ...


It just dumps you into a revamped, less intimidating AWS console so I wouldn't be too concerned about this for now.

The thing that keeps me away from AWS services is the depth of the service - I need to be an expert in AWS on top of knowing how to configure my servers, which for now is maybe a non starter.

It does show you the power of packaging: with a simple domain and thrown together marketing page, you too can target another market segment.


My guess is Amazon wants to rope people into their ecosystem, and they've recognized that many startups have opted for the cheaper VPS providers in their bootstrap days.

This is an intriguing move and one that I'm sure DigitalOcean, Linode, and Vultr have been fearing may happen.

The pricing is on par with these alternatives in the VPS space.


> The pricing is on par with these alternatives in the VPS space.

Linode offers double the RAM for the same price on its $10, $20 and $40 VPSs.


A lot of people, myself included, don't seriously consider Linode anymore given their security problems (March 2012, 2013) and infrastructure problems (Dec-Jan 2015-16).


The biggest show stopper for me is the lack of block storage, while both DO and Vultr have it.


They do, however, offer over 2x the disk space in some of their plans: 80GB on LightSail and DO vs 192GB on Linode.


Digital Ocean's pricing has remained the same since 2013.

It'd be nice if they bumped the $20 plan to 3GB RAM (or go full Linode at 4GB)


I wish:

  $20
  4GB RAM
  2 CPUs

  $40
  8GB RAM
  4 CPUs
Some VPS providers already have similar offerings.


I hate to deviate from the topic, but... is the guy on the front page wearing a red cape?


Are you saying you don't wear a superhero outfit when you're coding??


I wear a supervillain outfit. Goatee, white lab coat, menacing laugh.


He is a ninja rock star hacker.



A quick look at the pricing shows these are plagued by the same problems as DO's offerings: only RAM scales linearly with price, while CPU, storage, and data transfer do not.

Linode and OVH, while not as prestigious as AWS and DO, offer much more fair pricing when you need more resources.


Keep in mind if you are a current EC2 customer and are excited about a cheaper VPS in your region, the VPSs are only available in Virginia. I was pretty excited about a cheaper VPS I could provision in the Sydney area, but these are restricted to a single datacenter.


Thanks for pointing that out. Seems like a weird decision given that they have the underlying product in many regions.


Damn! I got excited at the prospects of cheap VPSes in Sydney :(


What about Vultr?


How is Vultr compared to EC2? Latency is my biggest concern since I do all of my dev work over ssh typically.


Also Binary Lane (except bandwidth allowances are much smaller).


It's funny to me that Amazon has looped all the way back around to this, while a bunch of smaller providers who've been doing this for over a decade have been trying to catch up with AWS on all the other fronts. But, realistically, for a lot of users, AWS is a stupidly complex beast just to get a website up and running. I've written a bunch of code that interacts with AWS APIs in two languages, and I still require a couple of hours to spin up anything new there.

But, as others note, the variable cost factor seems to still be a sticking point. I can setup a Digital Ocean droplet, or Linode, or one of a dozen other low-cost VPS providers, for $5 or $10 a month, and I know it will never cost more than that. Maybe I'll bump into memory, disk, or bandwidth limits...but, AWS is a killer if you aren't careful. I used to maintain (and pay for, out of pocket) a non-profit's website on AWS, and the price ballooned while I wasn't paying attention, due to automated backups to S3 and some other stuff, and by the time I noticed was costing me $183/month, for a website that could easily run on a cheap VPS. My fault for not paying closer attention, not setting up cost alerts, etc., but I moved the site off of AWS and onto one of my own web servers, where it literally costs me single digit dollars to run (it has many GBs of email but otherwise is a small site with very low traffic).

So...unless they're giving me some reason to think I won't end up with a massive bill one month because of a popular post, or something, I probably still won't think "I know, I'll use AWS!", unless it's a situation where I need the scaling capabilities of AWS.


When I started out in webdev, I was told that I should use AWS. But not knowing how private/public keys work, how ssh works, or how servers work, it felt a lot like being thrown into the deep end of the pool.

Then I used digitalocean because of the free 1 year server time github gave me and everything was a breeze. They had tutorials for a lot of stuff, like how pubkeys and privatekeys worked, how to use ssh, how a server works, how to use nginx/apache, and even node.js stuff. I got up and running quickly even though it was my first time using a VPS. It was super easy, and the best part was with my knowledge gained from DigitalOcean, I was able to start using AWS with relative ease.

I think Lightsail is a good competitor to DigitalOcean, good for newbies who can't exactly figure out how much their server will use and charge them. But imo, with the same stats and stuff as DigitalOcean as a newbie I'd stick with DigitalOcean just because of how helpful their tutorials are in general and how helpful their interface is.


I can almost feel the pucker over at DigitalOcean, Linode, etc. It may not be a better product, but that doesn't necessarily matter, given the brand power.


Agreed. Everything else equal, I'd personally go with the more stable IaaS provider as opposed to fringe ones that can go out of business or be acquired out of existence.


I just ran speedtest.net on a $5/mo 512MB DigitalOcean droplet in the San Francisco (SFO1) datacenter.

    ~ speedtest-cli
    Retrieving speedtest.net configuration...
    Testing from DigitalOcean (192.241.229.48)...
    Retrieving speedtest.net server list...
    Selecting best server based on ping...
    Hosted by Monkey Brains (San Francisco, CA) [5.93 km]: 2.132 ms
    Testing download
    Download: 921.09 Mbit/s
    Testing upload
    Upload: 705.31 Mbit/s
Can anybody run speedtest-cli[1] on a 512MB LightSail instance to compare network throughput?

[1] https://github.com/sivel/speedtest-cli


The problem with speedtest-cli is that the servers are set up to test domestic connections, not other servers. You can get lucky and see ~1Gbit/s of bandwidth, but just because you don't doesn't mean it's the fault of the provider. The speedtest servers themselves only need a 1Gbit/s port, so you will probably not get higher results than yours anyway.

Network speed should generally not be the issue with AWS, it's disk iops where the non-local SSDs will make a major impact.


Agree speedtest-cli is not perfect, but you can see from my test I got near the 1Gbps that DigitalOcean advertises. I am curious if AWS LightSail even breaks 100Mbps.

In terms of network speed not being important, that's not true. Lots of workloads are network bound not i/o bound (load balancers, web servers, etc).


That's what I meant. The results show that the DO box is fast enough, but a slow result doesn't indicate that you can't saturate traffic. It's really hard to test without real-world traffic, haven't found a reliable way to do so yet.



What I was hoping for was a bare-metal (not-run through VMs) container runtime like Joyent Triton[0] but with more pay-for-what-you-use pricing.

Unlike VMs, which statically allocate memory whether you use it or not, containers can grow and shrink their memory as workloads go up and down, which means you could pay for GB-hours of memory usage on a right-sized number of vCPUs.

Not sure if any IaaS/PaaS is doing this.

Joyent pricing[1] is still for static resource allocation and not cheap compared to these larger players.

[0] https://www.joyent.com/triton [1] https://www.joyent.com/pricing


The more you pay, the worse performance (per $) you get. Pretty much a "take 2, pay for 3" kind of deal. Whether or not it's a good price, I would feel like a schmuck if I went for any other option than the $5 one.


Just ran a bench on them, and the disk perf is bad.

https://gist.github.com/xfalcox/3b99beac4935fd154a4cbeb540dc...


What Amazon doesn't advertise is that their "block" volumes are files on NFS NAS storage.


Just for the record, I did the same on one of my small azure machines:

  CPU model:  Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz
  Number of cores: 1
  CPU frequency:  2394.441 MHz
  Total amount of RAM: 3439 MB
  Total amount of swap:  MB
  System uptime:   6 days, 15:49,       
  I/O speed:  20.1 MB/s
  Bzip 25MB: 4.27s
  Download 100MB file: 6.15MB/s


what did you use for those benchies? would like to run it.



Interesting. Pricing is worse than Digital Ocean here. Looks like Linode is still best bang for buck strictly for price to power.

Random thoughts from shameless noob:

- I like Digital Ocean's OS/app images for speed / smaller projects. Looks like Lightsail offers this with Bitnami. Not sure how complicated that is in their console.

- Amazon IAM comes with a bit too much overhead for noobies like myself. Glad this doesn't require you to set that up when you just want a quick and dirty VPS.

- Bah. Resizing/upgrading currently requires you to do it through the API. That kind of sucks.

- Nice! 30 day free trial.


Lightsail: $5/mo 512 MB, 1 core, 20 GB SSD, bandwidth cap -- Scaleway: $3.2/mo 2 GB memory, 2 cores, 50 GB SSD, unmetered bandwidth.

Lightsail 2: $10/mo 1 GB memory, 1 core, 30 GB SSD, bandwidth cap -- Scaleway: $10/mo 8 GB memory, 6 cores, 200 GB SSD, unmetered bandwidth.

OVH virtual servers have also always been cheaper than Lightsail, and bandwidth is of course unmetered: https://www.ovh.com/us/vps/vps-ssd.xml


It looks like OVH doesn't charge hourly for those.


The OVH public cloud offers hourly instances (50% discount if monthly).

https://www.ovh.com/us/cloud/instances/prices.xml


The FAQ is extremely unclear, but if I'm understanding it right, doesn't this make transfer out to the Internet significantly cheaper if you insist on using AWS? Instead of paying $0.09/GB, transfer to Lightsail for $0.02/GB, then transfer out for $5/1000 GB = $0.005/GB. You do have to deal with a changing IP, but you get a 1.26-order-of-magnitude decrease in price, and that's assuming you don't bother actually using the extra compute.


The "transfer to Lightsail" step (from another AWS resource) is free, as long as you use the private IP.


No, unless you set up VPC peering between the shadow Lightsail VPC and your AWS VPC; otherwise you'll be charged at least the $0.01/GB on egress, and maybe the $0.01/GB on ingress into Lightsail too (not sure, the wording makes it ambiguous, one would have to try it).


There are no ingress charges on Lightsail, so the cost should be at most $0.01/GB. Still cheaper than paying egress on AWS.


Nice: a much-needed simplification of the AWS product surface area. Their plethora of functions and features is tremendously useful for big shops, but it's a pretty high bar for newer entrants.

Now if they could just get to the point where $5 gets you a Docker container running on the equivalent of the Lightsail VPS (without setting up the backing EC2 infrastructure like ECS), I suspect that's closer to the platform that many users really want to have ...


This is going to be a hard blow for DigitalOcean.

I don't have huge hopes for their business going forward.


This seems like a great option for a utility server for existing AWS customers. With VPC peering you get free and fast transfer between one of these and your existing AWS infrastructure, but in a nice one-price VPS.

I also wouldn't be too surprised to see some people using these as middle-man boxes to reduce transfer costs associated with EC2 - $5 for 1TB is darned cheap for AWS. Using one of these to back up data from some EC2 hosts would be a win.


Any word on if this includes IPv6 addresses?


No IPV6


I'm a little confused. How is this different from EC2?


As far as I can tell, it's "just" the relevant AWS services, repackaged in a more non-expert-friendly frontend, simplified for a DigitalOcean-like usecase.

Doesn't expose the full, intimidating complexity of the AWS management console and workflow.


I'd assume the guarantee of the box being up, unlike ec2.


This is very appealing to me just because of the amount of bandwidth that is included. I currently serve 20TB/month of static assets for only $160 spread among Softlayer, Digital Ocean and Linode VPSs. If I can host my static assets on the same network as AWS for the same price then that is a huge win, especially since all new Softlayer VPSs only come with a measly amount of bandwidth.


Still isn't enough for me to pull my sites off DO. Plus, DO don't have a reputation for treating staff like absolute dogshit.


And DO has a nice established UI too, plus the plethora of tutorials for newbies on how to setup postgresql, nginx, node, etc.


Not to mention...

- no regions outside U.S.

- no floating IPs

- no Debian images

DO appears to have a considerably better offering.


I'd love to give this a try for my mail server that is currently hosted on Linode. Half the price would be great.

Does anyone have experience with how "clean" the AWS IP addresses are? I'd hate to switch to Lightsail and have to deal with deliverability and spam blacklist issues. I've been fortunate to have had zero issues on Linode.


I just ran a few of my IP's through http://www.anti-abuse.org/multi-rbl-check/ and didn't get any hits. It's likely luck of the draw.

That said, you have to ask them to remove their port 25 throttle and set rDNS: https://aws.amazon.com/forms/ec2-email-limit-rdns-request


I think your IP address issue can be an issue with any hosting provider. I have experienced it with a VPS from SiteGround. May I ask what mail server you run on your Linode server?


postfix with dovecot, dspam, and postgrey


I have a growing list of cloud / vps providers I like to kick the tires on. I'll be adding this to the list as well.


Our data centers span across Linode, DigitalOcean, and Vultr. Ranking my satisfaction with each, would be in that order, best to worst.


I have to say I have lost all faith in Linode: when they have a hardware failure our hosts keep getting "restarted in the same state", yet what they actually do is just a hard reboot.


Sadly, I'm used to that on all three at this point. In order of most frequent "Surprise! Node Reboot!" emails from most frequent to least, it's been Vultr, Linode, DigitalOcean, for me.

(for anyone curious, it's ~50 servers across 8 different data centers, roughly equally distributed)


Care to post your list? After it was filled out with details it'd be a nice resource to share.


What are your thoughts on Vultr?


Support isn't the best. I was attempting to restore a very large database and they shut down my instance for using excessive disk and CPU. I didn't get any response about what sort of limits to stay under.

Otherwise they are very similar to DO but with more data centres. The only complaint otherwise is their instances start a little slower. The API is very easy to use though.


I had both Vultr and RAMNode servers, but shut them both down and moved them to DigitalOcean.

The control panels in Vultr seem like 3rd party reseller ones akin to the cPanel/WHM days. I was not able to upgrade my Vultr VPS to a larger one with a few clicks like I expected to be able to (and am able to on DO and Linode). I guess you are expected to put in a support ticket or something. It felt very much like a basic Xen setup, though they have made recent strides to help with that.

With DigitalOcean, I can upgrade my VPS with a few clicks, and even downgrade them as well if the storage disk is not expanded when you upgrade. I am very impressed with their offerings, and the ease of use of their control panel setup.


I maintain VPSs across different providers (Linode, DO, Vultr, BudgetVM, etc.) and haven't had any real issues with them. Based on my interactions with their customer support, the level of support you receive is about what you should reasonably expect from providers in that price category.

The only real gripe I have is that they don't statically route the /64 they assign to your VPS, so I have to run an NDP proxy daemon.
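
For anyone in the same boat, my ndppd config is roughly this (the interface and prefix are placeholders):

  # /etc/ndppd.conf - answer neighbor solicitations for the assigned /64
  proxy eth0 {
      rule 2001:db8:1234::/64 {
          static
      }
  }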


I colocated with a sister company of theirs for a year and a half and found them to be very reliable. Vultr provides free DDoS protection in their Piscataway location, but if you do not require that I don't see a reason to use them.


Why are people comparing CPU cores and transfer allowances? Come on, this is HN. What CPU? What speed? KVM or Xen? What SSD? IOPS? etc.

It used to be that Linode had the better of everything compared to DO: faster CPUs even at the same core count, SSDs with much faster IOPS, and less overselling, so bandwidth was good. There is no point giving you 100TB of transfer per month if you are limited to a 10Mbps port; Linode ran on shared 40Gbps ports and peak bandwidth was great (for the price). Then there is the quality of the network - ping times between different ISPs and exchanges - where Linode has consistently been better than DO. And now it offers double the memory.

But many are worried about Linode's security issues and therefore would not even touch them with a ten foot pole.

I have yet to see a quality VPS whose whole package is better than Linode's. Vultr, OVH, Online.net and Scaleway included.

I am hoping Lightsail bring some competition here.


Hopefully Google Compute Engine will also compete at this price point now. It already seems to be cheaper than AWS in addition to having a good interface, but it would be good to get a bunch of cheaper servers for small projects, and to give out to developers in situations where static IPs are a requirement.


I use AWS for a bunch of things, but the console has always been immensely painful to use, especially to create a no frills simple VPS for a fiver. For that reason alone, I use DigitalOcean and fiddle with AWS authentication if I need to use some of its services there.

Lightsail looks excellent since the setup is just a gazillion times more user friendly than the standard EC2. A single page affair, a launch script, authentication, it's all there. Once launched, I get all the info and metrics I need.
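
The launch script appears to behave like EC2 user data, i.e. a shell script run once at first boot; a trivial sketch of the kind of thing I'd paste in there (the package choice is arbitrary):

  #!/bin/bash
  # runs once as root when the instance first boots
  apt-get update
  apt-get install -y nginx
  echo "hello from lightsail" > /var/www/html/index.html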

I kinda wish they streamlined their usual console to this level, but this way it's fine as well. I don't tend to use S3 and EC2 as much as I'd like given their non-existent UX, but this gives me hope that Amazon is taking user experience seriously.

Sure, it may be underpowered compared to DO or Linode, but having all services under one roof is worth it to me. I'm happy.


Price breakdown LightSail vs DigitalOcean, Vultr, Linode, OVH, and Online.net / Scaleway: https://gist.github.com/justjanne/205cc548148829078d4bf2fd39...


This puts DigitalOcean in a tough spot. They aren't going after the top-tier get your hands dirty customers AWS usually caters to, and they aren't going after the no-tech skills audience of godaddy/bluehost/dreamhost etc. So now it becomes a marketing battle for the middle.


Quick feedback for the AWS team:

* First, this is great. The simplified interface vs EC2 is terrific. This is the direction EC2 (and RDS and S3 and basically everything) needs to be going.

* Instances you start in LightSail don't show up in your EC2 console. I would expect there to be some kind of data sharing there.

* Similarly, creating a "static IP address" doesn't show up in your elastic IP list. I'm not sure if this is intentional, but to manage two different views of products that you're billing me for is... troublesome.

* Last, if I could migrate elastic IPs from EC2 to LightSail I'd be migrating all of my instances immediately. The bandwidth savings are massive. (Related: when is the 2TB limit for a t2.small going to be migrated over to EC2?)


Am I the only one interested in performance? Do these perform like t2 instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instan...) which are burstable and capped? What are the network limits? Are the disks physical or EBS?

I don't understand why everybody on this thread is complaining about overages? If you use more than the allocated amount you pay overages, simple idea. Why in the world, would you want your server to just shut down when you reach a limit?


While on this topic, does anyone have any experience with DreamCompute (https://www.dreamhost.com/cloud/computing)? I was thinking about trying them out, but could not find any reviews/thoughts about them. They seem to offer a lot, for very low prices. Here's more -- https://help.dreamhost.com/hc/en-us/articles/217744568-What-...


Wow, what perfect timing. I just got approved for AWS Educate and was prepared to do the work to batch auto-provision EC2 instances for my students, but this is much easier to deal with. And at $5 a month for the cheapest machine, it's about what I would have paid to set up a t2-nano instance (~$4.70). I didn't need anything fancy, just wanted students to have their own machine to deploy public-facing code (e.g. APIs).

The in-browser SSH also deals with the problem of students who are on Windows machines. You haven't hated PuTTY until you've tried walking a student through it on their Surface Pro.


I just created two 512MB Ubuntu instances and here are my thoughts. (+ indicates positive, - indicates negative, ? question)

    + You can use private networking for cross-instance communication.
    + You can even communicate with other AWS services by using VPC peering.
    + Hourly billing.
    - The firewall rules are not shared. You can't create a single rule and attach to multiple instances.
    ? What are the network throughput caps? Can't find it anywhere?
    ? Are the disks physical or using EBS?
    ? Can you snapshot a running instance, or does it have to stopped?


The FAQs mention that the disks are EBS.


Very expensive. The $40/mo plan on Linode gives you 4 cores and 8GB of RAM.


It depends on the processors the machines use and the actual sharing of these processors. I agree it looks more expensive but it might not be upon further analysis.


People focusing just on pricing miss the point. The key factor here is that it is easy to get started, but if you need to scale you have the full power of the AWS offering and ecosystem, which cannot be matched by VPS providers.


Very smart move from Amazon.

The main advantages of Amazon LightSail over DigitalOcean are: a built-in firewall (instead of messing with iptables), managed databases with AWS RDS, and using S3 with low latency and no networking cost.


DigitalOcean must implement a centralized firewall. This is the missing piece for them.


The best way to think about security groups is that they are switch-based firewall rules.


I still don't get the VPS craze. For similar prices from OVH, online.net, Hetzner, etc. you've got vastly more powerful physical servers with vastly larger storage... What's the point really?


Infrastructure as code. Use the API to build machines from scratch on demand.

It can be a very powerful model if done well, or done poorly cost a lot for less performance.

I use Hetzner myself for my large projects but the machines were configured using the same scripts I practiced over and over again on DO.
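
As a concrete example, creating a droplet through the DigitalOcean v2 API is a single authenticated POST (a rough sketch; the token, name and slugs below are placeholders):

  curl -X POST "https://api.digitalocean.com/v2/droplets" \
      -H "Authorization: Bearer $DO_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"name":"web-1","region":"sfo1","size":"512mb","image":"ubuntu-16-04-x64"}'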


How much of a threat is this to Digital Ocean, Linode, and others?


Looks like a direct threat to me. I'm sure DO will have better support though, but AWS will have a much more potent upgrade path for start-ups as they mature.


Linode still gives you more RAM for the same $. But it does feel like a serious threat to them, Amazon can use their size to outcompete them.


Customers (non-tech guys) trust Amazon more than others.


Tech guys too. Linode is a joke and Digital Ocean isn't a whole lot better. I wouldn't trust either one to host a serious business.


Hardly a Linode or DO killer yet but could become interesting if/when they:

1). support more linux distros (hint: Debian)

2). let you place instances in other AWS regions

3). let you pool bandwidth quotas across instances

4). improve the cpu/memory competitiveness of their $20+ plans

Also, don't like the firewall being configured via the dashboard (reminds me of the same crappy approach used by scaleway). Alarming too that their SSH console auto logs you in (even if you started an instance with your own public key rather than Amazon's).


tl;dr Only available in us-east-1 (N. Virginia)


Wish this was more obvious. Signed up only to find out during creation that this was the case. Was hoping to see how the Seoul location routing was as I'm looking for a cheap VPS with low Asia-EU latency (ie not across the US).


OVH has good connections in that direction. Don't have a lot of data to back this up (not much traffic from the region), but the tests I ran looked good. And it's certainly cheaper than lightsail


I could see the appeal of some of the lower-range hardware plans, but especially the higher tiers seem way off in terms of pricing. Can someone clarify why a 2 core/8GB machine costs $80, when other IaaS providers charge far less for such rigs? (DigitalOcean gets you 4 cores for $80/mo, TransIP gets you 4 cores, 8GB and 300GB SSD for <$55/mo...)


Speculation, but DO may be overselling their servers at a higher rate than AWS. I know for a fact that lots of other VPS providers do this.


Small print: Some types of data transfer in excess of data transfer included in your plan is subject to overage charges.


Given the long term strategy of DigitalOcean and how vastly different it is from what AWS seems to be executing on, I don't think this announcement actually changes things for DO that much. There is a mass consolidation about to happen in the IaaS space and it's smart for AWS to capture some amount of that.


It seems that you can only create instances in Virginia and you can only choose between Ubuntu 16.04 and Amazon Linux.


This is how AWS is trying to re:Invent... A lot of the stuff from them this year looks aimed at crushing smaller SaaS providers and the open source world (Chalice, for example). It seems like they are reinventing the wheel and trying hard to grab every single tiny piece of the market. I am moving to GCP... at least they handle OSS better.


With how versatile the AWS ecosystem is and the quality of the Amazon brand this has the potential to absolutely demolish most of the small time VPS providers. Unless you provide a niche such as DDoS protection or PCI/HIPAA compliant hosting I do not see how you can compete for legitimate customers.


I meant to say that it was about time someone came up with something like this, but it seems that in the details AWS complexity can still bite you.

Is there a service, that you know of, that would simplify AWS so that we can just use it with predictable expenses and ability to grow? Maybe I am asking for too much.


Heroku? They're basically just a usability shell on top of AWS.


This is exciting. In case any AWS folks are reading this, are there any plans to support Debian 8 as Bare OS?


I don't see anything disruptive here. I already have roughly the same cost/benefit with Linode.


Correction, you see a piss poor attempt at competing with Linode/DO/Vultr with no guarantees you won't piss away thousands by mistake or suffer magical and impossible to understand performance issues.


If you read the fine print, the price and performance aren't actually as good as Digital Ocean and a few other similar services, so unless you are actually using AWS services AND need your VM to be in the same VPC or data center as those services, it doesn't make sense.


This is pretty good. I'm currently paying about $3.50/mo for my existing t2.nano-based reserved VPS, and this provides a lot more space and bandwidth for not much more money. If they offered it with a reserved-instance discount, it would be even more compelling.


I'm still using Linode. Been using it for 10 years.


Strange, I can't seem to find the price of bandwidth when you exceed the included 1 TB.


$0.09/GB ($90/TB). Good luck if you host a service which then gets HN or Reddit hugged!


I've been on the front page of HN multiple times, you won't even come close to the $5 limit of 1 TB. This is ridiculous.


Different if you're on the front page of reddit, or on some popular subreddit. If you don't have all static files on a CDN you can easily exceed 1TB within a few days. Assuming it's a side project that you're not constantly monitoring and you're away for a few days, you could find a hefty bill when you come back.


One day their pricing and performance might actually be competitive. Today is not that day.


This is certainly more in my price range for personal stuff (family sites, etc). Definitely going to take advantage of the free 30 days to kick the tires. Curious to see what sort of server monitoring is included.


I guess this service might prevent people from migrating off Amazon services to DigitalOcean (or other VPS providers), but I don't really see a compelling reason to use this service instead of DigitalOcean.


I think this could quickly change in the 6 month to 1 year horizon as Amazon (potentially) adds onto this product with other AWS services built into this product.

Imagine if you could easily spin up DB instances and create LBs within LightSail - they would offer more features than DigitalOcean (or any other "VPS Provider"), while being price and usability competitive.

I would definitely be nervous if I was DigitalOcean - there are still advantages (support, tutorials, etc), but this closed the gap significantly.


So, it is basically the EC2 "t" family with a simpler (but almost equal) pricing and simpler administration.

  t2.nano = $5
  t2.micro = $10
  t2.small = $20
  t2.medium = $40
  t2.large = $80


Digital Ocean still has better pricing: for example, for $20 Digital Ocean gets you a 2-CPU server vs. 1 CPU on Amazon. And at $80, Digital Ocean offers a 4-CPU server while Amazon gives you only 2 CPUs.


Confirmed that LightSail works with ServerPilot (https://serverpilot.io/).

LightSail's default firewall opens ports 22 (SSH) and 80 (HTTP) but has 443 (HTTPS) closed. That seems like a terrible default for making a developer-friendly service. Hopefully they fix that and open 443 by default. Otherwise, a lot of wasted time is going to be spent by developers who have configured SSL on their sites and don't know why it isn't working.

LightSail feels very similar to DreamCompute that DreamHost launched, including the approach of only allowing SSH public key auth without any option of using password auth. So, they're intentionally leaving out some users with that approach.
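
If you do get bitten by the closed-443 default, it can at least be fixed from code rather than the dashboard. A hedged sketch using the Lightsail OpenInstancePublicPorts call via boto3 (the instance name is hypothetical):

  # Sketch: open HTTPS (443) on a Lightsail instance's built-in firewall.
  import boto3

  lightsail = boto3.client("lightsail", region_name="us-east-1")

  lightsail.open_instance_public_ports(
      instanceName="web-1",  # hypothetical instance name
      portInfo={"fromPort": 443, "toPort": 443, "protocol": "tcp"},
  )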


It seems to me the exact same service existed already, by picking a stack on Bitnami's website, and single-click-launching it on AWS. The price structure is clearer on Lightsail though.


Lightsail also uses Bitnami stacks underneath, so it is very similar indeed. It is an easier way to get started with AWS, but with the potential to graduate to the full offering.


Why would I use that? With Hosteurope I get 4 vCores, 6 GB guaranteed memory (burst to 12), 200 GB SSD, and flat-rate traffic (but only 100 MBit/sec), monthly cancelable, for 20 euros.


Having upper ceilings in charging is a big deal for me.

For this I prefer https://www.nearlyfreespeech.net/


OVH is still cheaper, with half the disk space but 4 times the RAM.


Why are a full half of the front-page stories about Amazon?


AWS re:Invent is currently happening, they're announcing all of these products right now (and they are summarily posted to HN)


Thanks! It wouldn't be so bad if there weren't so many duplicates :)


Just created a "Magento" instance and attached a static IP. It's still unreachable from a browser after a few hours. What did I do wrong?


Am I understanding this correctly?

DO and LightSail are close to (if not exactly) the same, spec-wise?

All that I've read so far essentially states that.


They advertise a 99.95% EC2 SLA. Does that mean that an instance in a single AZ can go down anytime for any length of time?


Anytime but not any length of time.

A 99.95% SLA means the following amounts of downtime:

Daily: 43.2s
Weekly: 5m 2.4s
Monthly: 21m 54.9s
Yearly: 4h 22m 58.5s
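
Those figures are just the 0.05% shortfall applied to each period; a quick sketch to reproduce them (average month/year lengths of 30.44/365.25 days assumed):

  # Allowed downtime under a 99.95% SLA for various periods.
  sla = 0.9995
  periods = {
      "Daily": 86_400,
      "Weekly": 604_800,
      "Monthly": 86_400 * 30.44,   # average month length, assumed
      "Yearly": 86_400 * 365.25,   # average year length, assumed
  }
  for name, seconds in periods.items():
      print(f"{name}: {seconds * (1 - sla):.1f}s of allowed downtime")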


But in the context of EC2, downtime must occur in two AZs at the same time to count, IIRC. There is no single-AZ SLA.


This is super awesome... Less devops required, particularly in the early stages when trying to validate an idea.


No nonsense predictable pricing is what drove me to Rackspace years ago. Good for Amazon figuring this out.


Meh, the pricing pales in comparison to, say, linode.com's offerings. https://www.linode.com/pricing

granted with AWS you probably get access to other AWS products so there's that...


Sigh. US-only. That kills it for me.


I really like the web-based firewall configuration feature. Hope Linode and DO adopt it.


Really wanted to get more information on this, but all of their Docs links currently 404.


What is this service? I can't figure out how different it is from Linux on EC2.


I pay ~$25 for 8 GB / 1 TB / unlimited bandwidth from online.net. $80 is ridiculous.


European data centers only though, correct?


OVH has similar prices with a DC in Canada. Worth looking at. Latency will still be high to the west coast, but Lightsail is also us-east only, so no real difference there.


Is there a way to quickly/easily migrate EC2 instances to LightSail??


This is extremely, really, amazingly exciting news! Except for the potential charge gotchas in the FAQ, which seem to fly in the face of the banner text on the main page. But still... very exciting!


Wow I didn't know Amazon bought DigitalOcean ;-)


The performance of DO is far superior to Lightsail's. Disk speed is a joke, the network seems to be limited to 50MB/s (which is not that bad), and who knows how much they throttle the CPU.


"This site can't be reached."

I guess I'll just google virtual private servers instead.


Seems like a play to compete with Digital Ocean.


PSA: The chap in the photo is wearing a cape.


Wow they're going after digital ocean


No Windows Server? Surprising.


Digital Ocean is in trouble!


Don't think so. Not for any disk-heavy or bandwidth-heavy users.


The pricing/specs are suspiciously similar to those of one of the largest VPS providers.


Time to move from DigitalOcean finally




Is there anything preventing me from using these servers as proxies with incredibly cheap bandwidth? I assume it stacks? When we reach transfer limits I can just spin up an additional $5 instance to add another 1TB?

$5/mo per 1TB of bandwidth = $5.00 / 1024 = $0.0049/GB compared to EC2's normal $0.09/GB -- That's a 91-95% discount on egress data!
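
Checking that arithmetic (assuming 1 TB = 1024 GB of included transfer):

  # Back-of-envelope: bundled Lightsail transfer vs. standard EC2 egress.
  plan_price = 5.00        # $/mo, smallest Lightsail plan
  included_gb = 1 * 1024   # 1 TB of transfer bundled with that plan
  ec2_egress = 0.09        # $/GB, standard EC2 egress rate

  per_gb = plan_price / included_gb
  discount = 1 - per_gb / ec2_egress
  print(f"${per_gb:.4f}/GB, i.e. {discount:.1%} cheaper than EC2 egress")
  # -> $0.0049/GB, i.e. 94.6% cheaper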


Precisely right. It will be fascinating to see if Lightsail creation can be automated via Terraform or CloudFormation, and whether the $0.09 per GB for outgoing bandwidth can be easily subverted with this new pricing structure.


Sort of. Lightsail service limits:

"You can currently create up to 20 Lightsail instances, 5 static IPs, and 3 DNS domain zones in a Lightsail account."

So obviously there's a hard cap within one account. The $80 plan comes with 5 TB, so one would have to be burning through quite a bit of bandwidth (100 TB across 20 instances) to cap it out. Frankly though, at $1,600, Amazon is still printing a massive profit margin on that 100 TB of transfer.

These seem like they'd be good front-end servers hooked up to RDS etc.


Transfer is how they really make money. If you use a lot of services (especially spot pricing) and only pay 10% of bandwidth charges, it will likely hurt them, so I'm not sure they'll allow it if many people start doing that.


Immediately thought about this "solution" to bandwidth overage charges as well. I would not be surprised if it ended up getting you disabled for abuse, or for going against the ToS, etc.


The documentation for LightSail specifically calls out integration with other AWS services, how to get them connected via private IPs, etc.

https://lightsail.aws.amazon.com/ls/docs/overview/article/us...

Of course, after your initial allotment of cheap bandwidth, LightSail switches to the regular AWS egress prices.


What a day for AWS on HN. 8 AWS products made it to the front page. I learned about new AWS products like Cognito today just from the comments (ironically, in the "Google is Challenging AWS" thread).

I feel as excited as I was for Azure's Build 2016. Now I'm feeling pulled to AWS. This is great for AWS, not so much for Google Cloud which further fades into obscurity in my mind. I'd love to see that change, more competition in the cloud space = more options for us developers = more conditions in our favour.

Amazon LightSail just killed DigitalOcean for me, which has been steadily getting more expensive (for instance, I can't downsize to a less expensive plan once I resize my image, meaning I forked out $100/month for something that would work for $5/month, plus multiple DO images now cost a monthly fee).

$5/month + tight integration with AWS products is enough for me to move completely off DO. If only AWS had DO's community style documentation, I'd definitely question DO's future viability.

Now a killer IDE from AWS that lets me deploy and configure AWS without leaving the IDE, that's a checkmate move, which I think will be very difficult for me to switch to another cloud provider. Right now things are in flux but I think an in-browser/desktop IDE like Cloud 9 with one click deploy to AWS would be the end game for other cloud providers.


Their yearly conference, Re:Invent, is going on this week, so there will probably be more announcements to come.


Yep, there are two keynotes at AWS: one Wednesday by the AWS CEO, and one tomorrow/Thursday by the AWS CTO, Werner Vogels. I'm expecting quite a few more announcements during the Vogels keynote.


Sorry, how has AWS killed DigitalOcean? It's more expensive, with poor disk performance.


Not to mention much less user friendly.


In what way? (I'm legitimately curious to hear about your experience using both products)


You have to learn about and configure their IAM security tools before you can use anything else. Configuring most of the services is NOT straightforward but typically fine-tunable. Lightsail is expected to be different.


Compare disk performance before you switch. You could be severely disappointed if you have disk heavy applications (which is very likely with small vps)


Can I migrate EC2 instances?



When I saw the $5/mo price, I knew it was targeting low-end VPS customers from Digital Ocean. But if the management UI of LightSail is still the EC2 one, I will give up. That UI feels laggy and less intuitive.


hardly



