
AWS employs cost obfuscation by design; otherwise, the default view when you open the console would show you all of your currently active services. Not only is that not the case, a single screen showing all of your currently active services doesn't exist. You need to take a deep dive into Cost Explorer (assuming you have access, in corporate land) and try to decipher what all of that means.



They definitely need a senior executive to stand up and say, "The Customer wants us to be transparent in billing, fix that now."

Then they need to start a team dedicated to finding a good way to let customers halt spending at a given limit with minimal impact on their operations.

They already win on UX (okay, okay, it's an opinion, ffs), but unlimited liability makes a lot of people very uncomfortable. Those two actions would go a long way toward demonstrating good faith in that area.

If it would cost too much, maybe they could present it as an easy way to cut expenses at the same time that they introduce a small price increase. This is a common and long-standing complaint/feature request.


"They already win on UX"

As someone who has tried, and failed, a couple of times to get some small personal sites running on AWS, I'm going to have to tag this snippet with [citation needed].


From my admittedly limited experience with GCP and Azure (and this is obviously subjective), the UX of the most successful competitors is, at best, no better than AWS's. It isn't that AWS has good UX, just that all the cloud providers have bad UX.


Oh man, I have long been joking about the AWS UI. Like how it will gladly walk you through all the steps for launching a server and only at the very last step says “oh you don’t have permission to do that lol, get permission and start over”.

I compared[1] it to a bartender who walks you through an entire sale and only at the last second rejects your purchase for being underage (instead of denying you at the initial request for something alcoholic).

Of course, that’s one of the minor things in the grand scheme.

[1] http://blog.tyrannyofthemouse.com/2016/02/some-of-my-geeky-t...


Agreed, especially when compared to, say, Heroku or DigitalOcean. I see many newcomers struggling to deploy a small website. It's overwhelming. I understand that Heroku exists for such users and that they use AWS under the hood, but is there any service that takes the AWS cloud APIs and simplifies them with a leaner UX?


> Agreed, especially when compared to, say, Heroku or DigitalOcean.

Funnily enough, I've never ever been able to understand what Heroku is, or how to deploy anything useful on it :) But I'm old, and I've always found it easier to tinker with nginx configs.


Heroku is very simple: you just create an app and "push" code to a repository tied to the app. You need to write code, of course, but they auto-detect the language and prepare the environment or platform for you (hence, platform as a service). They tried to appease the geeks by making everything drivable from the CLI, but otherwise it's just point-and-click for deploying apps.

Tinkering with nginx is good, but what if you want to add a database, a cache, a logging service? What if you want to automatically build code when you push to GitHub? What if you want to easily clone multiple environments (for staging, prod, etc.) without ever SSHing into a server? What if you want versioned releases that you can easily roll back to a specific version, or to push one branch into one environment and another branch into another? And you get a workable subdomain, with SSL enabled, for every environment you build. All of that can be scaled down and up with literally a few clicks.


> but what if you want to add a database, a cache, a logging service?

I... just add it. How do I add it in Heroku, where everything is "just create an app and push it" and everything is a separate "dyno" (whatever that is), each with different pricing, storage, bandwidth, etc.?


That is true: everything is a separate dyno - meaning a separate machine/VM that you need to connect to. You get a free tier for almost all databases, and you only need to know how to connect to it, not tinkering with the storage, tuning the parameters, scheduled backups, automatic scaling, etc. - that's done for you. It will become expensive once you go past the free tier or the cheap tiers, though.


> not tinkering with the storage, tuning the parameters, scheduled backups, automatic scaling, etc. - that's done for you.

Thing is: if we're looking for free, that may make sense. You still need tinkering, though: you need to figure out what the whole dyno thing is about, how to connect from one dyno to another, and so on.

However, if we expand this a tiny bit from free to cheap, then simply running the lowest-tier server at DigitalOcean is a better proposition. And installing a database these days is basically the same thing: run a command to install it, and all you need to do is connect to it. No tinkering. And it will still probably scale significantly better than a free tier at Heroku :)

Once again, I'm biased and old, and can afford to spend money on side projects. I know that sometimes even $15 a month is out of reach (oh, I've been there). But yeah, this sorta prevents me from ever understanding Heroku :)


I think that's basically what Lightsail is supposed to be. I haven't used it myself, but from glancing at it they're pretty clearly targeting DO/Linode/Vultr etc.


I think there are some verticals - players that simplify/aggregate SES, or S3, or whatever - but none that handle being a layer on top of most/all of AWS.


AWS has a multitude of services that do this already. I'd recommend going through their services one by one and learning what they offer.


It’s not you. It’s pretty rough for new users.


Have to agree with the others. For example, while setting up an ELB it's possible to select at least one option (unchecking the Public IP box) that causes the setup to just fail with a nonsensical error message. It turns out ELB requires a public IP to communicate with the nodes. That's just the most glaring example off the top of my head.

I also remember trying to set up SFTP when it was first released. It was literally impossible to do what they were advertising (S3-backed SFTP with jailed homes for each user) without writing a manual JSON config (I never got it to work). I had built my own solution for this exact thing on EC2, using more or less a bash script, and thought the hosted option would be less of a maintenance burden. Needless to say, I quickly gave up.


Unless you understand IAM, the scoping of individual users for the SFTP service can be a bit confusing. What you need is a scope-down policy:

https://docs.aws.amazon.com/transfer/latest/userguide/scope-...

The more annoying part is that the service only supports public/private key logins. If you want user/pass, you have to write a Lambda. The Lambda is pretty simple, though: it checks the credentials (so it can hit any backend you like), and if they pass, it returns a 200 with a JSON doc containing the role (which is just the SFTP assume role), the policy (the scope-down from above), and the home dir.

https://aws.amazon.com/blogs/storage/enable-password-authent...
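
For reference, a minimal sketch of that password-checking Lambda. Every concrete name here (the role ARN, bucket, credential store, and the event shape) is a hypothetical placeholder; the blog post above has the real API Gateway wiring:

```python
import hmac
import json

# Hypothetical in-memory credential store; a real version would check
# any backend you like (a database, Secrets Manager, LDAP, ...).
USERS = {"alice": "s3cret"}

def handler(event, context=None):
    # Assumed event shape; the actual fields depend on how your API
    # Gateway integration maps the request headers.
    user = event.get("username", "")
    password = event.get("password", "")
    expected = USERS.get(user)
    if expected is None or not hmac.compare_digest(password, expected):
        return {}  # empty response body -> the login is rejected

    # Role, scope-down policy, and jailed S3 home dir, all placeholders.
    return {
        "Role": "arn:aws:iam::123456789012:role/sftp-user-access",
        "Policy": json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
                "Resource": "arn:aws:s3:::my-sftp-bucket/" + user + "/*",
            }],
        }),
        "HomeDirectory": "/my-sftp-bucket/" + user,
    }
```

The scope-down policy is what gives each user a jailed home: the same role is assumed for everyone, but the per-user policy restricts it to their own prefix.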

This touches on a larger issue with AWS, though: they are trending toward leaving out functionality and pointing to Lambda as the solution. On one hand, I get it - the Lambda solution is infinitely more flexible - but what if I just wanted an SFTP server for a couple of users that uses user/pass?

To your final point, it is so much less maintenance. There is no server to manage, and since I want all the data in s3 anyway, it's already there. This solution replaced a server with chroots, EBS, scripts to move data to s3, etc...


How are people not building in Terraform for an easy ‘destroy’ at the end?

I know it’s rhetorical and a lesson learned myself, but yeah… I would expect folks to learn to use this tool to help manage costs this way.


terraform 'destroy' isn't infallible. Certain resources trigger the creation of other resources (for example, Lambda functions create CloudWatch log groups, and DynamoDB tables create CloudWatch alarms), and when Terraform destroys a resource, it doesn't necessarily clean up all the associated ones.
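
For the log-group case specifically, one workaround is to declare the implicitly-created resource in Terraform yourself so that destroy knows about it. A sketch in HCL (resource names are hypothetical):

```hcl
# If Terraform owns the log group, `terraform destroy` deletes it.
# Otherwise Lambda creates "/aws/lambda/<name>" implicitly on first
# invocation, and it lingers (and bills) after the function is gone.
resource "aws_cloudwatch_log_group" "fn_logs" {
  name              = "/aws/lambda/${aws_lambda_function.fn.function_name}"
  retention_in_days = 14
}
```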


Terraform has its shortcomings, for sure, but I don't think it's reasonable to expect Terraform to go out and clean up the second-order effects of its resources.

I'm not doubting that the situations you describe are true, but abandoning resources like that is an AWS-lifecycle problem, not really a Terraform one.


Sure. My point is just that `terraform destroy` doesn't necessarily solve the problem at hand, and you could still end up continuing to pay for those second-order effects after running it.


I am pretty senior and can thus sometimes afford to do a new thing and try doing it the right way, at the same time. Not even always. Many people just take one learning curve at a time...


Typically Terraform takes longer to get something working than mindlessly clicking through the console. In my experience those mindless clickthrough things end up sticking around for years even when they weren't intended to.


This is why you have separate development and production accounts: a development account where you mindlessly click through, so you can learn through the UI what's available and how it works, cleaned up on a regular basis by something like aws-nuke; and a production account where you have the discipline to only create resources through Terraform, CloudFormation, etc.


CDK is promising for things like this - way safer and easier than rolling your own scripts in bash or Python.


Even CDK sucks, though, because if you're still kinda new to it all, you want to log in and make sure that it's all hooked together correctly. And then you're back using their shitty UI.

Why you can’t look at the load balancer, look at the listener and then show what’s in the target groups is beyond me.


I have a different take on it. I started out doing everything through the console, then learned the cli and boto3, and very recently CDK.

CDK is another tool that builds on the CLI and boto3 concepts, and it also manages the orchestration and dependencies.

Having to go back to the console isn't a fault of CDK. Learning the right tool for a situation is part of the learning curve. I go back to the console all the time to look something up quickly or to understand what config options are available. Or I repeat the same steps in the console enough times that I get bored with it and automate them.

Edit: also I tried and have given up on CloudFormation more than once. CDK is like a wrapper layer around it, and has been pleasant to use.


I’m not trying to be snarky here, would like an honest opinion: how do they win on UX? Are there specific things you like there?

I’ve used all three major clouds in production now and I dread using the AWS console. Or really any part of it, over Azure or GCP.

I’ve always thought of them as purely winning the “nobody got fired for…” mindshare thing despite having a thoroughly mediocre product.


It either works great or barely, depending on the service you're using - some AWS teams have dedicated dashboard teams (e.g., an 'ec2 dashboard team') which focus solely on the dashboard experience, while others touch it as an afterthought.

I'm pretty sure something along the lines of this^ was posted on HN by a former AWS employee but I can't find it now.


Their web console UIs vary widely by service. IMO, the UX for most of their simpler services - specifically SQS, S3, Lambda, DynamoDB - is really easy to use and works nicely. But if you want to start with a Docker image, send it to AWS, and have it spun up and attached to a DNS name, well, that's a hugely complex mess to set up.


Unless you use something like Terraform or Pulumi, in which case it’s almost a one liner.


The AWS default view feels more crisp for me, compared to the GCP one which feels very "floaty".

Their UI toolkit for that at least is on point.


This gives the other vendors a wedge in.


I don't understand why so many people want to use AWS for such small budgets. At that scale wouldn't it be easier to just build everything out of a few VMs at a cheap service?

AWS is awesome when you have a large number of resources, that are created programmatically and reproducibly, with redundancy and duplicate environments.

The budgeting tools are really amazing at letting you categorize your costs and create appropriate alerts. The permissions system lets you define very specific roles.

It's complex. But if your system is complex, it gives you the tools to keep track of it all. If your budget is <=$5000/month, it's probably too small to make sense on AWS; you can probably run your system on a couple of servers.


> At that scale wouldn't it be easier to just build everything out of a few VMs at a cheap service?

I have a handful of AWS Lambda functions with a DynamoDB backend serving hundreds of clients, my bill for the month of April was $0.01.

No, a VM wouldn't cut it.

But you are right that there are certain slots where AWS doesn't make sense: There's one in the lower middle range where you can save a bunch of money by using a VM or two with your own DB servers. And there's the one where you're so big it might actually be worth it to implement the whole stack yourself.


> I have a handful of AWS Lambda functions with a DynamoDB backend serving hundreds of clients, my bill for the month of April was $0.01.

What kind of thing do they serve? Somewhere I could read more about this kind of project?


It's a common backend for chat bots (Discord, Matrix, IRC, Telegram).

Basically it takes different inputs and commands and uses different APIs to fetch data based on the input efficiently without the need for web scraping. DynamoDB is used mostly as a cache for common queries so I don't go over API quotas.
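
That "DynamoDB as a cache in front of rate-limited APIs" idea can be sketched as pure logic (all names here are hypothetical; a real version would use a boto3 DynamoDB table with a TTL attribute so stale items expire server-side):

```python
import time

CACHE_TTL = 300  # seconds to keep an upstream result; arbitrary choice

def cached_fetch(query, table, fetch_upstream, now=time.time):
    """Return a fresh cached result if one exists, else call upstream.

    `table` is anything dict-like; in production it would be a DynamoDB
    table, so the cache survives between Lambda invocations.
    """
    item = table.get(query)
    if item is not None and item["expires_at"] > now():
        return item["value"]           # cache hit: no API quota spent
    value = fetch_upstream(query)      # cache miss: one upstream call
    table[query] = {"value": value, "expires_at": now() + CACHE_TTL}
    return value
```

The point of the pattern is that repeated queries within the TTL window cost you a cheap DynamoDB read instead of an upstream API call, which is how you stay under third-party quotas.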

Most of the bots are made by me, but with AWS API Gateway I can easily generate API keys for anyone else and keep track of their usage.


This is considered a cloud native pattern.

https://docs.aws.amazon.com/apigateway/latest/developerguide...


> At that scale wouldn't it be easier to just build everything out of a few VMs at a cheap service?

No. I looked into AWS vs Azure vs Cloudflare vs Digital Ocean for something I'd like to build this year and (for my thing) Cloudflare (Workers) was the best balance of cost, scalability, and maintainability.

> I don't understand why so many people want to use AWS for such small budgets.

Serverless (I hate the term) makes a lot of sense for small budgets and projects. You're investing very little, so the downside of (severe) vendor lock-in is somewhat low. The upside is HA infrastructure, horizontal scalability, zero maintenance, ignorable monitoring (at least initially), consumption based costs, etc..

The biggest problem with all of them, except Digital Ocean, is that it's difficult to "own" your data in the sense that you can download a copy of a DB and keep using it somewhere else. IIRC Digital Ocean has a fairly nice managed PostgreSQL offering, but it still scales similar to a traditional DB (ie: not automatically).

The next biggest problem with AWS and Azure is that you can't figure out what anything costs, at least not easily. For example, I know for a page I'd want to serve via Cloudflare Workers, I could do 1 hit = 1 point read from Azure CosmosDB, but I couldn't figure out if the pricing includes egress. Just look at the pricing page for CosmosDB [1]. It's ridiculously complicated and that's one service on Azure.

1. https://azure.microsoft.com/en-us/pricing/details/cosmos-db/...


Linux server administration requires a certain amount of know-how and determination. I can't tell you how many times I had to rebuild a DigitalOcean server because I messed something up and wanted to start with a clean slate.

A lot of hackathons, workshops and courses ask you to use AWS these days. Whether it's to run machine learning instances, win a sponsor prize or learn how to use Lambda, students are often encouraged to learn one of the major cloud providers.

Also it's a resume boost. It's another buzzword you can add to your resume.


Disclosure: I'm the Co-Founder of Vantage.

This is exactly what we do with Vantage: http://vantage.sh/

We give point-in-time run-rates of all active resources based off of the region and resource/service configuration.

In addition we try to simplify people's understanding of where their costs are coming from. If anyone needs help with this, they can personally reach out to me at ben@vantage.sh


With all due respect, I think the issue here is that your company should not need to exist. Amazon should provide this feature as part of the UX.


Yeah, they should - but if they don't, are you gonna keep waiting, or do something about it?


I would be more worried that if you do something about it, Amazon is gonna do something about you.

Starting with obnoxious API changes, followed by ToS changes. Hopefully that's as far as it has to go.


Agreed. Nothing new about this request.

It's like hiring a plane to fly over your house repeatedly to tell you whether or not it's on fire. There has to be a better way.


Same respect: nice tool, but it's sad that it's needed. I guess it's similar to lawyers, police, the army - I don't like the need for them, but it's good they exist.


Azure does this a little better, but best would be to see a breakdown on the invoice with links directly back to the resource.

Maybe there are discounts or other processing that makes this hard.

Or, less charitably, this would lead to people optimising their costs a lot better and canceling unused services much sooner.


Azure groups most things into resource groups which greatly simplifies things.


Yes. Also split out into subcategories - how much of an EC2 spend was on bandwidth and how much on compute, for example.


Enterprise Agreement accounts aren’t billed on demand, so there’s little use for that in accounts spending a lot of money.


Why the f do I have to use a service like billgist to get a breakdown of my services?

AWS billing IS complicated but could be made so much simpler...


I'll bite...

Because, if you are a company that uses AWS at scale (deploying literally thousands of resources at a time), you care more about meeting demand and getting resources to spec than about the cost of an individual Elastic IP...

The price of each individual resource isn't something that you want to see on every screen you touch. It literally clutters the console with information that you couldn't give a shit about when your company is making millions (or billions) on the services you provide. You care more about your service's reliability/scalability/uptime/etc than anything else. This is priority numero uno.

If and when cost analysis becomes a priority, you look to see where you may be overprovisioning resources - hence, the billing console.

But for the larger players on AWS (the multi-billion dollar ones that AWS cares more about making happy than you or I), an extra $100k in AWS expenses in a year isn't a worry, it's a write-off.


You also get much better negotiated rates for everything when you are big.


I agree. Having set up Serverless and Lambda for an API/app that was used a few times per day, the billing made no sense at all: it would increase even though the services were being used less, and it was difficult to find out what the costs were or whether they could be reduced. Eventually I had to shut it down and create a non-Lambda solution somewhere else, because I had so little control over the cost.


If I go to billing I can see an itemized breakdown of every service I'm using. What is wrong with that?


Students and self-learners want to experiment with doing complicated things on AWS without getting a surprise $3,000 bill because their script didn't shut down those instances like they thought it did. They want a hard fail-safe to protect them from surprise bills. And they want a solution that's easier and more reliable than checking the AWS itemized breakdown twice a day.


I agree with that. That is why there are "Billing Alarms" in big letters on the billing page. But that wasn't the point of the reply: the person above me was complaining that there's no way to tell which services are being billed, which is bunk.
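
For what it's worth, the alert side can at least be scripted rather than clicked through. A hedged sketch with the AWS CLI (the account ID, email, and amounts are placeholders; verify the JSON shape against the current AWS docs, and note this alerts - it does not hard-stop spending):

```shell
# Create a $20/month cost budget with an email alert at 80% of actual spend.
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{
    "BudgetName": "monthly-spend-alert",
    "BudgetLimit": {"Amount": "20", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
  }' \
  --notifications-with-subscribers '[{
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 80,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "me@example.com"}]
  }]'
```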


The breakdown isn't granular enough, by far. EC2 is a very big bucket to go digging into for costs.


This is what keeps me from tinkering with the free tiers and from ever attempting to host my own projects in the cloud.

I want a free tier with a lock on it. I'd very much appreciate advice for what tiers would fit my monthly use if I hit free tier caps, but if I got hit with $500 right now I'd be ruined.

I think my Google compute stuff is safe, but I really don't like having any doubt.


Until AWS fixes this the best thing to do is just use their service as little as possible. There are plenty of other cloud providers out there these days which don't employ this hostile practice.

I closed an AWS account for this reason just a few days ago. We hadn't used it for a while, but there was no clear way to remove our credit card, so it felt like a risk just to keep it open. What if a developer logs in to mess around and accidentally flips some switch that smacks us with a charge of a couple grand? Unlikely, sure, but the fact that it's even possible is terrible system design. Better to just nuke the account and move on to other cloud providers that don't make it harder for me to sleep at night.


Not disagreeing with you (a view of all the active services would be great), but one of the many benefits of using Terraform is that it lets you know what you're running.


Yep, and there's a huge SaaS marketplace now, so mid-to-large-size companies can figure out what they're spending money on.

I don’t use them for any projects mostly because I already don’t enjoy having to manage AWS at work all the time. I also don’t want to live/work in a world where AWS is my only option so I try and use smaller hosting providers and services like tarsnap.


AWS is literally trying to trick you. With friends like these, who needs enemies?

The Trough of Disillusionment for AWS is going to be Bastille Day levels of ugly.



