
They definitely need a senior executive to stand up and say, "The Customer wants us to be transparent in billing, fix that now."

Then they need to start a team dedicated to finding a good way to let customers halt spending at a given limit with minimal impact on their operations.

They already win on UX (okay, okay, it's an opinion ffs), but unlimited liability makes a lot of people very uncomfortable. Those two actions would go a long way towards demonstrating good faith in that area.

If it would cost too much, maybe they could present it as an easy way to cut expenses at the same time that they introduce a small price increase. This is a common and long-standing complaint/feature request.




"They already win on UX"

As someone who has tried and failed to get some small personal sites running on AWS a couple of times, I'm going to have to tag this snippet with [citation needed].


From my admittedly limited experience with GCP and Azure (and this is obviously subjective), the UX in the most successful competition is, at best, no better than AWS. It isn't that AWS has good UX, just that all the cloud providers have bad UX.


Oh man, I have long been joking about the AWS UI. Like how it will gladly walk you through all the steps for launching a server and only at the very last step says “oh you don’t have permission to do that lol, get permission and start over”.

I compared[1] it to a bartender who walks you through an entire sale and only at the last second rejects your purchase for being underage (instead of denying you at the initial request for something alcoholic).

Of course, that’s one of the minor things in the grand scheme.

[1] http://blog.tyrannyofthemouse.com/2016/02/some-of-my-geeky-t...


Agreed, especially when compared to, say, Heroku or DigitalOcean. I see many newcomers struggling to deploy a small website. It's overwhelming. I understand that Heroku exists for such users and that they use AWS under the hood, but is there any service that takes the AWS cloud APIs and simplifies them with a leaner UX?


> Agreed, especially when compared to, say, Heroku or DigitalOcean.

Funnily enough, I've never ever been able to understand what Heroku is, or how to deploy anything useful on it :) But I'm old, and I've always found it easier to tinker with nginx configs.


Heroku is very simple - you just create an app and "push" code to a repository tied to the app. You need to write code of course, but they auto-detect the language and prepare the environment or platform for you (so, platform as a service). They tried to appease the geeks by making everything drivable from the CLI - but otherwise it's just point and click for deploying apps.

Tinkering with nginx is good, but what if you want to add a database, a cache, a logging service? What if you want to automatically build code when you push to GitHub? What if you want to easily clone multiple environments (for staging, prod, etc.) without ever SSHing into a server? What if you want versioned releases that you can easily roll back to a specified version, or push one branch into one environment and another branch into another? And you get a workable subdomain, with SSL enabled, for every environment you build. All of that can be scaled down and up with literally a few clicks.
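
For the curious, the whole flow is roughly this from the CLI (the app name and add-on plan names below are placeholders - plans change over time):

    heroku create my-app                          # create the app (and a git remote named "heroku")
    git push heroku main                          # push code; Heroku detects the language and builds it
    heroku addons:create heroku-postgresql:mini   # attach a managed Postgres (sets DATABASE_URL)
    heroku addons:create heroku-redis:mini        # attach a managed Redis cache
    heroku logs --tail                            # stream application logs
    heroku releases                               # list versioned releases
    heroku rollback v41                           # roll back to a previous release

The same things can be done by pointing and clicking in the dashboard.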


> but what if you want to add a database, a cache, a logging service?

I... just add it. How do I add it in Heroku, where everything is "just create an app and push it" and everything is split into separate "dynos" (whatever that is), each with different pricing, storage, bandwidth, etc.?


That is true, everything is a separate dyno - meaning a separate machine/VM that you need to connect to. You get a free tier for almost all databases, and you only need to know how to connect to it - no tinkering with storage, tuning parameters, scheduling backups, setting up automatic scaling, etc.; that's done for you. It will become expensive once you go past the free tier or the cheap tiers, though.


> no tinkering with storage, tuning parameters, scheduling backups, setting up automatic scaling, etc.; that's done for you.

Thing is: if we're looking for free, that may make sense. You still need tinkering, though: you need to figure out what the whole dyno thing is about, how to connect from one dyno to another, and so on.

However, if we expand this a tiny bit from free to cheap, then simply running the lowest-tier server at Digital Ocean will be a better proposition. And installing a database these days is basically the same thing: run a command to install it, and all you need to do is connect to it. No tinkering. And it will still probably scale significantly better than a free tier at Heroku :)

Once again, I'm biased and old, and can afford to spend money on side projects. I know that sometimes even $15 a month is out of reach (oh, I've been there). But yeah, this sorta prevents me from ever understanding Heroku :)


I think that's basically what Lightsail is supposed to be. I haven't used it myself, but from glancing at it they're pretty clearly targeting DO/Linode/Vultr etc.


I think there are some verticals - players that simplify/aggregate SES, or S3, or whatever - but none that handle being a layer on top of most/all of AWS.


AWS has a multitude of services that do this already. I'd recommend going through their services one by one and learning what they offer.


It’s not you. It’s pretty rough for new users.


Have to agree with the others. For example, while setting up ELB it's possible to select at least one option (un-checking the Public IP box) that causes the setup to just fail with a nonsensical error message. Turns out ELB requires a public IP to communicate with the nodes. That's just the most glaring one off the top of my head.

I also remember trying to set up SFTP when it was first released. It was literally impossible to do what they were advertising (S3-backed SFTP with jailed homes for each user) without writing a JSON config by hand (I never got it to work). I had built my own solution for this exact thing on EC2, using more or less a bash script, and thought the hosted option would be less of a maintenance burden. Needless to say, I quickly gave up.


Unless you understand IAM, the scoping of individual users in the SFTP service can be a bit confusing. What you need is a scope-down policy:

https://docs.aws.amazon.com/transfer/latest/userguide/scope-...

The more annoying part is that the service only supports public/private key logins. If you want user/pass you have to write a Lambda. The Lambda is pretty simple, though: it checks the credentials (so it can hit any backend you like), and if they pass it returns a 200 with a JSON doc containing the role (which is just the role Transfer assumes for SFTP), the policy (the scope-down policy from above), and the home dir.

https://aws.amazon.com/blogs/storage/enable-password-authent...
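
To make that concrete, here is a minimal sketch of such a Lambda in Python. This assumes the Lambda-backed identity provider flavor (the API Gateway flavor from the post above wraps the same fields in an HTTP 200 response), and the user table, role ARN, and bucket name are made-up placeholders:

    import os

    # Hypothetical user store; in practice you would check Secrets Manager, a DB, etc.
    USERS = {"alice": "s3cret"}

    # Per-user scope-down policy handed back to Transfer; kept deliberately short here.
    SCOPE_DOWN_POLICY = """{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
        "Resource": "*"
      }]
    }"""

    def lambda_handler(event, context):
        # Transfer passes the login attempt's username/password in the event.
        username = event.get("username", "")
        password = event.get("password", "")

        if USERS.get(username) != password:
            return {}  # an empty response means authentication failed

        bucket = os.environ.get("SFTP_BUCKET", "my-sftp-bucket")  # placeholder
        return {
            "Role": "arn:aws:iam::123456789012:role/sftp-access",  # role Transfer assumes
            "Policy": SCOPE_DOWN_POLICY,                           # scope-down policy from above
            "HomeDirectory": f"/{bucket}/home/{username}",         # jail the user to their own prefix
        }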

This touches on a larger issue with AWS, though: they tend to leave functionality out and point to Lambda as the solution. On one hand, I get it. The Lambda solution is infinitely more flexible, but what if I just wanted an SFTP server for a couple of users with user/pass?

To your final point, it is so much less maintenance. There is no server to manage, and since I want all the data in S3 anyway, it's already there. This solution replaced a server with chroots, EBS, scripts to move data to S3, etc...


How are people not building in Terraform for an easy ‘destroy’ at the end?

I know it’s rhetorical and a lesson learned myself, but yeah… I would expect folks to learn to use this tool to help manage costs this way.


terraform 'destroy' isn't infallible. There are certain resources that trigger the creation of other resources (for example, Lambda functions will create CloudWatch log groups, and DynamoDB tables create CloudWatch alarms), and when Terraform destroys the resource, it doesn't necessarily clean up all of the associated resources.
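
If you want to see what got left behind, a quick boto3 sketch like this works for the Lambda log group case (the delete is commented out on purpose, since this is just for review):

    import boto3

    logs = boto3.client("logs")

    # Log groups that Lambda created implicitly are named /aws/lambda/<function name>
    # and can outlive a `terraform destroy`. List them so the orphans can be reviewed.
    paginator = logs.get_paginator("describe_log_groups")
    for page in paginator.paginate(logGroupNamePrefix="/aws/lambda/"):
        for group in page["logGroups"]:
            print(group["logGroupName"], group.get("storedBytes", 0))
            # logs.delete_log_group(logGroupName=group["logGroupName"])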


Terraform has its shortcomings for sure, but I don't think it's reasonable to expect Terraform to go out and clean up second-order effects of its resources.

I'm not doubting that the situations you describe are true, but abandoning resources like that is an AWS-lifecycle problem, not really a Terraform one.


Sure. My point is just that `terraform destroy` doesn't necessarily solve the problem at hand. And you could still end up continuing to pay for those second-order effects after running a terraform destroy.


I am pretty senior and can thus sometimes afford to do a new thing and try to do it the right way at the same time. Not even always. Many people just take one learning curve at a time...


Typically Terraform takes longer to get something working than mindlessly clicking through the console. In my experience those mindless clickthrough things end up sticking around for years even when they weren't intended to.


This is why you have separate development and production accounts: a development account where you mindlessly click through so that you can learn through the UI what's available and how it works, cleaned up on a regular basis by something like aws-nuke; and a production account where you have the discipline to only create resources through Terraform / CloudFormation etc.


CDK is promising for things like this, way safer and easier than rolling your own scripts with bash or Python.


Even CDK sucks though, because if you're still kinda new to it all, you want to log in and make sure that it's all hooked together correctly. And you're back using their shitty UI.

Why you can't look at the load balancer, then the listener, and then see what's in the target groups is beyond me.


I have a different take on it. I started out doing everything through the console, then learned the cli and boto3, and very recently CDK.

CDK is another tool that builds on the CLI and boto3 concepts, and also manages the orchestration and dependencies.

Having to go back to the console isn't a fault of CDK. Learning the right tool to use for a situation is part of the learning curve. I go back to the console all the time to look something up quickly or to understand what config options are available. Or I repeat the same steps in the console enough times that I get bored with it and automate them.

Edit: also, I have tried and given up on CloudFormation more than once. CDK is like a wrapper layer around it, and has been pleasant to use.
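
For anyone who hasn't tried it, a CDK stack in Python is roughly this small (CDK v2; the stack and bucket names are placeholders):

    from aws_cdk import App, RemovalPolicy, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DemoStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # CDK synthesizes this into a CloudFormation template;
            # `cdk deploy` and `cdk destroy` manage the lifecycle.
            s3.Bucket(
                self,
                "DemoBucket",
                removal_policy=RemovalPolicy.DESTROY,  # delete the bucket on `cdk destroy`
                auto_delete_objects=True,              # empty it first so the delete can succeed
            )

    app = App()
    DemoStack(app, "DemoStack")
    app.synth()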


I'm not trying to be snarky here, I would like an honest opinion: how do they win on UX? Are there specific things you like there?

I’ve used all three major clouds in production now and I dread using the AWS console. Or really any part of it, over Azure or GCP.

I’ve always thought of them as purely winning the “nobody got fired for…” mindshare thing despite having a thoroughly mediocre product.


It either works great or barely at all, depending on the service you're using - some AWS teams have dedicated dashboard teams, e.g. an 'EC2 dashboard team' that focuses solely on the dashboard experience, while others touch it as an afterthought.

I'm pretty sure something along the lines of this^ was posted on HN by a former AWS employee but I can't find it now.


Their web console UIs vary widely by service. IMO, the UX for most of their simpler services - specifically SQS, S3, Lambda, DynamoDB - is really easy to use and works nicely. If you want to start with a Docker image, send it to AWS, and have it spun up and attached to a DNS name, well, that's a huge, complex mess to get set up.


Unless you use something like Terraform or Pulumi, in which case it's almost a one-liner.


The AWS default view feels crisper to me, compared to the GCP one, which feels very "floaty".

Their UI toolkit for that at least is on point.


This gives the other vendors a wedge in.



