A Second Conversation with Werner Vogels (acm.org)
120 points by mpweiher on March 7, 2021 | 13 comments



I had the pleasure of working with Werner on many occasions, from my early AWS days as a Tech Evangelist in Europe, then in Asia, then in America, from 2008 to 2014, when I left.

He has always been one of the nicest people I've met at Amazon (and I've met a few like that! They aren't rare), and he helped me a few times over the course of my career while I was there.

I can safely recall a particular memory to illustrate how nice Werner is: one evening, back in 2011, after a very successful event in Australia (here's one of my videos [0] of that event, after being introduced by him as "the magician" [1] - something I didn't expect at all!), a few of us, including the head of AWS Asia-Pacific, the head of marketing, etc, were in his hotel room celebrating the event with a glass of wine.

Out of the blue, without any need to do it and without me asking, he decided to praise my work in front of them, and even humbly suggested that he was going to take inspiration from a couple of things I did during the past few presentations. I've rarely seen such humility in someone so accomplished.

Little did he know that I was going through a tough period where I didn't feel fully recognized or rewarded for my work, and his praise helped me significantly as I felt much better about it, and possibly my bosses saw me in a different (better) light.

I think it was the first piece of unexpected praise I had received in my life, and that's probably why I remember it so fondly. My job was high pressure, and at that point I had been at AWS for more than 2 years, worked crazy hours, and had my fair share of tough moments in my career (mostly with PR, but that's for another time).

I even considered the idea of working for him, but in those years he didn't want to manage an organization, because he didn't want to handle the related stress, so he stayed an individual contributor, and I believe he still is to this day.

In any case, I have many great memories of him, both as a human being and as a colleague. Amazon is lucky to have him, and I am lucky to have worked with him.

[0]: https://www.youtube.com/watch?v=vJ6XvQ94UnM

[1]: https://www.youtube.com/watch?v=7BZoq_NrwTw&t=6m18s


Thank you for sharing this awesome personal story.

And not just because it was nice to read, but also because it reminds those of us in leadership roles that giving sincere praise to a colleague costs us virtually nothing and yet has lasting positive impact.

Here we are years later, and this anecdote you've shared will no doubt inspire some of us to follow Werner's example.

And so the ripple effect continues.


> A few things around S3 were unique. We launched with a set of ten distributed systems tenets in the press release. The ten tenets were separate from S3, stating that this is how you would want to build distributed systems at scale. We just demonstrated that S3 is a really good example of applying those skills.

In my own naive way, I believe that AWS today probably embodies James Hamilton's vision [0] far more than most of those original 10 tenets.

-

> One of the keys to the success of S3 was that, at launch, it was as simple as it possibly could be, offering little more than GetObject and PutObject. At the time that was quite controversial, as the offering seemed almost too bare bones... most technology companies at the time were delivering everything and the kitchen sink, and it would come with a very thick book and 10 different partners that would tell you how to use the technology. We went down a path, one that Jeff [Bezos] described years before, as building tools instead of platforms.

Most folks don't realise it, but starting somewhere around 2002, "Web Services" (SOA, UDDI, WSDL, etc.) was really hot, and every company, including Yahoo!, Google, and Microsoft, was betting big on it. Amazon simply turned around and took a radically different approach to it all. This explains why Azure's initial push in 2008 [1] and Google's attempts with App Engine [2] fell relatively short, even after AWS' IaaS play.
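
The original S3 surface really was that tiny. A toy in-memory sketch of that two-call interface (purely illustrative; names only mirror S3's GetObject/PutObject, this is not an AWS client):

```python
# A minimal object store exposing just the two calls S3 launched with.
# In-memory stand-in for illustration; real S3 access would go through
# an SDK such as boto3.

class MiniObjectStore:
    def __init__(self):
        self._objects = {}

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        """Store a blob under (bucket, key); overwrites silently, like S3."""
        self._objects[(bucket, key)] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        """Fetch the blob back; raises KeyError if it was never stored."""
        return self._objects[(bucket, key)]

store = MiniObjectStore()
store.put_object("demo-bucket", "hello.txt", b"hello world")
print(store.get_object("demo-bucket", "hello.txt"))  # b'hello world'
```

The point of the sketch is how little surface area there is: everything else S3 later grew (listing, versioning, lifecycle rules) layered on top of this pair.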

-

> We needed to build the right tools to support that rate of radical change in how you build software... you have to work with your customers, to wait to see how they are using your tools and see what they do. So, we sat down and asked, "What is the minimum set?"

I think this might be one of the key differentiators in how AWS thinks about the cloud versus the rest. AWS v1 launches are sometimes laughably inadequate, bare, and buggy (and in some cases remain so), but often the products take off in all sorts of directions as feedback trickles in and the roadmap is reshaped in response to it. James Hamilton talks about this as he contrasts AWS' approach with "other IT companies" (which, I like to think, is him talking about his former employer, Microsoft). [3]

-

> You have to be really consciously careful about API design. APIs are forever. Once you put the API out there, maybe you can version it, but you can't take it away from your customers once you've built it like this.

GCP's triggered? On a serious note, I was part of a team that "deprecated" a public-facing AWS API (an entire service, really) and the world didn't even notice, because the team live-migrated all customers to a newer API (service), grandfathered in their (cheaper) pricing model, left all their historical artifacts intact, ran both older and newer deployments side by side just in case, and so on... A painfully slow process for engineers, but I guess customers must love it?
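
The shape of that side-by-side pattern can be sketched in a few lines: keep the old API's surface alive but serve every call from the replacement service, so callers never notice the backend changed. All names here are hypothetical, not any real AWS API:

```python
# Sketch of a deprecation-by-live-migration pattern: the old method names
# survive as a thin shim that delegates to the new service underneath.
# Hypothetical names throughout; illustrative only.

class NewService:
    """The replacement backend that actually does the work now."""
    def fetch_item(self, item_id: str) -> dict:
        return {"id": item_id, "source": "new-service"}

class LegacyApiShim:
    """Presents the old API's contract unchanged while the old backend
    is retired, so existing callers keep working with zero changes."""
    def __init__(self, backend: NewService):
        self._backend = backend

    def GetItem(self, item_id: str) -> dict:  # old-style call, preserved
        return self._backend.fetch_item(item_id)

shim = LegacyApiShim(NewService())
print(shim.GetItem("abc123"))  # {'id': 'abc123', 'source': 'new-service'}
```

The engineering cost lives entirely in the shim and the migration, which is exactly why it feels painfully slow from the inside and invisible from the outside.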

-

> Being conservative and minimalistic in your API design helps you build fundamental tools on which you may be able to add more functionality, or which partners can build layers on top of, or where you can start putting different building blocks together.

Here's a fun presentation from re:Invent 2014 on the distributed systems primitives (aka "building blocks") that AWS engs built early-on for S3, EC2, DDB etc: "Under the Covers of AWS", Al Vermeulen and Swami Sivasubramanian, https://www.youtube.com/watch?v=QVvFVwyElLY

-

> ...in December 2004, we decided to take a deep look at how we were using storage, and it turned out that 70 percent of our usage of storage was key-value. Some of those values were large, and some were really small. One of those drove in the direction of Dynamo, in terms of small keys, a table interface, things like that, and the other one became S3, with S3 more as a blob and bigger value store, with some different attributes.

Hm, storing really small objects in S3 is an anti-pattern to this day (look at how slow the List operation becomes once you fill your buckets with them) [4]; we walk into this landmine all the time.

-

> I think that much of this conversation is going to be about evolvability... "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."... It doesn't hurt that much if you make long-term decisions on very simple interfaces, because you can build on top of them. Complex systems are much harder to evolve.

Demonstrated by AWS designing simpler systems from scratch because they couldn't evolve the existing complex ones: SDB and DDB; CF and CDK; SWF and Step Functions, to name a few. One of the few times Joel's Law [5] is justifiably worth breaking.

-

> ...mostly a back-and-forth with our customer base about what would be the best interface—really listening to people's requirements. And to be honest, immutability was a much bigger requirement than having a distributed lock manager, which is notoriously hard to build and operate. It requires a lot of coordination among different partners, and failure modes are not always well understood. So, we decided to go for a simpler solution: object versioning.

This is, again, in my eyes one of the unique aspects of Amazon's approach to acting on customer feedback: management tends to favour inventing simple solutions that can be built with the fewest possible resources [6][7].
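
The trade Werner describes can be made concrete: instead of coordinating writers through a lock manager, let every put append an immutable new version, so concurrent writers can't clobber each other and readers can pin the exact version they saw. A minimal in-memory sketch of that versioning semantics (illustrative; real S3 versioning returns opaque VersionIds rather than integers):

```python
# Versioning in place of locking: puts never overwrite, they append,
# and gets default to the latest version or pin an older one.
from collections import defaultdict

class VersionedStore:
    def __init__(self):
        self._versions = defaultdict(list)  # key -> list of bodies

    def put(self, key: str, body: bytes) -> int:
        """Append a new immutable version; return its version id."""
        self._versions[key].append(body)
        return len(self._versions[key]) - 1

    def get(self, key: str, version=None) -> bytes:
        """Latest version by default, or an explicitly pinned one."""
        history = self._versions[key]
        return history[-1] if version is None else history[version]

store = VersionedStore()
v0 = store.put("report.csv", b"draft")
store.put("report.csv", b"final")
print(store.get("report.csv"))      # b'final'
print(store.get("report.csv", v0))  # b'draft'
```

Note what disappeared: no coordination between writers, and the failure modes reduce to "you read an older version", which is far easier to reason about than a stuck lock.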

-

> When we designed S3, one of its powerful concepts was separating compute and storage.

Separating compute and storage is how Amazon Aurora came about as well. I wonder if Lambda then is about separating compute and the CPU (server utilization, to be precise) [8]?

-

> ...the team "is completely responsible for the service—from scoping out the functionality, to architecting it, to building it, and operating it. You build it, you run it. This brings developers into contact with the day-to-day operation of their software."

Interesting contrast with SRE orgs. That said, a lot of new joiners I knew at Amazon weren't really happy in eng teams where the operations load was heavy, because they felt they hadn't really built those services. That's the only downside I can think of. Werner's right, though, that "you build it, you run it" does inculcate in everyone a deep empathy for their customers' pain points. [9]

----

[0] https://mvdirona.com/jrh/talksAndPapers/JamesRH_Lisa.pdf

[1] https://www.zdnet.com/article/microsofts-azure-cloud-platfor...

[2] http://highscalability.com/google-appengine-first-look

[3] https://perspectives.mvdirona.com/2016/03/a-decade-of-innova...

[4] https://news.ycombinator.com/item?id=19475726

[5] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

[6] https://www.google.com/books/edition/Working_Backwards/jgn5D...

[7] https://news.ycombinator.com/item?id=26180074

[8] https://youtu.be/dInADzgCI-s?t=541

[9] https://news.ycombinator.com/item?id=26252902


All this talk about simplicity, but still many AWS services are way more complicated than they should be. Fargate is one example: to just run a container, I need an ECS cluster, a Service, a Task Definition, etc. Compare this to Docker Compose. Compare it to Heroku.

To make it flexible, it's been made very complicated. I know there are tools and CLIs that take this pain away, but even if you deploy with a tool, you still have to deal with the complexity when you debug problems in production.


> All this talk about simplicity but still many AWS services are way more complicated than they should be.

Agreed. It isn't that AWS hasn't gotten anything wrong; there are plenty of services languishing in mediocrity or worse (ever seen anyone recommend Flexible Payments Service over Stripe, or CloudFormation over Terraform?).

For services that are popular, AWS favours iteration: there's always a tension between what to leave out and what to build, but as Werner explains, AWS prefers to let its customers drive the roadmap rather than inventing everything all at once, which makes it too difficult to change course later (see: SimpleDB vs DynamoDB) in an industry that changes very fast.

Also, simple and easy don't mean quite the same thing :) but I get your gripe. That's why I believe platforms like Cloudflare Workers, Firebase, Heroku, Fly.io, and StackPath EdgeEngine will thrive despite the existence of the larger cloud providers.


> I wonder if Lambda then is about separating compute and the CPU

I think Lambda is more like the glue that sticks all these separate services back together.


The First Conversation was back in 2006, when he was interviewed by Jim Gray:

https://queue.acm.org/detail.cfm?id=1142065


> don't lock yourself into your architecture, because two or three orders of magnitude of scale and you will have to rethink it.

A statement that is already worth a long article. Glad to see an example of straightforward communication!


Getting a 500 error, sad. I wonder if this HN link overloaded it.



The size of the AWS codebase makes me wonder whether the successor to Amazon will write its own code, or will build an AI to write it for them.


The tech giants in China are writing all this same code again.


lol!



