> Yes, there was a mandate for everything to be SOA. However, AWS built almost every service from scratch.
I think it's more true than false, though; tl;dr: there was effectively a super-fugly set of internal services that were the prototypes for many of the real AWS services.
Some of the computing capacity AWS itself used was built on the internal services that Amazon had been using. Provisioning capacity on those internal services took weeks, as the API was very primitive compared to EC2. I also think it wasn't until VPC became quite stable that they could start moving Amazon onto AWS.
So for a long time, you essentially had AWS service teams filing paperwork for capacity (decommissioning capacity could sit in a queue for months, just like in a regular data center) while AWS customers could spin up capacity via EC2. As you can imagine, AWS would build on AWS, and then they promptly ran into problems when they had to stand up new regions because of dependency cycles... There was also a ton of tooling built around requesting all that capacity and its dependencies, and maintaining the configuration for all of it.
So the SOA story is really that there were a bunch of AWS-like services that were absolutely awful, and those were often the prototypes for the AWS services themselves. And many services like Lambda are now possible because they've been solving all the obscure security issues and dependency problems that come with AWS being its own customer.
For instance, the genesis of one service was that a large customer was using EC2 and had some specialized hardware; they were big enough that they said, "hey, we'd like to move this hardware out of our datacenters," and AWS stood up an incredibly primitive service that literally routed some specialized racks into that customer's VPCs. There was no API; you'd email the service team and they'd run some scripts. That has gradually morphed into a real service that's unrecognizable from the early days.