It feels like they used the groundwork laid with AWS Outposts to enable smaller auxiliary Local Zones as well: in both cases the control plane still resides in the "real" region, but the data plane for a Local Zone or an AWS Outpost is located somewhere else (a customer data center, or, in the case of this Local Zone, somewhere in LA). I'd even bet that the hardware they use to power AWS Outposts and a Local Zone like this is the same. Of course the scale and the offered services are different.
There are definitely similarities, but I could see it being totally different too. Outposts is only 16 racks max; I'd assume this is much more than that. Outposts is for a single user and this is for everyone. Billing models presumably differ. Outposts goes in the user's DC, whereas I assume this is hard-wired into the AWS network backbone.
IDK if making these based on the same infra would be the right abstraction layer.
This could also be big for web UIs that leverage server-side rendering. I've been ramping up our use of Blazor Server for web interfaces, and putting an application server in one of these Local Zones near where we all work/live could have a really positive impact on perceived performance.
Right now, I ping ~50ms out to us-east-1 and things feel "pretty good" in our server-side web UIs. If I could drop that by a factor of 10, we'd be getting into gaming-monitor latency territory, and pure UI state changes could be resolved in timeframes that would be perceptually instantaneous for most users. For things like clicking a button to pop a modal, you wouldn't even worry about trying to make it a client-side interaction anymore. You'd just wire it up with a trivial @if (showModal) block in the server-side HTML page template.
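Concretely, something like this minimal sketch (plain Blazor Server, hypothetical component and field names):

<button @onclick="() => showModal = true">Settings</button>

@if (showModal)
{
    <div class="modal">
        <p>...</p>
        <button @onclick="() => showModal = false">Close</button>
    </div>
}

@code {
    // Lives entirely in server-side component state; every click is a
    // SignalR round trip, so what the user perceives is your RTT to the zone.
    private bool showModal;
}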
Granted, this imposes a pretty harsh geographic constraint if you have just one server, but it is likely feasible to separate the view layer from your persistence/stateful layers, so you could host your view-rendering services in multiple Local Zones, with all the business logic and state kept in one of the primary regions. Not all things can always be instantaneous, but if the UI is highly responsive there are countless UX approaches for indicating to a user in a friendly way that they simply need to wait for a moment. Being able to build your web UI around blocking calls into business logic seems like a powerful place to be in terms of simplicity and control.
AWS seems to want to discourage the use of us-west-1, which is in San Francisco AFAIK? us-west-2 probably has a lot more capacity. Historically it's tended to be cheaper, too.
Is there a specific reason us-west-1 is more congested? Is it somehow more desirable, or is it just what people pick because they're on the west coast, and it's first on the list?
us-west-1 only has two availability zones available to new customers (there are three total, but one of them only runs legacy customers). In my experience it is also slower to get new hardware and services.
My personal belief is that running that "region" is far more expensive than running their other regions. Real estate in the Bay Area is expensive, as is electricity.
It's unlikely to be just San Jose unless there's only one AZ (and there are two, plus one for legacy customers only, I thought). Each AZ is customarily separated from the others by at least a few miles.
us-west-2 seems to be the anchor region on the west coast - more AZs and capacity than us-west-1, and nearly identical pricing and instance-class availability to us-east-1.
I was wondering the same thing. Seems like an odd choice.
My hunch is that us-west-2 is more popular (literally every company I've worked for that used, or currently uses, AWS chose us-west-2 over us-west-1, even when us-west-1 was geographically closer).
Probably because you'd want to move the non-latency-sensitive parts of your app to a large, cheap region anyway, and us-west-2 is both larger and cheaper.
ping -c 5 70.224.224.253
PING 70.224.224.253 (70.224.224.253): 56 data bytes
64 bytes from 70.224.224.253: icmp_seq=0 ttl=235 time=16.018 ms
64 bytes from 70.224.224.253: icmp_seq=1 ttl=235 time=13.120 ms
64 bytes from 70.224.224.253: icmp_seq=2 ttl=235 time=23.026 ms
64 bytes from 70.224.224.253: icmp_seq=3 ttl=235 time=13.656 ms
64 bytes from 70.224.224.253: icmp_seq=4 ttl=235 time=15.517 ms
Orange County, Spectrum cable. I used to get sub-10ms to us-east when I lived in Fairfax County, VA.
> I used to get sub-10ms to us-east when I lived in Fairfax County, VA.
AWS East is in Ashburn, VA, just up Route 28 -- a stone's throw from Dulles Airport. Not sure if they're still in the Dupont Fabros buildings or the Equinix campus.
Who's excited about this? What's your use case that just became viable because of it? Definitely don't mean these questions in a condescending way, just want to get a read on the pulse from the folks here that will use it :)
All of my Asian bandwidth comes through LA or Vancouver. Big, cheap peering with other telecoms to cross the ocean. We already have a decent colo presence in LA; this will allow us to consolidate some of that and move other VMs.
My understanding is that things like online games could take advantage of it, for the latency. Anything that has high latency concerns would be made better by having a closer endpoint.
DreamWorks is in Glendale and has been experimenting with cloud rendering, last I heard. It's always been difficult due to bandwidth and latency bottlenecks.
It's mostly higher bandwidth from being closer to the source. Latency is definitely improved, but so is bandwidth if you are peered in the same exchange. Peak bandwidth is going to be much higher, especially if you are pulling/pushing north of 10G.
I used to work for smilebooth, a portable photo/video-booth thing. I could see this in theory being used for real-time video processing like greenscreen? That said, it's not that hard to build greenscreen into the device itself (we did).
But if it needs to do super-high-quality real-time 4K greenscreen that would overload an embedded CPU, and there's a built-in AWS library that does it, and the bandwidth is high enough, and you don't want to buy dedicated greenscreen HW sitting next to your booth (which is the other thing we did), then maybe.
An AWS region consists of several availability zones (AZs), which in turn consist of several data centers running AWS's hardware. Each region is designed so that the services it provides can tolerate the loss of an availability zone. A Local Zone is now something like an additional availability zone, with the important difference that it only runs a subset of the services of a regular availability zone and doesn't feature its own control plane (the services AWS needs to run all this infrastructure, including API endpoints, etc.); the control plane stays in the parent region. The Local Zone itself just runs the so-called data plane, which contains the services used by customers.
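To make that relationship concrete, here's a rough sketch with boto3 (the LAX zone and group names are taken from the announcement; treat the exact output as illustrative):

import boto3

# The control plane lives in the parent region, so you talk to us-west-2.
ec2 = boto3.client("ec2", region_name="us-west-2")

# Local Zones are opt-in, per zone group.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1", OptInStatus="opted-in"
)

# The LAX zone then shows up next to the regular us-west-2 AZs.
zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
for z in zones["AvailabilityZones"]:
    print(z["ZoneName"], z["ZoneType"], z["OptInStatus"])
# e.g. us-west-2a        availability-zone  opt-in-not-required
#      us-west-2-lax-1a  local-zone         opted-in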
The blog post contains a list. Quote: “Services – We are launching with support for seven EC2 instance types (T3, C5, M5, R5, R5d, I3en, and G4), two EBS volume types (io1 and gp2), Amazon FSx for Windows File Server, Amazon FSx for Lustre, Application Load Balancer, and Amazon Virtual Private Cloud. Single-Zone RDS is on the near-term roadmap, and other services will come later based on customer demand. Applications running in a Local Zone can also make use of services in the parent region.”
I also think of Ohio as being in the middle of the country, so I was surprised to find out that Columbus (where us-east-2 is located) is only about 300 miles from Washington DC, but over 2000 miles from San Francisco.
This would be exciting, but I'm already having trouble managing subnets, and this just makes it worse. If these new Local Zones support IPv6, that would make it much easier to plan and support new networks.
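The planning pain is mostly an IPv4 problem; a toy illustration in Python (documentation prefix, purely illustrative):

import ipaddress

zones = ["us-west-2a", "us-west-2b", "us-west-2c", "us-west-2-lax-1a"]

# IPv4: one /24 per zone out of a /16 VPC block - these have to be budgeted.
v4_subnets = ipaddress.ip_network("10.0.0.0/16").subnets(new_prefix=24)

# IPv6: AWS assigns the VPC a /56 and each subnet gets a /64 - effectively
# inexhaustible, so adding another zone never forces a re-plan.
v6_subnets = ipaddress.ip_network("2001:db8:1234:ab00::/56").subnets(new_prefix=64)

for zone, s4, s6 in zip(zones, v4_subnets, v6_subnets):
    print(f"{zone}: {s4} / {s6}")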
It’s extremely intuitive: it’s part of the ‘us-west-2’ region, but an extension in ‘lax’, and ‘1a’ is the first availability zone there.
Your suggestion makes no sense: it implies an entirely new region, ‘us-west-3’, but with an entirely different naming scheme (no region names end in an ‘a’), and the ‘local’ gives zero context and leaves no room for expansion.
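And because it's named like any other zone of us-west-2, existing tooling works unchanged; a quick boto3 sketch (IDs are made up):

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# A subnet in the Local Zone is created exactly like one in a regular AZ.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",        # hypothetical VPC in us-west-2
    CidrBlock="10.0.64.0/24",
    AvailabilityZone="us-west-2-lax-1a",  # the Local Zone, addressed like an AZ
)

# Launch one of the supported instance types into it.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="t3.medium",             # T3 is in the launch list
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)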