Hacker News

I was under the impression that AWS needs to do mandatory guest power cycles in order to update underlying infrastructure (e.g. to mitigate Meltdown, Spectre, L1TF, MDS, etc.). Is that not the case? Has the instance actually been up since 2016?



I had a Windows Server 2008R2 instance running SQL Server 2008 for ten years (2008–2018) that I only rebooted myself a handful of times. It had ephemeral disks because that's all there was when it was launched, so the hardware never changed. There was exactly one unexpected reboot in those ten years. In 2008 people told me I was crazy to run SQL Server, let alone Windows, in AWS. I never had a complaint.

I eventually switched to RDS to save some money.

I have another 2008R2 server running IIS that was launched at the same time and is still in service on the same hardware.


I don't have inside knowledge of AWS, but I do know OpenStack (the open-source cloud infrastructure project), and it has been able to live-migrate guest VMs for years. I'd be shocked if AWS can't move VMs and cycle hosts to swap failing hardware or apply firmware and ring-0 hypervisor updates.
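For context, OpenStack exposes live migration through its standard CLI. A rough sketch, assuming a recent `python-openstackclient` and admin credentials; `my-server` and `compute-02` are placeholder names, and exact flags vary by client version (older tooling used `nova live-migration` instead):

```shell
# Live-migrate a running guest off its current compute host
# so that host can be drained for firmware/hypervisor updates.
# The guest keeps running; no reboot from its point of view.
openstack server migrate --live-migration --host compute-02 my-server

# Check the migration: status should pass through MIGRATING
# and return to ACTIVE once the move completes.
openstack server show my-server -f value -c status
```

This only moves the VM's memory and device state; with shared or block-migrated storage the guest's disks follow along, which is what lets an operator patch a host without customer-visible downtime.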


This is an infamous gap, and AWS has had to reboot customer instances in the past: https://www.quora.com/Why-cant-Amazon-AWS-migrate-a-live-ins...

The question is whether something has changed.



