Hacker News | waz0wski's comments

Firecracker has been running AWS Lambda & Fargate for a few years now

https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...

There's also a similar microVM project, with a bit more container-focused support, called Kata Containers

https://katacontainers.io/


Not a hypervisor expert by any means, but what's stopping projects from backporting the super-fast startup time of Firecracker into regular VM hypervisors?

I'm assuming that Firecracker is somewhat constrained in some way.


It’s written specifically to host the Linux kernel, and doesn’t use a BIOS or a boot loader. If you backported that into another hypervisor, it would probably have to be something like “are we loading a compatible Linux? If so, switch to Firecracker mode”. But of course you can do that yourself, with a small shell script that either starts the traditional VM or Firecracker.
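That dispatch could be sketched as follows (Python used here for illustration; the paths and flags are hypothetical, not a tested Firecracker or QEMU invocation):

```python
# Sketch only: choose Firecracker when the image looks like an uncompressed
# Linux kernel (ELF vmlinux), otherwise fall back to a traditional QEMU boot.

def build_vm_command(image_path: str, magic: bytes) -> list[str]:
    """Pick a launcher argv based on the image's first bytes."""
    if magic.startswith(b"\x7fELF"):
        # Looks like a vmlinux: Firecracker can boot it directly,
        # with no BIOS or boot loader involved.
        return ["firecracker", "--no-api", "--config-file", "vm.json"]
    # Anything else (disk image with a bootloader, ISO, ...): full VM.
    return ["qemu-system-x86_64", "-drive", f"file={image_path}"]
```

A real version would read the magic bytes from the image file itself; they're a parameter here just to keep the sketch self-contained.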

Or they could do what QEMU has done and put out a separate product/mode: https://github.com/qemu/qemu/blob/a082fab9d25/docs/system/i3...


That QEMU doc says:

> The recommended way to trigger a guest-initiated shut down is by generating a triple-fault, which will cause the VM to initiate a reboot

Doesn’t that mean it can’t distinguish an intentional triple fault used to trigger a reboot from an accidental triple fault caused by a guest kernel bug which corrupts the IDT? I think it would be better if there were some kind of call the guest could make to the hypervisor to reboot - one is less likely to invoke that service by accident than to triple-fault by accident.


I'm used to QEMU VMs being slow and annoying to work with due to them being full VMs, so I was quite surprised to see that this is really just as fast as Firecracker!


Can Ventura run x86 VMs on M1/M2?

I thought it was only able to run ARM VMs, which can _utilize_ Rosetta to run x86 code?

https://developer.apple.com/documentation/virtualization/run...


You're correct, there is no support for directly running non-ARM VMs.


That's a generic and well documented stack that utilizes GCP defaults and works out of the box. An "expert" should not take a month to fail to set it up.

I've deployed a similar stack, additionally including GKE, via Terraform in a day - checking the TF code for an example 3-env GCP/GKE/CloudSQL stack, it's less than 300 LoC

That said, it's not all good - my ongoing complaint with terraforming GCP is that the provider lags behind the features & config available in the GCP console - worse than the AWS provider does - especially w/r/t GKE and CloudSQL


Maybe we should have hired you instead of a "terraform expert" ;)

And yes, it was never clear which features worked in GCP but not in the Terraform GCP provider, and there was always a "this only works in beta" thing going on.


not sure why this kind of 'sky is falling' post is allowed, particularly when there's no useful information provided

- status boards are rarely up to date, make sure you have your own internal/external monitoring

- a single vm instance does not indicate the health of an entire region. instances can be stopped for a variety of reasons, check the maintenance log and vm console

- when using cloud resources, architecture should not be dependent on a single vm instance

- your cloud architecture and app code should be built to handle transient failures from the start - queueing, retries, backoffs, load-shedding, graceful state failure, etc. use cloud-native services when possible; a lift-and-shift of traditional vm architecture to the cloud can be operationally difficult and expensive to run
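As a concrete example of one of those patterns, here's a minimal sketch of client-side retry with exponential backoff and full jitter (the operation, attempt count, and delay limits are all illustrative):

```python
import random
import time

def call_with_backoff(op, attempts=5, base=0.5, cap=30.0, sleep=time.sleep):
    """Retry `op` on exception, sleeping base * 2^n seconds (capped, jittered)."""
    for n in range(attempts):
        try:
            return op()
        except Exception:
            if n == attempts - 1:
                raise  # out of retries: surface the failure
            delay = min(cap, base * (2 ** n))
            # Full jitter: randomize the delay so retrying clients
            # don't stampede the service in lockstep.
            sleep(random.uniform(0, delay))
```

The `sleep` parameter is injectable so the logic can be tested without actually waiting.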


Joke answer: Because dumping on AWS is fun!

Serious answer: People will confirm or not in the comments, but this post seems pretty reasonable, if a little trigger-happy. It could just be a random launch failure.


There's a lot that goes into a mail server stack, but it's no more complicated than k8s or other stacks these days. My preferred setup is rspamd/postfix/dovecot/roundcube. The docs are good and the mailing lists are active & archived for easy searching

For a pre-packaged mailserver environment, take a look at mailcow or mailinabox

https://mailcow.email

https://mailinabox.email/

There are a variety of Ansible/Chef/Puppet setups up on GitHub that can also be used to set up the individual components


> it's no more complicated than k8s

That's not really saying much.


I've been using matterbridge[1] to bring all sorts of 'modern' non-standard-compliant chat services back to my irc client[2]

[1] https://github.com/42wim/matterbridge

[2] https://xkcd.com/1782/


Don't let "media manager" apps have direct read-write access to files - they tend to spew metadata all over files, and if there's a bug in the software it can corrupt your data. Doubly so for an internet-facing dependency dumpsterfire like Plex. It's also worth having at least a DMZ with ingress/egress filtering for any internet-facing services such as Plex - only allow them to connect to what they need.

A filesystem which supports snapshots and rollbacks is good to have underlying your media collection as well (ZFS, BTRFS, etc)
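For example, a dated daily snapshot could be automated with something like the following (the `tank/media` dataset name is an assumption, and the `zfs` invocation is a sketch, not a full snapshot-management tool):

```python
import datetime
import subprocess

def snapshot_name(dataset: str, today: datetime.date) -> str:
    """Build a dated snapshot name, e.g. 'tank/media@2023-01-15'."""
    return f"{dataset}@{today.isoformat()}"

def take_snapshot(dataset: str) -> None:
    # Take a ZFS snapshot so a misbehaving app's writes can be rolled back.
    subprocess.run(
        ["zfs", "snapshot", snapshot_name(dataset, datetime.date.today())],
        check=True,
    )
```

Rolling back is then a matter of `zfs rollback` to the last known-good snapshot.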


I used FreeBSD on desktop for a number of years and switched over to OSX and I've been through some of the window management pain. I generally do as much as I can with keyboard hotkeys but I do have a trackpad connected to my desktop now as well.

As far as window management goes, I use Contexts for my switcher, Rectangle for hotkey-based window management, and Stay for automated per-app & per-display window management

https://contexts.co

https://rectangleapp.com

https://cordlessdog.com/stay

There are also alternate window managers for OSX, such as Yabai or Amethyst

https://github.com/koekeishiya/yabai

https://github.com/ianyh/Amethyst


This isn't a good example of an RCA - as other commenters have noted, it's outright lying about some issues during the incident, and using creative language to dance around other problems many people encountered.

If you want to dive into postmortems, there are some repos collecting other examples:

https://github.com/danluu/post-mortems

https://codeberg.org/hjacobs/kubernetes-failure-stories


The salty sysadmins will note that Microsoft has a track record of botched patches to things like this, with effects ranging from not-remediated to majorly breaking other OS components

