Bottlerocket – Minimal, immutable Linux OS with verified boot (bottlerocket.dev)
318 points by akyuu | 63 comments



This seems to still be very much an AWS/Amazon project with no clear path to becoming its own independent thing. For example, you want vulnerability scanning on the OS? Well you can use an Amazon product for that, otherwise *shrug* [1]. So I guess as long as you plan to run Bottlerocket in AWS, you're fine.

I wish the Bottlerocket team would do one of two things: either own up that this is just an AWS project, or start solving for things like this and actually be a product that "runs in the cloud or in your datacenter", as they suggest on their website.

[1] https://bottlerocket.dev/en/faq/#4_2


To be fair, I think vulnerability management ("VM") on the OS for Flatcar / Bottlerocket / CoreOS is not a requirement in the same way it is on RHEL etc.

Do you want to know if you are patched? Are you running the latest version? If so, you have all the available patches.

I appreciate this can cause difficulties in some regulated domains because there's a "VM" box that needs to be ticked on the compliance worksheet.

Most of the reason we need VM on a "traditional" OS is to handle the fact that they have a very broad configuration space and their software composition can be - and often is - pretty arbitrary (incorporating stuff from a ton of sources / vendors and those versions can move independently).

But that's not how you're supposed to use a container OS.

If you do "extra work" to discover vulnerabilities in "latest", you are not really doing the job of a system owner (whose job is to apply patches from upstream in a timely fashion), you are doing the work of a security researcher.


If you’re interested in something that isn't AWS, check out Talos: https://www.talos.dev/

It’s been around longer than Bottlerocket


It's not like something is stopping one from doing a vuln scan, right? Like, you'd have something that SSMs in (or uses the admin container) and then runs the scan. Couldn't you just do the same thing?
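Something like this hypothetical flow, I mean (the instance ID is a placeholder, and it assumes the admin container has been enabled):

    # open a session into the Bottlerocket control container via SSM
    aws ssm start-session --target i-0123456789abcdef0
    # hop from the control container into the admin container
    enter-admin-container
    # get a root shell on the host itself
    sudo sheltie
    # ...then point whatever scanner you like at the host filesystem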

Genuine questions, I don't know if this is the case or not.


That's a good point. And it sounds like it would work to me as well. I don't know the answer either.

I guess my point is the project should be providing a clear path that doesn't involve AWS instead of just stopping short.


I just wrote a post on this. We have an eBPF + SBOM based security tool, and it works great here because it hooks the kernel directly via a Kube DaemonSet: https://edgebit.io/blog/base-os-vulnerabilities/

tl;dr: Amazon prioritizes patching really well, fixing real issues first


Why would one need vulnerability scanning on an immutable OS image? It can be done before deploying the image to the host machines.
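Something like this in CI, for example (Trivy is just an illustrative scanner here, and it assumes you can extract the image's root filesystem first):

    # scan the OS root filesystem before the image ever reaches a host
    trivy rootfs ./extracted-bottlerocket-root/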


Indeed, but it's just an example. Imagine it said "For example, you want Feature X on the OS? Well you can use an Amazon product for that, otherwise shrug" instead if it makes it easier.


I missed that it was just an example. That’s a fair concern otherwise. Thanks!


Bottlerocket does not offer a FIPS mode, unlike most other enterprise *nix distributions.

Just to save the trouble for anybody who needs FIPS-approved encryption on host OSes for various compliance programs at work: this makes Bottlerocket a non-starter for us. A very active issue has been open on this for over two years, and the dev team doesn't seem convinced that it's important. We even communicated with the dev team through our dedicated AWS reps, and they have no interest in adding this.

Here is the open 2+ year thread on this: https://github.com/bottlerocket-os/bottlerocket/issues/1667


Disclosure: I work for Amazon. I’m also the principal engineer for Bottlerocket.

FIPS support continues to be the top customer ask by a wide margin. Unfortunately the timing here is not kind for a new distro with no previous FIPS offering. New FIPS 140-2 certifications are no longer available, and new FIPS 140-3 certifications have to make it through a lengthy queue as the entire industry switches over.

If this were something the dev team could just power through, I assure you it would have happened by now. I apologize for giving the impression that it’s not important. It is, but that doesn’t help the timeline in this case.


In my experience, FIPS certification usually requires some changes that undermine security.

If you need it, then you need it, but having the certification is a mildly bad sign in my opinion.

I’m not the only one with this opinion. For instance, the Microsoft Windows team seems to agree:

https://techcommunity.microsoft.com/t5/microsoft-security-ba...


In the GitHub issue, there is a mention of replacing rustls and Go's crypto library with OpenSSL. That seems like a serious security downgrade.


Having been around a bunch of former-government people and bumping into FIPS myself a few times (like yubikeys) and reading about it, that's also been my sense, but it's nice to see a formal writeup with examples.

Thanks for the link.


It makes no sense if your goal is "have the most secure system feasible under your resource constraints and usage requirements", which is a reasonable goal.

However, the whole FIPS (and USG compliance in general) mindset is not that; the goal instead is "be aware of ways in which your system is known to not be secure". The idea that a known flaw is better than a less-known fix is infuriating to devs, but from a business standpoint it makes some sense.


That might make sense, but I’ve never seen it with FIPS.

The only changes I’ve seen them force were ones that weaken or remove defenses against known attacks.

I’ve never seen them require additional standard defenses, or identify and propose fixes against attacks that were not already considered and addressed by the existing system.


That's... what I said. It's not about making systems more secure but about documenting known insecurities.


"FIPS is the answer to the question, 'How can we force all cryptographic software to be approved by a government committee?'" [1]

Here's your reminder that even if FIPS itself isn't evil, it moves in evil circles with evil people. [2]

[1] https://twitter.com/matthew_d_green/status/41279364233232384...

[2] https://threadreaderapp.com/thread/1433451378391883782.html


This looks very interesting, but as other commenters pointed out, the path to running it yourself seems to be obscured. Even the GitHub repo is only linked from the main page.

I found the VMware instructions at https://github.com/bottlerocket-os/bottlerocket/blob/develop...


Very similar to CoreOS's[1] direction

[1] https://fedoraproject.org/coreos/


And Flatcar Linux, derived from CoreOS https://www.flatcar.org/


Forgive me for the dumb question, but what are the benefits of CoreOS over the alternatives, e.g. Alpine?


Alpine isn't immutable, which opens it up to more user error and security issues by allowing changes to its system partition.

We run immutable container hosts in production because we want to minimize the level of admin interaction. Basically it goes like this: idempotent Terraform setup of VMs with an immutable Linux server OS, running containers.

We even disabled login on these in production, only keep it enabled in staging. All changes are tested in staging. If anything happens in prod, instead of logging in and making manual changes we just revert to an earlier state.

There is less need to configure files and services on the OS when everything runs in a container. You set it up once and start the VM.
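To make the Terraform part concrete, the loop is roughly this (variable name and version are made up; the image version is the only thing that ever changes):

    # roll every VM onto a new immutable image instead of patching in place
    terraform plan  -var 'image_version=v1.2.3'
    terraform apply -var 'image_version=v1.2.3'
    # rollback = apply again with the previous version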


How does ostree compare with the A/B partition scheme used by Bottlerocket for updates?


Neither "Get Started" nor the FAQ tell me how to run this.



The site should probably say somewhere that this was built for AWS and AWS only.

Instead it says:

> Bottlerocket is installed as the base operating system on the machine or instance where your containers themselves are running.

> Bottlerocket runs in the cloud or in your datacenter.



On one hand, an ncurses tool to install to a disk seems appropriate. On the other hand, the number of times one of these images would be configured for a company is probably pretty small.

I’ll have to spend a bit more time on it, but this seems like a nice option for orgs that want to run on-prem (i.e. not in the cloud) and have a low-maintenance container host.


1. Sign up to AWS with your credit card.

2. Easily spin up as many instances as you need!


> 2. Draw the rest of the owl!

Is this available as an AMI I can use when launching an EC2 instance? If so, how do I specify which container or containers it should run? Do I paste a docker-compose.yaml file into the User Data field in the EC2 launch wizard? Do I send configuration to a certain reserved port with a specially authed HTTP POST? About the only thing I know atm is that I can’t use ssh until a container is deployed.


Yes, it's listed in the Community AMIs section. It's more common to use this alongside Elastic Kubernetes Service or other similar AWS services though, where you can opt to use Bottlerocket as the host OS during configuration.

https://aws.amazon.com/bottlerocket/faqs/#Using_Bottlerocket


OK cool, thanks for that information, but I do wish that someone would explain the mechanism for deploying this OS. Like, if it’s part of an ECS/EKS scheme I’ll tolerate some magic, but at the end of the day I’m a curious person and I’d like to know the mechanics behind how my software is getting deployed. In general, if I personally can’t deploy something to EC2, I feel weird about trusting higher-level abstractions to do something I don’t know how to do myself.


Well, the link I provided references the Bottlerocket docs, which explain the control container and the admin container, and also how you can configure Bottlerocket via the User Data field when launching it as an AMI. All the information appears to be in the docs:

https://github.com/bottlerocket-os/bottlerocket/blob/develop...
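For the EC2-direct route, a rough sketch of what that looks like (untested; the AMI ID and cluster values are placeholders):

    # Bottlerocket takes its settings as TOML via EC2 user data
    cat > userdata.toml <<'EOF'
    [settings.kubernetes]
    cluster-name = "my-cluster"
    api-server = "https://EXAMPLE.eks.amazonaws.com"
    cluster-certificate = "BASE64-CA-BUNDLE"
    EOF

    aws ec2 run-instances \
      --image-id ami-PLACEHOLDER \
      --instance-type m5.large \
      --user-data file://userdata.toml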


Thank you! I’ll check this out, I’d like to try integrating bottlerocket into some of my Terraform workflows.


Great project, but it's been around since 2020: https://aws.amazon.com/about-aws/whats-new/2020/08/announcin...


Is anyone successfully using this outside of AWS?


Just one data point, but I wasn't able to get it to work on Hetzner Cloud.


This seems really useful for stuff like AMD SEV-SNP, where we want a measurement of the (kernel + initrd + arguments) to guarantee certain behavior from the machine. Ideally, we could use this as the container hypervisor, and have it produce attestations that bind to the hashes of the running containers. This relies on not having container escapes; not sure what the state of the art on that is right now.


I'd much rather go with CoreOS. Is there anything they've open sourced that's widely used outside of AWS?


The website says that the OS does not have a shell. I cannot imagine a useful Docker container without at least one shell script inside. So, if there is no shell, doesn't that mean Bottlerocket is generally unusable except in niche scenarios?


The docker containers can have shell scripts inside. The host machine doesn't have a shell. You can bring a docker container with a shell, and run it privileged, to have a shell on the host machine.
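A common pattern for that last part, sketched from memory (not Bottlerocket-specific, and the image choice is arbitrary):

    # privileged container sharing the host PID namespace, then
    # nsenter into PID 1's namespaces to get a shell on the host
    docker run --rm -it --privileged --pid=host alpine \
      nsenter -t 1 -m -u -i -n sh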


You can also launch an admin container and type `sudo sheltie` in it to get a root shell on the bottlerocket host OS if you need to debug things.

We've been using Bottlerocket together with its update operator on K8s for about a year now and we are really happy with it as it solves patch management by swapping out an immutable host OS image instead.
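For anyone curious, the manual equivalent of what the update operator automates is roughly this, from a root shell on the host:

    # check for a new image, write it to the inactive partition, reboot into it
    apiclient update check
    apiclient update apply
    apiclient reboot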


Containers which contain shell scripts also contain the shell itself. It is not typical for the host machine's shell binary to be made available to containers running on the host.


The idea with Bottlerocket is that the host itself does not have a direct shell nor a way to access it via SSH or any other method. Instead this responsibility is delegated to the admin container which is where you would actually connect to via SSM/SSH. From here if you needed a root shell you would use the `sheltie` utility to do so.


It's not uncommon to have Docker containers without a shell, for security reasons. For example, distroless images.


Or scratch containers, which work fine if you have a toolchain that can easily do static linking (Go or Rust, for example).
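For example, the Go version of that, sketched (names and tags are arbitrary):

    # build a fully static binary so it can run in an empty base image
    CGO_ENABLED=0 go build -o app .
    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY app /app
    ENTRYPOINT ["/app"]
    EOF
    docker build -t app:static .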


Here's a diagram of the subsystems. Bottlerocket has an API that can be called from a shell in a container.

      shells
        |
    containers
        |
    Bottlerocket
        |
     OS kernel
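In practice that API is reached with the bundled apiclient tool, e.g. from the control container (the motd example is straight out of the docs, if I remember right):

    # read current settings, then change one through the API
    apiclient get settings
    apiclient set motd="hello from the API"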


Yes, it's for worker nodes in orchestration services (K8s, ECS).


Verified boot?


It means there is a full trusted boot chain from the TPM to loading the immutable root filesystem: https://github.com/bottlerocket-os/bottlerocket/blob/develop...

Regular Linux distributions don't have this, even if Secure Boot is enabled: https://0pointer.net/blog/brave-new-trusted-boot-world.html


I still don't understand why people are so keen to shoot themselves in the foot and make everything sandboxed containers with virtual filesystems and networks.

Just use the damn OS and hardware directly. SSH into the host whenever you need to see how things are performing.

Kubernetes only works so long as you don't really care about resources being used well.


Reproducibility, automation, scaling.


20 years ago I managed thousands of machines through ssh and was able to keep them all on the same setup.

Nowadays I see people spend man-years developing tools to ensure consistent deployment on 10 machines. Not only do the tools not even work, they take months to land a change that could be done manually in two minutes.


Bet you didn't do canary or blue/green deploys, or deliver automated telemetry data, or guarantee resource quotas, or provide network-attached storage, etc. etc.


Because Kubernetes is both an abstraction and a de facto standardized platform across infrastructure providers. All deployments to customers who are large institutions start with provisioning, OS alignment (there are huge differences between RHEL, SLES, Debian, and Amazon, with customisations like hardening put on top), networking, storage, and access rights. You don't want to deal with that from scratch each time; it costs both time and money. Dealing directly with hardware is long gone (Hyper-V and VMware), and now it's time to cut out the upper layers. Also, Kubernetes allows better resource utilisation and scaling.


> SSH into the host whenever you need to see how things are performing

These SSH people are how EC2 instances get hacked. Please stop and use telemetry.

There's also the SSM agent for people who feel the need to touch their instance.


While I understand the appeal of SSM, in practice it's just sub-par tooling.


How does this compare to Nix?


Nix has no security guarantees nor sandboxing primitives, so it's not really comparable.



I think Nix's intention is to be a more general-purpose OS and tooling. Bottlerocket is about being just enough of an OS to run containers, and that's it.


You could use Nix to build (and manage/update) an OS similar to this.



