Bottlerocket: An operating system designed for hosting containers (github.com/bottlerocket-os)
280 points by ecliptik on March 10, 2020 | 113 comments



A link to the actual source code (90% Rust) and README: https://github.com/bottlerocket-os/bottlerocket

And here is a post from AWS with more technical details: https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os...


> To improve security, there's no SSH server in a Bottlerocket image, and not even a shell.

Consider my interest to be piqued.


Given that Red Hat recently killed CoreOS, it's great to see new alternatives coming up. I cannot wait to give it a spin!



There is MicroOS from openSUSE: https://en.opensuse.org/Kubic:MicroOS


https://github.com/coreos/fedora-coreos-tracker has some activity, but it's far from robust.



That first needs to prove itself in practice. So far it seems more like a desperate attempt to tie it to Fedora, to catch up with the cloud popularity of the likes of Ubuntu, and eventually to get people paying by locking them into RHEL CoreOS.


No, let's pay a premium and lock ourselves into a trillion-dollar company's pet project instead :-)


From the AWS Compliance Shared Responsibility Model: https://aws.amazon.com/compliance/shared-responsibility-mode...

Operating system maintenance falls on the customer's side of the responsibility model. By using this new OS, would that responsibility shift back to AWS?


Unlikely, I'd say. If you fail to upgrade when a new release is made, are they at fault?

You'd need to look towards providers that specifically take on more responsibility like https://compliantkubernetes.com/ (disclaimer: I have worked at Elastisys, the company behind Compliant Kubernetes).


No, although it might reduce your effort needed. AWS offers that responsibility shift under the Fargate ECS/EKS launch types, which might run this underneath.


Thanks!

I'm missing the part where it explains how it works. Is it all just containers?


It's buried a bit in the GitHub docs, but it looks like it runs two containerds. One is for Kubernetes. The other is for running administrative containers, including the API server for normal interaction with the host, a server that implements AWS's remote command protocol (enabled by default), and a container that runs sshd (disabled by default).
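
For anyone curious what enabling those looks like: going from the README, the host containers appear to be toggled through TOML user data at launch. A rough sketch (the key names are from my reading of the docs, so double-check before relying on them):

    # Bottlerocket user data (TOML) -- sketch based on the README
    [settings.host-containers.admin]
    # the admin host container bundles sshd; disabled by default
    enabled = true

    [settings.host-containers.control]
    # the control container speaks AWS's remote command protocol (SSM); enabled by default
    enabled = true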

Packages are built with RPM but RPM isn't used at runtime. Instead, the system is image-based and reboots for updates.


I'm really sad that my comment was removed; not even the usual [flagged] edit can be seen anymore.



Glad you're enjoying them! We have a glossary just in case: https://github.com/bottlerocket-os/bottlerocket/blob/develop... (my personal favorite is Laika, the first dog in space and our pre-init binary)


Honestly, reading the glossary gives me Urbit [1] vibes :(

> bork: A setting generator called by sundog to generate the random seed for updog, determining where the host falls in the update order.

[1]: https://urbit.org/docs/glossary/


Agree. It's cool so long as the number of names is small, and the names actually are a pun on the function and not just e.g. names of planets. If "updog" is what brings something "up" that's a good name.

WiX (windows installer creation) has a multi-phase command line interface where the compiler/linker/.. has different names indicating the order they are applied: candle, light, smoke... Also a working system I guess.


"bork" is dog-talk for "bark", and so something that randomly gets an updog going being named such makes sense too, it's just a slightly more obscure joke.


Opening line from their announcement blog post:

>It is safe to say that our industry has decided that containers are now the chosen way to package and scale applications.

Curious how the HN community feels about that statement. Not so much about the truth of the statement but about the fact that containers are becoming the de facto method of packaging applications.


As far as I'm concerned, this is an obvious truth. Linux containers are processes with better sandboxing -- who would not want this?

As kinks in the kernel support and tech get worked out, and OSs deepen support I can't imagine that it will ever make sense to say something like "I could have run the process with cgroup and namespace isolation but I chose not to, choosing to make a new user-level isolation or run everything as root instead".

Arguments against containers as the future based on the complexity may have weight but not for long.


The runtime and the packaging format are orthogonal things.

Containers as a packaging format (Docker) are complete rubbish, in my opinion.


"I could have run the process with cgroup and namespace isolation"... using systemd.


Or skip the million non-container related dependencies introduced by systemd and focus on a container centric init system...


> Linux containers are processes with better sandboxing

simply not true: https://github.com/google/gvisor#why-does-gvisor-exist


The word "sandbox" is a bad choice -- "isolation and resource limiting" might have been a better term to use, but the idea that containerization does not sandbox at all is not a fair characterization.

It's not a good sandbox, but if we are pedantic about the definition of a sandbox, it fits, especially when we think of the benefits of namespacing (effectively removing access to resources like networks, filesystems, etc).

gVisor is more focused on sandboxing processes specifically, so it's relevant, but it is not relevant to the wider discussion about a packaging format -- unless you're suggesting running gVisor'd processes instead of containerized ones, and even in that scenario containerization is still beneficial.


It's about damn time? I, for one, do not like having to deal with shared library hell, conflicts, and compiling from source because some unpaid repo maintainer is responsible for integrating code into my system. Not to mention the security enhancements.

The only thing that would make them better is if we stopped over-complicating them and made them portable [0].

[0] As in, could be moved to different disks and run from there without a bunch of hoop-jumping.


For application code the ecosystem around containers is pretty handy.

For load balancers, databases, and other stateful stuff I still run the binaries on VMs.


You need to prefix "applications" with "web" or "cloud."


Desktop and mobile is actually where you want containers most. Servers rarely run untrusted or semi-trusted code because everything comes from a trusted source, usually open source, or in house.

But users want to run lots of shady apps, either that they find on random websites or places like the Google Play store.


It's also the rare case where you can't accept a 5% performance hit because that's 5fps in a game or 5 seconds on a 100 second render time or 5ms instead of 95ms wait in an interactive app.

I find that the key to running desktop OS/apps is never use sensitive data and always be ready to wipe your machine and start over.


Containers don't have a 5% performance hit.


That sounds good, but I’ll believe it when I see it. There aren’t many desktop container/sandbox implementations out there, and most are “VM light”, e.g. the Windows Sandbox and Sandboxie. I haven’t seen anything more lightweight that can run desktop apps (on Windows at least).


Windows store apps are one example.


Not sure if that has changed since its inception, but originally the Windows Store was exactly this: no SLI/Crossfire, v-sync always on, no overlays. Basically it was isolated from using the driver properly.


Unless you want to roll the traditional telecom industry under the "web" or "cloud" label, you have to add "telecom" to the list as well.

Containers have been gaining ground in telecoms since at least 2015 (https://www.sdxcentral.com/articles/analysis/telecom-opens-u...). Network function virtualization solutions rely increasingly on containers.


Flatpaks and snaps, while maybe not as popular, are also containers


They're also pretty terrible ways to distribute a native application and can be crippled due to their containerization.


Unfortunately, they're over-complicated as a means of application packaging and distribution, at least in my opinion.

Now AppImage, that's pretty great. It doesn't really bring any of the security benefits of containerization, but that can be tacked on separately.


We rarely use containers for our deployments because you can get the same features that containerization provides by other means. The biggest issue with containers is the stability of Docker, both the operational stability of containerd and the API stability of the tooling (which likes to rename command-line switches). As far as Kubernetes goes, my problem is visibility. I have very limited knowledge of what the containers are doing; the metrics you can access are far fewer than what I need to operate a k8s cluster. Once you need to look at the actual host-level metrics (CPU, IO, mem, ...) you need to have a map of what runs where, yet one of the reasons people are pushing for k8s is that you do not actually need to know what runs where.

As far as complexity goes, I would much rather have nodes where a single application is running and using 100% of resources (instead of having containerd or k8s services running), do simple autoscaling, and have access to host-level metrics that I can map back to applications more easily than with Docker, k8s & co. Maybe it is only me, but I care about efficiency. Why waste energy?

The counter-argument is that developer time is more valuable than setting up clusters or autoscaling groups. Well, this breaks down when you have SRE team(s) maintaining the k8s clusters (literally every company I have worked for). If you already have SRE people, either embedded into your dev teams or separately, then you can just build out a CI/CD pipeline that produces that production setup based on blueprints. We usually use Terraform and Ansible with template variables (stage = test|qa|prod, cluster size = x, version = y), which makes it easy for everybody to provision clusters on their own. Does this mean more work than k8s deployments? Yes. Does this mean we have less complexity to care about? Yes. In my experience containerization is a development tool that makes it extremely easy to achieve fast development cycles, but right now the accidental complexity of taking that with you to production is not worth it. There are very nice projects like LXC/LXD that I would consider using for security separation and resource management, but we usually have clusters where 100% of resources go to a single service. Examples: Hadoop clusters, Elasticsearch clusters, web application (mostly API) clusters. I need to care about the underlying hardware for financial reasons (what is the cheapest node type I can use to run workload X); k8s would not help here.

To sum it up: I do not think that the industry has decided on this. I also think that we are in an era of wasteful computing which will end soon, for reliability reasons and to avoid unnecessary CO2 production. Running containers has to become much less fragile and more efficient to be considered the way to scale applications. I personally think that Firecracker is a step in the right direction here, while Docker & k8s are steps in the wrong direction.


> As far as complexity goes, I would much rather have nodes where a single application is running and using 100% of resources (instead of having containerd or k8s services running), do simple autoscaling, and have access to host-level metrics that I can map back to applications more easily than with Docker, k8s & co. Maybe it is only me, but I care about efficiency. Why waste energy?

Hallelujah! I'd imagine inefficient containerized architectures do more for the cloud provider's bottom line than they do to help the customer.


The subject says "for hosting containers" but the README says "for AWS EKS Kubernetes" which sounds a little less general...

How tied in to the AWS model is this? Are the places that would need to be expanded known?

Also at least at a glance, this is a neat use of real-world Rust


> This is a reflection of what we've learned building operating systems and services at Amazon.

I don't know how actually tied in it is, but it's not totally surprising to me that it's built for AWS infrastructure.

That being said:

> To start, we're focusing on use of Bottlerocket as a host OS in AWS EKS Kubernetes clusters. We’re excited to get early feedback and to continue working on more use cases!

> Bottlerocket is architected such that different cloud environments and container orchestrators can be supported in the future.

Very exciting!


While our first variant is focused on Kubernetes and EKS, we have designed Bottlerocket in a way that new variants can be built that work with other orchestrators, or even without one (we have ECS support on our roadmap already). Also, we really enjoyed working in Rust for big chunks of this!


> Also, we really enjoyed working in Rust for big chunks of this!

Glad to hear it! Please reach out if you need anything :)


>"While our first variant is focused on Kubernetes and EKS..."

So is the idea that people create a Bottlerocket AMI and use that as their EKS worker node images? Is that correct?


Correct! Or you can spin up one of the AMIs we've already built. You can find current AMIs via public SSM parameters: https://github.com/bottlerocket-os/bottlerocket/blob/develop...
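
Concretely, that lookup is a one-liner; something like the following (check the docs for the exact parameter path, and swap in your own region, variant, and architecture):

    # fetch the latest Bottlerocket AMI ID for the aws-k8s-1.15 variant on x86_64
    aws ssm get-parameter --region us-west-2 \
      --name "/aws/service/bottlerocket/aws-k8s-1.15/x86_64/latest/image_id" \
      --query Parameter.Value --output text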


Their take on using cargo for packaging is quite interesting: https://github.com/bottlerocket-os/bottlerocket/tree/develop...


They are using traditional RPM for packaging.

The Cargo.toml workspaces relate more to make, IMHO.


Correct; that toml file allows you to build everything from that level, and to open editors on that folder with the language server functioning. RLS requires a Cargo.toml file in the root of the workspace even if it just points at other directories.
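
In other words, the top-level Cargo.toml is essentially just a workspace manifest, something like this (member paths are illustrative, not the repo's actual layout):

    [workspace]
    members = [
        "api/apiserver",   # the on-host API server
        "updater/updog",   # the update client
    ]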


Wouldn’t something like Bazel make more sense? Using cargo to track inter-dependencies seems a bit weird to be honest.


Well, it's interesting since they only need to learn the Rust toolchain, not something else.


Is this intended to become AWS's version of GCP's Container-Optimized OS?

https://cloud.google.com/container-optimized-os


Project is up on GitHub along with its other components! https://github.com/bottlerocket-os


Looks like it is free software. Dual MIT/Apache license.

https://github.com/bottlerocket-os/bottlerocket


This is somewhat false advertising. This is not an operating system in the sense of being a new kernel. It looks like it's a set of build tools for building a Linux distribution.


An operating system is not defined only by its kernel. It's kernel + APIs + user land. If one of those components is changed radically, it's indeed a new operating system. That's why Ubuntu and Debian are distinct, even if both are based on Linux.


You're correct. Debian and Ubuntu are technically different operating-systems. Due to the similarity of 'operating-systems' built around the Linux kernel they're typically referred to as 'Linux distributions'.

In the context of software development, if you tell someone you're developing a new operating-system you're probably going to conjure up images of writing a new kernel. If you tell people you're developing a new Linux distro, this is closer to what they'll imagine.


Exactly. It is yet another Linux distribution added to an already long list, but specifically tied to AWS. The magic word that changes everything is 'Rust', though it is not used for what you might first assume when looking at the HN title.

> ...a new Linux-based open source operating system that we designed and optimized specifically for use as a container host.

Even the points made for creating this distro really come down to stripping out the unnecessary software in a default Linux distro install, speeding up startup of the essential userland processes, and clearing the OS of possible bottlenecks and security potholes.

It's a shame, really, that it is built on top of Linux rather than being an actual new Rust operating system by Amazon. I'm not sure why I would use this particular one, given that it is based on Linux, while it also promotes another lock-in opportunity for AWS.


It’s not that simple. I wouldn’t use the word distribution for Guix, NixOS, Android, and several other OSes that use the Linux kernel.


This is true. Android is a good example of where the line between a Linux distribution and an outright OS blurs. I'm not convinced that this really affects my overall point all that much though.


Stallman is that you?


Sadly that ship sailed about 15 years ago, when everyone who made a new Linux distribution decided to call it an OS.


Fascinating project! Anyone know of possible overlap with Firecracker? Really digging these Rust projects.


No overlap per se, but we are looking at how to integrate Firecracker as a potential target for "container" launches: https://github.com/orgs/bottlerocket-os/projects/1#card-3386...


How does this compare to Linuxkit? At first glance it seems almost identical but I may be missing something.

https://www.github.com/linuxkit/linuxkit


Linuxkit allows you to build your own appliance-like OS, while Bottlerocket is more of an end-user project. Projects more similar to Bottlerocket are https://www.talos.dev or https://www.projectatomic.io


I see. Thank you.


Any insights on the design decision to go with Wicked instead of systemd-networkd which is already provided by systemd and better integrated with the remaining systemd tooling/components/conventions?


It looks like this supports automated updates within a specific time window, but it's not clear to me how the "waves" are defined. (Note that this is something that is currently lacking in Fedora CoreOS: https://github.com/coreos/zincati/issues/34.)

I do wonder if the dual partition approach was deemed more stable than using OSTree or why the latter wasn't used.


Does anyone know if or where Bottlerocket is being used at AWS? Is it used to run their Fargate/ECS/EKS services?


Question for any Amazon folks here that may know. Is this akin to something like Atomic or CoreOS that is used within Openshift as the Master or Worker node OS or is this more like the UBI (Universal Base Image) that can be used as the base image of a container via "FROM" within a dockerfile?


You don't need Amazon folks, just read the material.

This is not a container base image, it's a container host OS. It is somewhat similar to Atomic or CoreOS, but in some ways it seems to be a bit more of a radical redesign than those.


So, the main feature is that updates happen for all packages at once and not for each package individually. Sounds interesting, even for non cloud setups.

How does that work? The diagram does not really explain it. How is that different from rolling back at the filesystem level?


The update system is image-based; when an update is downloaded, it's written out to an alternate set of partitions, and then it can flip over to those partitions with a reboot. That makes it easy to roll back with the same kind of single flip, too.

It's different than filesystem-level rollbacks because it's all-or-nothing, so you don't have to worry about update failures after a few packages, and because all of the components in a given image are guaranteed to be tested together, whereas with package-based systems, your combination of packages may have never been used together by anyone else. In addition, for builders, it's easier to sign, distribute, and verify a single image.
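
If it helps to picture it, the moving parts are roughly what you'd do by hand for any A/B scheme. This is only a sketch of the idea, not Bottlerocket's actual updater, and the grubenv variable is made up:

    # write the new image to whichever partition set is currently inactive
    dd if=new-root.img of=/dev/disk/by-partlabel/ROOT-B bs=4M conv=fsync
    # point the bootloader at the B set for the next boot (the real system
    # has its own mechanism for this; boot_set is a made-up variable)
    grub-editenv /boot/grub/grubenv set boot_set=B
    systemctl reboot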


How does this compare to something like nix or Fedora Silverblue?


Nix has the upside that you only need to flip a symlink to do the same. The downside is that you don't have things like dm-verity that can prove your update wasn't tampered with.

In Nix the store is remounted over itself read-only, but nothing stops someone from ripping out the disk and flipping bits. This is not possible with these kinds of two-partition schemes if you have dm-verity set up.
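
For the curious, dm-verity in its simplest form looks roughly like this (device names are placeholders):

    # build the hash tree for a read-only root filesystem; prints the root hash
    veritysetup format /dev/sda2 /dev/sda3
    # map the verified device; every block read is checked against the hash tree
    veritysetup open /dev/sda2 verified-root /dev/sda3 <root-hash-from-format>
    mount -o ro /dev/mapper/verified-root /sysroot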


Isn't Nix a Merkle-Tree system of hashes? Doesn't that allow you to easily verify everything that you're running?


This is possible in Nix with "nix-store --verify --check-contents".


That's different, though: that's verification _after_ the fact, whilst dm-verity does it at any time during block-level access.

Also, an attacker could modify the nix-store SQLite database and spoof the hashes, rendering this check moot.


I'm curious about why Bottlerocket is building base packages like glibc, bash, util-linux, etc. from source, rather than just pulling binary RPMs from CentOS or Amazon Linux.


Amazon Linux and CentOS are general-purpose distributions and need more features built into the base packages than we do for Bottlerocket. We’re able to simplify the spec files and produce smaller RPMs with only the content and dependencies necessary for our more narrow goals.


So it's AWS's version of CoreOS?

I wish somebody'd take some VC or R&D money and build distributed computing features into the kernel itself, so we could quit wasting our collective engineering talent, time, money and energy on distributed applications that run on non-distributed-operating-systems. It's like nobody wants to work on creating a round wheel, so instead we're spending all our time building custom roads for square wheels.


Erlang is probably the closest thing to this, but unfortunately everything it runs also needs to be written in Erlang. If Erlang could run containers as processes it would take over the world; all of the features of Kubernetes have been baked in since the late '90s.


The closest things really are distributed operating systems from the 70's and 80's, and cluster operating systems from the 90's, and 00's.

Unfortunately, all of them were either research projects, proprietary products, or patches that never made it into the mainline kernel. Nobody has since tried to get the functionality into mainline, so people keep hacking together these non-standard pseudo-operating-systems and jumbles of disparate applications.

You could eliminate 80% of the need for K8s by adding OS primitives to connect and operate namespaces and control groups between nodes, as well as native i/o (block, file, "N-way pipes") between nodes. Once that was done, systemd (or something like it) could manage services across an entire cluster. Applications could communicate between arbitrary nodes without any added functionality. Virtually all of the complexity would be in the kernel and systemd, so apps could be simpler and we wouldn't need 100 layers of userspace junk just to keep an app running on 3 nodes.


I am from AWS. Could we please change the title to say "Bottlerocket from AWS"? Like Firecracker [1], it's explicitly not AWS-branded.

1. https://github.com/firecracker-microvm/firecracker


Speaking of branding, I've some questions:

1. When are services branded as AWS (AWS Fargate) vs Amazon (Amazon DynamoDB)?

2. Is BottleRocket a nod to SkyRocket [0] or a movie of the same name?

3. Why is it called Fargate [1]?

[0] https://en.wikipedia.org/wiki/Skyrocket

[1] https://youtube.com/watch?v=ye3-gUwu9tI&t=44m28s


To answer your first question: AFAIK, standalone services that can be used on their own are prefixed with Amazon (e.g. S3, EC2, DDB), whereas services that are deeply integrated into the AWS ecosystem are prefixed with AWS.

Disclaimer: I work for AWS, but this is not an official answer. What I said is correct to my best knowledge, but I cannot guarantee its correctness/accuracy.


This is the correct answer, it was mentioned in an AWS Cert training.


Firecracker, Bottlerocket :).

On Fargate, you got the answer there.


Not just any movie, an absolutely terrific movie. Wes Anderson's first and best.


"Bottlerocket from AWS" sounds like a commercial. I've just taken AWS out of the title above.


That makes sense. Thanks a ton


You’ll have better luck emailing:

mailto:hn@ycombinator.com


[flagged]


Always. I just wanted to do it before msw :)


Honestly this reply was worth the karma loss.

Hi Matt!


wave :-)

We know this is how you show you care. I don't think there was any reason to downvote or flag. But not everyone on the Orange Site knows one another, so I can see how it could be misinterpreted.


Could you come up with a better name than bottle rocket?


Disclosure: I work for AWS, and I approve of the name "Bottlerocket". :-)

It's getting largely a positive response out there: https://twitter.com/alexwilliams/status/1237773085039722496


This looks pretty sweet. It seems to be a continuation of the same trend of "just enough Linux to run containerd" that CoreOS started, linuxkit continued, then Project EVE expanded to cover virtualization.

It is also interesting to note that every step on that journey seems to have picked the coolest runtime to implement it in (C/early Go, established Go, and now Rust)


Started to dig into the implementation. First impressions so far: loving all the Rust harness -- really nicely done and way better than buildroot/yocto/etc. for creating tight, single-purpose Linux images. Speaking of tight, here comes the bad news: really NOT loving all the over-engineered upstream components like D-Bus and systemd that seem to be there by default. In that sense #linuxkit, with its Alpine base and strong attention to how big the image is, still comes out way on top.

One more thing on the good side: the TUF implementation in Rust seems really interesting. I'll be digging some more and may actually steal it for linuxkit (and by extension Project EVE)

Fun fact: a lot of the patches you will find in the more system-level packages like grub seem to trace their lineage to CoreOS (and potentially Project EVE), but I haven't seen acknowledgments anywhere. This is of course all fine from a licensing perspective -- but I would still be curious to know whether that is indeed where they were taken from.


> really NOT loving all the over-engineered upstream components like D-Bus and systemd that seem to be there by default.

I'm happy with the usage of systemd if they take advantage of the hardening features in systemd units for core system services. I'm a bit less happy about the continued usage of Docker, but I get why that's happening for this (EKS and ECS both use it, so it helps support that infrastructure).


Bottlerocket does not package docker. It packages containerd instead for its container runtime.


containerd is the supervisor for the docker/moby container runtime environment. It is not used for crio (for k8s), podman (for non-k8s), or any other OCI container management engine (obviously excluding docker/moby).

You obviously know this, but for everyone else playing at home, "Docker" is made up of three distinct projects: moby (CLI and API), containerd (supervisor daemon), and runc (container runtime core).

Of the three projects mentioned above, only runc is used by nearly all major "container engines" as people call them.

And as pointed out by another poster, you do have the rest in the Bottlerocket tree.


What is the docker-engine package for then? https://github.com/bottlerocket-os/bottlerocket/tree/develop...


Sorry, I should have been clearer. The docker packages are there for the development build of Bottlerocket; the Kubernetes variant does not use them in its build. See more about variants here: https://github.com/bottlerocket-os/bottlerocket/tree/develop...


Ah ok, thanks, still finding my way around.
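
For later readers: the variant appears to be selected at build time. From my skim of BUILDING.md it looks roughly like this (treat the exact variable name as an assumption):

    # build the Kubernetes/EKS variant image instead of the development one
    cargo make -e BUILDSYS_VARIANT=aws-k8s-1.15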


> way better than buildroot/yocto/etc for creating tight, single-purpose linux images

Can you say more about the advantages over Buildroot and Yocto?


I'll take Buildroot and Bitbake first (Yocto itself is slightly different; we'll get to that later). Both Buildroot and Bitbake share the ultimate goal with all these new systems like linuxkit and Bottlerocket: they all aim at producing single-image, purpose-built Linux distros based on the very same usual suspects of upstream components. So the question is really two-fold: the out-of-the-box availability of said upstream components (this is where we should stop talking about Bitbake and start talking about Yocto -- Buildroot kind of commingles the two) AND the usability of the build harness itself.

So on #1, if you look very casually at Buildroot and Yocto, they will clearly come out on top over this next generation of systems. It appears they have WAY more upstream components already available for you to choose from. Compared to them, the lists here look almost laughable: https://github.com/bottlerocket-os/bottlerocket/tree/develop... and https://github.com/linuxkit/linuxkit/tree/master/pkg The problem, though, is the combinatorial explosion of ways you can compose all these upstream components. The canonical example here is the choice of your init system: you pick one -- and your choice in everything else gets severely restricted. So to some extent that apparent embarrassment of riches that Buildroot and Yocto offer is misleading.

These next generation systems, on the other hand, don't pretend that you can build a host OS in any shape or form you want (hence very few base packages) but rather that you build "just enough of Linux to run containerd" -- the rest of what you would typically put into your baseOS goes into various containers. This is a very different approach to constructing the bootable system, but subtly so -- which I don't think a lot of people on either side of this debate appreciate.
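
To make the contrast concrete: a host OS definition in linuxkit is just a short YAML file naming the kernel, the init binaries, and a handful of containers. A minimal sketch (image tags below are placeholders; real files pin them to specific versions/hashes):

    kernel:
      image: linuxkit/kernel:5.4.28
      cmdline: "console=tty0"
    init:
      - linuxkit/init:latest
      - linuxkit/runc:latest
      - linuxkit/containerd:latest
    onboot:
      - name: dhcpcd
        image: linuxkit/dhcpcd:latest
        command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
    services:
      - name: getty
        image: linuxkit/getty:latest
        env:
          - INSECURE=true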

I honestly think that what makes Yocto and Buildroot difficult is that they want to be all things to all people and they want it at the level of baseOS -- complexity-wise, this is a wrong approach these days.

That scores one point for these next generation systems in my book.

Question #2 is not even a comparison. In Buildroot most of the integration/package logic is implemented in Makefiles, and usability-wise (if you're trying to actively change the system or add a new package) it falls apart pretty quickly (it is still great if you're just using what's already there, btw). In Bitbake, the codebase is REALLY complex Python which suffers from the same issue. Contrast that with linuxkit/Project EVE, where all that logic is Go, and Bottlerocket, which uses Rust, and ask yourself whether you would rather debug a complex issue across a dozen Makefiles full of non-trivial recipes or look over a Go/Rust codebase (yes, I know all these things are Turing-complete and thus equivalent -- but life is too short to debug Makefiles).

If you don't quite believe me, there have been a number of studies on using Buildroot and Yocto for building containers. Pretty much all of them came back with the same conclusion: the usability aspect of extending them makes it a non-starter. Here's one from the last KubeCon that the VMware folks did: https://blogs.vmware.com/opensource/2020/02/27/distribution-...


> These next generation systems, on the other hand, don't pretend that you can build a host OS in any shape or form you want (hence very few base packages) but rather that you build "just enough of Linux to run containerd" -- the rest of what you would typically put into your baseOS goes into various containers. This is a very different approach to constructing the bootable system, but subtly so -- which I don't think a lot of people on either side of this debate appreciate.

This is a valid approach if you want to build something that can only run containers, but IMO it is somewhat orthogonal to the Yocto and Buildroot goal of building distros for embedded platforms.

It's awesome that people are making new tools to do similar things to Yocto and Buildroot in this post-container world, but I don't think it's really fair to say that bottlerocket is a direct competitor to Yocto/Buildroot. It's probably fairer to say that bottlerocket makes it easier to do things that Yocto/Buildroot aren't really designed to do. Hopefully both live on, serving their own niche! I'm all for specialised tools rather than generic 'do it all' tools.


Remove SSH? Over my dead body!

What is container specific about all this? It just seems to be minimal images?


If you need to SSH into your cattle you’re either not in position to benefit from something like Bottlerocket or you’re doing things wrong.





