EasyOS: An experimental Linux distribution designed from scratch for containers (easyos.org)
136 points by alexellisuk on Sept 20, 2019 | 86 comments



After spending a scant few minutes looking at it, here are the things I like about this project so far:

- It has a "How and why are we different" page. In the age of "I made a meta package on top of Ubuntu and called it a new OS", it's refreshing to see a Linux distribution come right out and say "here's what we do that separates us from the other 700".

- It actually is different. Recognizing the nigh-uselessness of separate user accounts (for personal computers), embracing of simple GUI tools over terminal wankery, eschewing of the legacy UNIX file hierarchy.

- They use the ROX filer, the only file manager for Linux with AppDir support and the centerpiece of the unfortunately long-defunct ROX Desktop.

I'm anxious to find out more.


Maybe it's because of my limited experience with containers but I can't imagine why on earth you would prefer GUI tools (especially a graphical shell) in a container.


I might be off-base, but when I hear someone talk about running GUI tools in a container, I think about being able to effectively fork application configs/installations and environments. I run an extremely minimal Arch installation on my production machine. It's... nice, I guess. I'd love to experiment with it more and make something even more customized, but I can't risk any downtime.

And a large portion of my computer is command line tools, sure. But I'm also running EXWM, I'm also running Firefox, I'm also running Tiled, and Blender, and so on, and so on. I don't want to get rid of my command-line tools, but I'm not only running command line tools. I want to be able to download a game, put it in a container, mess around with my drivers until it runs well, and then delete the container when I'm done and know 100% all of the customization just went away.

This was what originally got me excited about Docker, until I dug into it and realized Docker kind of didn't work particularly well for that.

My understanding is that if I sat down and did the research, I could build something like that with regular Linux tools, but it's time-consuming, and as interested as I am in the underlying tech, I just know very, very little about how this stuff works.

But I always perk up whenever I hear someone talking about running graphical applications in containers, because in the back of my head I'm mapping that to some kind of fictional computing utopia where I can have complete isolation between processes and treat my computer like a Git repo.


I have been wanting something like what you describe forever. Honestly, the closest thing I've found that works out of the box is Crostini in ChromeOS. I don't love ChromeOS, but if I started building what you're looking for, I'd look at Crostini for inspiration.

Crostini can spawn GUI apps through Wayland instead of pure X. That takes effort, and I wish they would contribute it back.


Application isolation and the ability to upgrade separate applications and components, with their dependencies, independently of each other. Fewer issues where "update X" breaks apps A, B and C... and now you have to add a PPA for apps A and C, but B you'll have to build from source; oh crap, it's no longer actively maintained, so you switch to app D, which was forked a couple of months ago.

Not to mention the ability to easily support different build tool chains combinations, etc. Right now, my preferences are flatpak, snap, ppa, repo in that order.

Not really looking to play with a new Linux; Manjaro is next on my list. Currently running Pop!_OS, which has been nice (just jumped this past month; hadn't tried a Linux desktop in 5+ years before that). I've been relatively happy.

That said, my biggest issues so far:

* Needed to update the kernel and new Mesa drivers before putting in the 5700 XT video card.

* Needed to update the kernel for wifi support (Intel AX).

* Rainbow puke from the RGB controllers. The Gigabyte (X570 Aorus Master) support is all but worthless, and the open-source project I saw was actually for Windows. For the Lian Li O11 Dynamic Razer edition case, there are open-source Razer drivers, but I'll need to set up a Windows drive to capture some data in order to support the specific device. I haven't even looked into the Corsair RAM yet (which is actually the biggest eyesore at the moment).

I really regret not building another black box. My first two choices of cases without windows were sold out, so I went for the "pretty" case option. Which would be great if I were running Windows, but I have no intention to. A lot of money on RGB fans (all matching), water cooling, RAM, etc... and none of the controllers have good Linux support. I would switch to another controller, but the only one I keep finding is from a German company and doesn't seem to actually be sold anywhere; and it wouldn't cover the RAM or the case anyway.


An interesting point from their "How and why are we different" page is the part about their heritage:

> Puppy heritage

> [...] it must be stated that Easy is also very different, and should not be thought of as a fork of Puppy. Inherited features include the JWM-ROX desktop, menu-hierarchy, run-as-root, SFS layered filesystem, PET packages, and dozens of apps developed for Puppy.


> embracing of simple GUI tools over terminal wankery, eschewing of the legacy UNIX file hierarchy.

So you're telling me I have no tools and no existing tools are going to work.

Good to get that out in the open, I suppose.


God forbid someone create anything other than yet another Ubuntu derivative indistinguishable from every other Linux distro on earth, right?


Do you have a point to make about how this is better, or is this change for the sake of change?


GUI tools provide significantly better discoverability, for one. The file hierarchy is a mess, to put it mildly.


Discoverability is only possible if the tool is trivial. Otherwise, you have to hide functionality inside the UI to prevent the creation of an unusable mass of buttons on the GUI's main screen. This also kills fluency, by preventing people from developing muscle memory.


> - They use the ROX filer, the only file manager for Linux with AppDir support and the centerpiece of the unfortunately long-defunct ROX Desktop.

I was so happy when ROX Desktop was still active and alive. It had exactly the features necessary in a file browser, it was blazing fast, and the UI was really intuitive. It actually inspired me to maintain a ~/Apps folder for years, so reinstalling my system mostly just meant copying over my home folder.


Anyone know how this compares to CoreOS? Whatever happened to them?


The company CoreOS renamed their distro CoreOS to Container Linux (have fun googling help for that), and later CoreOS the company was bought by Red Hat.

Container Linux was forked off by the community as Flatcar Linux. Red Hat's Container Linux is getting rolled into Fedora Silverblue or something? I'm not really sure about this paragraph.


There is Fedora CoreOS, it seems pretty active.

https://github.com/coreos/fedora-coreos-tracker


I'm actually looking for a good container hosting OS right now. I remember hearing some stuff about CoreOS being deprecated or something, but I can't find anything concrete?!

Do you know what the situation is?

I also looked a bit at RancherOS today, which looked pretty cool, but it seems to use 10x the memory of CoreOS...


Fedora CoreOS is intended to eventually provide a suitable replacement for ContainerLinux.

Red Hat ships a variant of RHEL called RHEL CoreOS, but the only way to run it is as part of OpenShift (for instance via https://try.openshift.com) where it is the default OS for machines which are managed as part of the cluster, so it’s not a real ContainerLinux equivalent (which you can run individually).


If you're mainly just looking for something with Docker preinstalled, VMware's Photon OS[0] may be an option. It's a stripped-down CentOS/RHEL-based distro that's targeted at being a container host. I believe they've said they'll also package other container runtimes in the future as the market shifts.

[0] https://vmware.github.io/photon/


Thanks, this isn't one I've come across!

A quick look shows the ISO as ~4GB, which is a bit concerning though - CoreOS is around 450MB, and RancherOS only 135MB (although it's a bit difficult to compare, since they both download stuff during boot).


That's not the install size. From the documentation, there are multiple install options ranging in size, so they just decided to package it all in one ISO. The "minimal" install is like 400-500 MB IIRC.



I waved off Silverblue when I learned one must reboot to install packages. Maybe that's not a big deal with a server OS, but having to unlock the full-disk encryption, then restore my desktop and possibly reauth any apps is a bridge too far for me. To say nothing of the heartache involved when you realize that, haha, you didn't know the full package list you needed, so now you are iterating through that process.


If everything is a container, does that mean processes have failed?


I’d personally say that the concept of users and user groups has failed, at least on systems that have only one user.


This is what I'd agree with. The concepts of user and group are hand-me-downs from systems built for sharing networked resources, and they pretty much require some kind of centralized AAA to make any sense. They're a model designed for protecting the system from users and users from each other, but that doesn't make any sense on a personal computer where the more pressing need is to protect the user from malicious applications.


> that doesn't make any sense on a personal computer where the more pressing need is to protect the user from malicious applications.

Of course it makes sense. Running applications as restricted users has been standard practice for decades, precisely because it makes sense.


> Running applications as restricted users has been standard practice for decades

...as a way of preventing users from interfering with the system or other users in multi-user systems. Running applications as a user different from yourself is an ugly hack we've started doing because we don't have actual control over what our applications can access, so things like ransomware are possible despite not having system-level access. Since Plan 9 never took off, containerization of applications is the next best thing.

What I'm saying is, running in a restricted user account does absolutely nothing to protect the user running in that account from malicious applications. That's how the user/group model fails in personal computing.


Once you add backups into the picture, local users are great. My main account can have all the ransomware it wants; all the backups are going to stay intact, so I can restore the files.

* In real life there is the “sudo hole”, but that can be fixed within the current user concept.


I'd rather prevent ransomware from working in the first place. Local backups are hardly sufficient anyway.


Why won’t local backups be sufficient against ransomware? Is it because of privilege escalation attacks?

I was under the impression that even with zero-days, using a modern distribution and auto-updates will minimize the amount of time the system is vulnerable, so most of the time it will be sufficient.


And containerization is becoming standard practice now, precisely because it makes more sense for certain situations where the user abstraction has proven less useful and more cumbersome. That was the point of this subthread.


I like being protected from writing to /dev/sda by mistake. If an OS is going to expose its guts to the world it makes sense to have permission controls on the vulnerable parts.


>the concept of users and user groups has failed

Hopefully we won't go back to the Win95/98 era of everything running as single user!

Having services run isolated as their own users is not merely good security mechanics; it provides a clear and simple mental model of what is what. A clear permissions barrier that's enforced pretty strictly by the OS.

Moreover, we see separate user accounts more and more; even on small devices like phones it makes sense to have, for example, separate "private" and "business" accounts.

>does that mean processes have failed?

Nah, that's too general a take. There are two more specific failures. First, people fail to realize the present-day crop of containers is re-inventing processes. "Those who do not learn history, etc., etc."

Secondly, there's a significant failure of certain key features (IP stack, FS handlers, etc. - in general, NAMESPACES) having been provided almost exclusively in the kernel, and thus requiring either superuser access or complex workarounds (like FUSE) to manage. Plan 9 did it the right way; on P9, processes == containers.
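As a rough illustration of that superuser point (a hedged sketch, not anything EasyOS-specific): on modern Linux, user namespaces are the in-kernel answer, letting an unprivileged process map itself to root inside a fresh namespace, which then unlocks the other namespace types. Assumes Go on Linux with unprivileged user namespaces enabled.

```go
// Minimal sketch: unprivileged namespace creation via user namespaces.
// Inside the child, `id` reports uid 0, though we started as a normal user.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("id")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// A new user namespace first; the mount namespace then needs no root.
		Cloneflags: syscall.CLONE_NEWUSER | syscall.CLONE_NEWNS,
		// Map our real uid/gid to root inside the namespace.
		UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
		GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```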


> [...] services run isolated as their own users [...] provides for a clear and simple mental model [...]

How is that a clear and simple model? Are email or printing users?

I think the whole discussion is futile without having a common understanding of what we are talking about. That is:

- What is a user?

- What is a group?

- What is a role?

- What is an account?

- What is a service?

- What is a job?

- What is a process?

- What is a container?

- What is a namespace?

Moreover, you cannot say whether an abstraction is good or bad without knowing what our goals, use cases or target users are.


But having separate private and business accounts on your phone is very different from application resource namespacing. They're essentially orthogonal concerns.

In the case you're making, a user (a real, actual human user) has different settings when _using_ the phone in two contexts. In the namespacing case, applications are restricted to sandboxes with well-defined interactions with each other's memory, processes, devices, sockets, and files.


So, you run three different Gmail accounts in Chrome, and two different Office 365 logins in Firefox. Not only can the Firefox process, under your user ID, read/write Chrome's cache, local data, etc. (and vice versa), but so can your calculator app, your CPU temperature widget and your solitaire game.

LXC can improve a bit on this, as can "containers" (LXC or otherwise restricted processes).
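To make the point concrete, a minimal sketch (Go; the Firefox profile path is illustrative and varies per machine): nothing in the user/group model stops any process under your UID from reading any other app's per-user data.

```go
// Minimal sketch: same-UID processes share everything in $HOME by default.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	// A "solitaire game" has no business here, but nothing stops it:
	path := filepath.Join(home, ".mozilla", "firefox", "profiles.ini")
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("read %d bytes of another app's config\n", len(data))
}
```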


And resource sharing these days for SaaS and PaaS either occurs on the hypervisor level or the application level — the (usually Linux) OS is seen as a liability and necessary dependency for the application rather than a secure environment.

Also, current isolation technologies on desktop tend to be a lot less secure than mobile. If you assume Fuchsia, Android, iOS to be the next generation of OSes, then the trend is definitely to "secure by default". Whitelisting permissions instead of everything being allowed out of the box. Even the current generation of Linux containers is more of a bunch of resource management hacks, compared to e.g. hypervisor sandboxing or to a lesser extent, BSD jails.


Or are at least misnamed. (Not that I have a better suggestion!)


Not really. For shared resources, it did not take long to realize that isolation was necessary for memory; storage has been managed via filesystems with permissions, since sharing files between processes is handy; and shared networking has been a problem from day 1 (port number assignment).

Containers just homogenise the paradigm for all resources: strict isolation by default, explicit sharing otherwise.


Feels even more like IBM OS/360 :)


A container _is_ a process, just with cgroups associated with it on startup so that it has no visibility of other processes in the system (a.k.a. it is sandboxed).

So no, processes haven't failed. Anything that runs on your system is, or is part of, a process.
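To make that concrete, here's a minimal sketch in the spirit of the classic "containers from scratch" demos (Go, Linux-only, run as root; strictly it's the namespaces, rather than cgroups, that hide the rest of the system): the "container" is nothing but a child process created with a few extra clone flags.

```go
// Minimal sketch: a shell with its own hostname, PID numbering and mount
// table. `hostname foo` inside this shell won't leak out to the host.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWPID | // own PID numbering
			syscall.CLONE_NEWNS, // own mount table
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```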


Nah, it's Linux catching up to HP-UX Vaults, Tru64 execution environments, Solaris Zones.


Processes never promised to solve things like file system isolation or wrapping environment dependencies in a single distributable.


Containers run processes in a context where those within a container only see others within the same container, via a generalization of the underlying OS's process accounting.


Containers aren't even a thing like that, though. They don't run anything per se. Implementations vary, but it can be as simple as an extra struct field in the process list.


Agreed that my language was sloppy. Genuine thanks.


Only in the HN bubble.



They won me over with "No full install".

I've always been annoyed by how Unix systems treat the system as "their property", with the user just being a temporary guest (if not an intruder).


TBH, it doesn't look "easy" at all. Plus, why reinvent the wheel and create yet another containerization system?


> why reinvent the wheel and create yet another containerization system?

Because that is how we make progress. Try different approaches, learn, and evolve the ecosystem. That is how we got usable containers in the first place; it's not a new idea, and variants have been around for decades. But only now have we seen it evolve into something usable.

At least they are trying to solve problems. It might not be the best/right solution, hell it might not be an improvement, but if we don't try, we will never learn.


As a long-time BSD and UNIX user, I disagree with the premise that containers have only recently become usable. Even on Linux, solutions like Proxmox made containers incredibly useful in the pre-LXC/pre-Docker days. And that’s discounting FreeBSD jails, Solaris Zones, etc., which have existed a lot longer.

If anything, Docker just made containers trendy (more talks at more conferences, etc.); before that, containers were seen as a niche toy compared to virtualisation, something few had heard of and fewer had bothered to look into. However, being trendy doesn’t mean better or easier to use.


I get that, but this is not an "unsolved problem" IMO. Anyway, anybody is 100% free to use their time as they prefer; it just makes me wonder whether there are (or were) real technical, unfixable limits in the existing solutions, or whether it's just a (totally understandable) case of NIH.


The project has been running since at least 2017. Maybe it was less of a "yet another containerization system" at that time?


Docker was in full swing in 2017, just like LXC. Even rkt predates it, I'd say.


Yet the design of the website reminds me of the 90s.


They had me after 90s web design


I could be misunderstanding, but isn't this actually just a new OS for use with any containerization system, rather than a containerization system itself? This seems more in line with Alpine Linux or Google's Distroless images than something like a new Docker.


> EasyOS is designed from scratch to support containers. Any app can run in a container, in fact an entire desktop can run in a container. Container management is by a simple GUI, no messing around on the commandline. The container mechanism is named Easy Containers, and is designed from scratch (Docker, LXC, etc are not used). Easy Containers are extremely efficient, with almost no overhead -- the base size of each container is only several KB.


Pretty interesting... I'm almost tempted to run some test services (web server, etc.) in containers on EasyOS in a DO or Vultr VM.


Ah, I see. I must have glanced over that paragraph (which is impressive, given that it is one of the first). Thanks for the clarification.


openSUSE has something like this as well. We are seeing more and more OSes go in that direction.

https://en.opensuse.org/Kubic:MicroOS


> Run as root. This is controversial, however, it is just a different philosophy. The user runs as administrator (root), apps may optionally run as user 'spot' or in containers as a "crippled root" or user 'zeus'. The practical outcome is that you never have to type "sudo" or "su" to run anything, nor get hung up with file permissions.

Yeah, no thanks.


Quite, it's ridiculous.

You have individual accounts for individual people. Use sudo if you need to elevate permissions; that gets fired over to your syslog server, so if you screw up, you know what you did. If someone else screws up, you can see who it was and either fix it or contact them to find out what they were trying to do (likely both).


Based on the language ("the user") and the focus on the GUI, I think this OS is designed more for single-user workstations, rather than multi-user servers. This philosophy of "root by default" is also implemented by Puppy Linux, which was created by the same person.


But what would be the benefit of containers on a single-user workstation?


From what I can gather skimming, instead of using users/groups to isolate processes, they are using containers.

It feels like Linux and Windows are converging into a single OS.


Single-user workstations are usually pets, and it's much easier to manage a pet when every single unit of execution is separated from other units of execution.

It's also much easier to manage each application with its own root for everything, rather than multiple applications installed into a single root.


What's ridiculous is taking a security model designed for multiuser university mainframes in the 1970s, riddling it with 40 years of hacks to get around places where it's inconvenient, and insisting that it is the One True Way to run a personal laptop. I'm glad that some people are willing to fight dogma and be experimental.


IIRC, Clear Linux also started as an OS for containers and morphed into a fully functioning desktop OS; let's see how this goes.


Run as root to avoid typing sudo or su? That is like having a handgun without the safety! No thank you!


Interestingly I've been told by handgun users that this is currently a popular idea.


That's because when you actually need the handgun you'll want it to function as expected when you pull the trigger. A safety is just extra complication that provides no significant benefit if you're already handling the firearm like you're supposed to (which is to say, never pointing it at anything you don't want to destroy). Even without external safeties, modern firearms often do contain internal safeties to ensure that they only go off when the trigger is operated, as opposed to being dropped or something.


Popular need not imply secure. :-)


So, like Qubes OS, but with application-level isolation?


Yes, like qubes but without the security.


But also more lightweight, since it doesn't have to run a kernel for each application.


Running regular processes is even "more lightweight".


A containerized process is a regular process, which just happens to be in a different namespace from the init process.


So your definition of a container is simply the namespaces?

No cgroups?


Eh, for this purpose they're essentially the same thing: just kernel metadata on how to group regular processes.
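Both kinds of metadata are plainly visible from userspace; a minimal sketch (Go, Linux-only) that just prints the kernel's bookkeeping for the current process:

```go
// Minimal sketch: namespaces and cgroup membership are both exposed as
// per-process records under /proc; nothing here "runs" a container.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Each /proc/self/ns symlink identifies a namespace this process is in.
	entries, err := os.ReadDir("/proc/self/ns")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		target, _ := os.Readlink("/proc/self/ns/" + e.Name())
		fmt.Println(e.Name(), "->", target)
	}
	// Cgroup membership is one more line of accounting per process.
	cg, err := os.ReadFile("/proc/self/cgroup")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(cg))
}
```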


Well, there is also RancherOS, or k3os.


I’m interested to know how this improves on Moby and ContainerLinux, both of which are “designed for containers”


If EasyOS is specifically for containers, I do not see a container image at the LXC/LXD repository at https://us.images.linuxcontainers.org/

Even Kali Linux has a container image at https://us.images.linuxcontainers.org/

edit: If anyone at EasyOS wants to add a container image, see as an example the PR for Kali, https://github.com/lxc/distrobuilder/pull/179


It's because EasyOS runs containers; it's the host. Did you read the post?


Asking if the person read the post adds nothing to your comment and is against HN guidelines.



