After spending a scant few minutes looking at it, here are the things I like about this project so far:
- It has a "How and why are we different" page. In the age of "I made a meta package on top of Ubuntu and called it a new OS", it's refreshing to see a Linux distribution come right out and say "here's what we do that separates us from the other 700".
- It actually is different. Recognizing the nigh-uselessness of separate user accounts (for personal computers), embracing simple GUI tools over terminal wankery, eschewing the legacy UNIX file hierarchy.
- They use the ROX filer, the only file manager for Linux with AppDir support and the centerpiece of the unfortunately long-defunct ROX Desktop.
Maybe it's because of my limited experience with containers but I can't imagine why on earth you would prefer GUI tools (especially a graphical shell) in a container.
I might be off-base, but when I hear someone talk about running GUI tools in a container, I think about being able to effectively fork application configs/installations and environments. I run an extremely minimal Arch installation on my production machine. It's... nice, I guess. I'd love to experiment with it more and make something even more customized, but I can't risk any downtime.
And a large portion of my computer is command line tools, sure. But I'm also running EXWM, I'm also running Firefox, I'm also running Tiled, and Blender, and so on, and so on. I don't want to get rid of my command-line tools, but I'm not only running command line tools. I want to be able to download a game, put it in a container, mess around with my drivers until it runs well, and then delete the container when I'm done and know 100% all of the customization just went away.
This was what originally got me excited about Docker, until I dug into it and realized Docker kind of didn't work particularly well for that.
My understanding is if I sat down and did the research, I could build something like that with regular Linux tools, but it's time consuming and as interested as I am in the underlying tech, I just know very, very little about how this stuff works.
But I always perk up whenever I hear someone talking about running graphical applications in containers, because in the back of my head I'm mapping that to some kind of fictional computing utopia where I can have complete isolation between processes and treat my computer like a Git repo.
I have been wanting something like you describe for forever. Honestly, the closest thing that I've found to this that works out of the box is Crostini in ChromeOS.
I don't love ChromeOS, but if I started building what you're looking for, I'd look at Crostini for inspiration.
Crostini can spawn gui apps through Wayland instead of pure X. That takes effort and I wish they would contribute it back.
Application isolation and the ability to upgrade separate applications and components, with their dependencies, separately from each other. Fewer issues where "updating X" just broke apps A, B and C... now you have to add a PPA for apps A and C, build B from source, oh crap, it's no longer actively maintained... so you switch to app D, which was forked a couple of months ago.
Not to mention the ability to easily support different build toolchain combinations, etc. Right now, my preferences are flatpak, snap, PPA, repo, in that order.
Not really looking to play with a new Linux distro; Manjaro is next on my list. Currently running Pop!_OS, which has been nice (just jumped this past month, hadn't tried a Linux desktop in 5+ years before that). I've been relatively happy.
That said, my biggest issues so far:
* needed to update the kernel and Mesa drivers before putting in the 5700 XT video card.
* needed to update the kernel for WiFi support (Intel AX).
* rainbow puke from RGB controllers; the Gigabyte (X570 Aorus Master) support is all but worthless, and the open-source project I saw was actually for Windows. For the Lian Li O11 Dynamic Razer Edition case there are open-source Razer drivers, but I'll need to set up a Windows drive to capture some data in order to support the specific device. I haven't even looked into the Corsair RAM yet (which is actually the biggest eyesore at the moment).
I really regret not building another black box. My first two choices of cases without windows were sold out, so I went for the "pretty" case option. Which would be great if I were running Windows, but I have no intention to. A lot of money on RGB fans (all matching), water cooling, RAM, etc... and none of the controllers have good Linux support. I would switch to another controller, but the only one I keep finding is from a German company and doesn't seem to actually be sold anywhere, and it wouldn't cover the RAM or case anyway.
Interesting point from their "How and why are we different" is the part about their heritage:
> Puppy heritage
> [...] it must be stated that Easy is also very different, and should not be thought of as a fork of Puppy. Inherited features include the JWM-ROX desktop, menu-hierarchy, run-as-root, SFS layered filesystem, PET packages, and dozens of apps developed for Puppy.
Discoverability is only possible if the tool is trivial. Otherwise, you have to hide functionality inside the UI to prevent the creation of an unusable mass of buttons on the GUI's main screen. This also kills fluency, by preventing people from developing muscle memory.
> - They use the ROX filer, the only file manager for Linux with AppDir support and the centerpiece of the unfortunately long-defunct ROX Desktop.
I was so happy when ROX Desktop was still active and alive. It had exactly the features a file browser needs, it was blazing fast, and the UI was really intuitive. It actually inspired me to maintain a ~/Apps folder for years, so reinstalling my system mostly just meant copying over my home folder.
The company CoreOS renamed their distro CoreOS to Container Linux (have fun googling help for that), and later CoreOS was bought by Red Hat.
Container Linux was forked off by the community as Flatcar Linux. Red Hat's Container Linux is getting rolled into Fedora Silverblue or something? This paragraph I'm not really sure about.
I'm actually looking for a good container hosting OS right now. I remember hearing some stuff about CoreOS being deprecated or something, but I can't find anything concrete?!
Do you know what the situation is?
I also looked a bit at RancherOS today, which looked pretty cool, but it seems to use 10x the memory of CoreOS...
Fedora CoreOS is intended to eventually provide a suitable replacement for ContainerLinux.
Red Hat ships a variant of RHEL called RHEL CoreOS, but the only way to run it is as part of OpenShift (for instance via https://try.openshift.com) where it is the default OS for machines which are managed as part of the cluster, so it’s not a real ContainerLinux equivalent (which you can run individually).
If you're mainly just looking for something with Docker preinstalled VMWare's PhotonOS[0] may be an option. It's a stripped down CentOS/RHEL based distro that's targeted at being a container host. I believe they've said they'll also package other container runtimes in the future as the market shifts.
A quick look shows the ISO as ~4GB, which is a bit concerning though - CoreOS is around 450MB, and RancherOS only 135MB (although it's a bit difficult to compare, since they both download stuff during boot).
That's not the install size. From the documentation there are multiple install options ranging in size so they just decided to package it all in one. "Minimal" install is like 400-500 MB iirc.
I waved off Silverblue when I learned one must reboot to install packages. Maybe that's not a big deal with a server OS, but having to unlock the full-disk encryption, then restore my desktop and possibly reauth any apps is a bridge too far for me. To say nothing of the heartache involved when you realize that, haha, you didn't know the full package list you needed, so now you are iterating through that process.
This is what I'd agree with. The concepts of user and group are hand-me-downs from systems built for sharing networked resources, and they pretty much require some kind of centralized AAA to make any sense. They're a model designed for protecting the system from users and users from each other, but that doesn't make any sense on a personal computer, where the more pressing need is to protect the user from malicious applications.
> Running applications as restricted users has been standard practice for decades
...as a way of preventing users from interfering with the system or other users in multi user systems. Running applications as a user different from yourself is an ugly hack we've started doing because we don't have actual control over what our applications can access, so things like ransomware are possible despite not having system level access. Since Plan9 never took off, containerization of applications is the next best thing.
What I'm saying is, running in a restricted user account does absolutely nothing to protect the user running in the restricted account from malicious applications. That's how the user/group model fails in personal computing.
Once you add backup into the picture, local users are great. My main account can have all the ransomware it wants; all the backups are going to stay intact, so I can restore the files.
* In real life there is the “sudo hole”, but this can be fixed within the current user concept.
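To make that concrete, here's a minimal C sketch of the separate-backup-user idea; the "backup" user name and the /srv/backups path are just assumptions for the example. A process running as the everyday account (ransomware included) simply gets EACCES from the kernel when it tries to touch the archives:

```c
/* Sketch of the separate-backup-user idea above. Assumes a hypothetical
 * local user "backup" owns /srv/backups with mode 0700; anything running
 * as the regular desktop user (and therefore any ransomware it launches)
 * is denied by ordinary file permissions. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Try to open a backed-up archive as the ordinary (non-backup) user. */
    int fd = open("/srv/backups/home.tar.zst", O_RDWR);
    if (fd < 0) {
        /* Expected: EACCES, since only "backup" (and root) can traverse
         * the 0700 directory. */
        printf("open failed as expected: %s\n", strerror(errno));
        return 0;
    }
    printf("unexpected: the backups are writable from this account\n");
    return 1;
}
```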
Why won’t local backups be sufficient against ransomware? Is it because of privilege escalation attacks?
I was under the impression that even with zero-days, using a modern distribution and auto-updates will minimize the amount of time the system is vulnerable, so most of the time it will be sufficient.
And containerization is becoming standard practice now precisely because it makes more sense for certain situations, where the user abstraction has proven less useful and more cumbersome. That was the point of this subthread.
I like being protected from writing to /dev/sda by mistake. If an OS is going to expose its guts to the world it makes sense to have permission controls on the vulnerable parts.
Hopefully we won't go back to the Win95/98 era of everything running as single user!
Having services run isolated as their own users is not merely a good security mechanism; it provides a clear and simple mental model of what is what: a clear permissions barrier that's enforced pretty strictly by the OS.
Moreover we see separate user accounts more and more; even on small devices like phones it makes sense to have, for example, separate "private" and "business" accounts.
> does that mean processes have failed?
Nah, that's too general of a take. There are two more specific failures. First up, people fail to realize the present-day crop of containers are re-inventing processes. "Those who do not learn history, etc, etc."
Secondly, there's a significant failure of certain key features (like IP stack, FS handlers, etc. - in general, NAMESPACES) having been provided almost exclusively in kernel, and thusly requiring either superuser access or complex work-arounds (like FUSE) to manage. Plan 9 did it the right way; on P9, processes == containers.
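To see that kernel dependency in a few lines, here's a rough C sketch (assuming Linux with glibc, nothing specific to any distro in this thread): unsharing a network namespace alone is refused for an ordinary user, while pairing it with a new user namespace only works on kernels that allow unprivileged user namespaces.

```c
/* Rough sketch: namespaces are a kernel facility, so an ordinary user
 * can't just take one. unshare(CLONE_NEWNET) alone needs CAP_SYS_ADMIN;
 * adding CLONE_NEWUSER works only where unprivileged user namespaces
 * are enabled. */
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (unshare(CLONE_NEWNET) == -1)                 /* usually fails with EPERM */
        printf("unshare(CLONE_NEWNET) as plain user: %s\n", strerror(errno));

    if (unshare(CLONE_NEWUSER | CLONE_NEWNET) == 0)  /* user ns grants the caps */
        printf("got a private user+net namespace without root\n");
    else
        printf("unprivileged user namespaces disabled here: %s\n", strerror(errno));
    return 0;
}
```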
But having separate private and personal accounts on your phone is very different from application resource namespacing. They're essentially orthogonal concerns.
In the case you're making, a user (a real actual human user) has different settings when _using_ the phone in two contexts. In the latter case, applications are restricted to sandboxes with well-defined interactions between each other's memory, processes, devices, sockets, and files.
So, you run three different Gmail accounts in Chrome, and two different Office 365 logins in Firefox. Not only can the Firefox process, under your user ID, read/write Chrome's cache, local data, etc. and vice versa, but so can your calculator app, your CPU temperature widget and your solitaire game.
LXC can improve a bit on this, as can "containers" (LXC or otherwise restricted processes).
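A tiny C sketch of that point, in case it isn't obvious how little stands in the way (the Firefox profile path is just a typical default, purely an assumption for illustration): any process running under your UID can read another application's private data.

```c
/* Nothing stops the "calculator app" from doing this: same UID means
 * same access to another application's files. The path is a typical
 * Firefox default and purely illustrative. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *home = getenv("HOME");
    if (!home)
        return 1;

    char path[512];
    snprintf(path, sizeof path, "%s/.mozilla/firefox/profiles.ini", home);

    FILE *f = fopen(path, "r");       /* no prompt, no capability check */
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);          /* dump another app's private config */
    fclose(f);
    return 0;
}
```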
And resource sharing these days for SaaS and PaaS either occurs on the hypervisor level or the application level — the (usually Linux) OS is seen as a liability and necessary dependency for the application rather than a secure environment.
Also, current isolation technologies on desktop tend to be a lot less secure than mobile. If you assume Fuchsia, Android, iOS to be the next generation of OSes, then the trend is definitely to "secure by default". Whitelisting permissions instead of everything being allowed out of the box. Even the current generation of Linux containers is more of a bunch of resource management hacks, compared to e.g. hypervisor sandboxing or to a lesser extent, BSD jails.
Not really. For shared resources, it did not take long to realize that isolation was necessary for memory; storage has been managed via filesystems with access rights, since sharing files between processes is handy; and shared networking has been a problem from day 1 (port number assignment).
Containers just homogenise the paradigm for all resources: strict isolation by default, explicit sharing otherwise.
A container _is_ a process, just with namespaces (and cgroups) attached to it on startup so that it has no visibility of other processes in the system (aka it is sandboxed).
So no, processes haven't failed. Anything that runs on your system is, or is part of, a process.
Containers run processes in a context where the processes within a container only see others in the same container, thanks to a generalization of the OS's process accounting.
Containers aren't even a thing like that though. They don't run anything per se. Implementations vary but it can be as simple as an extra struct field in the process list.
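For a concrete (if bare-bones) picture of that on Linux, here's a hedged C sketch, not how any particular runtime does it: clone() the child into its own PID and mount namespaces and it sees itself as PID 1, just like a container's init. Run it as root, or add CLONE_NEWUSER for the unprivileged variant.

```c
/* "A container is just a process": give the child its own PID and mount
 * namespaces at clone() time and it believes it is PID 1. This is a toy
 * illustration, not how Docker/LXC/Easy Containers are implemented. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg)
{
    (void)arg;
    /* Inside the new PID namespace: pid is 1, the parent is invisible. */
    printf("inside: pid=%d ppid=%d\n", getpid(), getppid());
    return 0;
}

int main(void)
{
    pid_t pid = clone(child, child_stack + sizeof child_stack,
                      CLONE_NEWPID | CLONE_NEWNS | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");   /* EPERM if not root and no CLONE_NEWUSER */
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```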
> why re-inventing the wheel and create yet another containerization system?
Because that is how we make progress. Try different approaches, learn, and evolve the ecosystem. That is how we got usable containers in the first place; it's not a new idea, and variants have been around for decades. But only now have we seen it evolve into something usable.
At least they are trying to solve problems. It might not be the best/right solution, hell it might not be an improvement, but if we don't try, we will never learn.
As a long time BSD and UNIX user, I disagree with the premise that containers have only recently been usable. Even on Linux, solutions like Proxmox made containers incredibly useful in the pre-LXC / pre-Docker days. And that’s discounting FreeBSD jails, Solaris Zones, etc which have existed a lot longer.
If anything, Docker just made containers trendy (they gave more talks at more conferences, etc.) when before, containers were seen as a niche toy compared to virtualisation, something few had heard of and fewer had bothered to look into. However, being trendy doesn’t mean better nor easier to use.
I get that, but this is not an "unsolved problem" IMO. Anyway, anybody is 100% free to use their time as they prefer; it just makes me wonder whether there are/were real technical, unpatchable limits in the already existing solutions, or whether it's just a (totally reasonable) case of NIH.
I could be misunderstanding, but isn't this actually just a new OS for use in any containerization system, rather than a containerization system itself? This seems more in line with Alpine Linux or Google's Distroless images than something like a new Docker.
> EasyOS is designed from scratch to support containers. Any app can run in a container, in fact an entire desktop can run in a container. Container management is by a simple GUI, no messing around on the commandline. The container mechanism is named Easy Containers, and is designed from scratch (Docker, LXC, etc are not used). Easy Containers are extremely efficient, with almost no overhead -- the base size of each container is only several KB.
Ah I see, I must have glanced over that paragraph there (which is impressive given that it is the one of the first of them). Thanks for the clarification.
> Run as root. This is controversial, however, it is just a different philosophy. The user runs as administrator (root), apps may optionally run as user 'spot' or in containers as a "crippled root" or user 'zeus'. The practical outcome is that you never have to type "sudo" or "su" to run anything, nor get hung up with file permissions.
You have individual accounts for individual people. Use sudo if you need to elevate permissions - that gets fired over to your syslog server, so if you screw up you know what you did. If someone else screws up, you can see who it was and either fix it or contact them to find out what they were trying to do (likely both).
Based on the language ("the user") and the focus on the GUI, I think this OS is designed more for single-user workstations, rather than multi-user servers. This philosophy of "root by default" is also implemented by Puppy Linux, which was created by the same person.
Single-user workstations are usually pets, and it's much easier to manage a pet when every single unit of execution is separated from other units of execution.
It's also much easier to manage each application with its own root for everything, rather than multiple applications installed into a single root.
What's ridiculous is taking a security model designed for multiuser university mainframes in the 1970s, riddling it with 40 years of hacks to get around places where it's inconvenient, and insisting that it is the One True Way to run a personal laptop. I'm glad that some people are willing to fight dogma and be experimental.
That's because when you actually need the handgun you'll want it to function as expected when you pull the trigger. A safety is just extra complication that provides no significant benefit if you're already handling the firearm like you're supposed to (which is to say, never pointing it at anything you don't want to destroy). Even without external safeties, modern firearms often do contain internal safeties to ensure that they only go off when the trigger is operated, as opposed to being dropped or something.
- It has a "How and why are we different" page. In the age of "I made a meta package on top of Ubuntu and called it a new OS", it's refreshing to see a Linux distribution come right out and say "here's what we do that separates us from the other 700".
- It actually is different. Recognizing the nigh-uselessness of separate user accounts (for personal computers), embracing of simple GUI tools over terminal wankery, eschewing of the legacy UNIX file hierarchy.
- They use the ROX filer, the only file manager for Linux with AppDir support and the centerpiece of the unfortunately long-defunct ROX Desktop.
I'm anxious to find out more.