
> across unix multiuser environments getting used anymore for servers

I guess it depends on the servers. I'm in academic/research computing and single-user systems are the anomaly. Part of it is having access to beefier systems for smaller slices of time, but most of it is being able to share data and collaborate between users.

If you're only used to cloud VMs that are set up for a single user or service, I guess your views would be different.




> If you're only used to cloud VMs that are set up for a single user or service, I guess your views would be different.

This is overwhelmingly the view for business and personal users. Settings like what you described are very rare nowadays.

No corporate IT department is timesharing users on a mainframe. It's just bare-metal laptops or VMs on Windows with networked mount points.


Multi-user clusters are still quite common in HPC. And I think you're not going to see a switch away from multi-user systems anytime soon. Single user systems like laptops might be a good use-case, but even the laptop I'm using now has different accounts for me and my wife (and it's a Mac).

When you have one OS that is used on devices from phones, to laptops, to servers, to HPC clusters, you're going to have this friction. Could Linux operate in a single-user mode? Of course. But does that really make sense for the other use-cases?


you could potentially create multiple containers on that machine which are single-user and give one to every user who needs access. CPU/memory/GPU can be assigned in any way you want (shared/not shared). Now no user can mess up another user.
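For illustration, a sketch of what that could look like with plain Docker (the names, sizes, and GPU assignment are made up; `--gpus` assumes the NVIDIA container toolkit is installed):

    # one long-lived container per user, with a fixed slice of the machine
    docker run -d --name alice-env \
        --cpus=4 --memory=16g --gpus 'device=0' \
        -v /data/alice:/home/alice \
        ubuntu:24.04 sleep infinity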


Isn't that just reinventing multiuser operating systems? Normal Linux already has the property that no user can mess up any other user (unless they are root or have sudo rights)


no it is not


It's not "that machine", it's a cluster of dozens or hundreds of machines that is partitioned in various ways and runs batch jobs submitted via a queuing system (probably slurm).
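For anyone who hasn't seen it, a minimal sketch of that workflow with SLURM (partition name and binary are hypothetical):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=gpu          # hypothetical partition
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --gres=gpu:1
    #SBATCH --time=04:00:00
    srun ./my_simulation             # made-up binary

    # from the login node:
    sbatch job.sh
    squeue -u "$USER"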


Not containers, but cgroups, and that is how HPC clusters work today. You still need multiple users though.


is it? most HPC (if GPU clusters count) are probably in industry and managed by containers


HPC admin here.

Yes. First, we use user-level container systems like apptainer/singularity, and these containers run under the user's own account.

This is also the same for non-academic HPC systems.

From schedulers to accounting, everything is done at user level, and we have many, many users.

It won’t change anytime soon.
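As a rough sketch of what "accounting at user level" means day to day (standard SLURM commands, hypothetical user name):

    squeue -u alice                                   # what alice has queued or running
    sacct -u alice -S 2024-01-01 \
        --format=JobID,JobName,Elapsed,MaxRSS         # her past usage from the accounting DB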


I thought most containers shared the same user, i.e. `dockremap` in the case of docker.

I understand academia has lots of different accounts.
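For context, that's the user-namespace remapping case: with `"userns-remap": "default"` in /etc/docker/daemon.json, the daemon creates a `dockremap` user and maps container root into its subordinate uid range. A quick way to see it (the uid range is just an example):

    grep dockremap /etc/subuid                  # e.g. dockremap:100000:65536
    docker run -d --rm --name demo alpine sleep 60
    ps -o user,pid,cmd -C sleep                 # the container's "root" shows up around uid 100000 on the host
    docker rm -f demo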


Nope, full usermode containers (e.g.: apptainer) run under the user's own context, and furthermore under a cgroup (if we're talking HPC/SLURM at least) which restricts the user's resources to what they requested in their job file.

Hence all containers are isolated from each other, not only at process level, but at user + cgroup level too.

Apptainer: https://apptainer.org
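A small sketch of what that looks like from inside a job (the image path is made up):

    # the container is just another process of *your* user, not a child of a privileged daemon
    srun apptainer exec ~/images/analysis.sif id -un    # prints your username, not root
    srun apptainer exec ~/images/analysis.sif id -Gn    # and your normal groups, so data permissions apply as usual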


I think an admin would better understand the system if there were only one subsystem doing a particular type of security and not two. Two subsystems doing security would lead to more problems down the road.


For HPC, there are two different contexts where users need to be considered - interactive use and batch job processing. Users log in to a cluster, write their scripts, work with files, etc. This is your typical user account stuff. But they also submit jobs here.

Second, there are the jobs users submit. These are often executed on separate nodes and the usage is managed. Here you have both user and cgroup limits in place. The cgroups make sure that the jobs only have the required resources. The user authentication makes sure that the job can read/write data as the user. This way the user can work with their data on the interactive nodes.

So the two different systems have different rationales, and both are needed. It all depends on the context.
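A quick way to see both mechanisms at once from inside a running job (cgroup v2 assumed; exact paths vary between sites):

    id -un                                             # the job runs as *you*, so file access follows your permissions
    cat /proc/self/cgroup                              # ...while limits come from the slurm-managed cgroup
    scontrol show job "$SLURM_JOB_ID" | grep -i tres   # what was actually requested/allocated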


If we forget how the current system is architected, we are looking at two problems. The first is that Linux capabilities are also dealing with isolating processes so they have limited capabilities, because user-based isolation is not enough. The second is that local identity has no relation to cloud identity, which is undesirable. If we removed user-based authentication and relied on capabilities only, with identity served by the cloud or Kubernetes, it could be a simpler way to do authentication and authorization.


I'm not sure I even follow...

The primary point of user-authentication is that we need to be able to read/write data and programs. So you have to have a user-level authentication mechanism someplace to be able to read and write data. cgroups are used primarily for restricting resources, so those two sets of restrictions are largely orthogonal to each other.

Second, user-authentication is almost always backed (at least on interactive nodes) by an LDAP or some other networked mechanism, so I'm not sure what "cloud" or "k8s" really adds here.
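For example, on a typical interactive node (sssd/LDAP is just one possible backend):

    getent passwd "$USER"                       # resolved through NSS (e.g. sssd/LDAP), same answer on every node
    grep "^$USER:" /etc/passwd || echo "not a local account"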

If you're trying to say that we should just run HPC jobs in the cloud, that's an option. It's not necessarily a great option from a long-term budget perspective, but it's an option.


there is no reason for users to be maintained in the kernel.


Can you elaborate on that?


Containers rely on many privilege separation systems to do what they do; they are in fact a rather extreme case of multi-user systems, but they tend to present as “single” user environs to the container’s processes.


> they are in fact a rather extreme case of multi-user systems

Are they? My understanding was that by default, the `dockerd` (or whatever) is root and then all containers map to the same non-privileged user.


Good software hides complexity. The user does not have to understand user/group permissions, suid, etc.


> No corporate IT department is timesharing users on a mainframe

Not a mainframe perhaps, but this sentiment is otherwise flat wrong, because that is how Citrix and RDS (formerly Terminal Server) do app virtualization. It's an approach in widespread use both for enterprise mobile/remote access and for thin clients in point-of-sale or kiosk applications. What's more, a *nix as the underlying infrastructure is far from unusual.

I have first-hand insider knowledge of two financial institutions that prefer this delivery model to manage the attack surface in retail settings, and a supermarket chain that prefers it because employee theft is seen as a problem. It’s also a model that is easy to describe and pitch to corporate CIOs, which is undoubtedly a merit in the eyes of many project managers.

One of the above financial institutions actually does still have an entire department of users logged in to an S/390 rented from IBM. They’ve been trying to discontinue the mainframe for years. I’m told there are similar continuing circumstances in airline reservations and credit card schemes; not just transaction processing, but connected interactive user sessions.

This is what corporate IT actually looks like. It is super different to the tech environments and white-collar head offices many of us think are the universal exemplar.


I wonder if they might be more common than you think. You will never see someone standing up at a conference and describing this setup, but there are millions of machines out there quietly doing work which are run by people who do not speak at conferences.

Where I work, we have a lot of physical machines. The IT staff own the root account, and development teams get some sort of normal user accounts, with highly restricted sudo.



