High-process-count support added to master (dragonflybsd.org)
84 points by tiffanyh on Aug 15, 2017 | 26 comments



My question is: why am I paying cloud providers for virtual machines with some imaginary virtual CPU count (or dynos, or whatever) when I could be paying for M processes running my server executable (capped at N threads per process)? Why can't I just write a server, bundle up the exe and assets, and run it somewhere? Why do I have to futz about with admining, patching, and hardening the OS when that's not what I care about?

Edit, to clarify: it just seems like an OS with the capability to host a large number of user processes, as here, would really allow an order-of-magnitude reduction in hosting cost, i.e. if a machine can host 1,000,000 paying accounts vs. 10 VPS/containerized apps.


> I could be paying for M processes running my server executable (capped at N threads per process)?

Isn't this essentially AWS Lambda or shared hosting? The trade-offs will of course be their limitations: max execution times, lack of flexibility, and vendor lock-in.

I found that AWS was too expensive because of their bandwidth pricing. It's cheaper to just get a dedicated server with 5-10x the capacity that I need.

Doing what you said requires developer time, which is more expensive than hardware, which is probably why AWS is so expensive. Some day that price will probably come down, but for now it's probably cheaper to just throw hardware at the problem.

> do I have to futz about with admining, patching, and hardening the OS

I think this is a problem [partially] addressed by configuration management. For example, download some pre-made Ansible configuration for a LAMP server and change just the settings you care about. Or load up someone's Docker image (you'd still need to do updates).


Or perhaps a unikernel, where the OS and the "app" are pretty much one and the same.


Not to sound cheeky, but it sounds like you want to pay for a timeshare on a mainframe.

(And I agree, it'd actually be nice if cloud providers framed cost based on process count, not vCPUs. Heroku kind of does that with their concept of "dynos".)


Given the incessant push toward treating the web as a front end for "cloud apps", timesharing is back, baby. Only now the terminal has been replaced by the web browser.


Yes, that's exactly what I'd like, with something more modern than COBOL of course :)


> Why can't I just write a server, bundle up the exe and assets, and run it somewhere?

You can do exactly this easily on Azure, and probably on others too eventually:

https://azure.microsoft.com/en-us/blog/announcing-azure-cont...


There is also Hyper.sh (https://hyper.sh/), which offers the same service as Azure Container Instances, and it seems to work pretty well for a side project I'm trying it on.

Azure Container Instances were only available for testing through their web shell last I checked a while ago.


Welcome to the Docker revolution.

Historically it was hard to distribute software. You can't just copy over ELF binaries, since they need dynamic libraries. Dependency hell is real; people invented .deb to solve many of those problems, but debs were always intended to be installed in the global scope, making it hard to package user software.

Roll forward to today: with namespaces you can also "containerize" the disk, which takes care of shared libraries. Docker images are a better delivery mechanism than raw ELF files, or even debs. Hosting Docker images is inherently cheaper than hosting virtual machines. I think Heroku was the first large service to realize that.
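
To make "containerize the disk" concrete, here's a minimal sketch in C of the underlying mechanism: a private mount namespace plus a chroot into an unpacked image directory. This is illustrative only and needs root; Docker's real machinery (pivot_root, overlay filesystems, image layers) is more involved.

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  /* usage: ./contain <unpacked-image-dir> <cmd> [args...] */
  int main(int argc, char **argv)
  {
      if (argc < 3) {
          fprintf(stderr, "usage: %s <rootfs> <cmd> [args...]\n", argv[0]);
          return 1;
      }

      /* Private mount namespace: mounts we make are invisible to the host. */
      if (unshare(CLONE_NEWNS) == -1) {
          perror("unshare");
          return 1;
      }

      /* Switch the root to the image directory; the image's own /lib and
       * /usr now satisfy the dynamic linker, so the binary needs nothing
       * from the host filesystem. */
      if (chroot(argv[1]) == -1 || chdir("/") == -1) {
          perror("chroot");
          return 1;
      }

      execvp(argv[2], &argv[2]);  /* the command must exist inside the image */
      perror("execvp");
      return 1;
  }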


I don't do Docker but from what I've heard it's not designed for security? Which means it wouldn't work for this scenario? Or am I misinformed?


"Security" without context is ambiguous and vague. As for the parent comment, which asks about shipping and paying for "M" processes: Docker is a reasonable (if not great) solution, since containers, namespaces, and plain process isolation all, one way or another, share the same kernel and have mostly the same benefits/drawbacks.


I think "Security" in this context would roughly mean "able to run code from >1 user as securely (or more) than if they were running on separate VMs". Which AFAICT docker & linux cannot provide, but something like triton can.


Docker is just regular processes limited by a bunch of Linux kernel isolation mechanisms, which means that you're subject to potential kernel exploits that would allow a "neighbor" container to run code outside its container and then take control of your own.

There are some ways of mitigating this, but the simplest one would be for the provider to run a VM for each container; then you get the security guarantees of regular VMs (though you still have to trust the provider to keep the host OS up to date).
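
To make "regular processes limited by kernel isolation mechanisms" concrete, here's a minimal sketch in C (Linux-only, needs root). The two namespace flags below are a small subset of what Docker actually combines with cgroups, capabilities, and seccomp:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      /* New PID and UTS namespaces; Docker adds mount, network, IPC,
       * and user namespaces plus cgroups and seccomp on top. */
      if (unshare(CLONE_NEWPID | CLONE_NEWUTS) == -1) {
          perror("unshare");
          return 1;
      }

      pid_t child = fork();  /* first child lands in the new PID namespace */
      if (child == 0) {
          sethostname("container", 9);          /* only this UTS ns sees it */
          printf("inside: pid=%d\n", (int)getpid());   /* prints pid=1 */
          return 0;
      }
      waitpid(child, NULL, 0);
      printf("host view: child was pid=%d\n", (int)child);
      return 0;
  }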


Is a Docker container simply a process, or is it more heavyweight than that? It certainly can be, so isn't characterizing it as merely a "process" a bit disingenuous?

https://unix.stackexchange.com/questions/216618/what-do-the-...


Processes, not process, and I was talking in terms of security. But even in terms of performance, yes, it mostly is. There are some Docker features that can be more expensive (NAT and the layered filesystem), but they are optional. A "Docker container" itself is just a group of processes to which the kernel applies a different policy than the default.

I'm not sure what that link is supposed to show, can you be more clear?


> I don't do Docker but from what I've heard it's not designed for security? Which means it wouldn't work for this scenario?

Are you asking if that's what you've heard?


You can: https://www.nearlyfreespeech.net/

SSH in, upload your executable (or compile it on the server; they have many compilers pre-installed) and run it. The OS is managed by them.


That's pretty much what Docker is about.


  xeon126# uptime
  1:42PM  up 9 mins, 3 users, load averages: 890407.00, 549381.40, 254199.55

Seeing load averages of ~900,000 blows my mind.


> Seeing load averages of ~900,000 blows my mind.

More impressive is that you can run "uptime" and it actually responds with output in a reasonable amount of time under said six-digit load average.


Just make sure that your process destruction doesn't involve a lock in kernel space. 900,000 threads waiting for a lock... yikes!

https://randomascii.wordpress.com/2017/07/09/24-core-cpu-and...
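
A toy illustration of the failure mode, not the Windows-specific lock from the linked post: when every thread funnels through one lock, the critical sections serialize, so total time grows with thread count no matter how many cores you have. Imagine NTHREADS at 900,000:

  /* build: cc -pthread demo.c */
  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 64  /* scale this up mentally to 900,000 */

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static long teardowns;

  static void *worker(void *arg)
  {
      (void)arg;
      pthread_mutex_lock(&lock);    /* every exiting thread queues up here */
      teardowns++;                  /* stand-in for per-process cleanup */
      pthread_mutex_unlock(&lock);
      return NULL;
  }

  int main(void)
  {
      pthread_t t[NTHREADS];
      for (int i = 0; i < NTHREADS; i++)
          pthread_create(&t[i], NULL, worker, NULL);
      for (int i = 0; i < NTHREADS; i++)
          pthread_join(t[i], NULL);
      printf("%ld teardowns, all serialized on one lock\n", teardowns);
      return 0;
  }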


6-digit PIDs? These aren't actually stored in decimal, right?


No, but they are constrained to be less than or equal to 999,999.

See http://gitweb.dragonflybsd.org/dragonfly.git/blob/586c43085f...
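
The gist of the linked code, as a simplified sketch: the 999,999 ceiling matches the commit, but the allocator below is illustrative, not DragonFly's actual implementation.

  /* PIDs are plain integers (pid_t), but allocation wraps at a
   * decimal-friendly ceiling instead of growing toward 2^31 - 1. */
  #define PID_MAX 999999   /* six decimal digits */
  #define PID_MIN 1        /* illustrative lower bound */

  static int nextpid = PID_MIN;

  int pid_alloc_sketch(void)
  {
      int pid = nextpid++;
      if (nextpid > PID_MAX)   /* wrap; a real kernel also skips live PIDs */
          nextpid = PID_MIN;
      return pid;
  }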


When FreeBSD moved from 16-bit PIDs to 5-digit PIDs back in about 1999, I got the impression that one reason for not using the full 32-bit space was compatibility with tabular formatting in lots of userspace tools.


signed int, IIRC (cf. pid_t)


> With the commits made today, master can support at least 900,000 processes with just a kern.maxproc setting in /boot/loader.conf, assuming the machine has the memory to handle it.

They are just four bits away from hitting a really big number.
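
For reference, here's a sketch of checking that ceiling at runtime via sysctl (BSD-specific; assumes sysctlbyname is available, as on DragonFly and FreeBSD):

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/sysctl.h>

  int main(void)
  {
      int maxproc;
      size_t len = sizeof(maxproc);

      /* kern.maxproc is the tunable from the quoted commit message;
       * it can be raised at boot via /boot/loader.conf. */
      if (sysctlbyname("kern.maxproc", &maxproc, &len, NULL, 0) == -1) {
          perror("sysctlbyname");
          return 1;
      }
      printf("kern.maxproc = %d\n", maxproc);
      return 0;
  }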



