Show HN: Slim – Build and run tiny VMs from Dockerfiles (github.com/ottomatica)
389 points by chrisparnin on June 14, 2019 | 139 comments



Can someone dumb this down for me? What _exactly_ is going on here?

> slim will build a micro-vm from a Dockerfile. Slim works by building and extracting a rootfs from a Dockerfile, and then merging that filesystem with a small minimal kernel that runs in RAM.

> This results in a real VM that can boot instantly, while using very limited resources. If done properly, slim can allow you to design and build immutable unikernels for running services, or build tiny and embedded development environments.


Docker images contain a filesystem for an operating system, minus the OS kernel. This project uses Docker to build a tiny OS, extracts all the files out of the Docker image, adds a small OS kernel and re-packages that as a VM image.
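
For a rough idea of the moving parts, the same trick can be done by hand, something like this (a sketch only, not slim's actual pipeline; the kernel path and flags are illustrative):

    # build the image and dump its filesystem (which contains no kernel)
    docker build -t tiny-os .
    docker create --name tiny tiny-os
    docker export tiny -o rootfs.tar

    # repack the rootfs as an initramfs and boot it with a stock kernel
    mkdir rootfs && tar -xf rootfs.tar -C rootfs
    (cd rootfs && find . | cpio -o -H newc | gzip) > initrd.gz
    qemu-system-x86_64 -m 512 -kernel vmlinuz -initrd initrd.gz \
        -append "console=ttyS0" -nographic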


So why was Docker needed to create a lightweight VM? I thought Docker was supposed to replace VMs.


Docker is not intended to replace VMs. It provides lightweight isolation for processes within a host (while sharing a kernel between them), plus some handy tooling for building images that works well with version control. (Building a CentOS-based image is under a dozen LOC in Docker, and a few hundred LOC in Packer+Kickstart.)
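
For illustration, roughly what "under a dozen LOC" looks like (a minimal sketch; the httpd service and image name are just examples):

    # Dockerfile
    FROM centos:7
    RUN yum install -y httpd && yum clean all
    EXPOSE 80
    CMD ["httpd", "-DFOREGROUND"]

    # build it
    docker build -t centos-httpd .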

Kata Containers (https://katacontainers.io) is more along the lines of a "VM replacement", although it is doing so using VMs.


Docker containers do not yet offer the same security or “bring your own kernel” flexibility as full VMs, such as access to a CPU’s VT-x features.


You think they will, one day?


You can tighten containers, but at the end of the day they are running as native processes on the same kernel. Any kernel vulnerability and the game is over. VMs offer a simpler (if heavier) mental model of security, both between guests and between guest and host.


A jail breakout, whether it's from a process namespace or a VM, is always a security risk, whatever it's breaking out of. Both are sensitive to this. VMs are maybe a bit more mature and handle some things in hardware, but given the recent Intel oops thingies, I wouldn't rely on that too much...

"Containers are less secure" is just FUD. That VMs and containers alike are running on the same CPU is currently a much more real threat.


Docker- probably not. Other runtimes like Kata and Firecracker? Probably.


The Docker daemon itself, sure, but at the OS/kernel level they're doing exactly the same thing, and Docker is probably the most scrutinized implementation out there...


If you have a Docker creation script (that many people distribute), you use this tool to turn it into a disk image you can run in a VM.


I assume it's mostly to make it easier for people to test this out with existing stuff. Docker containers are the standard for taking a base image and adding stuff on top of it.


Working with VM images was a PITA; I don't know if anything has changed. Having the recipe (Dockerfile) and required files in VCS is useful as a reference even when setting up bare-metal machines. So Docker (and Puppet/Ansible) might have a bigger impact on work organisation than on anything else.


You still have the Dockerfile and the required files in a VCS, but now what you have running is a full VM instead of a somewhat isolated process sharing kernel with other containers.

So far so good, for individual/standalone containers. But if you need tightly integrated containers (sharing networks, volumes, ports and so on), things may be a bit more complicated. And I'm not sure about Kubernetes. YMMV


It's not such a PITA anymore. Most distros have automated build systems to create VM images from a config file. For example, we use FAI + Ansible to entirely automate the creation of Debian AMIs for AWS, deployment, and provisioning.


It's a VM for Docker, not a VM using Docker.


I think it's a VM made using the Docker ecosystem (file system segregation, siloing of other dependencies to the container, etc)


Maybe going a bit off on a tangent here, but really, let's all agree that Dockerfiles are just not good enough for building any kind of software over time. There's no lockfile system, and caching is just awkward to work with.

Dockerfiles should be progressively replaced with Nix recipes; then we'd get lockfiles, upgradable dependencies, and some day even reproducible builds!


Project Nix recipes into Dockerfiles and use the Nix hash in the Dockerfile name.


Recently on HN (I think) and related:

- https://micromind.me/en/posts/from-docker-container-to-boota...

- https://godarch.com/

Really like seeing these new usecases for containers -- would have never thought to mix the two technologies in this way.


Darch reminds me a bit of Tiny Core Linux. That uses loopback images for packages, and puts them together with UnionFS, IIRC.


They seem to do very little that SystemD cannot already do with services and overlayFS, with the added benefit of being already available on most systems.


SystemD does not run on Windows, which is still being used in many companies.


Author of Darch here, if anyone has any questions.

Here are my personal recipes: https://github.com/pauldotknopf/darch-recipes


I love that your website is “god arch[itecture].com”. Was that intentional?


lol, nope.

It was a pattern I saw with Hugo and other Go projects.


Well, containers and VMs are two different things (are they not?).


Yes, but the cool thing is definitely the single system image


Not a cluster SSI, though (shared environment, process migration between instances, etc.), as far as I could gather?

https://en.wikipedia.org/wiki/Single_system_image


They are different, yes.


Do some containers run outside a VM? Docker for example "uses operating-system-level virtualization to develop and deliver software in packages called containers."


That's not a virtual machine.

I'd personally blame marketing-speak for the use of "virtualization" at all (unless they refer to their Windows/Mac offerings, which can run a Linux VM as the Docker host, on which the containers are run), but I can see how one could also stretch a definition of virtualization in a way that covers containers.

Sometimes containers are run in VMs, but they are almost defined as "do not require a full VM running an OS, but instead talk to the host kernel".


Interesting. So what makes it virtualization? What's "virtual" or "virtualized" about it?


One could argue that the key to virtualization is that a piece of software runs in an environment that pretends to be something other than the actual base system. A VM hypervisor runs an operating system so that it looks as if it is running alone on a physical machine, with some fake devices. From inside a container the environment is similarly fake: it can't see processes outside the container, its view of the file system and devices is modified, and it looks as if the things in the container were the only things on that kernel.
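
You can see that fake environment with nothing but util-linux, no Docker required (a quick sketch, assuming a Linux box with unshare available):

    # start a shell in its own PID namespace with a private /proc
    sudo unshare --pid --fork --mount-proc /bin/sh

    # inside that shell, the host's processes are simply not visible:
    ps aux    # shows only this sh and ps, both with small PIDs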


So at its core it's just a set of access permissions + hiding of "forbidden" stuff? How about RAM and stuff, and hardware - does it get a true answer if querying its system? Or is that stuff virtualized too?


Super late, but I have a comment[0] that answers this relatively decently, particularly this sentence:

> A docker container is not a VM, it is a regular process, isolated with the use of cgroups and namespaces, possibly protected (like any other process) with selinux/apparmor/etc.

Where virtual machines actually virtualize a whole machine (down to having a BIOS for your imaginary motherboard and a CPU for that imaginary machine), Linux containerization virtualizes the resources and environment available to a single running process via namespaces (pid, user, etc.) and cgroups (available CPU, memory, etc.).

So basically, there's a bunch of code in the kernel (shared between all containers) that enables the accurate reporting of all the "virtualized" resources/environment (cpu, memory, other pids running) -- that code can be exploited, which would be a "container escape". Dirty Cow[1] is an example of one of these escapes.

[0]: https://news.ycombinator.com/item?id=20059875

[1]: https://dirtycow.ninja/


Thanks, this was super useful. I thought all Docker containers were VMs that were one level less virtualized or something, but still essentially a VM. (So I thought that Docker containers saw a virtual box with a virtual BIOS, fake RAM size, etc.) Thanks for clearing this up for me!


No problem, it's really interesting, isn't it? There's so much cool stuff out there related to this; it's the side of the DevOps hype surge that people don't see as often: tons of cool tech powering these newish ways of deploying software.


The kernel


But per the other reply, containers are a lot less "contained" than VMs? I.e. if a program wants to list its set of processes, the host could fuck up and show it some from outside its container, whereas for the same thing to happen with a VM, the VM would have to have code to read that outside stuff, functionality it might not even contain... so VMs seem safer than containers... is that right?


Yep, VMs are safer than containers, because there is a larger barrier between the possibly malicious code running inside the VM and the host than there is in the container context. A container is just another process, bound by limitations via namespaces and cgroups, running on a kernel shared with the host. But don't take my word for it:

> Simply put, containers are just processes, and as such they are governed by the kernel like any other process. Thus any kernel-land vulnerability which yields arbitrary code execution can be exploited to escape a container. To demonstrate this, Capsule8 Labs has created an exploit that removes the process from its confines and gives it root access in the Real World. Let’s take a look at what was involved.

(I don't know much about Capsule8 as a company, but that article[0] is pretty informative and seems spot on from what I read.)

If you can infiltrate a process (let's say a web server) running in a container and know a kernel exploit that can be used to get past these limitations (a "container escape"), then you can use it and get root on the host system.

If that same process was running in a VM (without a container), you need to:

- Infiltrate the process

- Use a kernel exploit to gain root in the VM (assuming the program wasn't already running as root)

- Escape the VM, i.e. use the kernel or whatever else to actually break past the barriers of the hypervisor running the VM (QEMU +/- KVM, Hyper-V, etc.), aka a "virtual machine escape"[1]

- Gain root on the host system (assuming the process that spawned the hypervisor wasn't running as root)

Generally, virtual machine security is pretty good these days, by virtue of being around longer and having more exposure and eyes looking for exploits.

[0]: https://capsule8.com/blog/practical-container-escape-exercis...

[1]: https://en.wikipedia.org/wiki/Virtual_machine_escape
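
A quick way to convince yourself of the "just another process" point, assuming a Linux host with Docker installed (container and process names are just examples):

    # start a throwaway container
    docker run -d --rm --name sleeper alpine sleep 1000

    # from the *host*, the contained process shows up as an ordinary process...
    pgrep -af 'sleep 1000'

    # ...whose isolation is just a set of namespaces listed under /proc
    ls -l /proc/"$(pgrep -f 'sleep 1000')"/ns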


thanks a ton for this and your other reply to me.


Most containers run outside of VMs.


I'm not sure that is true. I suspect that a great many containers are running in OSs that are in turn running in VMs on hosts in "cloud" structures, perhaps eclipsing the number that are running on an OS on bare metal.


Shameless plug about containers: We recently launched https://cloudcron.polyglot.network

Tell us a Docker image, a command with arguments to run and a cron schedule, and we will execute the task and send its STDOUT to your preferred endpoint (either an email or a webhook). We're in beta, are offering $15 worth of execution time as a free trial, and would really appreciate HNers giving it a try.

Our blog post has more details: http://polyglot.network/cloudcron


This is great! I really wish we'd always create a single system image for both Docker and Physical/VM, preferably with minimal/no-init.

This is very useful when trying to create a basic datacenter-specific distro/deployment, preferably PXE-bootable as well.


In the same vein, there is https://github.com/vmware/vic, a Docker engine for ESX/vSphere.

Note: I work at VMware, but not on this project.


Wait a minute. I thought Docker was useful because you didn't want to run a whole VM. But this project turns a Dockerfile into a VM specification? Have we come full circle?


The "building" feature of Docker is very much applicable to VMs/bare metal. There spirit of this feature is just "building a root fs". There is no concept of "containers" or "vms".


I was also thinking the same thing. I also think that the prior person wanting to SSH into their docker instance hasn't quite grasped that you can already do that.

I'm not sure what the value of this is.


Docker can replace a VM for many use-cases, but there are times when a VM is still preferable. If you find yourself in such a situation, this tool allows you to leverage declarative Dockerfiles to build your VM. Pretty handy.


Yep, I could see this being a nice option if you're running something very security sensitive and want some extra defense by isolating the kernel.


Well, containers generally run headless applications/servers. I can see this for VM sandboxed GUI apps.


You should check out that Docker project where you take most of the files from a Docker VM and spin up a Docker container. It's called Docker.


I’d love to see an orchestrator that made switching between bare metal, vm and containers a simple configuration option.

This is a step in that direction. Cool stuff!


There's this[1], though not exactly what you asked for.

[1] - https://github.com/firecracker-microvm/firecracker-container...


Proxmox did a pretty good job of that in my opinion. Albeit it was more inspired by vSphere than Docker (it predates Docker).


Yes, I want the same thing. I do hope dockerfiles won't be the underlying configuration files to describe the machine setups though...


Kubevirt will deploy vms on kubernetes. No bare metal option though.


Would be really cool to use this to predictably build images for booting from PXE.


Try LinuxKit.


Why does everything in node have to have all the dependencies in the world?

``` @sindresorhus/is JSONStream ansi-regex ansi-styles archive-type argparse asn1 async balanced-match base64-js bcrypt-pbkdf bl bluebird brace-expansion buffer buffer-alloc buffer-alloc-unsafe buffer-crc32 buffer-fill buffer-from cacheable-request camelcase caw chalk chownr cliui clone-response color-convert color-name commander concat-map concat-stream config-chain content-disposition core-util-is cross-spawn debug decamelize decode-uri-component decompress decompress-response decompress-tar decompress-tarbz2 decompress-targz decompress-unzip docker-modem dockerode download duplexer3 emoji-regex end-of-stream escape-string-regexp esprima execa ext-list ext-name fd-slicer file-type filename-reserved-regex filenamify find-up from2 fs-constants fs-extra fs-minipass fs.realpath get-caller-file get-proxy get-stream glob got graceful-fs graceful-readlink has-flag has-symbol-support-x has-to-string-tag-x hasbin http-cache-semantics ieee754 inflight inherits ini into-stream invert-kv ip is-fullwidth-code-point is-natural-number is-object is-plain-obj is-port-available is-retry-allowed is-stream isarray isexe isurl js-yaml json-buffer jsonfile jsonparse keyv lcid locate-path lodash lowercase-keys make-dir map-age-cleaner md5-file mem mime-db mimic-fn mimic-response minimatch minimist minipass minizlib mkdirp ms mustache nice-try node-virtualbox normalize-url npm-conf npm-run-path object-assign once os-locale p-cancelable p-defer p-event p-finally p-is-promise p-limit p-locate p-timeout p-try path-exists path-is-absolute path-key pend pify pinkie pinkie-promise prepend-http process-nextick-args progress proto-list pump query-string readable-stream require-directory require-main-filename responselike safe-buffer safer-buffer scp2 seek-bzip semver set-blocking shebang-command shebang-regex signal-exit simple-git sort-keys sort-keys-length split-ca sprintf-js ssh2 ssh2-streams streamsearch strict-uri-encode string-width string_decoder strip-ansi strip-dirs strip-eof strip-outer sudo-prompt supports-color tar tar-fs tar-stream through timed-out to-buffer trim-repeated tunnel-agent tweetnacl typedarray unbzip2-stream universalify url-parse-lax url-to-options util-deprecate uuid which which-module wrap-ansi wrappy xtend y18n yallist yargs yargs-parser yauzl ```


These are the first-level dependencies:

    chalk: log formatting
    dockerode: docker API
    download: file download?
    fs-extra: functions like mkdirp, emptyDir
    hasbin: check if bin exists in PATH
    js-yaml: yaml parser
    mustache: templating
    node-virtualbox: VB API
    progress: terminal progress bar
    simple-git: Git API
    sudo-prompt
    tar
    uuid
It looks reasonable at first sight. Of all these, fs-extra is the only one you can argue should be part of the standard lib.

A lot of the other dependencies are simply cruft, for example, `safe-buffer`, `safer-buffer`, `buffer-alloc`, `buffer-alloc-unsafe` all patch the same issue and haven't been necessary since ~2016 / node 6.0. Same for `sort-keys`: object key sort order was baked into the ES2015 spec and has been the default behaviour since the beginning of the decade.

As mentioned by other commenters, this is a result of an extremely easy-to-use packaging system, coupled with a culture of sharing and reusability. The dark side of it is (justified) laziness and reinvention of the wheel - the core functionality of many of these modules can be written in one or two lines of code, but it is indeed faster and safer to just import something that exists and has been tested. The vast number of choices means it's hard to find standard solutions, and this also encourages developers to create their own 'improved' version of everything, in a self-reinforcing loop. The language itself has been in constant change, meaning new flavours of previously stable modules pop up to support new patterns (promises, generators, await, classes, etc etc). Then you get egos, marketing and corporate sponsorship added to the mix :)


This is exactly why we don't allow our employees to use Node.js for company software development, even though in theory a fair chunk of the runtime is our own software that we maintain. But the npm ecosystem has a really awful signal-to-dogturd ratio, and developers appear to put very little effort into critically analyzing their dependencies.

The topic has come up, but I'm generally against it; sure, we could spend all the resources on doing all the filtering and analysis and change management that it would take to establish a sane package base that we could officially support for internal development work... But why? We do it for other languages, but Node has a lot of "my first programming language" bs going on in the ecosystem. I strongly suspect that allowing Node development would be a net negative for our company.


I'm glad I don't work at your company. "developers appear to [...] I strongly suspect"... I hope the decision to exclude an entire programming language that's popular almost everywhere is based on serious study rather than appearances and suspicions.


No, don't be silly; responsible companies don't pick and choose languages to exclude based on suspicions.

No... responsible companies exclude ALL programming languages except those that they can responsibly support -- you don't want the new guy writing a critical piece of infrastructure in Haskell or Ada or Lisp or something because they feel it's morally superior, and then find out it can't integrate with some critical management system because it doesn't have the right bindings, or that it doesn't run on your upgraded production environment, or whatever. When you have tens of thousands of programmers, it's guaranteed to happen unless you proactively prevent it.

No, we have a handful of programming languages where we can guarantee that everything works, that the important infrastructure is accessible, that every library will remain supported until we replace it, that every vulnerability can be patched within a given timeframe, that code will run correctly on every machine, and so forth. Taking on a new language means dedicating a whole team of engineers to maintaining support for that language, indefinitely.

Some companies may be able to YOLO their way through decisions based on what's popular at any given moment, but when you operate at any serious scale you have to be a bit more... responsible.


This is what you get when you combine a really easy-to-use package management ecosystem with a really bad standard library. You don't see this in Python, for example, because it's not so easy to create packages (it's getting better, though) and because its standard library is massive.


Javascript's standard library isn't bad, it's perfectly adequate for its intended use case - acting as a lightweight scripting language for the web, within a browser. Javascript as a system or backend language was a mistake, but there's no putting that genie back into the bottle.


You're confusing JS and NodeJS. NodeJS has more stdlib stuff; opening files, for example.


Virtually everything is a dependency in JavaScript, whether you're using node.js or not. One need look no further than your citation of `open()` as the proof that the nodejs environment "has more stdlib stuff". If the operating environment can run in a context where file IO happens, a native file open function is about the most basic baseline there is. It's absolutely not a good look for "well, we have `open()`!" to be the poster child of the JS stdlib.

After `left-pad` and `flatmap-stream`, attempts to justify the state of affairs in the JS ecosystem are patent absurdities. They show starkly that the platform vendor needs to offer a reasonably-robust basic toolkit, and that cultures of "every function should be published as a library!" are a massive risk factor.

A good first-party standard library should be considered a security requirement for every application. Dependencies should be brought in with care and attention, not in a massive indiscriminate orgy of nested modules spraying every function into its independent own library and resulting in every node.js application requiring its own 500MB+ folder of libraries to even start.

Think it doesn't get worse? People are now using node.js to distribute "business cards". Arbitrary JavaScript execution on your local user account. Has science gone too far?! Someone at npmjs.com sure has by allowing this kind of thing. [0]

The technical world is crumbling. Who can fix it?

[0] https://blog.bitsrc.io/malicious-npm-development-kit-a02401e...


> It's absolutely not a good look for "well, we have `open()`!" to be the poster child of the JS stdlib.

...which it isn't, given that the "fs" module has a wide variety of features much beyond an equivalent to open().


Yesterday a friend of mine told me about how they got a Node.js application from a vendor that was about 3 MiB, and after running npm install, it was over 1 GiB.

I half-jokingly said that Node apps are the new ZIP bombs.


Back in the late 90's, I got into a USENET flame war, because the smallest, easily pruned ObjectStudio executable was an entire 4MB. Not small enough. Someone around that time got Squeak Smalltalk down to around 380KB. Not small enough. Back in the late 90's, you could still meet people who would insist that even your hyper-complex business app should be written entirely in C, because anything else was sinfully slow and wasteful. Any language with a VM was automatically too slow to be useful at all.


Isn’t that because every Node dependency stores its own dependencies within itself? So you could literally end up with multiple copies of the exact same version of the same library.

I’ve never understood why they didn’t go with the Maven approach: all dependencies stored in a central location, separated by version.


I believe packages resolving to the same version (within a specified semver range) are hoisted and stored in the root of ./node_modules. Differing versions are nested within the consuming packages and therefore duplicated.

This can be particularly bad when a popular package has a semver major change (even if, for example, support for an outdated version of node is dropped), many libraries will lag behind in updating to the latest major version and you will have many duplicated copies of a popular package.
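
If you want to see this in a project, two quick checks (using safe-buffer from the list upthread as an example):

    # where (and how often) a package appears in the dependency tree
    npm ls safe-buffer

    # how many physical copies actually landed on disk
    find node_modules -type d -name safe-buffer | wc -l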


good luck to your friend with repeatable installations :P


"If you wish to make apple pie from scratch, you must first create the universe"

- Carl Sagan

the npm ecosystem takes this quite literally, for better or worse.


I'll credit npm with this, truly, when there's a package for each one of the Peano postulates. (As an actual functional dependency.)


If you need repeatable installations, wouldn't node be the wrong tool? I mean, you'd have to freeze everything yourself and then those libraries become _your_ problem. Ugh. That's a hell nobody wants.


To be pedantic, it's "wouldn't npm be the wrong tool" (it isn't, necessarily; I believe lockfiles provide you with reproducible builds).

Vendoring/copying them is another way to achieve this (and means you don't need to depend on npm or its lockfiles).

Regardless, those libraries are your problem whether you vendor/copy them or not.

Read more: https://research.swtch.com/deps
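
Concretely, the lockfile route is just (assuming a committed package-lock.json):

    # install exactly what package-lock.json pins; fails instead of
    # silently updating if the lock and package.json disagree
    npm ci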


You’d just have a local cache; you can pull updates from authors if needed.


Is it really that strange? The transitive dependency graph for my iOS and Android apps is similarly complex.


This has good potential. What are the limitations on the VMs?


Some limitations in terms of the VMs and providers:

* If the size of the initrd is too large, it cannot properly unpack into the VM's RAM, so the RAM size must be increased accordingly. We could also change [boot params](https://www.lightofdawn.org/blog/?viewDetailed=00128), use shared disks, etc.

* For hyperkit, Apple's vmnet requires sudo to create a bridge interface on the host. We've played with a version that uses vpnkit and port forwarding (like linuxkit/Docker for Mac), but this adds a lot of complexity to the image, so we opted for the simpler approach.

* We would like a better template mechanism for reusing and extending base images. Right now, we support base image reuse, with extensions through Docker build args; ideally, we would want something like %include support in Dockerfiles.

* Finally, we're investigating how to make images work well on multiple providers. For example, Ubuntu does not play nice with hyperkit out of the box, but works fine on VirtualBox and KVM.


What about swarm mode and orchestration? Also, I presume that, like LinuxKit, there will be configs for different clouds, e.g. DigitalOcean and AWS run ISOs slightly differently.


Yes, one use-case is making it easier to set up/tear down clusters for local testing. Two practical scenarios for us: 1) autograding Ansible/configuration scripts, 2) CI for instructions/tutorials that involve clusters/devops: https://builds.sr.ht/~ottomatica/job/69644#task-report

Cloud-ready images is an important direction, and on the horizon.


If this works, this is fantastic. Getting away from the stupidly complex abstractions around Docker is a welcome change, especially if we can still package and deploy immutable images. We already manage containers like tiny VMs, so ditching the abstractions should simplify life a bit.


> stupidly complex abstractions

Not sure what you mean.. could you give some examples?

In my experience, people prefer Docker over VM's because they _like_ the abstractions and tooling associated with it. It's a lot friendlier to developers and makes immutable infrastructure a much more realistic goal for ops folks, IMO.


Have you looked at LXD/LXC? I find it to be a great compromise between the high overhead of VMs and the complex abstractions around Docker.


Yes, it's still just more abstractions. If you look at the way people use ECS, allocating specific resource limits to each container, it's basically a micro EC2 node. And for me, the only reason I use containers is to make it easier to package and run applications immutably. If I can do that without "containerisms", all the better. It also seems like VMs would solve a good deal of multi-tenancy issues.


SystemD does that.


This is cool. For dev, the Docker runtime consumes an enormous amount of host system resources. Even with a 16 GB RAM host machine, Docker is really resource-heavy for a development environment. If this can cut down on host system resource usage, that's a major win.


Are you on a non-Linux host? I assume Docker is a lot less heavy on resources on Linux than on Mac and Windows.


They have to be. Docker is extremely lightweight on a Linux host.


I assume they are running it on Windows - I've been running Docker Desktop on Windows for years, and it's backed by a Hyper-V Linux VM, which does seem to use a lot more CPU than running Docker on Linux.


This sounds interesting, I want to look into it further. However, my immediate thought is what about the naming conflict with the very popular PHP framework?

http://www.slimframework.com/


Yet another different take: footloose – Containers that look like Virtual Machines!

https://github.com/weaveworks/footloose

(Disclaimer: I'm the author of footloose)


This is great! It might save me some of the time I was planning to spend learning Packer. I have a Docker-based project and I wanted to add VM generation to the CI pipeline.

Really excited to play with this tonight


Can slim be used to create an iso? So:

Dockerfile > slim > iso


From the GitHub README:

> `$ slim build images/alpine3.8-simple`

> This will add a bootable iso in the slim registry.
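
I haven't checked where the registry lives on disk, but once you locate the ISO it should boot in any ordinary hypervisor, e.g. (the path below is a guess, adjust to your install):

    # boot the generated ISO directly; the registry path is hypothetical
    qemu-system-x86_64 -m 512 -cdrom ~/.slim/registry/alpine3.8-simple/slim.iso -nographic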


What would an ideal real-world scenario look like for using these micro VMs?


Docker containers don't have a robust security boundary, due to the kernel sharing that they do. These micro VMs combine the low resource cost of a container with the solid security boundary of a VM, which is very useful in a multi-tenant architecture.

AWS Fargate and AWS Lambda run entirely on micro VMs.


I’m curious how this approach compares with Kata Containers. Very cool.


Are we just going in circles now?

Why not just start with a tiny vm and call it a day?


Tooling matters. Building and managing VMs has, historically, been more work.


Debootstrap has been doing the work with pretty much one command for 10+ years.


Does this allow for a docker image to be run inside a browser tab?


Naturally. With this and a bit of hacking: https://bellard.org/jslinux/


Forgive my lack of technical depth, but is it actually running client-side, in the memory allocated to my newly-opened browser tab, or on bellard.org's server and syncing the input/output to my browser?


The virtual machine (emulated CPU, devices) and the OS are running fully client-side:

https://bellard.org/jslinux/tech.html


It's a real VM running in your browser, in Javascript.


Technical depth? Did you click the link?

> Run Linux or other Operating Systems in your browser!

It runs in your browser.

If you clicked the link you would also see demo links that run in your browser.


No need to be abrasive, many companies offer things "in your browser" yet they merely send you the frontend and instruct your browser to connect to their backend.

Such as gaming SaaS thingies.


Thank you!! I always wanted to have a way of quickly ssh'ing into my docker image with some sort of virtual box implementation so I could poke around. I always felt the debug tools lacking. This is perfect. Can't wait to try it out!


Here comes!

Getting a shell in a new container created from an image:

    docker run -it --entrypoint=/bin/sh ${image}

Running a shell inside an already-running container:

    docker exec -it ${container} /bin/sh

Running sshd inside a container to let you peek inside is bad taste, and bad security, too.


What would it take to get this running on iOS and Android?


You'd never be able to run this on iOS. It's far too locked down.

You might be able to add QEMU support to this, and then run it on an Android device if you have root. But it would perform terribly because mobile chips generally don't have virtualization extensions and ARM as a virtualization host is a pretty immature platform.

TL;DR - far too much to be practical.


Why JavaScript instead of something more performant?


Normally the language for doing this kind of system building would be... Bourne shell, or Perl/Python.


I was actually expecting Bash scripts before I looked at the GitHub repo.

TBH, I think the code would be a lot simpler if it was just Bash.


What for? It's just glue code, it doesn't actually run the micro-VMs.


Because I don't want to have to install the massive nodejs runtime just to glue things together.


I'm just curious how you feel about Java, .NET, Python and Ruby? Ruby is a hair smaller, but the others are actually bigger than the Node.js runtime, with .NET and Java being significantly bigger.

Or, do you only run software hand crafted in assembly?


You're painting a false dichotomy. There are lighter-weight options, like Go.


There's also Rust, C/C++, D, and others. It's not exactly a false dichotomy as I'm pretty sure GP and others who bring up these tropes against Node do use other scripted or higher level languages that have an even bigger footprint.


This is not even remotely a concern for any enterprise that would use this. Fretting over this is like fretting over the gas left in the nozzle when you fill your car.


Run this in a docker env then?

    # node:12-alpine rather than node:12, so that apk is available
    docker run -it --entrypoint /bin/sh -v "$PWD":/bla -v /run/docker.sock:/run/docker.sock node:12-alpine
    apk add git docker cdrkit libvirt-daemon qemu-system-x86_64
    npm install https://github.com/ottomatica/slim
    cd /bla && /node_modules/.bin/slim


Nothing stops the tool from being published as a single binary in Homebrew using pkg or nexe. That will probably happen once it gets enough traction; it's actually a lot better for maintainers to support a single version of Node.
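
Something like this, for instance (a sketch; the target triples and output path are illustrative, and it assumes a bin entry in package.json):

    # bundle the CLI plus a Node runtime into standalone executables
    npx pkg . --targets node12-linux-x64,node12-macos-x64 --out-path dist/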


Then you're going to have a hard time in the post Node.js era. You also probably want to avoid looking at what many popular desktop apps use behind the scenes :)


I avoid any desktop app that uses electron or similar. Electron is still a hog, and most things written in electron have alternatives. I don't use the slack desktop app for this reason, as an example.


So you use the browser app which uses approximately the same amount of memory / resources?


Minus the browser. Each Electron app isn’t just the running web app but also its own browser instance, whereas a web app running in a browser doesn’t consume its own browser instance.

That browser instance alone can make quite a noticeable difference.


Stuff used to boot off of a floppy disk!


Pretty sure you still can, if you throw out all the drivers and libraries we've invented since then. Like, we waste lots of space, but a lot of it really is going to useful features.


Used to have self-replicating persistent malware hidden inside files that fit on floppy disks.


And now we have compromised NPM packages and Dockerhub accounts. What's your point?


My point is that people used to be able to program computers, and now it's all embedded browsers all the way down, which is why the recommended way to install Ubuntu from macOS involves downloading and running (as root!) a 330 MB Electron app.


And it took ages. What’s your point?


A neat idea



