jor1k: OR1000 Javascript Emulator Running Linux (s-macke.github.io)
209 points by iso-8859-1 on Sept 17, 2013 | 69 comments



From the project wiki:

'In the beginning Google Chrome was the fastest. After more and more optimizations were implemented, Firefox was a little bit faster than Google Chrome. When IE10 became compatible with my code, it was the fastest. After implementing the worker thread, Firefox 22 became superior, being 3 times faster than the other browsers. For some reason this advantage was lost with Firefox 23. Instead, Google Chrome managed with version 29 to take this position with 30-50 MIPS. At this moment, changing one line of code in the Step() function can reduce or increase the speed by a factor of 3. The reason for these speed oscillations is the tremendous complexity of today's JIT compilers and their black-box behavior, which makes it almost impossible to write really fast code.'


Fantastic. I love that it's OpenRISC based rather than the "obvious" choice of yet another x86 emulator. And framebuffer support, though slow.


Sadly, the choice of OpenRISC also means it's hard to get Xorg working, since it wasn't tested on OpenRISC. Also, the gcc OpenRISC target has a number of bugs, I've heard.


http://simulationcorner.net/X11.png

I am working on it, but it is extremely unstable. The picture shows the best shot I ever got.


See it as an opportunity to fix things ;)


Reminds me a lot of this http://bellard.org/jslinux/


Bellard's jslinux, despite being pretty impressive too, is not open-source.


If I remember correctly, the source is not obfuscated either


Yeah, there are a bunch of other emulators. I try to push this one because it's open-source and it implements an open-source instruction set.

Here's my list, please add to it if you know any others: https://gist.github.com/ysangkok/5606032

Arm-js has a control panel which is really nice. It would be sweet if the open-source emulators could share some code, but they are pretty different code-wise right now...


Soon Chrome running inside Chrome.


that... is so meta.


asm.js type error: non-expression-statement call must be coerced @ http://s-macke.github.io/jor1k/js/worker/fastcpu.js:927

Firefox 25.0a2 (2013-09-11)

EDIT: Interestingly in Chrome Canary it starts out running at around 50 MIPS and then at some point v8's jit recompiles/deopts some code and it drops down to 12 MIPS and stays there.

JavaScript... sigh



Oh wow, you wrote it all by hand? That's really impressive! :)

If the standalone asm.js validator isn't yelling about that and Firefox 25 is, you should let the people who maintain the validator know. They're supposed to be in sync.


Yes, I programmed the asm.js core by hand. It was not too hard; it took one day. The asm.js code is also a little bit outdated; it could be around 20% faster. I have never used the standalone asm.js validator. I didn't even know it existed. The optimization work in JavaScript is really hard. Every few weeks there is a new browser with new mysterious speed issues. Firefox 22 without asm.js was the fastest six weeks ago (30-60 MIPS). Then everything changed, and I don't know why. Even IE is faster than Firefox (without asm.js) right now.
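
For context, a minimal hand-written asm.js module looks roughly like this (a generic illustration of the format, not jor1k's actual core):

  // A minimal hand-written asm.js module (illustration only, not jor1k's core).
  // The "use asm" pragma and the coercions (x|0 for int) are what let the
  // engine validate and compile it ahead of time.
  function MiniCore(stdlib, foreign, heap) {
      "use asm";
      var ram = new stdlib.Int32Array(heap);

      function step(pc) {
          pc = pc | 0;                  // coerce the parameter to int
          var instr = 0;
          instr = ram[pc >> 2] | 0;     // fetch a 32-bit word from the heap
          // ... decode and execute instr here ...
          return (pc + 4) | 0;          // advance the program counter
      }

      return { step: step };
  }

  // Link against a 64 KiB heap (asm.js wants a power-of-two heap size here).
  var core = MiniCore(window, null, new ArrayBuffer(0x10000));
  core.step(0);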


Very impressive work. I saw Brendan Eich speak at jQuery UK in April and he said asm.js was meant for compilers and people should not be programming it directly... I couldn't help but smile and think 'oh yeah?'


Yeah, I've had the same experience. I applaud your persistence! Hopefully asm.js will eventually let you avoid babysitting your code with every browser release.


Technical merits and uses aside, I'm still floored that this kind of thing is even possible.

I've loaded it up and there's Monkey Island running on ScummVM running in a Linux VM running in Chrome's Javascript VM. Stunning.


The greatest potential is of course a cloud service where people can store their disk image and hibernated emulated RAM, then switch to another browser on another machine and continue working like nothing happened.


Two things:

1) I am waiting for a version using networking. I know there are security issues but there are also workarounds.

2) My notebook is running hot, so I checked, and Chrome's CPU usage is at 25% even when I am not doing anything in the console.


I'd have to look at the code, but it might not be too awful to get tun/tap to work over WebSockets...

Regarding the heat, I wonder if the architecture has support for idle states. If so, the VM should "clock down" by executing its loop from a timer instead of full-bore. I wonder if this is already happening, though.
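
Something along these lines, with cpu.Step() and cpu.idle as assumed names rather than jor1k's actual API:

  // Sketch of "clocking down": run a bounded batch of instructions per tick
  // instead of a tight loop, so an idle guest doesn't pin a whole core.
  // cpu.Step() and cpu.idle are assumed names, not jor1k's actual API.
  function runSlice() {
      var steps = cpu.idle ? 1000 : 1000000;      // far fewer steps while the guest sits idle
      for (var i = 0; i < steps; i++) {
          cpu.Step();
      }
      setTimeout(runSlice, cpu.idle ? 10 : 0);    // yield to the event loop between slices
  }
  runSlice();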


See https://github.com/s-macke/jor1k/issues/3

I tried adding an extra serial port some months ago, but I didn't succeed.

It would be really cool if you made this. Here's a use case I had in mind:

You go to emulator.html#http://otherwebsite.com/packages.json . The emulator's JavaScript part detects the fragment, and downloads a list of dpkg packages to the HTML5 file system. The emulator boots up, checks the HTML5 file system over 9P, and installs all those packages.
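
A rough sketch of the browser-side glue for that idea; emulator.installPackages and the 9P hand-off are purely hypothetical names, not existing jor1k API:

  // Hypothetical glue for the use case above. The fragment carries the URL of
  // a package list; emulator.installPackages() is an assumed hook, not real API.
  function bootFromFragment(emulator) {
      var target = window.location.hash.slice(1);   // e.g. "http://otherwebsite.com/packages.json"
      if (!target) return;

      var xhr = new XMLHttpRequest();
      xhr.open("GET", target, true);
      xhr.onload = function () {
          var packages = JSON.parse(xhr.responseText);
          // Expose the list to the guest (e.g. over 9P or the HTML5 filesystem)
          // so an init script inside the VM can install each package.
          emulator.installPackages(packages);
      };
      xhr.send();
  }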

This way, you could test any binary package over the web without fear, and it would be decentralized: you wouldn't have to modify the emulator.

But for jor1k, we'd need networking and 9P/HTML5 support (like in Arm-js). For Arm-js, just the networking would suffice, as the 9P support is already there.

9P in Arm-js: https://github.com/ozaki-r/arm-js#emulator-features


Saw that issue, hope to eventually respond to it with a pull request.

Are you Sebastian Macke? [Edit: scratch that, I see he's commented elsewhere.] Either way, get in touch (info in my profile). I'm curious to hear what issues you ran into in trying to add another serial port. I'm not terribly familiar (read: not at all familiar) with OpenRISC, but I'd imagine it's just a matter of creating a second instance of UARTdev in system.js and adding it to the 'ram' object at an appropriate base address. If OpenRISC is similar to ARM, you may need to add the second uart to your platform/chip/board init as well so the system is aware of it.
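
If that guess about the layout is right, the change might look something like the sketch below; the constructor arguments, the AddDevice helper, the base address and the IRQ numbers are all assumptions, not verified against jor1k's source.

  // Hypothetical second serial port, following the pattern described above.
  // UARTDev's arguments, ram.AddDevice, the 0x90001000 base and the IRQs are
  // guesses for illustration only.
  this.uart0 = new UARTDev(this, 0x90000000, 0x2);   // existing console UART
  this.uart1 = new UARTDev(this, 0x90001000, 0x3);   // second UART at a free base address
  this.ram.AddDevice(this.uart0, 0x90000000, 0x1000);
  this.ram.AddDevice(this.uart1, 0x90001000, 0x1000);
  // The kernel's device tree / board setup would also need a matching second
  // ns16550 node so Linux knows the new port exists.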

Regarding networking, I intend to actually implement eth.js and wire it to a WebSocket, then on the other end I'll write a receiver which hands the frames to/polls a TAP device. At that point, you can do whatever you want on the backend with that interface. I'd like to start this weekend, but (how often does one get to say this?) I'll be at the governor's mansion and that's going to take a bunch of my time.

I'm not sure I follow on the package management bit. Would these be emulator extensions, or packages which the image would use?


These features sound like fun, but networking is not on my priority list right now.

The problem is the server and the security issues. At the moment everything runs out of the box even on the simplest web server; this would change. And at the moment I don't want to implement features which you can't use on the demo website. But of course, you are free to do so. You don't even have to change anything in the kernel image, because everything is ready to test: just exchange the dummy ethernet driver with the real one from the or1ksim project.


I think the main issue is that the console and framebuffer use canvas elements which are continuously refreshed by a javascript timeout. This tends to use up all of a single core, no matter how little of the screen is actually changed. Perhaps using requestAnimationFrame could improve the CPU utilization.
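
If that turns out to be the issue, the fix would be the usual requestAnimationFrame pattern; redrawFramebuffer() and framebufferDirty below are hypothetical names:

  // Repaint only when the browser is about to render a frame, and only if the
  // guest actually touched the framebuffer since the last repaint.
  // redrawFramebuffer() and framebufferDirty are hypothetical names.
  function refreshLoop() {
      if (framebufferDirty) {
          redrawFramebuffer();                       // copy guest VRAM into the canvas
          framebufferDirty = false;
      }
      window.requestAnimationFrame(refreshLoop);     // throttled to the display rate, paused in hidden tabs
  }
  window.requestAnimationFrame(refreshLoop);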


I'm curious about the CPU usage as well. The Chrome worker process on my system is pegged at 108% just sitting at an idle console :(

Still, this is extremely cool, especially seeing an architecture from the OpenCores project being used.


As written in the wiki, Firefox 23 is not a good choice at the moment. Firefox 22 emulates this machine three times faster, and possibly Firefox 24 will return to this peak. Chrome is fine.


  # scummvm


How the heck does a 7MB image come with an entire OS, emulator, and a fully functional game?! You have to wonder how much of our stuff is bloated.


Go back to the late 80s/early 90s and small System 7 UNIX clones like Mark Williams Company's Coherent 3.2 could install, complete with their entire userland, dev sys, docs, and formatters in around 10Mb of disk space and run on a 286 with 640Kb of RAM. No TCP/IP stack (it used UUCP for comms) and no X11, but if you fast-forward to 1992 you could have Coherent 4, with TCP/IP and X11, running on a 386 with 4Mb of RAM and installed in 20Mb of disk.

Even the mammoth (by the standards of the day) SCO Open Desktop 3.0, with X11/Motif and a bagload of other stuff, could install in less than 200Mb of disk space and run reasonably smoothly on a 386/33 with 16Mb of RAM. That was a full-blown workstation grade UNIX distribution with all the trimmings. These days word processors are bigger.


Back in the day, you could boot the entire BeOS kernel, filesystem, GUI, network stack, window & task manager, and web browser off of a 1.44mb floppy disk.

Ever since then, I've been unable to shake the feeling that something has gone catastrophically wrong with the way we architect software.


I'm seriously considering switching to an AmigaOS mail client + UAE (Amiga emulator) instead of Thunderbird because of how slow and bloated Thunderbird is - there are at least two AmigaOS mail clients still in development, and given that they are written to still run on the old m68k Amigas (in addition to newer PPC systems), I have an inkling they'd come out of it quite well.


Well, when you add up all the backdoors for the various spy agencies worldwide, plus the mass of obfuscation code, I think that accounts for the bloat. Good luck finding it amongst the 18 GB of Android sources, or similar for the other systems.


Yes, all 7MB... The image is compiled to target the OpenRISC CPU.

But the impressive part is the emulator itself... The emulator is a full SoC in JavaScript, based on the OpenRISC instruction set.


A large number of us just felt incredibly old.

(My first machine with a proper multitasking, GUI-based OS was an Amiga, where the OS fit in a 256KB - later 512KB - ROM and two 880KB floppies.)


'Bloat' is 'any feature I'm not using right now'.

We expect more out of our software, so we write more code to do it, and we complain about bloat because no individual person uses, or expects to use, every feature in every piece of software. However, when anyone comes along to remove features, people complain even worse because, of course, nobody can agree about which features to axe! Everyone is using different subsets!


And that "bloat" also comes from features in libraries that the developers use. It's much of what allows development to happen at a much more rapid rate than it did in the past.


Nah, bloat is any feature I'm not using ever; like java, or gnome-tracker.

e: these are just two things that were installed on my system despite not having software that needed them.


I am using Firefox 23.0; the program appears stuck at the decompressing step while showing 0 MIPS. Also, the 'Restart with asm.js' button doesn't appear to work.


Works for me on Firefox 23.0.1 after a fairly long delay. Also worked on Chrome 29.0.1547.66.

Edit - one oddity: on Chrome the console does not register the spacebar.


You'll need to specify which version of Chrome. Space works fine for me on Version 29.0.1547.66 m


Version 29.0.1547.66 m


Google could take a cue from this and implement Google Native Client (NaCl) in JavaScript, like they do with Dart.


That's less an issue of running native code (you can do that with asm.js or emscripten), and more an issue of the numerous additional platform APIs provided in NaCl that HTML5 doesn't have: USB, Bluetooth, GLES...


So realistically, what can you do with something like this especially since there is no networking? Can it do anything fun with WebGL?


In general a JavaScript-based Linux virtual machine should allow you to run software that would otherwise be difficult or impossible to run in the browser (which right now can be done for a lot of software by compiling it to JavaScript with https://github.com/kripken/emscripten). Data could be transferred to the VM for processing and then back to "normal" JavaScript using serial I/O. This might prove a useful kludge.

For a simple example of where it could be used consider a webmail provider who wants to get GPG encryption into their web UI. The provider could embed a Linux VM in the HTML document to let the user do encryption with the well-examined native version of GPG. (Although native GPG running inside this VM would not be safe from, e.g., malicious JS in banner ads.)

Right now the emulation is too slow to do these kinds of things practically, however, and it's hard to say whether or not it will get sufficiently fast before other tools for running native software in the browser fully mature.


tptacek tells me that secure JS is never going to happen so I don't see a use case there. Yes, proof of concept is cool, until someone steals your private key. No thanks.

I suppose, though, you could do non-secure stuff, like add a Python interpreter or C compiler and teach people to code inside the VM. GitHub could provide in-browser syntax checking using a real interpreter/compiler.



You don't need the whole of Linux just to run gpg though...


I imagine it would be good for online sysadmin training courses.

Read the lessons, follow along in another tab. No messing around with VMs etc.


Honest question:

How do you write so much JS code? (i.e. editor, testing, debugging, running, dev environment, etc.)

Thank you.


Editor: Normal text editor with syntax highlighting

Testing: Firefox and Chrome

Debugging: Function console.log

Optimization: JIT inspector for Firefox (incompatible since FF 23) and profiling tools from both browsers

So, nothing special is involved. Ignoring the redundant parts, the source code contains 4000 lines. The complicated part was building the Linux image and libraries with a toolchain that is still in beta.


Curious to know more about the toolchain bit. Have you messed around with getting it to work with OpenEmbedded or Poky?

Also please do feel free to contact me out of band (info in profile). I'd like to implement ethernet support, but if you're halfway through I'd hate to duplicate your effort.


The toolchain (uClibc development) I am using is explained here: http://opencores.org/or1k/OpenRISC_GNU_tool_chain

My scripts for the toolchain are published here: https://github.com/s-macke/jor1k-toolchain-builder

Unfortunately the scripts are broken, but they show the overall complexity of building the Linux image.

The ethernet device in jor1k is a dummy right now, but the kernel is already prepared to work with this device. So all I have to do is implement the correct device, which is defined here: https://github.com/openrisc/or1ksim/blob/or32-master/periphe...

But nothing has been done yet. It is not my main priority right now because it would need a server script, and GitHub does not allow this. There could also be a lot of security issues if I added this to a demo.


Thanks. I've seen eth.js and was just waiting for OpenCores to approve my membership so that I could start working on it. I didn't realize the spec was on github.

I need to look a bit more at how everything is laid together, but my intention is to have eth.js write ethernet frames to a WebSocket, then on the server side read/write frames from/to a TAP device.
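
The browser half of that might look roughly like this; the NIC-facing names (ethdev.ReceiveFrame, SendFrame) are assumptions, and the server-side TAP bridge is not shown:

  // Rough sketch of the client half: ship raw Ethernet frames over a WebSocket.
  // ethdev is the assumed emulated NIC object; its ReceiveFrame method and the
  // SendFrame callback are illustrative names, not existing jor1k API.
  var socket = new WebSocket("ws://example.org:8080/eth");
  socket.binaryType = "arraybuffer";

  // guest -> network: called by the emulated NIC with an outgoing frame
  function SendFrame(frame) {                      // frame is a Uint8Array
      if (socket.readyState === WebSocket.OPEN) {
          socket.send(frame.buffer);
      }
  }

  // network -> guest: hand incoming frames back to the emulated NIC
  socket.onmessage = function (event) {
      ethdev.ReceiveFrame(new Uint8Array(event.data));
  };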

As for the demo, there's no reason why you can't just turn off the socket for the static demo. Otherwise, I'm intending to host something on DigitalOcean to prove it works. I'm sure whatever it is it will be very simple so I don't wind up with a huge bandwidth bill, but I'm hoping I can think of something that's clever and easy on the wallet. Maybe I'll make it add all machines to one giant subnet or something... we'll see.


Sounds good. WebSockets are the best solution, I suppose. If you need help, you can find me in the #openrisc channel on chat.freenode.net.


Thank you so much for the reply. I was thinking I was doing something wrong with the cycle: edit, save, reload in browser, test it, repeat.

Great work and very impressive.

Thanks!


The problem is that almost nobody writes JavaScript which doesn't manipulate the DOM in some way, so you almost always need to test in-browser. Otherwise, there's no reason you couldn't write automated tests to exercise most of this in a headless v8. That won't be good for profiling, but it will at least be good for validation/regressions.
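
For example, a regression check of that sort could look roughly like the sketch below, assuming the CPU core can be loaded outside the browser; the CPU constructor, Step() and the pc field are illustrative names:

  // Minimal headless regression check (Node style). CPU, Step() and pc are
  // illustrative; the real core would be loaded from the emulator's own files.
  var ram = new Int32Array(1 << 16);
  for (var a = 0; a < ram.length; a++) {
      ram[a] = 0x15000000;              // fill memory with OpenRISC l.nop words
  }

  var cpu = new CPU(ram);               // hypothetical constructor for the core
  for (var i = 0; i < 1000; i++) {
      cpu.Step();                       // hypothetical single-instruction step
  }

  if (cpu.pc === 0) {                   // trivial sanity check: the core made progress
      throw new Error("regression: core did not advance");
  }
  console.log("ok: pc = 0x" + cpu.pc.toString(16));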


Yes, I thought about it. The DOM manipulation and the hardware emulation are separated. Unfortunately, with the implementation of worker threads, it became more complicated to run this code in a headless v8.


If this becomes a thing, I could dump nitrous.io and still code in browser.


Could be useful for teaching Vim / Git in the browser.


is the "Restart with asm.js core" button working?


Yes, with Firefox 23 I go from 6 MIPS to 90 (which I suppose means there is some bug without asm.js, since it shouldn't improve that much).

This work is giving a new meaning to "impressive".


Not a bug, asm.js is basically machine code, so it should run nearly at native speed.


Yes, at least for me on Firefox 23


'rm -rf *'


`kill 1` is fun


impressive!


This is cool but what is it good for?



