It's a cool idea. I found the contrast with electron helpful:
> While solving some of our issues, Electron was rapidly increasing in size and hunger, so despite it being open-source soon joined the rest of the software that we did away with. Our focus shifted toward reducing our energy use, and to ensure reliability we began removing dependencies.
It's interesting that the virtual machine is neither very fast nor very memory efficient. If you really want max speed plus portability, it's hard to beat a restricted subset of C, especially since almost every platform has a highly optimized C compiler. Something like:
"Code must conform to C89 with -nostdlib, and can only link to libuxn (that we wrote). And use uxn_main() instead of main(), as libuxn defines main. No binary dependencies allowed, all source files must be in project directory, and only uxn build system can be used"
Then the authors would only need to write libuxn for each platform they support, which is certainly easier and faster than writing a whole emulator.
But I am guessing this solution did not satisfy other criteria, perhaps things like "playful" and "build from first principles". It's a pity though - distributing apps in source code form instead of emulator binary would make them much more modifiable by end users.
Since they were on a boat with a Raspberry Pi and not much battery power or internet, in addition to execution speed and portability, they were probably also concerned with:
- speed of compilation and linking (on small machines like Raspberry Pi)
- binary size of anything that might have to be updated or shared over the internet
- having to troubleshoot emergent issues without the ability to look up documentation or download updates or tools from the internet
In this situation, a simpler language on a simpler VM is probably going to be faster to develop with than compiling/linking a subset of C. And after the initial implementation of the VM, it might present less opportunity for an unintended interaction of leaky abstractions in your libuxn and your toolchain to ruin your day, on a day when you don't have internet connectivity to check Stack Overflow or to update some buggy dependency.
I have a Raspberry Pi 2 (one of the first ones, very low-powered) and tried compiling orca.c on it with gcc. It took 7 seconds and produced a 38 KB executable. This is more than I expected, but it's still pretty reasonable. However, I can see how this could be annoying with very rapid iteration cycles.
(Btw, compiling alone (no linking) takes 5.4 sec, so compilation time dominates. And an optimized build (-O3) took over a minute; not something to be done as part of development.)
But then I remembered the lighter compilers and tried tcc. This was substantially faster: just 0.6 seconds for the whole process! I think this is very reasonable, and on newer Pis it would be even faster.
Also note that my idea was to forgo standard libraries and only link to libuxn (and only include uxn.h). This theoretical libuxn would have the same role as the uxn virtual machine: written only once, then frozen in stone, with no updates.
This also takes care of large binary sizes: if you are only linking to libuxn, there are no updates to download. And there is no need to look up documentation or consult Stack Overflow, as you are only allowed to link to libuxn, with no third-party libraries.
Will such a limited environment be inconvenient? Somewhat, but less so than a full-blown VM. Will it take effort to write and maintain libuxn on all platforms? Yes, but less effort than a full-blown VM. Won't libuxn have some bugs requiring updates? Likely, but I bet it will have fewer bugs than a full-blown VM.
As for the other things (updating binaries over the internet, sharing them), the same frozen-libuxn argument applies.
There's an effort to port uxn to DuskOS (https://duskos.org/). The goal here isn't to be maximally performant or memory efficient (though at this point, running uxn on DuskOS is, on certain architectures, faster than the mainline uxn implementation).
Rather, these are computing platforms to maximize usefulness in the event of a societal collapse.
That is why there is a handbook of uxn opcodes that includes hand gestures (https://wiki.xxiivv.com/site/uxntal_opcodes.html), so that computing, and the transmission of programs, can continue even with the loss of computing hardware or documentation.
DuskOS badly needs a Z-machine interpreter. The amount of software running on DuskOS would skyrocket: from a Tetris implementation to tons of libre IF (and games) such as SpiritWrak, All Things Devours, Reversi, ...
I don't get it. DuskOS mentions "it runs plenty fast on an old Pentium 75 MHz with 16mb of RAM"; that's a lot! I'd expect something geared for "civilization collapse" to be compatible with smaller embedded micros, like the 80 KB of RAM of the ESP8266 or the 500 KB of RAM of the ESP32.
I remember programming on 286 and 386 machines with 33 MHz and 2-4 MB of RAM, and it was perfectly usable with Pascal and even C (although C was annoyingly slow, a few seconds per build). If your idea of an old machine is a Pentium with a _whole megabyte_ of RAM, or even more (gasp!), you don't need to pay the Forth penalty; you can have normal languages with good ergonomics.
The smaller embedded micros are the target of CollapseOS, which has been folded into DuskOS. The stage DuskOS is targeting is when we stop being able to make new computers and new chips, and start looking to salvage old ones lying around. There is also recovering knowledge from disks.
Simple C code from 1990 still works, as long as it does not depend on non-standard libraries.
Imagine how cool it would be if old games were distributed in source code form, so that anyone could modify and "remix" them as much as they want with a simple text editor?
This was impossible back in the 1990s, but today we can get a C compiler (tcc) that takes less than 1 megabyte of space and compiles+links a game in less than a second. As long as you don't depend on too many third-party libraries, you can ship C code, compile the game on each start, and the user won't even notice!
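tcc also ships as a library (libtcc), so "compile the game on each start" can literally be the launcher. A rough sketch; the libtcc calls are real, but the game.c layout and the read_file() helper are assumptions for illustration:

```c
/* launcher.c -- compile game.c from source at startup and run it from memory.
   Build with: tcc launcher.c -ltcc */
#include <stdio.h>
#include <stdlib.h>
#include <libtcc.h>

/* hypothetical helper: read a whole file into a NUL-terminated heap buffer */
static char *read_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    long n;
    char *buf = NULL;
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    n = ftell(f);
    rewind(f);
    buf = malloc(n + 1);
    if (buf) {
        size_t got = fread(buf, 1, (size_t)n, f);
        buf[got] = '\0';
    }
    fclose(f);
    return buf;
}

int main(int argc, char **argv)
{
    char *src = read_file("game.c");
    TCCState *s = tcc_new();
    if (!src || !s) return 1;
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);  /* JIT straight into memory */
    if (tcc_compile_string(s, src) < 0) return 1;
    return tcc_run(s, argc, argv);              /* calls the game's main()  */
}
```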
I like both the ideas of 100r and of this rebuttal. I think much of this comes down to a fundamental misunderstanding, namely that code is the level at which we understand something. Rather, when we build software, we build a theory [1]. So what we really need are tools for building theories. That makes it possible to take high-level abstractions, express something in them, and then reason about how these high-level abstractions can be formulated using low-level abstractions (but abstractions nonetheless). This makes it possible to play with your creation at any level and to make software that is both correct and incredibly fast. It is the only scalable way to achieve negative-cost abstractions. Rust is not much better than C here, as both fix you to a certain level of abstraction.
That is a savage, savage critique, beautifully written and demonstrated. I don’t even like small ‘simple’ systems and I’m smarting a bit reading it. Thanks for the link.
It seems a bit off target, though. It's not entirely wrong; it's true, for example, that Uxn doesn't lend itself to writing an efficient implementation easily. It's also true that the existing implementations are all on hardware that still requires a substantial amount of power, 2000 milliwatts or more, as well as being fairly inefficient in absolute terms.
On the other hand, on my laptop, the Left text editor running in Uxn (with SDL!) still uses less energy than Emacs does, despite the interpretation overhead. And you can run it on a GameBoy Advance, so it's definitely a step in the right direction if you're looking for a frugal write-once/run-anywhere platform. It's the first attempt at the problem that's good enough to criticize, and it's something you can download applications for today.
It's easy to see ways to improve it, but to my knowledge the critiquing savage in question has not written a better system yet, although their critique is indeed informative.
To be clear, I don't have any interest in shitting on a hobby of any sort, be it low-energy computing or whatever. I do lots of stuff that's completely for the joy of it, and has no logical benefits.
It can probably run emacs, too. At least, I have, in my life, run emacs on a computer less powerful than that ESP32. And we should acknowledge emacs is not a hard target to beat :)
The main point, that the tens to hundreds of thousands of man-years that went into optimizing compilers, combined with maybe millions of man-years of silicon design and production, beat a hobbyist for efficiency, is one he demonstrates right in the essay: by handwriting assembly for uxn, comparing it with sane-to-simple C code, and showing the C is hundreds of times faster than uxn.
I also take to heart some of his snide asides aimed at those in the know; as he alludes, calling it a Forth-like without the ability to compile immediate words is giving up a lot. Forth is like bare-metal programmability of the compiler.
Anyway, I do a lot of coding in go and (sadly) javascript, and not much in Forth, even though I like to hold the idea of the simple days of yore in my mind; we've each got our own thing. But I think he's probably right that this will always be a boutique thing which is mostly for aesthetics.
Having written a sort of Forth compiler without immediate words myself, I agree that uxntal is not very Forth-like. I like Golang, JS, and Forth.
Your link is broken, but https://hackaday.com/2021/07/21/its-linux-but-on-an-esp32/ documents someone getting Linux running on an ESP32-S3 under a software emulation of RISC-V with an MMU. That's not obviously more efficient than a simple Uxn emulator and seems likely to be worse. People have also gotten uClinux running directly on the ESP32-S3 with uclibc, which seems unlikely to be able to run Emacs or GCC, though not, as you say, because the CPU is too slow or the RAM is too small. I'm not sure I've run Emacs on a machine with less than 16MiB of RAM, but I'm sure it's possible.
1500 milliwatts is still several orders of magnitude more power than I think a personal computer needs, and 16MiB is obviously a couple of orders of magnitude more RAM.
Emacs is actually surprisingly efficient. We remember it as being slow and bulky in part because 30 years ago it was among the bigger resource draws on the machines we used at the time and in part because it actually was slower then. Current Emacs compiles elisp to machine code.
It turns out that it's actually pretty common for a hobbyist to be able to beat tens to hundreds of thousands of man years going into optimizing compilers, as you know if you follow the demoscene at all. Proebsting's Law explains why. The silicon design and production are equally at the disposal of GCC and the hobbyist. But the current applications codebase can easily consume all that surplus computational power.
I agree that if you were to rewrite your applications in sane-to-simple C code, compiled with a good compiler, it would come out faster than Uxn, though only by about 10×. But nobody has done it. Vaporware always beats shipped code because vaporware doesn't have bugs or scope/schedule tradeoffs.
I think it's possible to do much better than Uxn. But I also am not convinced that anybody has.
> 1500 milliwatts is still several orders of magnitude more power than I think a personal computer needs, and 16MiB is obviously a couple of orders of magnitude more RAM.
So, your idea is a computer with less than 200 KiB of RAM? So, devices like a PC AT 286 or Amiga A2000, with their whole 1 MiB of RAM, are hopelessly overpowered, and the right kind of machine is something like a Sinclair Spectrum?
Fair! These guys run Quake on an Arduino Nano [1], with only 274 KiB of RAM. But they have to jump through a lot of hoops.
Running e.g. Emacs the same way is likely unrealistic, and I find Emacs a great example of software that the user can actually inspect and personalize. I would like most user-facing software tools to be like that. (In an ideal world I'd love my personal machine to have a terabyte of Optane memory, but currently such hardware is but a fantasy.)
>the Left text editor running in Uxn (with SDL!) still uses less energy than Emacs does, despite the interpretation overhead
This would work better if your example text editor wasn't an operating system. Does it use less energy than vi?
I'm no expert in Emacs stuff, but AIUI the main selling point is Emacs Lisp's decades of packages, which is both amazingly valuable and also a huge blocker on optimisation, because nobody is willing to break compatibility with all the old packages in the name of performance.
emacs is a really bad comparison, given that it has millions of functions and is infinitely customizable. Its slow performance has been the subject of many jokes going back to the 1970s [0] ("EMACS: Eight Megabytes And Constantly Swapping").
You should at least compare it to the "nano" editor, or even better, its predecessor "pico".
I see that one of the issues given with uxn is that it isn't efficient, nor self-hosting. I wonder how they'd feel about something like MirageOS, where OCaml is used to create unikernels that contain just what is needed. The application can be developed as a regular program while on Linux, before being deployed at least as a VM image. (Not sure about bare-metal)
I feel like they are artists with computers as their medium. I've had the same question. I enjoy reading about what they do, but it isn't clear that I should ever attempt to be inspired by their work in my day to day engineering job.
Indeed, 100r.co is probably my favourite place on the internet, but it does have the vibes of those 20th century architects with grand artistic and futuristic ideas that were not practical on so many levels. Not that that's any kind of failure, they simply had different priorities. It also reminds me of the gap between philosophy and science.
I deeply respect that kind of work too, there's a place for it, but certainly it is closer to art than engineering.
This Uxn thing reminds me of Inferno [1], a VM-based operating system from Bell Labs with its own programming language, GUI, and networking protocol. It can run on hardware with just 1 MB of RAM. But Inferno is far more than that; it's a Plan 9 descendant.
In modern times the closest we have to Inferno is Android, which traces back to Bell Labs' original goal of positioning Inferno against Sun's Java efforts on the market.
Pity that most Plan 9 aficionados usually forget about Inferno, with Limbo being the acknowledgment that dropping Alef from Plan 9, or designing it without automatic memory management in the first place, was a mistake.
> In modern times the closest we have to Inferno is Android, which traces back to Bell Labs' original goal of positioning Inferno against Sun's Java efforts on the market.
That's interesting, I never looked at Android from that angle. Still, Android is based on Linux (Unix), and it allows JIT and the NDK. Whereas Inferno conceptually does not allow escape from the VM, and the whole OS, except the low-level stuff, is written in Limbo and runs entirely in the VM.
I once tried Inferno somewhere around 1998 on my PC, played with it a little, and removed it. But three years later I met it in the Lucent/Avaya Definity PSTN switches the company I worked for bought. Needless to say, I was surprised. :-)
It is an amazing project! You can develop and run uxn apps in the browser with https://exaequos.com. There are uxnasm, uxncli, and wayvara, a Wayland Varvara emulator that supports graphics. There are some compiled roms in /usr/local/share/roms.
The first time I saw 100r on HN, I read the entire website like a book. Amazing people!
Uxn is also cool; I especially liked the Orca ROM, but I couldn't really figure out how to make it work with MIDI, so I ended up using the Linux version.
Noob developer here. From what I understand this can only run on emulators. If I have a system powerful enough to run an emulator why would I want to use this? I understand being able to run this on old consoles. But what about modern computers?
The idea is that it's a standard and stable runtime environment.
100 years from now we'll still be able to run NES games, but running old windows programs may be nearly impossible. uxn is trying to be like the NES in that regard, but for more general use cases than just games. Write a uxn program once, and it can run anywhere, any time (on any device that can host uxn).
For now, but maintaining DOSBox and especially WINE continues to require substantial engineering effort, in part because the platforms they're running on change. Implementing Uxn/Varvara is many orders of magnitude easier than implementing Win32, as evidenced by the fact that many more people have done it despite the much smaller base of applications they can then run. It seems likely that many Win32 applications will execute incorrectly on versions of WINE that are current in 02124, unless they themselves are running on top of something like Uxn.
Probably not Uxn itself, though, because a 16-bit memory space is not a practical way to emulate Win32. Dmitry wouldn't be dissuaded, I suppose.
I understand the argument, and feel good about it. I too would rather our software were made with simplicity and performance in mind, not just following the premise that hardware resources will only increase, so let's ignore those constraints today.
But another perspective on Win32 support on future metal is the bigger "community" (users, software, etc.): the increase in support complexity is compensated by much (much) larger incentives, from supporting far more diverse and desired old software. WINE and related software are a testament to this.
It might work out that way, or it might not. The current open-source community is very much a product of current social conditions, and those conditions might change. They've changed before, and could change again.
Things like LLMs and the xz backdoor, for example, may make it unappealing to accept code from people you don't know personally; things like Apple's notarization requirements may make it impractical to run open-source software except as part of a proprietary package; software patents or legal liability for third parties using your software could make it legally unappealing to free your code; etc.
Someone already said this, but it's an emulator the way the Java VM or Python VM are emulators: they emulate a computer architecture and environment that is uniform across different hardware types.
In the uxn case, the different hardware types include small Raspberry Pis, the Nintendo DS, etc. So having the baseline be really simple means knowing the code you've written for uxn will run on all these different hardware types.
You can also build a CPU that runs uxn code directly on hardware (I assume).
if you are a user then the only reason you'd run uxn on modern powerful hardware is if there is an app written for it that you wanted to use. Just like java, or python, or rust.
For a developer, grabbing uxn might be an aesthetic or political choice, like the language, or you want to target low power hardware use cases.
Uxn runs on its own VM, so each emulator emulates this VM, just like every java implementation emulates the JVM, etc. In other words, the emulator isn't designed for any specific physical machine to begin with.
I love this stuff. Thinking about how our software and its supply chain break down so dramatically given small adversaries like 'intermittent internet access' or 'old crappy machines' has definitely changed the way I think about software as a developer.
I appreciate the care 100r has put into working with their doors open and sharing their lives, circumstances and worldview.
I'm not using Uxn, but I am following a bit of a quest to try my hand at personal-shaped computing. I didn't really want to write an emulator (I don't have a game-preservation-shaped problem) and I like being able to just "take over" computers around me with bootable USB sticks. Fittingly, my project is in x86 assembly, real mode with BIOS routines for I/O and as a stand-in for drivers. (Tl;dr: not amazing but serviceable, and Devine is right that 64 KB is a lot.)
As a long-time Python programmer, I've found it fun to play in the land of memory layouts, segmentation, and writing assembler. I highly recommend it for educational purposes. I don't need a hobby project that looks like work in my off time, so real-mode programming fits nicely. As a plus, I rediscovered MS-DOS .COM files: shockingly fast, and they give you 64 KB of memory set up and a filesystem, which is handy. Again, here I don't have to write an emulator, and my little Forth-like project could bootstrap from DOS assemblers and toolchains.
My takeaway from the Uxn project is "go ahead and try to do something fun with a computer". Uxn very much isn't about making the most efficient p-machine or the best language, but something that fits 100r. And more of us with a background in computers ought to try something off the beaten path, because we might learn something and we might have fun doing it.
I have never seen Devine mention where he hosts his website, and I haven't come across a uxn web server, so I think the answer is "no", it's not hosted on uxn.
Would be cool to have a web host that follows some of the principles laid out by uxn/100r and see what kind of service would turn out.
I guess being extremely productive and successfully pushing new and exciting concepts in computing also means you need to adopt a certain level of pragmatism.
It's a testament to the quality/function of the existing ROMs available for uxn and the community that forms around it. Uxn-type initiatives are forward-backward looking: how do I ensure future retro-compatibility of software created going forward?
It's probably also an interesting challenge to implement an emulator.
I was hoping that Uxn could eventually be a vehicle to get rich, graphical apps onto less popular operating systems, like Plan 9 (which they support). While their 1st-party roms are great, the wave of software never came.
I am not surprised, as Uxntal is very low-level and comes with a lot of constraints. It looks more like a demoscene tool than a practical programming platform.
From an ergonomics perspective, big-endian is the little-endian of stack-based machines. Register truncation is the big reason we prefer little-endian from an ergonomics perspective, but in stack-based machines the equivalent is pop truncation, and the behavior is reversed between the two machine types. Big-endian is the layout by which popping one byte off the stack gives you the truncated number.
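A toy model of that claim (not Uxn's actual implementation, just an illustration in C): with an upward-growing byte stack and big-endian layout, popping a single byte of a 16-bit value yields exactly `value & 0xff`:

```c
/* Big-endian push puts the LOW byte on top of an upward-growing stack,
   so a one-byte pop returns the truncated (mod 256) value. */
#include <stdio.h>

static unsigned char stack[256];
static int sp = 0;                        /* next free slot; grows upward */

static void push8(unsigned v)  { stack[sp++] = v & 0xff; }
static unsigned pop8(void)     { return stack[--sp]; }

static void push16_be(unsigned v)         /* high byte first, low byte on top */
{
    push8(v >> 8);
    push8(v);
}

int main(void)
{
    push16_be(0x1234);
    printf("%02x\n", pop8());             /* prints 34: 0x1234 truncated */
    return 0;
}
```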
I don't think you can emulate it on a tiny piece of hardware; the smallest full Uxn/Varvara implementation I've seen so far is the GameBoy Advance, which has 384KiB of RAM. On the other hand, you've previously been able to emulate some pretty astonishing things on pretty astonishing hardware.
There are a lot of minor things in Uxn and Varvara that make them hard to emulate efficiently. Extensive use of self-modifying code, memory-mapped I/O, and using a stack instruction set, for example.
I'm interested to hear how you'd redesign Uxn/Varvara to be easy to emulate on some tiny piece of hardware; of everyone in the world, you're probably the best person to answer that question. Little-endian, check. What else?
There's no requirement that stacks grow upwards in Varvara.
If your stacks grew downwards then you could use LE instructions to operate on 16-bit values on the stack. You'd still need to support loading from BE memory into the LE stack but that might not be too bad (two 8-bit operations instead of one 16-bit operation).
The device ports are also specified as BE so in theory you'd need to split those reads/writes up too. However, in almost all cases those are done directly from the stack values so I bet most ROMs would work fine with LE devices and LE stacks. LE devices would only cause issues when someone used 8-bit reads/writes from part of a 16-bit port.
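A sketch of that layout, assuming a little-endian host (the names here are mine, not Uxn's): with a downward-growing stack, the top-of-stack short is a plain LE word, so native 16-bit loads and stores work directly, and only loads from the ROM's big-endian memory need two byte moves:

```c
#include <stdint.h>
#include <string.h>

static uint8_t stack[256];
static int sp = sizeof stack;            /* grows downward */

static void push16(uint16_t v)           /* one native 16-bit store */
{
    sp -= 2;
    memcpy(&stack[sp], &v, 2);
}

static uint16_t top16(void)              /* one native 16-bit load */
{
    uint16_t v;
    memcpy(&v, &stack[sp], 2);
    return v;
}

static void load16_from_be(const uint8_t *ram, uint16_t addr)
{
    sp -= 2;
    stack[sp]     = ram[addr + 1];       /* low byte                 */
    stack[sp + 1] = ram[addr];           /* high byte: two byte moves */
}

int main(void)
{
    uint8_t ram[2] = { 0x12, 0x34 };     /* 0x1234 stored big-endian */
    push16(0x0042);                      /* native 16-bit push       */
    load16_from_be(ram, 0);              /* BE load: two 8-bit moves */
    return top16() == 0x1234 ? 0 : 1;    /* exits 0 on an LE host    */
}
```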
Awesome, thanks for the link! I didn't remember that aspect of the 4004 project.
This would be a pretty amazing project. I wonder if you could get existing Uxn/Varvara applications like the Left text editor or Orca to run fast enough to be usable. Presumably for Orca you'd want to hook up external sound hardware rather than trying to bitbang the sound on the 8008, and I guess the same is even more true of a framebuffer.
If you were bitbanging RAM access at 7kbps it might be hard to get it to run instructions fast enough to be usable, though.
Uxn uses big-endian primarily because it is meant to be easily assembled from (or disassembled into) a thin programming language called Tal. I think Tal is meant to be the highest-level language available on this platform, so it should be as convenient as possible without being too fat; `#1234` in Tal should therefore assemble to the two bytes `12 34`.
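A little illustration of that assembler step (emit_short is a made-up helper, not uxnasm's actual code): with big-endian output, the two hex pairs of `#1234` are emitted in exactly the order they appear in the source text:

```c
#include <stdio.h>

static void emit_byte(unsigned b) { printf("%02x ", b & 0xff); }

static void emit_short(unsigned v)        /* big-endian: source order */
{
    emit_byte(v >> 8);                    /* "12": first hex pair  */
    emit_byte(v);                         /* "34": second hex pair */
}

int main(void)
{
    emit_short(0x1234);                   /* prints: 12 34 */
    return 0;
}
```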
On the inside (guest) or outside (host)? On the host side, yeah if there isn't an RV version you should definitely write one (they do helpfully provide docs explicitly for that purpose). On the guest side, I seriously doubt that it's a good fit; RISC-V is designed for hardware and this is designed for emulation, which makes for different design choices.
I actually think Uxn and Varvara are a lot better suited to hardware than to emulation. RISC-V, however, includes a lot of concessions to hardware implementation that just add headaches to code generation and emulation, though not quite to the level of the MuP21.
I had to browse around a bit to answer "why?" I landed here: https://100r.co/site/mission.html