Having written a sort of Forth compiler without immediate words myself, I agree that uxntal is not very Forth-like. I like Golang, JS, and Forth.
Your link is broken, but https://hackaday.com/2021/07/21/its-linux-but-on-an-esp32/ documents someone getting Linux running on an ESP32-S3 under a software emulation of RISC-V with an MMU. That's not obviously more efficient than a simple Uxn emulator and seems likely to be worse. People have also gotten µClinux running directly on the ESP32-S3 with uClibc, which seems unlikely to be able to run Emacs or GCC, though not, as you say, because the CPU is too slow or the RAM is too small. I'm not sure I've ever run Emacs on a machine with less than 16MiB of RAM, but I'm sure it's possible.
1500 milliwatts is still several orders of magnitude more power than I think a personal computer needs, and 16MiB is obviously a couple of orders of magnitude more RAM.
Emacs is actually surprisingly efficient. We remember it as being slow and bulky in part because 30 years ago it was among the bigger resource draws on the machines we used at the time and in part because it actually was slower then. Current Emacs compiles elisp to machine code.
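For instance (a minimal sketch, assuming an Emacs 28 or newer build with the libgccjit-based native compilation; 'my-function below is just a placeholder name), you can check this from inside Emacs:

    ;; Returns t when this Emacs build can compile Elisp to machine code:
    (native-comp-available-p)
    ;; With that in place, loaded Elisp is typically native-compiled in the
    ;; background, or a single function can be compiled on demand, e.g.:
    ;; (native-compile 'my-function)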
It turns out that it's actually pretty common for a hobbyist to be able to beat the tens to hundreds of thousands of man-years that have gone into optimizing compilers, as you know if you follow the demoscene at all. Proebsting's Law explains why. The silicon design and production are equally at the disposal of GCC and the hobbyist. But the current applications codebase can easily consume all that surplus computational power.
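To put rough numbers on that (a back-of-the-envelope sketch using the usual statement of Proebsting's Law, that compiler-optimization research doubles delivered performance only about every 18 years, versus roughly every 2 years for hardware; both doubling periods are the customary illustrative figures, not something from this thread):

    ;; Approximate annual gains implied by those doubling periods:
    (expt 2 (/ 1.0 18))   ; => ~1.04, about 4% per year from compiler research
    (expt 2 (/ 1.0 2))    ; => ~1.41, about 41% per year from hardware scaling

A few percent a year is the kind of margin a careful hand-written program can plausibly claw back.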
I agree that if you were to rewrite your applications in sane-to-simple C code, compiled with a good compiler, it would come out faster than Uxn, though only by about 10×. But nobody has done it. Vaporware always beats shipped code because vaporware doesn't have bugs or scope/schedule tradeoffs.
I think it's possible to do much better than Uxn. But I also am not convinced that anybody has.
> 1500 milliwatts is still several orders of magnitude more power than I think a personal computer needs, and 16MiB is obviously a couple of orders of magnitude more RAM.
So, your idea is a computer with less than 200 kiB of RAM? Then devices like a PC AT 286 or an Amiga A2000, with their whole 1 MiB of RAM, are hopelessly overpowered, and the right kind of machine is something like a Sinclair Spectrum?
Fair! These guys run Quake on an Arduino Nano [1], with only 274 kiB of RAM. But they have to jump through a lot of hoops.
Running e.g. Emacs the same way is likely unrealistic, and I find Emacs a great example of software that the user can actually inspect and personalize. I would like most user-facing software tools to be like that. (In an ideal world I'd love my personal machine to have a terabyte of Optane memory, but currently such hardware is but a fantasy.)