This seems like an inspiring step in the direction of a device that I want but which, as far as I can tell, doesn't exist. I have one burning question:
How small is it?
I've been trying to figure out what I need for comfortable portable writing. I have an Aspire One that's pretty much at the lower limit of key spacing for my hands, with 17 mm × 16 mm keys. I'm typing this on a cheap USB external keyboard with keys closer to 18.5 mm wide, and I think slightly tighter spacing would be more comfortable. I might be able to make do with 15 mm or even 14 mm horizontal key spacing, but much below that my fingers would collide. Someone with smaller hands could manage a slightly smaller keyboard, but not that much smaller; past that point, touch-typing becomes impossible and you're reduced to typing with two fingers like a five-year-old.
Unfortunately none of https://artvsentropy.wordpress.com/2022/01/15/writing-on-the... https://lunduke.substack.com/p/the-first-risc-v-portable-com... https://www.clockworkpi.com/product-page/devterm-kit-r01 bothers to give any dimensions at all, even though they spend a lot of time talking about how small it is.
From my point of view, this is more important than the information they did give, and it seems like it was pretty important to them too, so I don't understand why they omitted this information.
What I want is:
1. A computer powerful enough to recompile all its own software,
2. which can run a reasonably convenient programming environment (my bar is pretty low here: at least as good as Emacs with GCC on a VAX with a VT100, or MPW on a Macintosh SE),
3. which fits in my pocket,
4. which doesn't need batteries or plugging in,
5. which I can recover if I corrupt its filesystem, and
6. which is usable in direct sunlight.
This laptop fulfills points #1, #2, and #5. My cellphone fulfills points #2, #3, and #6. My solar calculator fulfills points #3, #4, #5 (trivially), and #6. It looks like this "DevTerm" fulfills points #1, #2, and #5, same as my current laptop but maybe slightly more portable. I don't think any computing device exists, or has ever existed, that hits all six points, but I think one is attainable now.
I think the conjunction of points #2 and #3 probably requires the thing to unfold or deploy somehow, unless some kind of Twiddler-like device would allow me to do it one-handed. There just isn't room for both of my hands at once on the surface of things that fit in my pocket (about 100 mm × 120 mm). Conceivably a laser projection keyboard could work.
I really don't understand everyone's obsession with mobile computing power. Why aren't we equipping our laptops with low-energy processors and using them to remotely access more powerful stationary workstations over the network? Instead of carrying around 4.8 GHz all the time, I'd much rather have multiple days of battery life. We must have taken a wrong step somewhere. I'm still convinced that outsourced processing is the future for everything hardware-intensive, such as gaming, barring some unexpected breakthrough in battery technology.
Besides,
> 4. which doesn't need batteries or plugging in
what do you mean by that? Would that include only solar-powered devices?
I was talking about the future, but you're describing the present! There's no reason why local neighbourhood datacenters and ubiquitous high-speed mobile networks couldn't be a thing some day.
You don't even need neighborhood datacenters; you could just access your desktop machine in the basement from a lightweight mobile terminal in the living room. Just the other night I used mpv on one laptop to stream a video file from the other laptop over Wi-Fi with python2 -m SimpleHTTPServer, and Xpra can do the same thing in the same way for remotely accessed applications. (Of course, ssh -X can kind of do that too, but it's a lot less efficient and less secure.)
In theory we could have much-lower-power wireless communication systems: maybe using lasers, with MEMS corner reflectors on the mobile station to transmit, and a simple photodiode with a dichroic filter over it to receive. Or maybe using submillimeter waves from a phased-array antenna, like the Starlink terminal. Or just time-domain UWB pulse radio in conventional microwave bands, but optimized for low power usage instead of super high data rates or precise ranging.
But, right now, evidently even Bluetooth Low Energy from the leading ultra-low-power silicon vendor costs 10 milliwatts when you have it on. And it's not clear if the technologies I described above will materialize. So the amount of dumb that it makes sense to put into a wireless networked mobile terminal is only about 10 milliwatts of dumb. And 10 milliwatts is not that dumb. Even with a conventional low-power CMOS Cortex-M (300 pJ per cycle, 2 DMIPS/MHz) that's about 30 MIPS or 60 DMIPS of dumb. That's dumb like a SPARC 10 workstation from the mid-90s, not dumb like a VT100 or an analog TV tuner. With subthreshold logic it's more like 600 DMIPS of dumb, dumb like a 450 MHz Pentium II (introduced 01998, mainstream around 02000).
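For concreteness, here's that power-to-DMIPS arithmetic as a few lines of C (the 300 pJ/cycle and 2 DMIPS/MHz figures are the rough ones quoted above, not measurements):

```c
#include <stdio.h>

int main(void)
{
    /* Rough figures from above: a 10 mW radio-parity budget,
       300 pJ/cycle low-power CMOS, 2 DMIPS/MHz. */
    double budget_watts = 10e-3;
    double joules_per_cycle = 300e-12;
    double dmips_per_mhz = 2.0;

    double mhz = budget_watts / joules_per_cycle / 1e6;
    printf("%.0f MHz, about %.0f DMIPS\n", mhz, mhz * dmips_per_mhz);
    /* prints "33 MHz, about 67 DMIPS" */
    return 0;
}
```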
I agree that making computing power mobile makes it enormously more expensive, especially if you consider batteries unacceptable. But making computing power remote means that you need to spend energy on a radio to access it. That's a good tradeoff for some things, but not for others. In my other comment, note that if we believe Ambiq's datasheet, we can get the CPU speed of a SPARC 20 for 1.8 milliwatts.
It turns out the chip also includes a Bluetooth Low Energy 5 radio, so you can use it to remotely access more powerful stationary workstations via networking, as long as you're within a few meters of a base station. The radio costs 10 milliwatts when you're running it, six times as much as the Pentium-class CPU. Normal radios (Wi-Fi, cellphones) cost orders of magnitude more than that.
So constant remote wireless access to more powerful stationary workstations doesn't start to save energy until the amount of computation you're accessing is close to a gigaflop. Maybe closer to a teraflop if we're talking about streaming full-motion video. Intermittent remote access, of course, is a more reasonable proposition.
It's true that gaming commonly uses teraflops or petaflops of computing power, and plugging in such a beast in a server closet is a huge improvement over trying to somehow cram it into your pocket. But there are a lot of day-to-day things I do with a computer — recompiling my text editor, writing stupid comments on internet message boards, chatting on IRC, simulating an analog circuit, reading Wikipedia — that very much do not require gigaflops of computing power.
(Remote wired access of course can use very little power indeed, but if you're in a position to plug into a wire, you might as well deliver power over that wire too.)
If you take a modern cellphone and strip almost all the processing power out of it, you still have a 1000-milliwatt radio and a 1000-milliwatt backlit screen. So you aren't going to get multiple days of battery life that way. 1000 milliwatts is enough to pay for dozens of gigaflops of computing power nowadays.
Myself, I have another reason: I travel, though I've traveled very little during the pandemic. But I am often someplace other than at home: at a café, in a park, at the office, in a bus, in the subway, in a taxi, visiting a friend in another city, at my in-laws' house, and so on. All of these places are out of Bluetooth range of my house. I could obtain internet bandwidth from a commercial provider, but that sacrifices privacy, it's never reliable, and I don't consider it reasonable to make my core exocortical functions dependent on the day-to-day vagaries of mere commerce. Personal autonomy is one of my core values.
So, uh, it has to be solar powered only? That sounds fantastically niche and not like something someone would want to design/build.
I think any reasonable "work" performance level would require too large an area of solar cells to be practical, especially if it's supposed to work indoors.
But last year I learned about two innovations, now on the market, that dramatically increase the potential abilities of such a device.
— ⁂ —
It's entirely possible that your definition of 'any reasonable "work" performance level' is orders of magnitude higher than my "reasonably convenient programming environment". The VAX reference point is 1 Dhrystone MIPS, and the Macintosh SE (a 7.8 MHz 68000) was also about 1 Dhrystone MIPS. Current ARM Cortex-M designs deliver about 2 Dhrystone MIPS per MHz, so we're talking about 500 kHz of Cortex-M instructions here.
Without any fancy circuitry at all I was able to squeeze 8 mW out of a 38-mm-square solar panel from a dollar-store garden light, in direct sunlight. Theory predicts it should be capable of 200+ mW (16% efficiency, 1000 W/m²) so hopefully I can get better results out of other panels. The amorphous solar panels normally used on solar calculators, which work well indoors as well as in direct sunlight, are nominally about 10% efficient, which is to say, 10 mW/cm² in direct sunlight or 100 μW/cm² in office lighting. It's easy to imagine dedicating 30 cm² of surface area to solar panels, which would give you 300 mW outdoors or 3 mW indoors. (38 mm square is 14 cm². 30 cm² in medieval units is about 4⅝ square inches, depending on which inch you use.)
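The same unit conversions, sketched in C with only the nominal numbers above (none of this is measured):

```c
#include <stdio.h>

int main(void)
{
    double full_sun = 100.0;        /* mW/cm² at 1000 W/m²    */
    double panel_cm2 = 3.8 * 3.8;   /* the 38-mm-square panel */

    /* 16%-efficient panel in direct sunlight: */
    printf("%.0f mW\n", panel_cm2 * full_sun * 0.16);   /* 231 mW */

    /* 10%-efficient amorphous panels, 30 cm² of them: */
    printf("%.0f mW outdoors\n", 30 * full_sun * 0.10); /* 300 mW */
    printf("%.1f mW indoors\n", 30 * 0.1);  /* 100 µW/cm² indoors: 3 mW */
    return 0;
}
```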
— ⁂ —
So, what kind of computer can you run on a milliwatt or so?
Ambiq sells an ultra-low-power Cortex-M4F called Apollo3 Blue with 1 MiB Flash plus 384 KiB RAM; the datasheet claims that, when fully active, at 3.3 volts, it uses 68 μA plus 10 μA per MHz running application code, and it runs at up to 48 MHz, bursting to 96 MHz. So at, say, 10 MHz (20 VAXen or Macintosh SEs), it should use 170 μA or 550 μW. At 48 MHz (100 VAXen or Mac SEs), 550 μA or 1.8 mW. I haven't tested it yet. SparkFun sells a devboard for US$22 at https://www.sparkfun.com/products/15444.
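As a check, here's that current budget worked out in C; the 2-DMIPS-per-MHz VAX equivalence is the same rough assumption as above:

```c
#include <stdio.h>

int main(void)
{
    /* Apollo3 Blue datasheet figures quoted above: */
    double base_ua = 68.0, ua_per_mhz = 10.0, volts = 3.3;
    int speeds[] = { 10, 48 };

    for (int i = 0; i < 2; i++) {
        double ua = base_ua + ua_per_mhz * speeds[i];
        printf("%2d MHz: %3.0f uA, %4.0f uW, ~%3d VAXen\n",
               speeds[i], ua, ua * volts, 2 * speeds[i]);
    }
    /* 10 MHz: 168 uA,  554 uW, ~ 20 VAXen
       48 MHz: 548 uA, 1808 uW, ~ 96 VAXen */
    return 0;
}
```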
Remembering, of course, that benchmarks are always pretty bogus, we can still roughly approximate this 20 DMIPS performance level as the processing speed of a 386/40, 486/25, or SPARC 2, and the 200 DMIPS peak as that of a Pentium Pro, PowerMac 7100, or SPARC 20 — but with enormously less RAM, an enormously lower-latency "disk", and a lower bandwidth budget for the disk. And no MMU, so no fork() and no demand-paged executables. You do get memory protection, hardware floating point, and saturating arithmetic, but no MMX-like SIMD instructions.
— ⁂ —
If the computer's output isn't going to be just an earphone or something, though, you may need to spend substantial energy on a screen as well. Sharp makes a 400×240 "memory LCD" display (0.1 megapixels), used in, for example, Panic's Playdate handheld game console (https://youtu.be/uziFTK5c29k), which seems like it should enable submilliwatt personal computing; the datasheet says it uses 175 μW when being constantly updated with a worst-case update pattern at its maximum of 20 Hz. Adafruit sells a breakout board for the display for US$45: https://www.adafruit.com/product/4694
A VT100's character cell was 8×10 (https://sudonull.com/post/30508-Recreating-CRT-Fonts), so its 80×25 screen was 640×250, 0.16 megapixels, while the Macintosh SE was 512×342, 0.18 megapixels. Both of them were 1 bit deep: a pixel was either on or off. So two of these LCD screens together gives you more pixels than either of these two reference-point computers. However, the pixels are physically a lot smaller, which may compromise readability. (On the other hand, as you can see from the Playdate videos, you can do a lot of things on this display that a VT100 or Mac SE could never dream of doing because of lack of compute and RAM.)
— ⁂ —
So, I think it's eminently feasible now, although it probably wasn't feasible ten years ago. But it's going to be a challenge; I can't just compile Linux and fire up Vim. Even if Linux could run on these super-low-power computers, they don't have enough memory to recompile it.
Correction: the VT100's characters in 80-column mode were 10×10, which is 800×250 pixels, or 0.2 megapixels, not 640×250 as I said. That's 4% more than two of these 400×240 panels together. That said, the X-Windows 5x8 font is reasonably readable, and the 6x10 font is perfectly fine. 4x6, as xterm -fn -schumacher-clean-medium-r-normal--6-60-75-75-c-40-iso646.1991-irv, leaves a lot to be desired.
However, videos of the display hardware like https://youtu.be/zzJjE1VPKjI show that you can get really astonishing things out of 0.1 megapixels when you drive it with current levels of computation instead of, like, an 8085. (That video is a little misleading in that it claims "60 fps animation", while the datasheet claims the max is 20 fps.)
We compiled C code just fine on machines with less than a megabyte of RAM back in DOS days. I wouldn't expect gcc to work on such a machine, but some older compiler (lcc? pcc?) should be feasible.
Yeah, even older versions of GCC ought to work, though they don't come with backends for these ARM cores. (Normally GCC uses fork() to run different compiler passes, but DJGPP demonstrates that it can run without virtual memory without extensive surgery.) C was developed on the PDP-11 where the per-process address space was 64 KiB — and though I think the PDP-11 hardware supported separate stack, data, and code segments, I think the Unix environment (and C in particular) didn't. And the BDS C compiler supported most of C under CP/M on the 8080. (It's free software now, but unfortunately it's written in 8080 assembly.)
Separate compilation was helpful not just for speeding up the edit-compile-run cycle but also for handling high-level languages in small memory spaces; if your compiler ran out of memory compiling a large source file, you could split it into two smaller source files and link them together. Getting Linux to compile that way would probably be more work than writing a new OS from scratch.
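To make the splitting trick concrete, here's a toy example with hypothetical file names; each compiler invocation sees only half the program, and the linker stitches the halves together:

```c
/* half1.c: the part that wouldn't fit */
int helper(int x) { return x * 2; }

/* half2.c: an extern declaration stands in for the context
   the two halves used to share in one file */
extern int helper(int x);
int main(void) { return helper(21); }

/* cc -c half1.c && cc -c half2.c && cc half1.o half2.o */
```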
More inspiringly, though, Smalltalk-76 ran on an 8086 with 256 KiB of RAM, all kinds of MacOS software ran in 512 KiB of RAM, and the Ceres workstation built at ETH in 01987 to run Oberon had 2 MiB of RAM. So I'm confident that a fairly polished IDE experience is possible in 384 KiB of RAM and 1 MiB of fast Flash, especially if supplemented with larger off-chip fast Flash. It ought to be possible to do much better than a C-compiler-on-DOS kind of thing.
But you can clearly write a usable GUI environment using a C compiler on DOS or a Pascal compiler on a Mac SE.
You probably know all this, but it may interest other people:
Declarations change the parsing of C, so even a single-pass compiler needs to keep the declarations in memory somehow; cases like `foo * bar;` can be legally parsed as either a declaration of `bar` of type `foo*` or a (useless but legal) void-context multiplication of `foo` and `bar`, depending on whether `foo` has been declared as a type with typedef. Plus, of course, preprocessor macros can do arbitrary things. In PDP-11 C days it was common to put declarations of library functions directly into your code (with, of course, no argument types, since those didn't appear until ANSI C) instead of in a header file, and the header files were very small. Nowadays header files can be enormous, to the point that tokenizing them is often the bottleneck for (non-parallel) C compilation speed; often we even include extra totally unnecessary header files to facilitate sharing precompiled headers across C source files.
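Here's a minimal illustration of that ambiguity, with made-up identifiers:

```c
typedef int foo;   /* with this line, the parse below is a declaration */

int baz, quux;

void example(void)
{
    foo * bar;     /* declares bar as foo* (pointer to int)            */
    baz * quux;    /* same token shape, but baz isn't a type, so this
                      is an expression statement: multiply and discard */
}
```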
So I think it probably isn't straightforward to enable small computers to compile current C codebases like Linux.
tcc, however, would be an excellent thing to start with if you were going to try it.
I imagined a separate pass for the preprocessor, where the state would be only #defines and the current stack of #if blocks. The compiler would then only have to keep track of type declarations and globals (including functions). With some effort, it should be possible to encode this quite efficiently in memory, especially if strings are interned (or, better yet, if the arch can do mmap, sliced directly from the mapped source). Looking at tcc, it's somewhat profligate in that compound types are described using pointer-based trees, so e.g. function declarations can blow up in size pretty fast.
Yeah, definitely running the preprocessor as a separate process eases the memory pressure — I think Unix's pipeline structure was really key to getting so much functionality into a PDP-11, where each process was limited to 16 bits of address space.
Pointer-based trees seem like a natural way to handle compound types to me, but it's true that they can be bulky. Hash consing might keep that manageable. An alternative would be to represent types as some kind of stack bytecode: T_INT32 T_PTR T_PTR T_ARRAY 32 T_PTR T_INT32 T_FN or something for the type of int f(int *(*)[32]) or something (assuming int is int32_t). That would be only 11 bytes, assuming the array size is 4 bytes, but kind of a pain in the ass to compute with.
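As a sketch of how that could work: the opcode set, the postfix ordering, and the explicit one-byte argument count below are my guesses at one workable encoding (which also happens to come to 11 bytes), not necessarily the ordering above.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { T_INT32, T_PTR, T_ARRAY, T_FN };

static char stack[8][128];   /* tiny decode stack of type descriptions */
static int sp;

static void push(const char *s) { strcpy(stack[sp++], s); }

int main(void)
{
    /* int f(int *(*)[32]): argument type first, then return type,
       then T_FN with a one-byte argument count. */
    static const uint8_t code[] = {
        T_INT32,               /* int                      */
        T_PTR,                 /* int *                    */
        T_ARRAY, 32, 0, 0, 0,  /* int *[32], 4-byte count  */
        T_PTR,                 /* int *(*)[32]             */
        T_INT32,               /* int, the return type     */
        T_FN, 1,               /* function of one argument */
    };
    char buf[128], ret[128];

    for (size_t i = 0; i < sizeof code; i++) {
        switch (code[i]) {
        case T_INT32:
            push("int");
            break;
        case T_PTR:
            snprintf(buf, sizeof buf, "pointer to %s", stack[--sp]);
            push(buf);
            break;
        case T_ARRAY: {
            unsigned n = code[i+1] | code[i+2] << 8
                       | code[i+3] << 16 | (unsigned)code[i+4] << 24;
            i += 4;
            snprintf(buf, sizeof buf, "array[%u] of %s", n, stack[--sp]);
            push(buf);
            break;
        }
        case T_FN:             /* handles one argument only, for brevity */
            i++;               /* skip the argument count                */
            strcpy(ret, stack[--sp]);
            snprintf(buf, sizeof buf, "function(%s) returning %s",
                     stack[--sp], ret);
            push(buf);
            break;
        }
    }
    printf("%s\n", stack[sp-1]);
    /* prints: function(pointer to array[32] of pointer to int)
       returning int */
    return 0;
}
```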
Interned strings — pointers to a symbol object or indexes into a symbol array — can be bigger than the underlying byte data. Slices of an mmap can be even bigger. This is of course silly when you have one of them in isolation — you need a pointer to it anyway — but it can start to add up when you have a bunch of them concatenated, like in a macro definition where you have a mixture of literal text and parameter references.
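For comparison, interning conventionally looks something like this toy sketch (fixed-size table, no resizing or deletion); equal strings intern to equal pointers, at the cost of the table and the canonical copies:

```c
#include <stdio.h>
#include <string.h>

#define NSLOTS 1024            /* power of two, for cheap masking */

static const char *slots[NSLOTS];

static unsigned hash(const char *s)   /* FNV-1a */
{
    unsigned h = 2166136261u;
    while (*s)
        h = (h ^ (unsigned char)*s++) * 16777619u;
    return h;
}

/* Equal strings intern to the same pointer, so later equality
   tests are a single pointer comparison. */
static const char *intern(const char *s)
{
    unsigned i = hash(s) & (NSLOTS - 1);
    while (slots[i]) {
        if (strcmp(slots[i], s) == 0)
            return slots[i];
        i = (i + 1) & (NSLOTS - 1);   /* linear probing */
    }
    return slots[i] = strdup(s);
}

int main(void)
{
    printf("%d\n", intern("getc") == intern("getc"));  /* 1 */
    printf("%d\n", intern("getc") == intern("putc"));  /* 0 */
    return 0;
}
```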