The first RISC-V portable computer is now available (lunduke.substack.com)
369 points by josteink on March 15, 2022 | 176 comments



I'll skip this one, and wait for RVA22-compliant devices.

This machine uses the Allwinner D1[0] SoC, which implements a pre-standard draft of the V (vector) extension, incompatible with the standard V extension ratified in December.

Even if you don't enable or use V, and thus consider it just an RV64GC chip, it is fairly slow relative to what's already available on the market, and it comes with just one core.

I already have the ~$21 LicheeRV w/Dock[1], which uses the same D1 SoC. IMHO it is a much more cost-efficient way to play with the D1.

[0]: https://linux-sunxi.org/D1

[1]: https://linux-sunxi.org/Sipeed_Lichee_RV


For those not familiar with RVA22, its documentation is here: https://github.com/riscv/riscv-profiles/blob/main/profiles.a...

Essentially it is shorthand for standardising a set of optional instruction-set extensions into a bundle: RV for RISC-V, A for "application supporting" (as opposed to embedded or microcontroller use), and 22 for 2022 (in 2100 it'll be 100).


>Essentially it is shorthand for standardising a set of optional instruction-set extensions into a bundle.

This is a common misconception. RVA22 is not just a bunch of profiles, but 2022's OS-A Platform.

Platforms do, among many things, require profiles, which are what you've described. In this particular case the RVA22 platform requires the RVA22 profiles.

From your link, documenting profiles, there's an entire section[0] addressing the difference.

The actual platform specification can be found here[1].

[0]: https://github.com/riscv/riscv-profiles/blob/main/profiles.a...

[1]: https://github.com/riscv/riscv-platform-specs/blob/main/risc...


Thanks for pointing out RVA22 - I wasn’t aware of the initiative but it’s a great idea. I still have reservations about the core architecture but having some standards for the rest of the ecosystem is a big win.


>To be among the first to use an open source, RISC-V CPU in a regular computer? A portable one, no less?! To be a pioneer of a more open hardware future? That sounds like an absolute privilege.

Is this an actual open source processor? I can't find any info on it either way. A lot of the conversation around RISC-V seems to conflate the open source nature of the ISA with that of the actual implementations. Even the reference implementations are BSD licensed, so people are allowed to distribute proprietary derivatives, to say nothing of completely proprietary implementations.

If all or most of the implementations are proprietary, I feel like it is a mostly lateral move from ARM: although it is not open source, the ARM ISA is readily available [1], and the same kinds of rarefied CPU-engineering and academic circles that will have influence in the RISC-V Foundation already influence ARM.

[1] https://developer.arm.com/architectures/cpu-architecture/a-p...


>Is this an actual open source processor?

AIUI the SoC is not open, but the CPU design it contains, XuanTie C906, is.

It would even be decent if it had standard V extension, but sadly it uses an incompatible pre-standard draft.


XuanTie C906 is not open-source.

They only open-sourced a light version of the C906, the OpenC906. For instance, OpenC906 does not contain the vector extension.

But even the OpenC906 is only partially open source. They only disclosed Verilog code generated by an internal tool, so the real design source is not open. Worse, they don't open-source the testbenches, which makes any modification or verification very tedious.


Is there really any software yet that uses the V?


All software will use V once it is available. Any operation done in a loop over fixed size elements can have V applied to it. It is much more powerful than SIMD.
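To make that concrete, here's roughly what a strip-mined RVV loop looks like with the C intrinsics (a sketch; names assume the ratified v1.0 intrinsics in riscv_vector.h from recent GCC/Clang, not the D1's pre-standard draft). Note there's no scalar tail loop: the hardware picks each chunk size via vsetvl.

    #include <riscv_vector.h>
    #include <stddef.h>

    /* x[i] *= a for all i, whatever the hardware's vector length is. */
    void scale(float *x, size_t n, float a) {
        while (n > 0) {
            size_t vl = __riscv_vsetvl_e32m1(n);            /* elements this pass */
            vfloat32m1_t v = __riscv_vle32_v_f32m1(x, vl);  /* load vl floats */
            v = __riscv_vfmul_vf_f32m1(v, a, vl);           /* multiply by scalar */
            __riscv_vse32_v_f32m1(x, v, vl);                /* store vl floats */
            x += vl;
            n -= vl;
        }
    }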


Any software that uses memcpy will use V once glibc is updated.


I must admit, I've yet to find a performance-relevant loop that I can't already do in SIMD; the ones I can't all have dependencies on previous iterations, such that no magic instruction set is going to help unless it's capable of time travel.


LLVM's Polly[0] can often do magic there, by renumbering[1] the iteration space. Where variable vector length instructions help is decoupling the chunk size from the machine code, because they take care of the remainder that doesn't fit in whole vectors/chunks in an agnostic fashion. It's so you can get at least most of the gains from wider vector units without needing to change the code.

[0]: https://polly.llvm.org/

[1]: https://en.wikipedia.org/wiki/Polytope_model


It's not that you couldn't have used fixed SIMD, it's so it can rescale the SIMD automatically.


Right, but portable code != portable performance. See also: OpenCL.

There's also the observation that "keep SIMD vectors small but many" (e.g. Apple's ARM chips) is superior to "super long vectors" (Intel AVX-512), as it is much more flexible while delivering similar performance for tasks that are amenable to larger vectors. Having an architecture push towards the latter seems a retrograde step to me.


The BSD license is definitely open source. It’s just not ‘free software’ in the FSF sense.


BSD license absolutely is "free software" in the FSF sense. The four freedoms are guaranteed by it, and it is even listed explicitly in their list of free software licenses.


From my reading of the four freedoms, none of them apply to BSD code.

> FREEDOM 0: This says that the user has the right to use the software as well as according to his/her needs.

Code containing BSD licensed components doesn't guarantee this to the user, since they are typically further constrained by EULAs and the like.

> FREEDOM 1: This says that the user can study the working of the software and make changes to meet his/her requirements to have the desired results. Here to make changes to a program, having its source code is a precondition.

Code containing BSD licensed components doesn't require giving the user the source.

> FREEDOM 2: This says that the user has the right to redistribute the copies of the software to help out others that may require the same.

Code containing BSD licensed components commonly restricts users from redistributing.

> FREEDOM 3: This is an extended version of Freedom 2 which says that the user can also provide others with copies of the software in which they’ve made modifications by doing this the developer allows the community an opportunity to benefit from his/her changes. Also, having its source code is a precondition here as well.

Once again, code containing BSD components frequently doesn't grant the user access to the source.


Source code which is presented under the BSD license satisfies the Four Freedoms.

Compiled binaries derived in whole or in part from BSD-licensed source do not on their own satisfy the Four Freedoms.

You need the source. But if you have the source, that's sufficient.

> Code containing BSD licensed components doesn't guarantee this to the user, since they are typically further constrained by EULAs and the like.

Source code which is available under a BSD license is available under BSD, full stop. If the software is constrained by a EULA, then it's not actually BSD-licensed.

If you are the copyright holder, you might also issue the software under another license. But there's not much point to dual licensing when one of the licenses is BSD — a downstream user only has to satisfy the minimal requirements of the BSD license to keep their license to use the software, and they need not satisfy more onerous requirements to get a second license to use the same software.

> Code containing BSD licensed components don't require giving the user the source.

If you have the source, you have the source. The requirement is already satisfied.

The BSD license doesn't guarantee you access to modifications, or to other code used alongside the BSD-licensed code. But that's not what's at issue — that's a copyleft requirement, not a Free Software requirement.

> Once again, code containing BSD components frequently doesn't grant the user access to the source.

It is true that the provider of a proprietary binary which incorporates BSD code is not required to provide you the source code that they used. But again, that's not what's at issue.

These arguments advocate for strong copyleft licenses, which are a subset of Free Software.


> Source code which is available under a BSD license is available under BSD, full stop. If the software is constrained by a EULA, then it's not actually BSD-licensed.

It says "software", not source code. You can have software that is BSD licensed and not have source available. The BSD license says that redistributions of source code are permitted, but not that source is made available to you.

> Copyright (c) <year>, <copyright holder> All rights reserved.

> Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

> Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

> Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

> All advertising materials mentioning features or use of this software must display the following acknowledgement: This product includes software developed by the <copyright holder>.

> Neither the name of the <copyright holder> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

> THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> AS IS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Nowhere does it grant you access to the source code. It only says that you are allowed to redistribute the source should you gain access to it. So I can write a BSD licensed program, give no one the source, only binaries, and that's allowed.

Additionally, incorporating BSD licensed code into a proprietary work does not grant you the ability to relicense the BSD code. It's partially BSD licensed, and you, as the distributor of the combined BSD/proprietary work, aren't compelled to give source.

> If you are the copyright holder, you might also issue the software under another license. But there's not much point to dual licensing when one of the licenses is BSD — a downstream user only has to satisfy the minimal requirements of the BSD license to keep their license to use the software, and they need not satisfy more onerous requirements to get a second license to use the same software.

No, you don't get to relicense the combined work. The BSD components remain BSD licensed.


> The four freedoms are guaranteed by it

Nope. The BSD license does not guarantee source to users. It's very free in that it places minimal restrictions on what can be done with the code, but it doesn't guarantee the four freedoms to people who (for instance) receive binaries of modified works.

I'm not saying it's bad because of this, I don't want to start a flame war here.


GNU+FSF[0], OSI[1] and I disagree with you. But you can have your own opinion, sure.

[0]: https://www.gnu.org/licenses/license-list.html

[1]: https://opensource.org/licenses


It's hardly just my opinion; there's a lot of opinion out there on the web that agrees with me. It's very hard to see how this definition on the OSI site -

"Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge"

Fits licenses like BSD/MIT, which do not require source access to be provided.

And the FSF say this -

"To decide whether a specific software license qualifies as a free software license, we judge it based on these criteria to determine whether it fits their spirit as well as the precise words"

So they give themselves pretty loose rules for inclusion there. I would say that the BSD license, with its very few restrictions, is definitely open source and its spirit is all about freedom. But I really don't see that it supports the four freedoms, specifically freedom 1 (from the FSF site here: https://www.gnu.org/philosophy/free-sw.html) -

"The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this."


Yes, it very well is possible to have an even more extreme view than the GNU project.


I'm sorry, what's extreme about my view? I didn't say they weren't worthwhile, nor that they weren't open source, nor that they aren't free.

Do you disagree with my evaluation above? If so how?

I'm genuinely curious because I've been unable to mentally reconcile the OSI listing BSD licenses for years now. It scarcely matters one way or the other, it's not like the OSI is the ultimate moral arbiter of all that is good and right in the world. It just seems oddly inconsistent.


I agree with FSF and OSI respective definitions of free software and open source.

I also agree with their interpretation of BSD licenses and recognition as free software and open source respectively.

The OSI (and FSF) aren't the "ultimate moral arbiters of all that is good and right in the world". They do, however, represent me and, you'll find, a majority of people who care about free and open source software.

You're, of course, free to advocate whatever you want. I would suggest directing your efforts at convincing OSI/GNU/FSF to adopt your interpretation that BSD runs counter to the four freedoms, and to reclassify BSD licenses as a start.

Should that fail, you can consider starting some sort of alternative organization. If your views are indeed shared by many, there's a good chance it can gain traction. But only if that's the case.

As this is straying far offtopic for the associated story, I will not post further replies in this thread. I kindly request the same from you.


> I agree with FSF and OSI respective definitions of free software and open source.

> I also agree with their interpretation of BSD licenses and recognition as free software and open source respectively.

These seem to be in conflict here, as pointed out above, and your responses to my honest questions about this are not enlightening, in fact quite the contrary - you seem to be getting annoyed and are accusing me of having some sort of agenda.

> You're, of course, free to advocate whatever you want.

I'm not advocating anything. I'm certainly not advocating reclassification of any licenses. I'm not sure why this conversation is putting your back up so much.

> my end will not post further replies on this thread.

That's probably for the best, it seems to be upsetting you.


BSD is free software in the FSF sense. It isn't copyleft though.


Yeah I think they were meaning it's not copyleft, just didn't phrase it the right way.


It's not about the license, really. It's just that I see no real reason why we should consider all the proprietary implementations of RISC-V materially more open source than any random ARM processor. Either way we know the ISA but little of the internals, and can't practically audit or change them.


The implementation that's being talked about, C906, is actually open source.

Obviously, everybody gets a free license to RISC-V, and most implementations are proprietary. This doesn't mean there aren't any open implementations; C906 is one of many.


It's open source in that you can use the original code/design as you please in your implementation. I don't think anyone is stating that it's open source in terms of copyleft/gpl


The keyboard seems to be the weakness of the DevTerm, which is a shame. This review was typical:

"The keyboard is just OK. To be honest, it was the only part of the device I was a bit disappointed about. The keys are small, not spaced apart enough, and can be hard to press." [1]

[1] https://artvsentropy.wordpress.com/2022/01/15/writing-on-the...


The key positions are also different from a normal keyboard: all rows are shifted by 1/4 of a key width, whereas a QWERTY keyboard has 1/2 shifts between rows 1-Q and A-Z, and 1/4 between Q-A.

From a psychological UX perspective, I'd prefer 1/2 shifts between all the rows, creating a hex grid.

Ortho is also logical and justifiable, although its packing is worse than hex.

But if convention is to be broken, it should not be for the uniform 1/4 shift this product has chosen, which offers no benefit and creates an unfamiliarity cost. Better to stick with the standard key positions than this.


The trackpad isn't fantastic either: it's unusable out of the box, but the community made a patch that makes it mostly OK.


Haha, just like the Pinebook Pro.

I've actually got an Atom-based PC that uses the exact same chassis and trackpad as the Pinebook Pro, and I've been trying to use the same community firmware, because it is indeed horrible by default: random touches, scrolling instead of pointing, etc. No palm rejection at all.

Unfortunately I never managed to get it to install :(


Sorry, trackball rather than trackpad.


Another similar device, although more powerful and more interesting hardware-wise, would be the Pocket Popcorn Computer, which unfortunately still hasn't provided any proof of being a real product other than shiny 3D renders.

https://pocket.popcorncomputer.com/

Edit: I stand corrected, I totally missed the blog posts and the videos, and I'm happy to see I was wrong; the pandemic has been hard for many businesses, unfortunately. If I may suggest, though, they should put something on the homepage too: I've been checking it from time to time for probably over two years now, and it never showed that there were working prototypes. That would help to fuel interest.


I went to school with Alan and Jose, and I believe they're honestly doing their best to get a real product out there. The company blog (https://blog.popcorncomputer.com/) indicates that they should be shipping to the earliest purchasers very soon.


> still hasn't provided any proof of being a real product other than shiny 3D renders

There are pictures and videos of development devices on Twitter; for example: https://twitter.com/gajjar04akash1/status/148444172903842611...


>Quad-Core ARM Cortex-A53 CPU

Wrong ISA to be on topic.


For some reason I hate it when the Fn key is the first key on the left. (What I hate even more is that laptop manufacturers have no agreement on this.)


At least some brands let you switch Fn and Ctrl via BIOS settings. This helps me stay consistent with full-size external keyboards.


I'm the exact opposite - I wish more of them put it on the left! Especially with smaller keyboards where you need it to access certain keys.


Yeah Thinkpad


Why? Do you use the Fn key all the time? I sent my Dell laptop back because the ctrl key was on the left and an unremappable Fn key was where ctrl is supposed to be.


I press Ctrl+Shift+C/V quite often, and for me it's easier to do with the left hand only when Ctrl is below Shift.


I like to remap Caps Lock to Control to get a similar layout as the old Sun keyboards. It also makes pressing the Control key a lot easier.
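For anyone wanting to try this: on Linux/X11 it should be a one-liner, something like "setxkbmap -option ctrl:nocaps" (ctrl:nocaps is a standard XKB option), and macOS has it built in under Keyboard > Modifier Keys.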


If you have a real keyboard you can use the pad of your hand to hit the control key in the bottom left corner, which some people find even easier. This is definitely the easiest way to hit Ctrl-Q.


Apple too. Was a sad day when Sun and Apple gave up that fight.


Wasn't just Apple and Sun, it was almost everyone. The first IBM PC keyboards, the Apple machines, Atari and Amiga, almost all Unix workstations, etc. all had Control where Caps Lock sits now. I can't remember the exact IBM keyboard where it switched, but the Model M moved it, and maybe a bit before that.

Caps Lock is such an infrequently used key to be taking up such valuable real estate... I've never gotten mentally used to Control being banished to the lower left. I remap on every keyboard I get.


The other thing Apple got right and PCs screwed up: the bumps on the home row used to be on ‘d’ and ‘k’. That way, even if your right hand was offset one to the right, you’d still feel the bump and notice it was under the wrong finger, which is much easier than noticing that you don’t feel a bump at all.

I don’t recall whether other vendors got that right.


At least with macOS you don’t actually need control very often.


Emacs + iTerm: I need it all the time.


It also screws up my muscle memory. I hate the way ThinkPads do this too. At least you can remap them there, but in some cases my work had the BIOS menu locked down :(


I imagine the macbook keyboard is fine in this situation because ctrl key is still directly below the shift key.


Control key has been bottom left key on most keyboards for 40 years now.


The problem isn't the CTRL key on the bottom left. The problem is the Fn key being useless and taking up a spot at all. Mac has this right.


Fn is not useless on keyboards that don't have a dedicated function key row.


true.

but double wrong is still not right


I agree in principle. But on a device this small, cramming another row onto an already tiny keyboard would be much worse ergonomically than having to deal with the Fn key, IMO.


I guess you haven't noticed the trend of laptop manufacturers assigning the F keys to random device controls like volume and brightness by default, then requiring you to hold Fn to actually use the F keys normally.


Since I never use function keys, mine default to doing the system behaviors.


This is a very dangerous precedent for RISC-V.

Allwinner is a known champion of GPL violations, obfuscated technical details, and binary blobs. This defeats the entire purpose of RISC-V and creates an environment similar to what we now have for ARM (everything locked down, everything opaque).

The advertising is basically false advertising: this is no computer for "free hackers", just another locked-down SBC with a non-free core.


The module is available on its own, and cheap enough that I picked one up to try in a different rPi compute module host board: https://www.clockworkpi.com/product-page/copy-of-clockworkpi...

Hopefully it works!


Awesome, I was hoping to avoid the keyboard and stuff I don't need, so this is perfect.


It looks a lot like the TRS-80 Model 100:

https://www.digibarn.com/collections/systems/trs80-model100/


Unfortunately, although it looks like a TRS-80 Model 100/102 or a WP-2, it is actually enough smaller that it looks like you wouldn't be able to type on it.

I'd buy something like this if it were the size of a normal keyboard (and as long as it cost less than $300, oh, and as long as it has 4 GB of RAM; yeah, probably I'm not actually the target market).


I've used my DevTerm for a while, and the keyboard is definitely small. You get better at using it after some time, but the key size and the unorthodox key placement due to size constraints do make it more of a toy than an actual “dev term”. That's fine for me, since I didn't have any serious use planned for it: I'm currently using it as a text game terminal with TTY duplication on the thermal printer. That's pretty fun to hand to friends to introduce them to some retro computing, but I wouldn't advocate the device as a serious computing platform.


How many millimeters across and tall is the keyboard?


I am very excited to see RISC-V devices come to end users.

Then I am thinking about how this device could fit into my daily life. I am wondering what the typical scenarios for portable computers are these days?


A portable computer people tend to fit into their daily life is their smartphone.

An Android RISC-V port was demonstrated over a year ago. I fully expect some RISC-V smartphones to hit the market as soon as chips with high-performance cores are ready.

Realistically, that means RVA22 (TBA in spring) + 6 months (assuming the best-case scenario of an immediate tapeout) + whatever time it takes to validate a device built around the new chip.

I would say 2023H1, if I had to guess.


I use a small laptop for pretty much all my daily life stuff. I do have a phone that I use for occasional mobile web access, navigation, etc., but my main computer is a laptop. I rarely browse or surf with the phone. I'd like to have a small Linux tablet that I can use familiar dev tools on, but the Inkplate 10 looks more attractive to me than this weird slab-like thing. The slab thing inherits too much from gaming devices, imho. Nothing wrong with gaming if that's what you're into, but the TRS-80 Model 100 of yore was revered as a writing device and its keyboard was better suited for that purpose.


IMHO, for devs, a modular laptop would be more appropriate: a display panel, a CPU/GPU box, mouse/keyboard, a battery pack... and the bag designed to carry those components.


Check out the Pockit that was on here a few days ago: https://www.youtube.com/watch?v=b3F9OtH2Xx4&list=UU49EYw900L...


What could be really cool is a lunchbox portable computer with a keyboard that closes on the front: https://www.flickr.com/photos/befuddledsenses/493303864

I think you could make it pretty small but still large enough to fit two small PCI-express cards and replaceable/upgradeable RAM modules. It would be a good format for tinkerers, very amenable to having replaceable parts and being user-serviceable. Throw some GPIO in there and you would have a real winner for the maker crowd.

EDIT: someone built this lunchbox portable with a Raspberry Pi and it looks really awesome: https://hackaday.com/2020/06/09/lunchbox-cyberdeck-is-a-tast...


What does it cost to make a case and keyboard like that? I guess they used injection molding?


Yes, it's injection molded. Here you can see some of the mold sprues that come with the kit: https://static.wixstatic.com/media/3833f7_352ac21900e64720a5...


Unpopular opinion: I don't know what the deal is with these small keyboards... like, you're not going to be able to do work on that long term.


If you care about ergonomics you wouldn’t use a laptop, let alone a slab PC.


Depends on the laptop; 13" is personally the size I go for and find usable, but yeah, normally I use a 65-68% mechanical keyboard.


More that a display attached to the keyboard is horrible from an ergonomics perspective. Laptops turn humans into gremlins.


Oh yeah, I get what you're saying about staring down; an external monitor you can look at straight ahead is ideal.


It's not a device intended to do work on long term.


Looks awesome as a toy, but a bit impractical to travel with. Would be an instant buy when it's available in a Pinebook Pro form factor.


"Awesome toy" is a good way to put it. Very neat, but not practical for me. Really excited to see some RISC-V stuff though.


Looking at some videos, overall my impression is that it's too large to be very portable, but too small to be truly usable. The device sits in a bit of a size "dead zone" where it has the worst aspects of all form factors.


But on a positive spin, maybe that's exactly what you need to come up with some unique use cases.


>Now to see if I can convince my wife to let me buy one.

Exactly what I'm thinking.


This looks just like the venerable Tandy 100!

http://www.oldcomputers.net/trs100.html

That can't be a coincidence.


I love the look of it! Those rotating knobs look great to use compared to fiddly buttons or touch interfaces. No matter how "modern" touch interfaces are, the human body doesn't change and benefits from tactile feedback and immediate control. (I don't know what the knobs do, but if it's, for example, brightness, that's fantastic! I have a monitor that makes it harder to change brightness than to enable a gaming crosshair...)


The knobs on the side? They turn a quarter turn and the front and back half of the housing come apart.


What? I thought they were programmable. That would have been a selling point for me.

This is like the classic home button on an iPhone being a tool for detaching the screen from the back of the phone.


The problem with ARM is not ARM itself. Competition, especially now with Apple in the game, means they cannot close themselves off to free software. The problem with ARM is the peripherals, like the GPU (see the Pi).

For RISC-V, on top of the worry that it is (a) largely just supporting a would-be 10x-Russian empire so that no sanctions can bite later, and (b) addressing only the hardware side's concerns, and even those only partly... we still face all these blobs of non-open things.

Why bother?

Helping evil?


I've got the A04 model (ARM). It's more of something to play with than anything else. I keep it by my bed for fooling around with my side projects at night.


This is certainly dead on arrival, right? Nobody wants or has ever wanted a computer with a screen flush with the keyboard. This was even known in the late 90s: https://en.wikipedia.org/wiki/Jornada_(PDA)#Jornada_720

Maybe someone can hack a longer flexible ribbon cable and a hinge? And sell it as an aftermarket kit? Please? (The thing looks pretty cool otherwise).


I'd use it as a macro keypad. There's a mostly unaddressed use case for high-end macro users. You can get remotes with a few buttons, a knob, and in some cases a screen capable of a bit of text labeling (like the Xencelabs Quick Keys). You can hack a dollar-store keyboard and label it yourself, but then you have to maintain the software to keep it in sync. You can get the Elgato Stream Deck, which is effective but expensive and pretty one-of-a-kind between the button displays and the software integrations. And you can use some phone apps to get a touchscreen remote. But there are only a few examples of something like this device, where you have a bunch of real keys plus a separate screen that can support high-res graphics, without going all the way up to laptop form factors (too much screen).


They've been selling terminals with a similar form factor (and different internals) for a couple of years now.


> This is certainly dead on arrival, right? Nobody wants or has ever wanted a computer with a screen flush with the keyboard.

The Tandy 100, whose design inspired this machine, was very popular among journalists as a note taking machine.


Yeah, I saw a reporter typing up a sports story on a TRS-80 Model 100 in a Wendy's in San Mateo in 01996, 13 years after the machine's release. I think it had been discontinued for many years at that point.

They're still around but I haven't seen one since then.


> Nobody wants or has ever wanted a computer with a screen flush with the keyboard.

You are badly wrong here.

I see others are pointing to the Tandy Model 100.

I will add the Cambridge Z88: http://www.computinghistory.org.uk/det/4860/Cambridge-Z88/ https://oldcomputers.net/cambridge-z88.html

Still in use, OS still being updated, spares and addons available new. https://www.rwapsoftware.co.uk/z88.html

And the Amstrad NC100: https://www.ncus.org.uk/intro.htm

I have both. I have written about them: https://www.theregister.com/2011/11/10/portable_writing_tool...

I also have both an AlphaSmart 3000 and a Dana Wireless: https://en.wikipedia.org/wiki/AlphaSmart

There was the QuickPad: https://www.techtoolsforwriters.com/tag/quickpad/

... and the DOS-compatible QuickPad Pro: https://www.victor-notes.com/qppro/

And the Fusion Writer: https://www.at-udl.net/Writer-Fusion_2223

And the Tandy Portable Wordprocessor range: http://www.computinghistory.org.uk/det/2720/Tandy-Portable-W...

In terms of contemporary modern kit...

The Freewrite: https://getfreewrite.com/

There's this modern peripheral: https://www.kickstarter.com/projects/ficihp/ficihp-multifunc...

And this device: https://www.hackster.io/news/the-ready-computer-model-100-bl...

As you can see this is an area of considerable interest for a lot of people, and I'd like an inexpensive modern one myself. For me, the DevTerm is too small.


We can exclude all of the 1980s DOS-based models you linked. In the 80s, getting a computer made was a feat unto itself; a tilting LCD monitor would have sent the price to the moon.

So that leaves the last 3 links you provide.

#1: The screen is slightly tilted, so it isn't "flush with the keyboard" as I described. Also, this seems to have a typewriter-quality keyboard, which is the selling point there. I'm sure the users are touch-typists who are writing their novel, not someone trying to use bash and Vim, which, I'll have you note, is the target audience of this RISC-V computer.

#2: ...This is a keyboard. Which you connect to an external monitor. Sure, it has an internal monitor, but you can clearly see from the demo that it's meant for preferences and settings. You don't use it standalone. On-the-go seems to be a feature, but it isn't the focus, so it isn't a good example.

#3: ...This is a toy. This is a nostalgia talisman for people who want to go to a techno rave, or people who like the aesthetic of the Tandy 100. It isn't meant for practical use.


I disagree.

Some of those old models were very successful and were very widely available, meaning that they still are today. When I wrote the Register article -- in which I said that I bought both an NC100 and a Z88 to research them, both for about £10 -- I didn't know about the (American) Alphasmart machines. Once I learned, I bought both, and while the 3000 was twice the price, that still means about £25. Easily found, widely available today.

My Dana Wireless cost more: circa £50 with shipping from the USA. Still not a lot.

This falsifies your statement that "nobody ever wanted" such things.

Your disingenuous statement that a tilted screen does not match your criteria is also false: the point here is that there's no hinge. It's one piece, robust and solid and cheap.

All 4 of the devices I have possess "typewriter-quality keyboards".

You are trying to wriggle out by splitting hairs. It's a very poor try at that.

These things are real, they were commercially successful, sold in large numbers, were very popular -- and are widely missed.

While there isn't a cheap decent-spec modern one, that's largely because commodity pricing has made laptops so very cheap now. I have done an ad-hoc feasibility study with a friend and we glumly concluded that we couldn't make one in small numbers for less than the price of a perfectly good ChromeBook.

But I submit that the DevTerm shows there is interest.


Who built the CPU itself? It's interesting that it claims V-extension support, even though that was only just ratified.


The vector extension was implemented in industry based on the draft rather than waiting for ratification, AFAIK.


>The vector extension was implemented in industry based on the draft

Not widely, if that's what you meant by "in industry".

Most (like SiFive) did indeed wait for ratification (in case of last minute changes), before announcing their cores.

Pre-standard V chips do of course exist, and this is a good thing (there's better testing in using actual hardware implementations).

They have to use the opcode space reserved for custom extensions, and obviously cannot claim V compatibility. No overlap with actual V opcode space, and therefore no compatibility issues.

These are, nevertheless, fine for embedded uses in which the vendor has control over the whole stack.

It does also mean they have to take on the associated cost of maintaining it, unless they choose not to use the custom extension at all, a choice that will realistically suffice for most deployments.


Allwinner D1: https://linux-sunxi.org/D1 - SoC based on XuanTie C906.


Allwinner; the core is a XuanTie C906, I think.


Very cool little thing, and it looks like something straight out of a sci-fi movie.


Direct link to the computer in question: https://www.clockworkpi.com/product-page/devterm-kit-r01


Does it run Emacs? :)


Evidently it runs LibreOffice and Vim, so I can't imagine Emacs would be any problem: https://artvsentropy.wordpress.com/2022/01/15/writing-on-the...


It appears that device isn't using the RISC-V module under discussion but an ARM version instead.

>I decided to install in my DevTerm the simplest module compatible with the popular RaspberryPi Linux distribution. It’s slower than the other “cores” but less power-hungry, and it’s very simple to install pre-compiled software and look up answers for any questions (as the RaspberryPi community is so huge). Yet this core still gives the user an ARM64-bit Quad-Core Cortex-A53 1.2 GHz CPU and 1 GB of RAM — more than enough to run simple writing apps.


Thanks, you're right. I think the RISC-V version also runs Linux, though, and has the same amount of RAM?


This seems like an inspiring step in the direction of a device that I want but which, as far as I can tell, doesn't exist. I have one burning question:

How small is it?

I've been trying to figure out what I need for comfortable portable writing. I have an Aspire One that's pretty much at the lower limit for key spacing for my hands, with 17 mm × 16 mm keys. I'm typing this on a cheap USB external keyboard with keys closer to 18.5 mm wide, and I think slightly tighter spacing would be more comfortable. I might be able to make do with 15 mm or even 14 mm horizontal key spacing, but my fingers would collide. Someone with smaller hands could manage a slightly smaller keyboard, but not that much smaller. Touchtyping becomes impossible and you're reduced to typing with two fingers like a five-year-old.

Unfortunately none of https://artvsentropy.wordpress.com/2022/01/15/writing-on-the... https://lunduke.substack.com/p/the-first-risc-v-portable-com... https://www.clockworkpi.com/product-page/devterm-kit-r01 bothers to give any dimensions at all, even when they spend a lot of time talking about how small it is.

From my point of view, this is more important than the information they did give, and it seems like it was pretty important to them too, so I don't understand why they omitted this information.

What I want is:

1. A computer powerful enough to recompile all its own software,

2. which can run a reasonably convenient programming environment (my bar is pretty low here: at least as good as Emacs with GCC on a VAX with a VT100, or MPW on a Macintosh SE),

3. which fits in my pocket,

4. which doesn't need batteries or plugging in,

5. which I can recover if I corrupt its filesystem, and

6. which is usable in direct sunlight.

This laptop fulfills points #1, #2, and #5. My cellphone fulfills points #2, #3, and #6. My solar calculator fulfills points #3, #4, #5 (trivially), and #6. It looks like this "DevTerm" fulfills points #1, #2, and #5, same as my current laptop but maybe slightly more portable. But I don't think any computing device exists, or ever has existed, that hits all of these points. But I think it's attainable now.

I think the conjunction of points #2 and #3 probably requires the thing to unfold or deploy somehow, unless some kind of Twiddler-like device would allow me to do it one-handed. There just isn't room for both of my hands at once on the surface of things that fit in my pocket (about 100 mm × 120 mm). Conceivably a laser projection keyboard could work.


I really don't understand everyone's obsession with mobile computing power. Why aren't we equipping our laptops with low-energy processors and using them to remotely access more powerful stationary workstations via networking? Instead of carrying around 4.8 GHz all the time, I'd much rather have multiple days of battery lifetime. We must have taken a wrong step somewhere. I am still convinced processing outsourcing is the future for all things hardware-intensive, such as gaming, barring some unexpected milestone in battery technology.

Besides,

> 4. which doesn't need batteries or plugging in

what do you mean by that? Would that include only solar-powered devices?


> I am still convinced processing outsourcing is the future for all things hardware-intensive, such as gaming

I really hope not. The closest data center to me is about 70ms on a great day.

Most online games chose to host on servers that are 200-400ms away, it sucks immensely, so I don't play those games.


I was talking of the future, but you're describing the present! There's no reason why local neighbourhood datacenters and ubiquitous high speed mobile networks couldn't be a thing some day.


You don't even need neighborhood datacenters; you could just access your desktop machine in the basement with your lightweight portable mobile terminal in the living room. Just the other night I used mpv on one laptop to stream a video file from the other laptop over Wi-Fi with python2 -m SimpleHTTPServer, and XPra can do the same thing in the same way for remotely accessed applications. (Of course, ssh -X can kind of do that too, but it's a lot less efficient and more insecure.)

In theory we could have much-lower-power wireless communication systems: maybe using lasers, with MEMS corner reflectors on the mobile station to transmit, and a simple photodiode with a dichroic filter over it to receive. Or maybe using submillimeter waves from a phased-array antenna, like the Starlink terminal. Or just time-domain UWB pulse radio in conventional microwave bands, but optimized for low power usage instead of super high data rates or precise ranging.

But, right now, evidently even Bluetooth Low Energy from the leading ultra-low-power silicon vendor costs 10 milliwatts when you have it on. And it's not clear if the technologies I described above will materialize. So the amount of dumb that it makes sense to put into a wireless networked mobile terminal is only about 10 milliwatts of dumb. And 10 milliwatts is not that dumb. Even with a conventional low-power CMOS Cortex-M (300 pJ per cycle, 2 DMIPS/MHz) that's about 30 MIPS or 60 DMIPS of dumb. That's dumb like a SPARC 10 workstation from the mid-90s, not dumb like a VT100 or an analog TV tuner. With subthreshold logic it's more like 600 DMIPS of dumb, dumb like a 450 MHz Pentium II (introduced 01998, mainstream around 02000).


See https://news.ycombinator.com/item?id=30691361 for notes on energy sources. I'm open to alternatives to solar.

I agree that making computing power mobile makes it enormously more expensive, especially if you consider batteries unacceptable. But making computing power remote means that you need to spend energy on a radio to access it. That's a good tradeoff for some things, but not for others. In my other comment, note that if we believe Ambiq's datasheet, we can get the CPU speed of a SPARC 20 for 1.8 milliwatts.

It turns out the chip also includes a Bluetooth Low Energy 5 radio, so you can use it to remotely access more powerful stationary workstations via networking, as long as you're within a few meters of a base station. The radio costs 10 milliwatts when you're running it, six times as much as the Pentium-class CPU. Normal radios (Wi-Fi, cellphones) cost orders of magnitude more than that.

So constant remote wireless access to more powerful stationary workstations doesn't start to save energy until the amount of computation you're accessing is close to a gigaflop. Maybe closer to a teraflop if we're talking about streaming full-motion video. Intermittent remote access, of course, is a more reasonable proposition.

It's true that gaming commonly uses teraflops or petaflops of computing power, and plugging in such a beast in a server closet is a huge improvement over trying to somehow cram it into your pocket. But there are a lot of day-to-day things I do with a computer — recompiling my text editor, writing stupid comments on internet message boards, chatting on IRC, simulating an analog circuit, reading Wikipedia — that very much do not require gigaflops of computing power.

(Remote wired access of course can use very little power indeed, but if you're in a position to plug into a wire, you might as well deliver power over that wire too.)

If you take a modern cellphone and take almost all the processing power out of it, you still have a 1000-milliwatt radio and a 1000-milliwatt backlit screen. So you aren't going to get multiple days of battery life that way. 1000 milliwatts is enough to pay for dozens of gigaflops of computing power nowadays.

Myself, I have another reason: I travel, though I've traveled very little during the pandemic. But I am often someplace other than at home: at a café, in a park, at the office, in a bus, in the subway, in a taxi, visiting a friend in another city, at my in-laws' house, and so on. All of these places are out of Bluetooth range of my house. I could obtain internet bandwidth from a commercial provider, but that sacrifices privacy, it's never reliable, and I don't consider it reasonable to make my core exocortical functions dependent on the day-to-day vagaries of mere commerce. Personal autonomy is one of my core values.


> which doesn't need batteries or plugging in

So, uh, it has to be solar powered only? That sounds fantastically niche and not like something someone would want to design/build.

I think any reasonable "work" performance level would require too large an area of solar cells in order to be practical, especially if it should work indoors.


Solar power does seem like the most reasonable option, although other possibilities include energy harvesting from keyboard interaction, piezoelectric shoe soles like those that drive those LED-flashing shoes, or a pullstring. You need some energy storage, but at low power levels a modern ceramic capacitor is more than adequate in between pullstring pulls or whatever. Many of my earlier notes on the topic are in https://dercuano.github.io/notes/keyboard-powered-computers.... and https://dercuano.github.io/topics/energy-harvesting.html; more recent ones are in http://canonical.org/~kragen/derctuo and http://canonical.org/~kragen/dernocua.

But last year I learned about two innovations that have been brought to market that dramatically increase the potential abilities of such a device.

— ⁂ —

It's entirely possible that your definition of 'any reasonable "work" performance level' is orders of magnitude higher than my "reasonably convenient programming environment". The VAX reference point is 1 Dhrystone MIPS, and the Macintosh SE (a 7.8 MHz 68000) was also about 1 Dhrystone MIPS. Current ARM7 Cortex designs deliver about 2 Dhrystone MIPS per MHz, so we're talking about 500kHz of ARM7 instructions here.

Without any fancy circuitry at all I was able to squeeze 8 mW out of a 38-mm-square solar panel from a dollar-store garden light, in direct sunlight. Theory predicts it should be capable of 200+ mW (16% efficiency, 1000 W/m²) so hopefully I can get better results out of other panels. The amorphous solar panels normally used on solar calculators, which work well indoors as well as in direct sunlight, are nominally about 10% efficient, which is to say, 10 mW/cm² in direct sunlight or 100 μW/cm² in office lighting. It's easy to imagine dedicating 30 cm² of surface area to solar panels, which would give you 300 mW outdoors or 3 mW indoors. (38 mm square is 14 cm². 30 cm² in medieval units is about 4⅝ square inches, depending on which inch you use.)

— ⁂ —

So, what kind of computer can you run on a milliwatt or so?

Ambiq sells an ultra-low-power Cortex-M4F called Apollo3 Blue with 1MiB Flash plus 384 KiB RAM; the datasheet claims that, when fully active, at 3.3 volts, it uses 68 μA plus 10 μA per MHz running application code, and it runs at up to 48 MHz bursting to 96 MHz. So at, say, 10 MHz (20 VAXen or Macintosh SEs), it should use 170 μA or 550 μW. At 48 MHz (100 VAXen or Mac SEs), 550 μA or 1.8 mW. I haven't tested it yet. SparkFun sells a devboard for US$22 at https://www.sparkfun.com/products/15444.
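If it helps, here's that arithmetic worked explicitly (nothing beyond the datasheet numbers quoted above):

    /* Back-of-envelope check of the Apollo3 figures quoted above:
       I(uA) = 68 + 10 * MHz, P = I * 3.3 V. */
    #include <stdio.h>

    int main(void) {
        double mhz[] = {10, 48};
        for (int i = 0; i < 2; i++) {
            double ua = 68 + 10 * mhz[i];
            printf("%2.0f MHz: %3.0f uA = %4.0f uW\n", mhz[i], ua, ua * 3.3);
        }
        return 0;   /* 10 MHz: 168 uA = 554 uW; 48 MHz: 548 uA = 1808 uW */
    }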

Remembering, of course, that benchmarks are always pretty bogus, we can still roughly approximate this 20 DMIPS performance level as being roughly the processing speed of a 386/40, 486/25, or SPARC 2, and the 200 DMIPS peak as being roughly the processing speed of a Pentium Pro, PowerMac 7100, or SPARC 20 — but with enormously less RAM, an enormously lower-latency "disk", and a lower bandwidth budget for the disk. And no MMU, so no fork() and no demand-paged executables. You do get memory protection, hardware floating point, and saturating arithmetic, but no MMX-like SIMD instructions.

— ⁂ —

If the computer's output isn't going to be just an earphone or something, though, you may need to spend substantial energy on a screen as well. Sharp makes a 400×240 "memory LCD" display (0.1 megapixels) used in, for example, Panic's Playdate handheld game console (https://youtu.be/uziFTK5c29k) which seems like it should enable submilliwatt personal computing; the datasheet says it uses 175 μW when being constantly updated with a worst-case update pattern at its maximum of 20 Hz. Adafruit sells a breakout board for the display for US$45: https://www.adafruit.com/product/4694

A VT100's character cell was 8×10 (https://sudonull.com/post/30508-Recreating-CRT-Fonts), so its 80×25 screen was 640×250, 0.16 megapixels, while the Macintosh SE was 512×342, 0.18 megapixels. Both of them were 1 bit deep: a pixel was either on or off. So two of these LCD screens together gives you more pixels than either of these two reference-point computers. However, the pixels are physically a lot smaller, which may compromise readability. (On the other hand, as you can see from the Playdate videos, you can do a lot of things on this display that a VT100 or Mac SE could never dream of doing because of lack of compute and RAM.)

— ⁂ —

So, I think it's eminently feasible now, although it probably wasn't feasible ten years ago. But it's going to be a challenge; I can't just compile Linux and fire up Vim. Even if Linux could run on these super-low-power computers, they don't have enough memory to recompile it.


Correction: the VT100's characters in 80-column mode were 10×10, which is 800×250 pixels, or 0.2 megapixels, not 640×250 as I said. That's 4% more than two of these 400×240 panels together. That said, the X Windows 5x8 font is reasonably readable, and the 6x10 font is perfectly fine. 4×6, as in xterm -fn -schumacher-clean-medium-r-normal--6-60-75-75-c-40-iso646.1991-irv, leaves a lot to be desired.

However, videos of the display hardware like https://youtu.be/zzJjE1VPKjI show that you can get really astonishing things out of 0.1 megapixels when you drive it with current levels of computation instead of, like, an 8085. (That video is a little misleading in that it claims "60 fps animation", while the datasheet claims the max is 20 fps.)


We compiled C code just fine on machines with less than a megabyte of RAM back in DOS days. I wouldn't expect gcc to work on such a machine, but some older compiler (lcc? pcc?) should be feasible.


Yeah, even older versions of GCC ought to work, though they don't come with ARM7 backends. (Normally GCC uses fork() to run different compiler passes, but DJGPP demonstrates that it can run without virtual memory without extensive surgery.) C was developed on the PDP-11 where the per-process address space was 64 KiB — and though I think the PDP-11 hardware supported separate stack, data, and code segments, I think the Unix environment (and C in particular) didn't. And the BDS C compiler supported most of C under CP/M on the 8080. (It's free software now, but unfortunately it's written in 8080 assembly.)

Separate compilation was helpful not just for speeding up the edit-compile-run cycle but also for handling high-level languages in small memory spaces; if your compiler ran out of memory compiling a large source file, you could split it into two smaller source files and link them together. Getting Linux to compile that way would probably be more work than writing a new OS from scratch.

More inspiringly, though, Smalltalk-76 ran on an 8086 with 256KiB of RAM, all kinds of MacOS software ran in 512KiB of RAM, and the Ceres workstation built at ETH in 01987 to run Oberon had 2 MiB of RAM. So I'm confident that a fairly polished IDE experience is possible in 384KiB of RAM and 1 MiB of fast Flash, especially if supplemented with larger off-chip fast Flash. It ought to be possible to do much better than a C-compiler-on-DOS kind of thing.

But you can clearly write a usable GUI environment using a C compiler on DOS or a Pascal compiler on a Mac SE.


C isn't difficult to parse, so you could probably automate splitting of C source files on e.g. function boundaries.

But then again, with a single-pass compiler, it shouldn't matter how large the input is, no? I know there are forks of tcc with ARM support...


You probably know all this, but it may interest other people:

Declarations change the parsing of C, so even a single-pass compiler needs to keep the declarations in memory somehow; cases like `foo * bar;` can be legally parsed as either a declaration of `bar` of type `foo*` or a (useless but legal) void-context multiplication of `foo` and `bar`, depending on whether `foo` has been declared as a type with typedef. Plus, of course, preprocessor macros can do arbitrary things. In PDP-11 C days it was common to put declarations of library functions directly into your code (with, of course, no argument types, since those didn't appear until ANSI C) instead of in a header file, and the header files were very small. Nowadays header files can be enormous, to the point that tokenizing them is often the bottleneck for (non-parallel) C compilation speed; often we even include extra totally unnecessary header files to facilitate sharing precompiled headers across C source files.
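A minimal pair showing the `foo * bar;` ambiguity (both functions are legal C; the names are just for illustration):

    int foo1, bar1;
    void f(void) { foo1 * bar1; }  /* expression statement: a multiply, result discarded */

    typedef int foo2;
    void g(void) { foo2 * bar2; }  /* declaration: bar2 is a local variable of type foo2* */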

So I think it probably isn't straightforward to enable small computers to compile current C codebases like Linux.

tcc, however, would be an excellent thing to start with if you were going to try it.


I imagined a separate pass for the preprocessor, where the state would only be #defines and the current stack of #if blocks. Thus compiler would only have to keep track of type declarations and globals (including functions). With some effort, it should be possible to encode this quite efficiently in memory, especially if strings are interned (or better yet, if the arch can do mmap, sliced directly from the mapped source). Looking at tcc, it's somewhat profligate in that compound types are described using pointer-based trees, so e.g. function declarations can blow up in size pretty fast.


Yeah, definitely running the preprocessor as a separate process eases the memory pressure — I think Unix's pipeline structure was really key to getting so much functionality into a PDP-11, where each process was limited to 16 bits of address space.

Pointer-based trees seem like a natural way to handle compound types to me, but it's true that they can be bulky. Hash consing might keep that manageable. An alternative would be to represent types as some kind of stack bytecode: T_INT32 T_PTR T_PTR T_ARRAY 32 T_PTR T_INT32 T_FN or something for the type of int f(int *(*)[32]) or something (assuming int is int32_t). That would be only 11 bytes, assuming the array size is 4 bytes, but kind of a pain in the ass to compute with.

Interned strings — pointers to a symbol object or indexes into a symbol array — can be bigger than the underlying byte data. Slices of an mmap can be even bigger. This is of course silly when you have one of them in isolation — you need a pointer to it anyway — but it can start to add up when you have a bunch of them concatenated, like in a macro definition where you have a mixture of literal text and parameter references.


I would love a little stationary riscv computer like this, which I can plug my own monitor and keyboard into. Does that exist?


The Lichee RV with its Dock[0] does cost ~$21 and is more or less what you describe.

Notably, this is the same chip used in the computer discussed in this story.

[0]: https://linux-sunxi.org/Sipeed_Lichee_RV


You can plug a monitor, keyboard, and mouse into the DevTerm.


Looks like neck pain waiting to happen


Btw, I nearly bought one, as I love the design. But the more I read here, the more I stopped myself.

But that design is great.


The design looks like it's from the 80s or earlier. Is this some sort of nostalgia thing?


The article links to the inspiration in the second paragraph: https://lunduke.substack.com/p/the-last-programming-project-...


What's the significance of RISC-V?

Easier to write compiler backends for? Faster? Simpler?


Simple ISA is easier to implement. You can have a naive/slow/cheap chip that can run the same programs as a cutting-edge high-performance chip.


RISC-V requires less hardware complexity, and I think GigaDevice's GD32VF microcontrollers have gotten lower energy usage out of that (compared to the otherwise identical GD32F microcontrollers with an ARM Cortex-M core). I don't think it's especially easy to write compiler backends for, though I haven't tried yet; the instruction set is nice and small and orthogonal, yes, but most of the time the subset of instructions a compiler backend actually uses on amd64 or whatever is also nice and small and orthogonal. It's much easier to write emulators for, both because the instruction set is small and because, like the MIPS, it has no condition-code flags, a property that has caused consternation among GMP developers. And there is of course a much wider range of logic designs available for RISC-V than for any other architecture, because it's the first open-source architecture that's become popular.
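
As an illustration of the flags point, here's a toy sketch of executing a RISC-V conditional branch (not taken from any real emulator; the struct layout is illustrative):

    #include <stdint.h>

    /* RV64 integer state is just the registers and the pc; there is no
       flags register to model or keep consistent across instructions. */
    typedef struct { uint64_t x[32]; uint64_t pc; } cpu;

    /* blt rs1, rs2, offset: the comparison reads the registers directly
       rather than testing flags set by an earlier instruction. */
    static void exec_blt(cpu *c, int rs1, int rs2, int64_t offset) {
        if ((int64_t)c->x[rs1] < (int64_t)c->x[rs2])
            c->pc += offset;        /* taken: pc-relative branch   */
        else
            c->pc += 4;             /* not taken: next instruction */
    }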

For me one of the biggest draws is that the privileged spec is enormously simpler than the corresponding morass for most other architectures, especially i386 and amd64. Writing a toy, but working, virtual-memory operating-system kernel for RISC-V seems like the kind of thing you could do in a weekend rather than a semester. And that's enormously freeing.
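
For a taste of that simplicity: an Sv39 page-table entry is a single 64-bit word, with the flag bits in [7:0] and the physical page number in [53:10]. A minimal sketch (the flag positions follow the privileged spec; the helper function is my own):

    #include <stdint.h>

    #define PTE_V (1ULL << 0)   /* valid                   */
    #define PTE_R (1ULL << 1)   /* readable                */
    #define PTE_W (1ULL << 2)   /* writable                */
    #define PTE_X (1ULL << 3)   /* executable              */
    #define PTE_U (1ULL << 4)   /* accessible in user mode */

    /* Build a leaf PTE mapping one 4 KiB page at physical address pa;
       flags should include at least one of R/W/X for a leaf entry. */
    static inline uint64_t make_leaf_pte(uint64_t pa, uint64_t flags) {
        return ((pa >> 12) << 10) | flags | PTE_V;
    }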

Trouble is, almost all the actual physical RISC-V hardware I've seen so far seems to be microcontroller-oriented, so it doesn't have an MMU, so it doesn't support this stuff. SiFive has sold some boards called HiFive Unmatched and HiFive Unleashed with multi-core RISC-V CPUs they built that I'm pretty sure did have MMUs (since you could run Linux on them), but they're gone now (https://www.mouser.com/ProductDetail/SiFive/HF105-000?qs=zW3...).

So something like the Allwinner D1 seems really appealing. Just, not saddled with a keyboard that's both too big to fit in my pocket and too small to actually type on. The Clockwork Pi would be a good fit — but 64 MiB is pretty small for running modern Linux.


Look into the Lichee RV.


Thanks!


Some time after MIPS shot itself in the head by killing MIPS Open, it's nice to have a reasonably well-designed and well-supported open architecture and instruction set.

edit: apparently there are open versions of SPARC and POWER, so... maybe there's not a lot of value in RISC-V except that it's simpler than the other two, drops irritating legacy features (register windows, condition codes...) and is in general designed for modern implementation?


Open source ISA - no licensing fees.


What does that mean in practice - easier for people to manufacture hardware for them?


You can get many open source soft CPUs to run on FPGAs. The interesting bit is that RISC-V is supposed to be extensible, i.e. it's easy to modify these CPUs to produce interesting designs, e.g. tagged memory to allow fast hybrid software/hardware GCs, asynchronous CPUs, rump CPUs for DMA, etc.

These new designs can then be used commercially- that's a big win for computing in general.
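
For a sense of what that extensibility looks like from the toolchain side, here's a hypothetical sketch: the custom-0 major opcode (0x0b) is reserved for vendor extensions, and GNU as's .insn directive can emit instructions in it without the assembler knowing a mnemonic. The tag-check instruction, its funct fields, and its semantics below are all invented for illustration; this assumes a RISC-V GCC/Clang toolchain whose assembler supports .insn:

    /* Hypothetical tag-check instruction in the custom-0 opcode space;
       funct3/funct7 values and behavior are made up for illustration. */
    static inline unsigned long tag_check(unsigned long addr, unsigned long tag) {
        unsigned long ok;
        __asm__ volatile(".insn r 0x0b, 0x0, 0x0, %0, %1, %2"
                         : "=r"(ok)
                         : "r"(addr), "r"(tag));
        return ok;
    }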


Yes. You don’t need to sign up for a membership or some other thing to manufacture chips that comply with the standard.


But it also means people are implementing modified ISAs (such as this CPU which has a non-standard V extension) which sucks for compatibility.


>But it also means people are implementing modified ISAs... which sucks for compatibility.

I've seen this a lot, both as misunderstanding and as FUD.

RISC-V was designed from the start with extensions in mind. This includes allocations for custom extensions that aren't standardized by RISC-V.

>(such as this CPU which has a non-standard V extension)

There's nothing actually "incompatible" with this SoC, in the conflicting sense.

The D1's V naturally uses these custom extension allocations and thus does not collide with the standard V.

It will therefore run RV64GC code just fine, and an illegal instruction exception will trigger should standard V code be encountered.


> it’s hard to not immediately fall head over heels

No doubt, because you're going to be cricking your neck over to look at the thing. Looks painful.


$239 is a lot of money for a meme. The processor looks just as closed as a Raspberry Pi's.


The processor avoids being closed by not having a GPU.


So where's the RTL for the processor then?


The RTL is on GitHub under the Apache2 license: https://github.com/T-head-Semi/openc906

Yocto project in https://github.com/T-head-Semi/xuantie-yocto


Ok looks like I was wrong.

Is the openc906 definitely the same design as the one in the D1? It looks like it is but I have been bitten by xyz vs. open-xyz subtlety before.


The openc906 is just the CPU core, however (and the released C906 Verilog code might have some updates and bugfixes compared to the silicon in the D1).

The English datasheet and user manual for the D1 are available at https://linux-sunxi.org/D1

However, some of the peripheral hardware (such as the video unit/frame buffer) is not documented and the documentation for the C906 CPU core itself is only available in Chinese.


> and the released C906 Verilog code might have some updates and bugfixes compared to the silicon in the D1

Not really: the C906 source code (closed-source) is written in Vperl (a custom internal language), which is translated into Verilog by an internal tool. The OpenC906 (open-source) is only the Verilog code generated from a light version of the C906 Vperl code, with some hand-made modifications/fixes.

The Vperl code and the tools to process it are not public. However, the code generator leaves the Vperl code in Verilog comments, which gives an idea of the original code and shows that big chunks of code have been removed/patched by hand.

I've mostly read code from the OpenC910, but I can tell you for sure that the C906 design and the OpenC906 design don't differ only in some "updates and bugfixes". The C906 is generated from a more advanced design, which is not open-source.

Edit: I'm not saying that the OpenC906 doesn't give you an idea of how the C906 is made, just that the C906 may differ from the OpenC906 in a lot of ways you can't see.


I see. Thank you for explaining! I didn't realize that the Verilog RTL for the OpenC906 wasn't the actual source code, or that the C906 was different from the OpenC906.


Chinese is the command line of the 21st century. I've had the same difficulty with CKS32 datasheets.


I don't know for sure; even if it purportedly is, there's probably no way to verify that no hardware backdoors have been inserted, if that's your concern. And rebuilding the CPU with your desired modifications would require signing the appropriate agreements with your fab provider of choice and probably a million dollars or so over a year or two. Still, you could probably do cycle-accurate simulations on an FPGA at a lower clock speed, if that's what you're into.

https://www.hackster.io/news/mangopi-mq1-is-an-ultra-compact... claims the Allwinner D1 "is" the "XuanTie C906, which Alibaba's T-Head division recently released under the permissive Apache 2.0 open source license." But of course the author could be mistaken about that.


You can integrate the C910 (should work for the C906 as well) in Olof Kindgren's FuseSoC tool, see https://twitter.com/OlofKindgren/status/1451654866837938186


> Is the openc906 definitely the same design

No, the OpenC906 is a light version of the C906, and they only release generated Verilog code without any testbench.


AIUI while C906 is open, the D1 SoC is not.


Oh, that's possible. Do you have more information? https://news.ycombinator.com/item?id=30692163 said something similar.


+1. Asking the right questions.


$239 + 60 business days


Man that's ugly.


I think this is neat, but it is either too expensive or underpowered for the price. I think if you bought a 7th Gen iPod Touch, available since April 2019, and jailbroke it, and added a Bluetooth keyboard, you'd have a far more powerful computer (the A10 chip has two high-performance cores @1.64GHz plus two more efficiency cores) for about the same price, plus you'd also still have an iPod and mobile Safari.


But. It wouldn’t be RISC-V. Which is the unique selling point. Not the power.


First iteration and niche nerd product is obviously going to be more expensive.


In recent years I've become more and more worried that work on RISC-V is just a way of empowering non-democratic governments the world over by giving them an escape hatch out of Western-controlled IP. Yes, it's better that these things are not controlled by big companies, and yes, this should improve innovation in the hardware space, but what are we trading off to get that benefit?


They already did designs based on Alpha and MIPS ISAs. That genie definitely isn’t going back into the bottle.

Instead, we’ve moved to limit their ability to manufacture. All the chip plans in the world are worthless if you can’t fabricate them. ASML was largely funded with US DoD money and they seem to have veto power when it comes to exports.

Russia’s cutting-edge tech is something like 180nm, with promises of 90/65nm still being pipe dreams.

China has managed small amounts of 32nm and talks about 22/14nm stuff, but they can’t get new equipment, and all of that know-how is mostly gleaned from the US outsourcing so much of its manufacturing.

This puts these countries 10-20 years behind, and that's without considering the nodes that are essentially complete but waiting on the resources to build them out.


SMIC's 14nm has been in full production for years now at this point, and is rumored to have TSMC equivalent yields. The Kirin 710A is one example of a production chip made on that node.

China has been effectively cut off from EUV steppers, but has used that fact to put a lot of money into developing their own by any means necessary. They have a pretty decent track record of developing manufacturing experience in something when they get cut off from importing it.


SMIC's 14nm transistor density sits about halfway between Intel 22nm and Intel 14nm. Intel's SRAM density was almost twice that of other 14nm processes, putting SMIC much closer to Intel 22nm. I'd note Intel's 22nm is 11 years old and their 14nm is 8 years old.

Last I checked, SMIC still wasn’t ramping up 14nm production until later this year.

I believe this was directly tied to problems getting more equipment. High yields on not much equipment also don't matter much. If that equipment were easy to design and make, then everyone would be doing it. Even a year or two makes a huge difference in this market.

As a side note though, nodes have very little impact on weapons themselves. Most missiles and similar weapons use positively huge nodes because they are more reliable and less susceptible to interference.

We could land a rocket on the moon with a 350th of the compute power your high-school TI calculator possessed. A handful of 180nm chips are more than up to the task. Even ancient 600nm Pentium designs would be more than enough.

People tend to think about weapons, but the real consequences are in the design and implementation stages.


That's not a bad thing by any means. It's making the best of a bad situation (non-democratic governments that are, to varying extents, anti-West) by incentivizing those governments to cooperate with the West on open alternatives to proprietary IP. Since such cooperation would basically never happen if it weren't for the open IP alternatives being available (they would develop proprietary IP of their own instead, as we've seen so many times already!), I'd call that a pretty good deal!


The RISC-V Foundation was brilliant in its timely relocation to Switzerland.

It prevented RISC-V from being embargoed by this sort of thinking, and went a long way toward promoting it among those seeking technological sovereignty.


Sounds like a soft vulnerable-world hypothesis. I think it's training, collaboration, motivation, and funding of engineers that really matters, though. I think what's happening in Ukraine shows that as long as brain drain happens, it doesn't matter if knowledge is open, because it cannot be used effectively.


The conflation of ideology and tech shown in this post is even more reason to have indigenous, and maybe open, IP to prevent a "Western" blockade. The non-democratic Chinese don't give a shit about the ideology of their customers. I prefer tech and commerce be that way, rather than an avenue for virtue signalling.



