Interesting that the group behind this is Tsinghua-Berkeley Shenzhen Institute, which is sponsored by UC Berkeley as well as a major Chinese university and the Shenzhen government.
China is quite interested in RISC-V gaining traction as trade and security barriers rise and they work to build up their domestic technology sector.
Going into wild-ass speculation mode, if this board really is affordable, I wonder if it's being subsidized by the Chinese gov't to encourage RISC-V adoption.
If I understand correctly, it's hard to make a cheap RISC-V Linux board with a price in the same order of magnitude as Raspberry Pi, because the Pi benefits from the availability of cheap, powerful ARM-based CPUs that were mass-produced for the mobile phone market. Linux-capable RISC-V parts have nowhere near these economies of scale.
I like your idea because I could see the Chinese government doing this to help prop up SMIC too. The RPi 4 isn't on anything smaller than 16nm, and SMIC needs 14nm customers.
Shoot for the 500MHz target listed, which is achievable with Victorian-era-style overengineering. Next, sell those more or less at cost after the government foots the bill for the initial capital (probably paying back .gov.cn first). Then use the massive body of software you'd expect from the new ecosystem around the first wave of chips to reoptimize the cores and inch your way towards really competitive chips. All the while, SMIC gets great experience at their lower node from a pretty low-risk but high-volume project.
Total speculation, but fun speculation. If someone pulls that off they probably win the whole Made in China 2025 thing for homegrown computers in an interesting enough niche to have a foreign audience too.
Sorry, I don't, but the fab industry, with its billion-dollar machines that pop out chips at a marginal cost of a dollar apiece, should be a textbook example of a natural monopoly.
I'm all in on SMIC stock. You can get a sizeable part of the company today for cheap, and if sanctions continue and issues with Taiwan escalate, China is going to go all-in on SMIC as well.
> Going into wild-ass speculation mode, if this board really is affordable, I wonder if it's being subsidized by the Chinese gov't to encourage RISC-V adoption.
You might find this[1] meeting with Steven Zhao interesting. Steven Zhao is the guy behind Orange Pi, a Raspberry Pi alternative. The interesting part, translated:
Steven announces that Orange Pi can buy a good quality WiFi component for $1, that his cards will have a good WiFi connection for that price.
- Uh, if the component costs $1 to buy, the price for the end customer will inevitably be a little higher, Mr. Zhao? If you tell me the price, I will know your margin...
- No, because today all Orange Pi cards are sold at the BOM (Bill of Materials) price.
- Does that mean that the cost of engineers, premises, development hardware, design and all that ... isn't impacted in the price of a card?
- That's right.
- But how is that possible?
- We benefit from government grants.
And here's a statement that takes your breath away. Orange Pi is a private company in a state dumping system. That, at least, is how it looks from a Western point of view, shaped by international agreements and the whole conceptual set-up intended to protect the big established industrialists.
I also recall that it was actually primarily a GPU chip, with an ARM core originally intended as a driver for the GPU (and not really as a primary processor). And they were leftovers from larger-volume chip runs.
The BCM2835 was originally a high volume shipping cell phone GPU for Nokia with a different part number (BCM2708) and the ARM core enabled internally. It was also stacked with DDR memory.
It first shipped in a set-top box for Roku, but that was happenstance rather than any design intention.
The market for separate cell phone GPUs died off, but the die was still manufactured for the RPi.
Promising hardware, though this doesn't look comparable even to the Raspberry Pi Zero. It only has a quad-core 64-bit RISC-V (RV64GC) processor at 500+ MHz, and the RAM size isn't specified. If it were priced in the $5-15 range it would be more or less practical, but any higher than that and I'd be buying it solely for the architecture novelty.
It's still a decent start, and I'm hoping future versions will begin open sourcing the rest of the chip core like the memory controller. It'd be nice to have a good DDR IP core for things like this.
EDIT: Removed a bit of the comment from a misunderstanding from reading the article
The original iPhone managed 60fps with an even slower processor (412MHz - https://everymac.com/systems/apple/iphone/specs/apple-iphone...). I know it only had 320x480 resolution, but if you consider that the CPU's job is to keep the GPU busy, then I see no reason that efficient software couldn't get a smooth UI out of this processor paired with a 60fps-capable GPU. Having said that, 'efficient software' is as rare as hen's teeth nowadays :)
> I see no reason that efficient software couldn't get a smooth UI out of this processor paired with a 60fps-capable GPU
Depends on your expectations; technically the NES had a 60 FPS GPU.
If you want to drive a Full HD display at that rate and render a GUI comparable to modern phones, you need 3D acceleration, ideally programmable shaders, and the fillrate.
The GPU is unavailable at launch; they say it will appear in later versions, and it will be a PowerVR. They never said which PowerVR.
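For a sense of scale on that fillrate requirement, a quick back-of-the-envelope calculation (just arithmetic, not a claim about any particular GPU; the `overdraw` factor is an illustrative parameter):

```python
def fillrate_mpix_per_s(width, height, fps, overdraw=1.0):
    """Pixels a GPU must write per second to sustain the given
    resolution and frame rate; 'overdraw' accounts for pixels
    touched more than once per frame (compositing, blending)."""
    return width * height * fps * overdraw / 1e6

# Full HD at 60 fps with no overdraw is already ~124 Mpix/s,
# before any compositing or blending is considered.
```

Even a modest 2x overdraw pushes that near 250 Mpix/s, which is why an unaccelerated framebuffer at 1080p60 with a modern-phone-style GUI is a tall order.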
The Pi Zero is single core at 1GHz so it's not a totally fair comparison. There are plenty of times I wish I had more cores, even if they were half the speed. Compiling anything on the zero is incredibly slow. It also has slower DDR2 RAM and no USB3 (which makes a big difference for many peripherals).
But the Zero is, well... a zero. If you don't want to mess with Arduino/ESP*, but want a price point and size in that range, with full Linux support, to connect a thingy to another thingy (e.g. sensor to web/MQTT), and don't care about the additional power use, you take an RPi0. I don't think it was ever meant as a "normal computer" or a "server" (the way we use normal-sized RPis).
Sure! The Zero is great as a cheap DAQ, I use them all the time. But the SBC that it's being compared to is much more full featured and I don't think you can say it's worse or even comparable.
The Arduino Uno retails for around $30 and gives you a 16MHz CPU and 2KB of RAM so I don’t think specs and price are that related, especially when looking at different architectures.
Microcontroller vs microprocessor. Also, I don't know anyone who pays RRP for Arduinos. If I need one for a project, I get a Pro Mini (same specs, much smaller) for $2 shipped from China.
You don't use Arduinos in commercial products either. You design your own board and drop an ATmega328P on it so you don't have to deal with the support calls when the Arduino works its way loose.
It's roughly in the same class when compared to Arduino/uC RISC-V boards that really are incomparable and suited to a completely different type of application.
The achievable clock speed is mostly a function of the pipeline depth and how well the design is optimized, the ISA should have very little impact on it. There are a bunch of high-clocked high-performance ARM chips available because several huge companies have already made large R&D investments into developing them.
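The pipeline-depth point can be sketched with a toy model (the numbers here are invented purely for illustration, not from any real process):

```python
def fmax_mhz(levels_of_logic_per_stage, gate_delay_ps=20.0):
    """Toy model: cycle time is roughly the number of gate levels
    between pipeline registers times the per-gate delay (the 20ps
    default is a made-up figure). Halving the logic per stage (a
    deeper pipeline) roughly doubles the achievable clock; note the
    ISA doesn't appear anywhere in this expression."""
    cycle_time_ps = levels_of_logic_per_stage * gate_delay_ps
    return 1e6 / cycle_time_ps
```

The real knobs are process node, gate delay, and how aggressively the design is pipelined and optimized, which is exactly where big ARM vendors' R&D budgets have gone.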
The SiFive HiFive Unleashed has quad RV64GC plus one RV64IMAC and runs at around 1.5 GHz in TSMC 28nm -- mine's a little unstable at 1.5 after three or four hours at full load, but it's perfect at 1.4.
It was pricey at $999 because the chips are from low-volume "shuttle run" (aka MPW) production (probably $300 or $400 each; they'd be maybe $10 in volume production) and the board has 8 GB of DDR4-2400 and other expensive things such as an FMC connector.
But it's well faster than a Pi 3 and I've had 2.5 years of use out of it already...
The 1B came out in 2012, and you can now get a Pi with 4× Cortex-A72 1.5 GHz and 2 GB of RAM for the same price as the 1B cost then. The title does say "alternative".
My first thought as well. On the other hand... baby steps. We need a chip like this very much. If the first ones are slow or too proprietary that may be OK, it is still pushing the RISC-V ecosystem in the right direction. It will help validate a lot of software which will enable more open designs by having things ready to go.
I've recently purchased a Sipeed Maix Bit, which is based on the Kendryte K210 (RV64GC dual-core), just so that I can start messing around in the RISC-V 64-bit ecosystem. However, the K210 has just 8MB of on-chip RAM, limiting the applications. It ships with MicroPython, and you can run microcontroller Linux, which is a fun little hack.
However, I'm primarily interested in Rust development on the RISC-V. For bare-metal programming, the K210 is enough for me to get my feet wet. Thanks to the wonderful community, you can already run gdb for source-level debugging of Rust on that platform, with the addition of a $7 USD JTAG adapter.
That said, as long as the PicoRio board (even without a GPU) is available for less than $100 USD, I am totally buying one as soon as they are available for order.
They managed to pick the one mobile GPU vendor that has zero open-source drivers for their chips and a reputation for horribly messy and unstable proprietary code.
Another board for the landfill, forever doomed to some 80s throwback Unix timeshare shell application.
In IMG's defense, the way that their drivers and firmware are micro optimized for the specific IP core version and use case by the host software was one of their competitive advantages. It's one of the reasons why their cores were some of the fastest and most efficient for the years that they still had plenty of capital rolling in to support that model.
You'd expect that kind of thing to be brittle when you don't have source access, but the drivers themselves were remarkably clean when I spent some time reversing them. It's just a shame that they're so afraid of open source too, because the above really compounds that pain.
I'd rather have unaccelerated graphics. Hell, take any display solution from OpenCores.
But don't give me proprietary garbage and especially not PowerVR. I'm OK with no graphics as an alternative. Hell, I could always use some USB display solution.
Yeah, I'm hoping that the delay before the second edition is enough that they can change course on it, too. Either way, better to have more RISC-V options than fewer :)
The Beaglebone has the same problem, which is unfortunate. It would be sweet to have a completely open source stack on that thing that included accelerated graphics.
I honestly didn't realise the Raspberry Pi isn't open source. Why isn't it?
The Foundation's charitable objective is education, so sure, it's not necessary, but there really seems to be no reason for it to be closed - I suppose it's just 'closed by default'?
rpi is one of the most closed SBCs there is. Even basic materials necessary to write OSes and drivers for the device, namely datasheets and register manuals for the SoC have never been published. Unlike every other SBC I'm aware of they don't even publish full schematics. The "schematics" on their website are abridged (try and find the USB ports on them).
Arguments that Broadcom is forcing their hand on this simply don't add up. It's not Broadcom that's stopping them from publishing even basic schematics of the board; that was their decision. Sure, they'd never be able to open the chip itself, but nobody is particularly asking them to (it's not like people have fabs in their garages).
Moreover, it's pretty clear at this point that Broadcom is making SoCs custom-order for rpi; see their original announcement of the rpi4, which makes it quite clear they were involved in the design process. Based on what I know about Broadcom's MO, it's simply not plausible that, with that amount of money on the table, release of datasheets and register manuals couldn't be negotiated; it seems clear it's just not a priority for rpi.org.
And as mentioned, rpi boards include vendor-lockin DRM in their binary blob boot firmware designed to prevent people from using third-party camera boards.
Not only is this unacceptable in itself, it also means rpi is actively depending on this boot firmware remaining closed, else people could easily remove this antifeature. People have been hoping for years that the firmware blobs required to boot the boards will eventually be opened, and pointed to some vague noises in this direction which were made by rpi.org years ago, but the fact that they're now relying on the fact of it being closed to enforce DRM causes me to find this increasingly unlikely.
> Arguments that Broadcom is forcing their hand on this simply don't add up.
I have worked with much more mundane integrated circuits where the manufacturer required you to sign an NDA before getting any kind of design support from them. One of the terms of the NDA was not publishing the schematic of the device using their product. I don't see why Broadcom wouldn't do something similar for their SoCs.
Broadcom is notorious for demanding NDAs for their datasheets, completely pointlessly. I've also heard of companies demanding their parts be redacted on schematics if published, even more ridiculously, though as far as I know Broadcom isn't one of them (I've seen public schematics with Broadcom parts).
Still, from what I've heard of Broadcom specifically, they're pathologically secretive with their documentation unless some star customer dangles enough money in front of them. rpi.org certainly fits that description. So I don't really buy this excuse for rpi here.
In reality, I suspect it's more the fact that rpi.org was started by ex-Broadcom engineers and they brought their culture of secrecy with them.
These things can be completely irrational in my experience. One part I was working with had a driver upstreamed in the Linux kernel, you could buy off-the-shelf devices for cheap that had it on the PCB and I even found a company that was apparently selling a complete set of layer-by-layer die shots of that part for around $1000.
Yet the chips weren't sold by any distributor. To even just get engineering samples from the manufacturer you had to sign this draconian NDA that covered everything from software to hardware docs even remotely connected to their part. Even after signing the NDA they wouldn't disclose full datasheet for e.g. what some registers did. For some things the comments in the Linux kernel turned out to be more informative than anything we got through official channels.
But if they're the only source of parts you have to use, you have to dance by their rules. Ruin the relationship by breaching the NDA and you might just as well bury the product you spent a year developing.
If you read a little further down in that thread, Eben Upton says that it is to prevent other camera vendors from just using the ISP tuning they developed for their official camera. You can still use some other camera with a CSI-compatible interface.
The question is why is a charity organization devoted to education using DRM for protectionism rather than taking an approach where knowledge is more freely shared.
I understand that they don't want other camera vendors to compete with them. What I don't understand is why they would take that stance as a charity, and what I disagree with is the implied argument that it's better for the Raspberry Pi ecosystem if the hardware is closed.
I believe it's because the Broadcom System-on-a-Chip (SoC) that the Raspberry Pi uses has one or more binary blobs that are required to access features on the SoC. RISC-V, by contrast, is an open standard, so chips based on it can be fully open source.
Also, it's not possible to write your own DSI/MIPI drivers for a phone LCD that you can buy on eBay, partly because you can't get the DSI/MIPI specification unless you pay upwards of thousands of dollars a year, and partly because Raspberry Pi doesn't allow access to the DSI/MIPI HAL layer: it's part of the SoC, which Broadcom controls.
About the only way to make a device with an LCD is to use the built-in HDMI.
This is a theme with the RPi. They generally implement new hardware in their proprietary firmware first, for lack of skills/time to do a proper upstream job of it (or for the convenience of copy-pasting some vendor code), then pay someone to develop an open-source driver. For example, here is a massive 80-patch series nearing merge for the new display pipeline in the RPi 4:
Broadcom SoC is probably protected by a bunch of patents and/or trade secrets, which makes it impractical to open source. For example the whole boot process is a black box, which is why you have to use Raspbian and not regular Debian.
Since the GPU starts before the CPU, you need a blob to boot the GPU before you can start open code on the CPU. There is open GPU boot firmware being worked on but it isn't ready yet and relies on GCC support for the GPU, which isn't mainlined yet and the authors don't intend to mainline. So probably RPi will need blobs to boot for the foreseeable future.
I’ve played with this recently in setting up a shared netboot server with Debian for my pile of ARM SBCs (I really should write up that blog post...). The upstream kernel’s device trees are different from the ones supplied by the Raspberry Pi Foundation, with some slightly different names for devices, and basically no easy way to use overlays. Other than that minor obstacle, it’s perfectly doable to build up your own image with debootstrap and chroot with 100% Debian repositories (or go the easy way and download one of the prebuilt images [0]). You do still need the closed-source blobs (bootcode.bin, start.elf, fixup.dat) for boot though, which are made available in the non-free repo.
The 4’s hardware support is still being merged upstream last I heard though, so the easy way for that is still to use the Foundation’s kernel (which is what Ubuntu does, probably to also make overlays work as expected).
You only need blobs for boot, not for accessing the GPU. The VC4 driver for the Pi3 and older is there, and V3D/VC5 for the Pi4 is the only driver; AFAIK there is no proprietary driver for the Pi 4 GPU at all, only the open one.
Apparently they plan to include a PowerVR GPU on the second generation boards.
That's not very OSHW if you ask me. I'd rather have an unaccelerated framebuffer, or even no graphics at all.
They don't seem to understand what kind of market there is for this sort of device. If I didn't care about this sort of thing, I would just grab one of the many proprietary-encumbered SBCs, any of which is faster and cheaper than this board is going to be.
FWIW, every chip I've seen with a PowerVR GPU uses a video scan out engine that's pretty much completely disconnected from the GPU.
That's one of the neat things about how programmable their cores are; you can resolve in the tile to just about any surface format since it's all done in software anyway, so it's really easy to integrate with other scan IP blocks.
TL;DR: You'll probably get a simple un-accelerated framebuffer (maybe with a few planes) with a PowerVR off to the side that you can just ignore if you want.
Of course you can; just ignore it and pretend it isn't there. Don't even load its drivers. Because of dark silicon, you won't be able to fully light up the chip anyway, so use the GPU being off to clock other parts faster if you want. You still get an unaccelerated framebuffer if it has the same architecture as every other chip I've seen with a PowerVR for the past twenty years (Dreamcast, BeagleBoard, iPhones, PS Vita, the MIPS board they came out with that I'm forgetting the name of).
>Because of dark silicon, you won't be able to fully light up the chip anyway.
If it's a separate chip, that's better, because I can cut power to it, or outright desolder it.
Still, I'd be uncomfortable having paid for a chip I did not want. Much like being forced to pay for Windows when buying a laptop to run netbsd on.
My point, if you missed it, is that this is not the sort of purchasing experience that people who'll pay extra for a board with a slower, less power-efficient chip just because it's more open would actually want. And thus it is not sensible to include such a chip.
I'd also be fine with something with a very simple framebuffer. They should consider throwing HDMI or DisplayPort on their V1. Or hell, why not the "Display Parallel Interface" (DPI) thing that's on the RPi? I suspect they'd find the market for that would be fairly significant. At these clock rates, and with multiple cores, plenty can be done without acceleration.
You can technically get a cut-down Linux kernel and a couple of small programs running, but with only 8 MB of RAM it's awfully limited. You need 256 MB or 512 MB at absolute minimum for any kind of reasonable modern desktop distro, and preferably 2+ GB.
Yeah, I don't have a lot of confidence in using MicroSD or USB flash for the software and OS in anything other than a hobby toy. I'd prefer that storage to be SSD or eMMC.
Separate from what the software&OS drive is, for a small NAS, 2 SATA ports for data storage (RAID mirrored) would be nice, but doesn't have to be in the general-purpose hobby $35 (or whatever) size board. For larger NAS needing more SATA ports, I definitely wouldn't expect or want that in the $35 board, but seems like there's probably a market for a board like that.
What's the latest on a cheap, small board where you can run linux, with BLE & WiFi 2.4/5GHz?
The RPi Zero W and the corresponding Banana Pi M2 Zero meet all the criteria except 5GHz. Besides these, I have a really hard time finding any cheap alternative.
There are plenty of embedded devices for IoT, from Arduino onwards. But 100% of the IoT devices in my apartment are mains-powered and don't need a battery or extremely low consumption. I much prefer the flexibility of running Linux (or Android) to flashing a firmware.
You can share your WiFi config from the phone to the device, but my phone is connected to the 5GHz network, not the 2.4GHz one, so that doesn't work. I then have to connect my phone to the 2.4GHz network and forget it afterwards. It's just an unnecessary step.
"even though the goal is to eventually have as much IP released under a BSD-like open source license"
What do people mean _exactly_ by 'IP' in contexts like this related to computing boards such as SBCs and FPGAs?
The only 'IP' I know is a term pertaining to legal concepts like patents, where 'IP' and 'patent' are almost synonymous. In the current context, what does it mean to 'release' this 'IP'? How do you 'release' a patent?
It's a logic or semiconductor layout design for a component (e.g. a processor core, or a peripheral like a UART, DMA controller, etc.). "IP" for hardware is roughly analogous to "source code" for software.
It sort of colloquially gets used as "subsystem" or "library" as well. Tends to refer to a piece that can be treated more or less as a black box and imported with well defined semantics.
Although it does come from Intellectual Property, they use "IP" the same way game developers and web developers use "asset" for an image, 3d model or sound effect. Sometimes they even say the plural as "IPs".
In semiconductors an IP is often some logic block, often supplied by a third-party to paste into your design, or part of your own company's library.
For example on SoCs a cache, a USB PHY, a CPU core, an AES unit, an Ethernet controller, a PCI controller, a video decoder, an audio decoder, etc.
A design unit you could imagine selling to be used inside other company's designs. That's why you can "release an IP".
I find the use of "IP" for these things a bit distasteful, so I try not to use the term myself, but it's very common in the field.
One of the comments I made to myself was that on the die shots, the USB3 part is as big as a CPU core! What would need so much silicon? Isn't USB3 hardware more or less a serial bus? Is it logic, memory, or analog interface parts here? I am curious.
There's a great deal of communication that happens outside the operating system, and a bunch of different multiplexing modes and things like that. There are at least 4 different speeds it needs to talk at (1.5 Mbps, 12 Mbps, 480 Mbps, and 5 Gbps) with a good deal of synchronization and other bits. There's a rather decent amount of logic that has to be implemented, and I wouldn't be shocked if you end up needing what basically amounts to another CPU for the controller itself to manage the entire connection and negotiation. Depending on the physical construction, it might also need some decent power transistors if it's handling the 2-amp current limit of USB 3.0 internally (rather than signalling to ones outside the chip).
Nvidia at one point used one of their home-grown Falcon cores (which looks like it could be around the same approximate gate count) to babysit the USB3 PHY as well. They're probably going to switch to RISC-V for that if they haven't already.
Why is RISC-V even so hyped up/successful? Other open ISAs precede it, and AFAIK the code density that RISC-V allows is disappointing considering the designers had an opportunity to start from scratch.
The code density beats ARMv8 (the best ARM could do with a fresh-start do-over) and beats x86 too. They gave serious consideration to creating a compact instruction set that also doesn't cause the performance impacts seen in Thumb.
There are various reasons for a new ISA. The most appealing right now is the patent situation. Most other open ISAs are explicitly patent-encumbered (recent MIPS versions) or provide zero protection (OpenRISC). The chances of getting sued over RISC-V seem very small until you start moving to non-ISA parts of the chip.
For an ISA to succeed, the ISA itself is a distant second to the ecosystem, as x86 shows so well. The amount of money and R&D being poured into RISC-V would single-handedly be enough reason to adopt it.
One concrete advantage RISC-V has is its variable-length SIMD instruction set (the "V" vector extension). The instructions used for SIMD in RISC-V can all operate on data sizes that are various powers of 2, all using the same instructions. You could even call it "SIMD with dynamically-sized data".
Right now, x86 needs one set of instructions for each SIMD data size. That's why we have SSE, AVX, AVX-512, etc. This introduces complexity for both hardware designers and software designers: you have to actively update software to work with newer chips with larger SIMD registers. In the proposed RISC-V design, you wouldn't have to. It would just work at all sizes.
SVE looks pretty good. But ARM themselves are showing absolutely no signs of incorporating it into any of their own cores, despite the specs being published as part of ARMv8.2-A in January 2016. They're up to ARMv8.6-A now, with 8.7 or maybe ARMv9 expected any time.
Wow, I didn't realize how many of these x86 extensions are basically just SIMD with more and more registers, going all the way back to MMX in the 1990s. It somehow looks like they never planned ahead and just kept re-inventing a slightly bigger wheel. Great to see RISC-V does it in a unified way.
> Right now, x86 needs one set of instructions for each SIMD data size.
If you look at the encodings, many of them are actually the same opcodes as the original MMX, but with prefixes.
> In the proposed RISCV design, you wouldn’t have to. It would just work on all sizes.
That doesn't make sense: how would the code know how much to increment pointers by, how many times to loop, alignment, etc.? Unless they added some really un-RISC instructions to do automatic vectored operations (as if x86 had a REP ADDSB), I don't see how you wouldn't need to change software for changing SIMD widths.
That is interesting... and actually feels a bit CISCy, but this
> Configuring a grouping of 8 registers for a vector of 16 elements might look like overkill because 128 bit vector registers are sufficient and should be widely available. On the other hand, there might be a CPU with "V" support that just implements - say - 64 bit vector registers where we would need to group 2 registers. Since a grouping thus may be needed it really doesn't hurt to configure the maximum here.
...seems to imply that the generated code may still need to assume certain widths.
It's just that if your code doesn't need 32 different vector variables then you can step down to, say, 4 variables with 8x longer registers for more efficiency. You need to know how many live variables your code has (which the compiler knows), not how long the registers are.
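To make the pointer-increment question concrete, here's the classic strip-mined loop pattern from the RVV spec examples, sketched in Python rather than assembly (with `vlmax` standing in for whatever vector length a given chip's `vsetvli` would grant):

```python
def vec_add(a, b, vlmax):
    """Strip-mined loop in the style of the RVV spec examples:
    on each trip, the 'hardware' grants vl = min(remaining, vlmax),
    which is the job vsetvli does in real RISC-V vector code.
    The loop itself never hard-codes a register width."""
    out = []
    i, n = 0, len(a)
    while i < n:
        vl = min(n - i, vlmax)  # what vsetvli would return
        out.extend(x + y for x, y in zip(a[i:i + vl], b[i:i + vl]))
        i += vl  # advance pointers by vl elements, not a fixed constant
    return out
```

The same loop produces identical results whether `vlmax` is 2 or 64, which is how one binary stays correct across chips with different vector register sizes: the pointer increment and trip count come from the hardware-reported `vl`, not from a compile-time width.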
They may have beaten the average compiler in x86 code density, but compilers aren't normally optimising for size, and, as someone who has been writing x86 asm for a long time, I can safely say that compiler output in general, even at maximum size optimisation, is far from the theoretical limit for x86: there is plenty of room for improvement there. RISCs, on the other hand, are far more constrained in size optimisation. For an example of what I mean by x86 having a very high limit on code density, see https://news.ycombinator.com/item?id=15720923
But that is part of the problem with x86: the dense instructions have steadily been neglected and become less performant. A perfect example of this is "rep movsb": up until, I think, a decade ago, Intel actually recommended against its use in their optimization manuals.
Do you have any in mind? From what I can tell even POWER is not as open as RISC-V. They only show schematics to their partners, not the public at large.
This is about the ISA, and POWER ISA has indeed been made royalty-free recently. Schematics are about a particular microarchitecture, particular hardware implementation.
If this is real ... sign me up! I have a couple of RISC-V Arduino clones, interesting but not that useful. I've ported software to RISC-V using the Fedora RISC-V port running under QEMU. It'll be nice to finally get inexpensive Linux capable hardware in hand!
Of course, what would be really good would be a CHIP-Pro-style module: a complete Linux SBC suitable for inclusion in an IoT PCB design. Thus far, the CHIP-Pro (ARM-32) and Onion Omega2 (MIPS-32) have been the only viable, reasonably priced embeddable Linux modules I've seen.
They did, but the founder started up again with the same design. Only the price is much higher (can't blame them; the CHIP-Pro was a steal, which I think is why they went under).
EDIT: The reboot was called Source Parts but it looks like they may have gone under also. Too bad, the CHIP-Pro was pretty nice.
It looks like it has some details on "slow" I/O like UART and I2C, but I'm not sure if that includes things like hardware PWM timers on certain pins, like there are on RPis. Does anyone have more information on that? You can do it all in software, as with a software UART, but I think that can be a little more tricky. Since so many basic embedded projects are helped by that kind of functionality, it would be awesome if it was available.
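As an illustration of why bit-banging a UART is tricky: even before the timing problem, you have to build the 8N1 frame yourself, bit by bit. A toy sketch (pure framing, no GPIO or timing, so it runs anywhere; the function name is just for illustration):

```python
def uart_frame_8n1(byte):
    """Bit sequence for one 8N1 UART frame: a start bit (0),
    8 data bits sent LSB-first, and a stop bit (1). A bit-banged
    UART then has to clock these out (or sample them) at precisely
    the right baud intervals, which is where it gets tricky."""
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit
    return bits
```

At 115200 baud that's one bit every ~8.7 microseconds, which is why a hardware UART or PWM timer beats a software loop that can be preempted by the OS.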
I like the idea of going multi-core from the start. The Pi Zero W is massively under-powered when compared to its siblings, but IMHO suffers a lot more from being single-core than from any other constraint.
(Which is why I use the 3A+ for most of my stuff, really. Four cores and full-size HDMI and USB are very useful indeed.)
One of the key selling points for me is the RasPi OS and software ecosystem. Raspbian is a great distro, very well aligned with the hardware. Getting a generic distro to boot on a system isn't the same as ensuring a smooth experience with things like system config, video playback, networking, and peripherals.
In some ways I see this as a downside. Despite how many years the RPi has been a thing, the drivers for it still aren't fully upstreamed to the Linux kernel, and the device tree in mainline is very different from that in the RPi Foundation's kernel fork.
The RPi-specific userspace bits are just (in the past year or so) starting to become more standard (like moving the stuff in /opt/vc to standard system locations, having a normal kernel headers package).
I don't want a specially-built OS just for one piece of hardware. I want to run stock Debian, with a stock kernel. And yes, I can do that, but as you hit at, the experience is sometimes lacking (for a while I couldn't get sound to work with stock Debian).
I've been looking at some of the RPi workalike boards that run Armbian... I haven't dug into it yet to evaluate the quality of Armbian, but I at least like the overall approach of having one OS that can support different chipsets and boards.
The chip industry will go the way of the telecom industry:
1st, a few players grow it into something important.
2nd, it becomes a big monopoly we all depend on.
3rd, other players need 10 years to build competition (the time needed to build cable/towers; the time needed to develop processors).
4th, you have a bunch of players, all with fixed costs and a $1 variable cost per new customer, and competition squeezes the margins to around zero.
5th, you still have to keep up with the improvements (5G? 7nm? 6nm?), so investments are needed just to keep going, and so profits disappear from the industry.
Hilariously, one of the slides posted to Twitter says the phase 1 chip won't have low-speed peripherals. But I'm sure they'll include at least one serial port; no one is going to get very far without even a serial console.
The phase 1 SoC is all I'm interested in for now. The smallest LPDDR4 part I could find is 256MB, which isn't a lot but is enough for what I want to do. Having that RAM, plus networking or USB, will be enough for me to buy one. Assuming a reasonable price, of course.
I've been laser-focused on RV64, but RV32 is nearly identical. It is a shame the current VexRiscv bitstream for this board doesn't seem to support atomics, though.
Really, it's the price and mass-market availability more than anything else that make a Raspberry Pi a Raspberry Pi, more so than the exact feature set (especially considering the difference in capabilities between the original and the v4). No word on that, it seems?