If I'm not mistaken, this is the same dev board that's been for sale from the manufacturer (SiFive) for months and months [1]. Is the only news here that the board is now in stock at SparkFun?
> If I'm not mistaken, this is the same dev board that's been for sale from the manufacturer (SiFive) for months and months.
Yes, this seems to be the case. The layout is only very slightly different from the HiFive1; most of the major components are in the same locations.
I was thinking about buying one of these and porting Tock OS [1] (written in Rust) onto it for a fun personal project. There's been a surprising amount of work already in the embedded Rust space, and I believe the compiler support for RISC-V is already sufficient.
I happen to have a HiFive1 board lying around. If you get to the point where you share your results (whether you're looking for collaborators or not), could you follow up with a link here? I'd love to try it out (alpha/beta versions are welcome).
Sure will. Though I'm super awesome at coming up with personal project ideas, and significantly less awesome at finishing them.
I'm also working again on a space naval battle RTS game, which is entirely command-line driven. Written in Rust, using an unfamiliar game engine, because I can't make things easy for myself. :-)
I'm currently looking at Amethyst (1), because it integrates an entity component system.
The game is inspired by the Honor Harrington series of novels by David Weber. (2) The ships in that universe can accelerate at 500g or more, so they can scoot across a solar system in a reasonable amount of time (hours), but need to accelerate/decelerate the entire time to do that. Consequently, course selection and maneuvering are significant decisions that can decide the outcome before the fleets even engage in combat.
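A quick back-of-the-envelope check (my numbers, not Weber's): under constant acceleration to the midpoint followed by a flip and constant deceleration, travel time is t = 2·sqrt(d/a), which at 500g puts a 1 AU crossing at roughly three hours:

```python
import math

G = 9.81        # m/s^2, one gravity
AU = 1.496e11   # meters, roughly the Earth-Sun distance

def flip_and_burn_time(distance_m, accel_g):
    """Seconds to cover distance_m: accelerate to the midpoint,
    flip, then decelerate the rest of the way. t = 2 * sqrt(d / a)."""
    a = accel_g * G
    return 2.0 * math.sqrt(distance_m / a)

hours = flip_and_burn_time(AU, 500) / 3600  # about 3.1 hours at 500g
```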
The individual units are intended to be as smart as possible, keeping proper spacing, following the rules of engagement, and such. You, the fleet commander, will be issuing orders much like a modern naval commander would. And I will be appropriating as much of the lingo as makes sense. "Set course 126 mark 31", "engage at maximum range", "target group baker-3".
The game itself won't try to accurately represent the actual size of a typical solar system, but instead have everything scaled so that scenarios can play out in half an hour or so. I may also scale up the size of planets considerably, to provide obstacles.
Initial plan is for a 2-D game with fairly simple tactical and strategic views. But 3-D with a Homeworld style interface would be cool.
Please please let us know when you have a need for testers. I'm literally (like, have it up on my ereader next to me right now) reading through The Honor of the Queen.
Should note that these are similar in specs to Arduinos, i.e. only a few kilobytes of memory and a processor similar to a Cortex-M0. You won't be able to run Linux on them.
There might be a way to tune the RISC-V Linux port to this chip if there isn't one tuned already. While you likely couldn't run Linux well on it at the moment, and you'd definitely need some additional RW memory of some form, it would be possible; Linux does run on microcontrollers with worse specs.
And a lot fewer peripherals. Interrupt controller, RTC, WDT, 2 SPI units (one of which is used for program flash), 2 very basic UARTs, and three timer/PWM units.
Notable omissions include: no DMA, no I2C, no ADC/DAC, and no good documentation. (The documentation that is available is inconsistent and poorly written. Some of the documentation references a nonexistent third SPI peripheral, for instance.)
Arduino Uno, no, since that uses a chip that is over 8 years old. But a modern 8-bit AVR that costs under 1 USD in single units, like the ATtiny3214, has a DAC.
You can bit-bang I2C easily enough, as long as none of your slave devices have an SMBus-style bus timeout feature. Might be a bit hard for an AVR-class CPU to keep up with those without some attention paid to performance optimization.
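Conceptually, a bit-banged I2C master just toggles two GPIO lines in the right order. A rough sketch of the waveform for a one-byte write (pure Python modeling (SCL, SDA) line states; real code would drive board-specific pins with delays, and would also read back the ACK bit, which is omitted here):

```python
def i2c_write_byte_waveform(byte):
    """Return the sequence of (scl, sda) line states a bit-bang master
    drives to write one byte: START, 8 data bits MSB-first, an ACK
    clock, and STOP. Illustration only; no open-drain or clock
    stretching is modeled."""
    states = []
    states.append((1, 1))  # bus idle: both lines high
    states.append((1, 0))  # START: SDA falls while SCL is high
    for i in range(7, -1, -1):
        bit = (byte >> i) & 1
        states.append((0, bit))  # SCL low: present the bit on SDA
        states.append((1, bit))  # SCL high: slave samples the bit
    states.append((0, 1))        # 9th clock: release SDA so the
    states.append((1, 1))        # slave can pull it low to ACK
    states.append((0, 0))        # prepare STOP: SCL low, SDA low
    states.append((1, 0))        # SCL high...
    states.append((1, 1))        # STOP: SDA rises while SCL is high
    return states
```

An SMBus-style timeout bites here because a slow software master can hold the clock low longer than the spec's 35 ms limit, at which point the slave resets mid-transfer.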
Granted, a very highly clocked Cortex-M0+. Its clock is about two and a half times that of the highest-clocked M0+ parts on the market, AFAICT. What it lacks is peripherals (especially an ADC).
>> Except most M0 cores will have 1 or 2 cycle internal flash memory reads, and this has slow external flash.
With 16K of cache. I'm not sure how you ensure consistent performance, though - make sure your code will all fit in 16K? But is that enough? And what about the first time through?
Even on ARM with built-in flash you can't ensure that easily. The only way to do so is to copy code to RAM and run from there (most of the M0-M4 devices I've seen don't have an icache). This is because of the way flash ends up being read by them (the term is wait states, I believe), where the processor ends up waiting for an indeterminate time period for the flash memory to read the next page.
16K of cache is likely enough to ensure stable performance of any given function and any tight loops you're using, but it will probably not be enough for the entire program. So you'll still have misses that cause slowdowns, though they'll probably not be terribly noticeable unless you're trying to ensure timing over large functions.
Most M0 chips I've seen have the cache, it just sits on the other side of the AHB matrix. It's more integrated into the flash controller than the CPU core.
This looks fun. I'm a bit surprised that they went with the Arduino "standard" with its weirdly spaced headers given that I assume that most of the Arduino software won't run on the chip. Or did they make a compatible SDK?
Also looking at the specs it seems that the SoC has PWM, UART and QSPI but no I2C or DAC/ADC. That might make it a bit annoying to port some Arduino designs to it (although you can always bitbang the I2C). There's also no USB or MAC unless I missed something. So basically if you want to get data in and out fast QSPI is your best bet and even that doesn't seem to have a DMA so you'll have to use CPU cycles to copy everything in and out.
So in summary it's definitely a cool board if you're interested in hacking on RISC-V specifically, but if you're just looking for a controller for your next DIY project there are more fully-featured options for cheaper.
> I'm a bit surprised that they went with the Arduino "standard"
Well, for some reason the entire "maker" community thinks you need to shove the Arduino stack onto something before you're allowed to write software for it. Apparently that's a requirement for street cred.
I don't use Arduino, but the reason it is popular is that it is an abstraction layer for the hardware. Kind of how languages like Ruby and Python make programming more approachable for people than C does.
IIRC, most Arduino shields run at 5V logic level, and this is a 1.8/3.3V device. While I've worked with a few shields that could automatically switch logic level, I don't think it's the norm.
That's actually a shame. It will run into the same problems the raspberry pi has when being used as an "embedded" system then. Not impossible to overcome of course, but not fun
Kind of a tangent, but unfortunately Microsemi seems not to be shipping any further batches of https://www.crowdsupply.com/microsemi/hifive-unleashed-expan... . For a hardware noob, does anybody know if there's another board that can be substituted to get PCIe/accessories with the same preconfigured setup? I saw Xilinx mentioned at a significantly higher price tag ($3500+) but not sure what sort of work would be required.
If you want a 320 MHz GPIO-focused Arduino board for $60, this is great. Might be great for open-loop or encoder-based motion control. This could probably be a good 3D printer control board.
If that's all you're after, I'd direct you to the STM32 NUCLEO-H743ZI. 400 MHz ARM Cortex-M7, 1 MB SRAM / 2 MB flash, and an absurd variety of peripherals, on a board with an Arduino-compatible pinout -- for $23.
Motion control is control of actuators, like motors for example. Open loop means driving the motor with no feedback source. Closed loop, alternatively, means using a feedback source like an encoder or other sensors to measure the position or velocity of the motor, and then using software that monitors that sensor and adjusts the control parameters to ensure that the desired position, velocity, and/or acceleration is achieved. This is referred to as “closing the loop” because you have the controller going to the motor as one half of the loop, and the encoder going back to the controller (and the control system code) as the other half of the loop.
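As a toy illustration of closing the loop (everything here is made up for illustration: the gain, the trivial simulated plant, and the function name), a proportional controller nudging a position toward a setpoint:

```python
def close_the_loop(setpoint, kp=0.2, steps=200):
    """Minimal proportional controller against a trivial simulated plant.

    Each iteration reads the 'encoder' (position), computes the error,
    and commands the 'motor' with an output proportional to that error.
    Real controllers typically add integral/derivative terms and run at
    a fixed rate against real sensor and actuator hardware.
    """
    position = 0.0
    for _ in range(steps):
        error = setpoint - position   # feedback half of the loop
        position += kp * error        # command half of the loop
    return position
```

With any gain between 0 and 1 here, the error shrinks geometrically each iteration, which is the whole point of the feedback path: an open-loop command has no such correction.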
Precise motion control often uses control loops (the loop of code that checks the sensor and adjusts the output) that run at 20 kilohertz, 40 kilohertz, or higher. Since the control loop is a bunch of lines of code, this means you want to execute a bunch of lines of code repeatedly at this high rate. For example, on a microcontroller with an 8 MHz clock running a 40 kilohertz loop, there's only enough time to run 200 instructions per loop! (8 MHz / 40 kHz = 200.) 200 instructions isn't a lot, so it would be tough to run a complex motion control loop in that space. And if you want to run multiple control loops to control multiple motors - forget it.
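The budget arithmetic above can be sketched as (the clock speeds are just examples):

```python
def cycles_per_loop(clock_hz, loop_hz):
    """Rough upper bound on cycles (and thus instructions, assuming
    roughly one instruction per cycle) available per iteration of a
    control loop running at loop_hz."""
    return clock_hz // loop_hz

# An 8 MHz part running a 40 kHz loop gets 200 cycles per iteration;
# a 320 MHz part gets 8000 cycles for the same loop rate.
budget_slow = cycles_per_loop(8_000_000, 40_000)
budget_fast = cycles_per_loop(320_000_000, 40_000)
```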
The Teensy 32-bit microcontrollers are a lot faster - I see 72 MHz and 180 MHz options. Either of those would be solid for motion control. However, if you want to control multiple motors and possibly perform some other functions, such as computing motion commands from G-code in the case of a 3D printer control board, you need all the speed you can get. In that case, these new boards offer plenty of clock speed to play with!
Just driving stepper motors and reading encoders (and a bit of maths to connect them). Teensy etc. would probably be too slow to do the step generation and encoder reading in software.
But to be honest you never want to do that in software (except on weird chips like XMOS ones). You'll almost always want to do things like that using hardware timers and counters.
The maths will be done in software but a Teensy would be fine for it.
Interesting - not quite going for the normal Arduino market. $60 is a pretty high price tag, but it has a quite high clock rate and should get better performance.
Tempted to buy one just for making some toy cryptographic applications, as the open architecture is an appealing feature for those applications.
Ultimately though I think it'd be best if they can get China sold on this architecture. $2 Arduino-compatible boards or $7 Espressif boards with WiFi are hard to argue with for most people messing around in the microcontroller space.
> Ultimately though I think it'd be best if they can get China sold on this architecture.
This is happening. There are a few embedded RISC-V cores coming out of China now. It seems if you have the choice of paying to license an ARM Cortex-M core or downloading Rocket chip off github, then people are going the download route.
Edit: This is not to say that RISC-V is more popular than ARM in the embedded space. ARM obviously has a huge momentum and vast (billions) installed base.
I let OpenBSD's level of support be my litmus test. If everything on the board works out of the box with OpenBSD, it has probably achieved a high degree of openness.
I don't understand the hype around RISC-V. The single most important bottleneck in general computing nowadays is memory latency. However, the code density of RISC-V is absolutely underwhelming and does not even beat ARM Thumb, which is literally decades old.
If it has to be RISC and open, why not use something that already has existing infrastructure and is well established? E.g., fully open source implementations of the SPARC architecture have existed for a long time already.
The SPARC V8 running Solaris took hundreds of clock cycles to handle traps, so some languages actually performed better by ignoring the register windows and avoiding the overflow/underflow traps. This one bad implementation hurt the reputation of the idea in general, even though SPARC V9 improved this significantly.
People also like to point out that the original NIOS processor from Altera had register windows but they were eliminated from the NIOS II. What they forget to mention is that Altera claimed this allowed them to make a smaller core "without hurting performance too much". Which means that the register windows version was faster, not slower like they want to imply.
The research paper is not the final RVC extension (although it was derived from that research). The video covers the RVC extension as it was implemented.
Consider reading the About page [1]. In summary: RISC‑V is a free and open ISA. They aren't claiming groundbreaking hardware performance and/or design. They are claiming groundbreaking lack of legal/NDA entanglement.
Special bonus/origin story: completely open for academic research and tinkering.
Nobody who works on RISC-V believes that a new ISA will change the game in terms of performance. That's not really the main goal.
It is false that the density is 'underwhelming'. Thumb, while old, is extremely good and specifically designed for density.
With Thumb on 32-bit they are about on par, but on 64-bit RISC-V wins against ARM. 32-bit ARM is really the only thing that can compete.
They had good reason for not using something that exists. SPARC has one open specification, but the version after that is not open anymore. It has a number of technical problems for the modern world, and there is not enough open software and hardware around it to make the argument for adopting it worthwhile.
Furthermore, it's monolithic, while RISC-V was designed from the ground up as a modular ISA where the same basic software stack can run on everything from deeply embedded parts to HPC.
> Nobody who works on RISC-V believes that a new ISA will change the game in terms of performance. That's not really the main goal.
And this is why they will fail. RISC-V suffers hugely from not-invented-here syndrome (NIHS) and would have no chance against any competition in a real market without that artificial hype. They need to get better.
> They had good reason for not using something that exists. SPARC has one open specification but the version after that is not open anymore.
The fully GPL-licensed OpenSPARC T2 is more advanced than anything RISC-V has to offer, even though it is ten years old. Why reinvent the wheel when you can build on top of existing solutions that have proven themselves on millions of machines, including two top-100 computer clusters?
> RISC-V was designed from the ground up as a modular ISA where the same basic software stack can run on everything from deeply embedded parts to HPC.
> And this is why they will fail. RISC-V suffers hugely from NIHS and would have no chance against any competition in a real market without that artificial hype. They need to get better.
I'm sorry, but that is nonsense. An ISA (unless it's an utterly terrible one) simply is not what determines performance.
RISC-V will be useful for performance because you can make good cores far more easily than with any other ISA. RISC-V is well optimized for performance, and future standard extensions will give the micro-architect a lot of options to make performant chips.
Furthermore, why do they suffer from NIHS? The whole ISA is quite literally designed to be a relatively conservative design that specifically builds on the knowledge gained by others over the last 30 years. It's the exact opposite of NIHS. The only thing you are really complaining about is that they did something new at all.
> The fully GPL-licensed OpenSPARC T2 is more advanced than anything RISC-V has to offer, even though it is ten years old. Why reinvent the wheel when you can build on top of existing solutions that have proven themselves on millions of machines, including two top-100 computer clusters?
SPARC is now owned by Oracle, and only SPARC V8 is an open standard. The OpenSPARC T2 is V9. Do you really think it's good to start a new revolutionary compute project on something so strongly tied to Oracle?
You are aware that some of the same people who helped design SPARC also designed RISC-V? You can listen to their explanations of why they didn't want SPARC, especially not for what is designed to be a universal ISA.
> This is also true for SPARC.
No, it's not. SPARC is not a modular ISA in the same way RISC-V is, and the RISC-V designers believe that a modular ISA will be needed.
> only SPARC V8 is an open standard. The OpenSPARC T2 is v9.
"Source code is written in Verilog, and licensed under many licenses. Most OpenSPARC T2 source code is licensed under the GPL." - Wikipedia
> No, it's not. SPARC is not a modular ISA in the same way RISC-V is, and the RISC-V designers believe that a modular ISA will be needed.
"The "Scalable" in SPARC comes from the fact that the SPARC specification allows implementations to scale from embedded processors up through large server processors, all sharing the same core (non-privileged) instruction set" - Wikipedia
> "The "Scalable" in SPARC comes from the fact that the SPARC specification allows implementations to scale from embedded processors up through large server processors, all sharing the same core (non-privileged) instruction set" - Wikipedia
Yes. SPARC is a RISC and therefore it can scale well in implementation. RISC-V, however, has taken the modular approach to ISA design far further than anything else has so far.
Again, maybe you should actually read about the design of RISC-V and why they didn't want to adopt SPARC.
You accuse me of spreading misinformation, but you don't seem to know what the differences between SPARC and RISC-V are.
[1] https://www.sifive.com/boards/hifive1