Hacker News
Open-source chip RISC-V to take on closed x86, ARM CPUs (computerworld.com.au)
354 points by jumpkickhit on May 6, 2017 | 151 comments



I thought that microprocessor production and design was fraught with risk of infringing on other designers' patents (even for original ISAs). I can see that industry heavyweights have arrived to support RISC-V, so hopefully that comes with a team of professors/lawyers that could defend them. But, why now? Why couldn't this have happened sooner? Didn't Sun try to create an open SPARC processor design? What does RISC-V have that it didn't?

Is the intent for RISC-V to compete with modern high-end CPU designs, or do we just want to have royalty-free microprocessors for our embedded devices?

You might be surprised (at least I was) to learn that peripherals like hard drives and PCI add-in cards usually have their own CPU executing their own software. Those processors are often MIPS/ARM/etc.-based, and the manufacturer has to shell out to someone to be able to use that, even if they designed the processor themselves. I can see how this particular market is ripe for something like RISC-V. But does anyone expect RISC-V to really go head-to-head with Xeon, Opteron, ThunderX, Centriq?

I sound incredulous because this seems surprising to me, but I have no evidence to suggest whether it's as unlikely as I think. I've certainly seen open source software designs far superior to closed source ones, so maybe hardware design is no different?


The RISC-V team was very careful in establishing prior art for every design decision. The patents on ISAs tend to be on their quirks, so keeping things truly minimal helps avoid all of that complexity.

Like a lot of what Sun did, prior open chip designs weren't good enough. Academics starting with a clean slate and 20+ years of additional experience gives RISC-V real advantages over MIPS and ARM.

The primary advantage is that RISC-V is truly RISC: they have a core ISA that is frozen but extendable. This means that they can have application-specific CPUs with intelligent fallback and full compatibility.
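To make that modularity concrete: a core's feature set is spelled out in its ISA string (e.g. rv64gc, the 64-bit base plus the general-purpose and compressed extensions). Here's a toy parser for such strings, purely my own illustration and not an official tool:

```python
# Toy sketch: parse a RISC-V ISA string such as "rv64gc" into its base
# register width and standard single-letter extensions. The frozen base
# (I) plus optional extensions is what lets application-specific cores
# stay mutually compatible. Illustration only, not an official tool.

def parse_isa_string(isa: str) -> dict:
    isa = isa.lower()
    if not isa.startswith("rv"):
        raise ValueError("ISA string must start with 'rv'")
    rest = isa[2:]
    # Extract the register width (32, 64, or 128).
    width = ""
    while rest and rest[0].isdigit():
        width += rest[0]
        rest = rest[1:]
    # "G" is shorthand for the general-purpose set IMAFD.
    rest = rest.replace("g", "imafd")
    names = {
        "i": "base integer",
        "m": "integer multiply/divide",
        "a": "atomics",
        "f": "single-precision float",
        "d": "double-precision float",
        "c": "compressed instructions",
    }
    exts = [names.get(ch, f"unknown ({ch})") for ch in rest]
    return {"width": int(width), "extensions": exts}

print(parse_isa_string("rv64gc"))
```

Any core advertising the same string is expected to run the same user-level code, which is the fallback/compatibility story mentioned above.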

As to why now, well, they came up with RISC-V as a teaching ISA and started getting emails from industry asking why it had changed from last semester. It turns out that chip manufacturing has gotten cheap, with custom runs on 28nm processes going for $30K.

You are correct that their early target market is co-processors and other niche components. Just getting a license from ARM is 10+ million dollars. But the biggest cost in building your own ISA is investment in software and tooling. RISC-V gives a common foundation for everyone to build on.

However, their intent is indeed to bring competition at every level. Their licensing scheme was chosen specifically to allow big players to create IP and keep the secret sauce to themselves, unlike SPARC V8 and OpenRISC.

Apple creates their own CPU and GPU designs, Google has custom hardware for machine learning, and Samsung is about to overtake Intel as the world's largest chip manufacturer. Why should they keep shelling out billions to ARM, Intel, and AMD just to use their ISA?


> Just getting a license from ARM is 10+ million dollars.

Which license? ARM offers several different models.

> Apple creates their own CPU and GPU designs, Google has custom hardware for machine learning, and Samsung is about to overtake Intel as the world's largest chip manufacturer. Why should they keep shelling out billions to ARM, Intel, and AMD just to use their ISA?

The Google learning hardware we know of (the TPU) does not use ARM at all, so they would obviously pay nothing there. BigCo's like Apple or Samsung are probably ARM partners, so no per-design/unit fees.

Samsung Electronics may become the biggest chip manufacturer soon, but much like with smartphones that's only by turnover, not profit. But profit is what ultimately finances your R&D.

An interesting titbit is that even Intel and ARM are, at least on paper, partners.


>BigCo's like Apple or Samsung are probably ARM partners, so no per-design/unit fees.

Nope, all ARM licensees pay a per-unit royalty, regardless of their partnership level[0]. Being the biggest might help Apple and Samsung negotiate a slightly lower royalty rate, but at the end of the day it's still at least 1% of the chip's selling price.

[0] http://www.anandtech.com/show/7112/the-arm-diaries-part-1-ho...


> but by the end of the day it's still at least 1% of the chip's selling price.

Right, but you cannot just consider the difference in royalty fees between ARM and another ISA.

For example, the STM32F* chips cost between $0.60 (STM32F0) [0] and $19 (STM32F7) [1].

You can't just compare the royalty fee here when thinking of lost revenue for the manufacturer. The ARM ecosystem is huge, and if ST switched to another ISA, they would probably lose customers unless the new ISA had an ecosystem that was as good or better than ARM.

Yes, the royalty amount matters, but customer preferences also matter. I doubt any high-volume ST customer would be willing to switch to an entirely new ISA without the kind of ecosystem that ARM has, just to save a few pennies per chip. And who is going to pay for the development of the tools? Unless someone like the RISC-V community does, they're probably stuck developing the tools themselves. Kiss goodbye to all the money you "saved" by not paying ARM royalties.

Furthermore, if ST switched to a different ISA to save the ARM royalty, what makes you think they would pass along that savings to the customer?
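For a sense of scale, a back-of-the-envelope sketch (my own, using the ~1% royalty figure quoted upthread and the ST prices above; the 10M-unit run size is an arbitrary assumption):

```python
# Back-of-the-envelope: what a ~1% ARM royalty (figure quoted upthread)
# would amount to on the STM32 prices mentioned above. Illustrative
# only; the 10M-unit run size is an arbitrary assumption.

ROYALTY_RATE = 0.01  # "at least 1% of the chip's selling price"

for name, price in [("STM32F0", 0.60), ("STM32F7", 19.00)]:
    royalty = price * ROYALTY_RATE
    run_savings = royalty * 10_000_000  # hypothetical 10M-unit run
    print(f"{name}: ~${royalty:.4f}/chip, ~${run_savings:,.0f} per 10M units")
```

Even at the high end, the royalty is a fraction of a dollar per chip, which is why "pennies per chip" is doing a lot of work in the argument above.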

[0] https://www.digikey.com/products/en/integrated-circuits-ics/...

[1] https://www.digikey.com/products/en/integrated-circuits-ics/...


Bluespec, AMD, Qualcomm, IBM, NVidia, Micron, NXP, Samsung and a few dozen other companies are all dues-paying members of the RISC-V Foundation. I think it's possible for them to create a new ecosystem; just give them time.


The Google TPU is designed as an ISA extension of RISC-V according to the paper, and David Patterson's heavy influence implies this as well.


Well, a 10 million dollar license is nothing if you factor in the fab costs.


Apples and oranges. The companies that design the chips and pay royalties for IP are mostly fabless (with rare exceptions like Intel). The fabs are owned by separate entities like TSMC, Samsung, and SMIC that specialize in the production itself. The fabs don't pay for CPU IP (they sometimes pay for some other basic IP to be able to offer it as standard to their customers, but that doesn't cover CPUs/GPUs).

About Samsung, they're on both sides but with separate legal entities.

The licensing cost IS usually significant to most fabless companies, which come in many sizes. Most are not as big as Apple or NVidia. The fabless model comes from the very high cost of a modern fab; it allows one fab to be shared across many fabless design houses.

ARM has the reputation of being cheap, because the typical IT/software people compare them to Intel. And yes, compared to Intel almost anyone is cheaper ;) But in the embedded space, which is cost sensitive, ARM is not cheap. It's like what IBM and Microsoft were in IT: nobody got fired for choosing them, and they come with a very powerful ecosystem. Something they make clients pay for.

Against ARM, one can find good technical competitors depending on the target market (MIPS, Cortus, Andes, Beyond Semi...). But the MIPS ecosystem is smaller, and the others' are waaaaay smaller.

RISC-V has a timely opportunity to provide good technology, backed by a strong ecosystem, at lower cost. In the embedded space at least, this is a very powerful combination.


The older OpenRISC had already found a few niches. For example, the management core in some of Allwinner's newer ARM SoCs is apparently OpenRISC, presumably because they didn't want to license another lower-end ARM core or something. Similarly, some Samsung TVs apparently use it.


$10 million is approximately the cost of a mask set even on the cutting edge fabs.


For the costs, they do not only get the license to use the ISA, but the whole processor design, as in the "source code" to the whole processor. As a licensee, you place that design together with any other electronics you create (or license separately) on a chip to create the device you want.


Perhaps I'm misreading your comment, but Samsung is designing its own CPU these days. It doesn't license ARM's Cortex CPUs anymore (at least for the Galaxy S/Note series) - basically it's doing what Qualcomm and Apple are doing, too.


There are a number of things that make RISC-V different from OpenSPARC. One is that OpenSPARC is not a very scalable design, it could probably only ever work in servers and workstations, and workstations are essentially a monoculture. Another reason is timing: manufacturing processes have effectively stopped yielding performance and power gains as the fundamental constraint (distance between components on chip) doesn't get any smaller. This means that in order to keep the pace up for any popular workload, many new microarchitectures will need to be created. RISC-V is amenable to a wide array of microarchitectural decisions, and doesn't carry the architectural baggage of ARM and MIPS licensing and technical quirks, or the impossible licensing of x86.

> But does anyone expect RISC-V to really go head-to-head with Xeon, Opteron, ThunderX, Centriq?

Except for Intel, none of these manufacturers have any reason to resist RISC-V. All of these vendors can directly port knowledge from one ISA to another. So really it becomes a matter of "will there be a demand for RISC-V hardware in the server and workstation markets?", and the answer to that is a firm "maybe".

I think that today, since most software is written in high level languages which are at least as abstract as C, there is little reason why a new ISA couldn't take hold in any of these markets if hardware vendors can promise good things to ISVs and to customers who write their own software.

At present, there is a considerable barrier to entry for new designers and manufacturers to approach ARM, POWER, and MIPS, and basically no hope of manufacturing an AMD64. ARM is not the cuddly RISC architecture people think it is; it is in many ways nearly as quirky as x86, and therefore very expensive to enter and innovate on. Furthermore, it can take more than a year to negotiate an architectural license with ARM; your company or the market might no longer exist by the time you close the deal. POWER is not too far off on licensing, and it's not all that elegant either (though far more surmountable than ARM). MIPS lacks industry momentum and standardized high performance feature sets (wide vector compute, hardware security features), and its custodian company seems to barely acknowledge it exists.

All in all, nobody knows if it'll work out, but there are good reasons for RISC-V to succeed in the market. The challenges also seem surmountable.


> RISC-V is amenable to a wide array of microarchitectural decisions

As is ARMv8. Applied Micro, Broadcom, Cavium, Huawei, Nvidia, AMD, Samsung and Apple are all architectural licensees with radically different microarchitectural implementations. For example, Nvidia's is a Transmeta-style code-morphing implementation. Also, Intel and AMD continue to innovate in the x86 space.

At this date, RISC-V has no fundamental advantage over any of these architectures and then has the fundamental disadvantage of not having access to their IP. You're lining up against Usain Bolt and your advantage is that you're wearing organic cotton. That's just not gonna work.


NVIDIA's Denver is a feat; I think that if it says anything, it's that ARM's instruction encoding is deficient. AArch64 instruction encoding is less dense than AMD64, and completely lacks compressed instructions (like Thumb). To NVIDIA, it made more sense for them to write a realtime binary translator for an internal ISA which is clearly enabling their microarchitecture more than ARM is.

AMD and Intel spend more money on research than you can even imagine, and it's not surprising that they manage to make x86 machines that lead in single-thread performance, but since Intel tried Itanium, you can tell their designers feel x86 is deficient. I mean, just imagine how much of their chips is just decode.

Also, not to get too childish, but ironically NVIDIA is one of the vendors who is shipping RISC-V in a lot of products. Soon enough, that Denver-style core is going to be sitting on a die right next to a RISC-V. ;- )


That's a good point about Nvidia adopting RISC-V. I didn't know that but for their application it's a good idea.

I'm not a Denver fan and at this point I think we can safely say Code Morphing has been tried. However, I wouldn't conclude that Denver means that ARM's instruction encoding is deficient. The optimizations that Denver provides would be impractical in any ISA.

Moreover, architectures are meant to be implemented and optimized in many ways. You may not like x86, but planting that flag in the sand allowed Intel to innovate on the microarchitecture side while developers innovated on the application side, knowing that x86 would still be there. It's the same value proposition that IBM offered with the original commercial architecture, System/360.

Where I like x86 and where I don't like RISC-V is that x86 doesn't try to be perfect but rather to adapt over time. RISC-V tries to be perfect, and indeed eschews many of the bad RISC ideas from the past (register windows, branch delay slots, tagged integers, ...), while pig-headedly avoiding an obvious feature shared by both x86 and ARM: condition codes. I've read their argument and found it unconvincing. Original ARM had way too much condition code support. I think ARMv8 strikes a nice balance that RISC-V should have followed.

The HP+Intel Itanium effort allowed AMD to propose their AMD64 extensions. That's been quite successful. I wish they'd taken the opportunity to trim more cruft. When ARM went 64-bit with ARMv8, they took that opportunity, and the result is quite clean. I prefer it to RISC-V, although I haven't written any RISC-V assembly.

ARMv8 tries to be a good instruction set architecture. To me, RISC-V tries to be the platonic ideal RISC ISA. I'll go with good.


Yeah, I agree that condition codes (and offset loads) are features, not bugs in x86 and ARM; also ARMv8 shows an effort to reduce the number of instructions which can respond to condition codes. Chris Celio has talked about some interesting ways to make up for the lack of these two features, and it seems quite convincing. If you're using the compressed instruction set (which all desktop/workstation type RISC-Vs and most microcontrollers support), then the decoder can implicitly fuse the add and the load, or various forms of conditions, and treat them as a single operation. AFAIK, the compiler backends already try to order the instructions to exploit this.
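A toy sketch of the fusion idea (the mnemonics and fusable pairs here are simplified illustrations of my own, not real RISC-V encodings or Celio's actual pair list):

```python
# Toy sketch of macro-op fusion as described above: the decoder peeks
# at adjacent instructions and, for a recognized pair, emits a single
# fused internal operation instead of two. Mnemonics are simplified
# illustrations, not real RISC-V encodings.

FUSABLE_PAIRS = {
    ("slt", "bnez"): "fused.blt",    # set-less-than + branch-if-nonzero
    ("add", "ld"):   "fused.add_ld", # address add + dependent load
}

def decode(instrs):
    """Return the internal op stream after fusing recognized pairs."""
    out, i = [], 0
    while i < len(instrs):
        if i + 1 < len(instrs):
            pair = (instrs[i][0], instrs[i + 1][0])
            if pair in FUSABLE_PAIRS:
                # Collapse both instructions into one internal op,
                # carrying along all of their operands.
                fused = (FUSABLE_PAIRS[pair],) + instrs[i][1:] + instrs[i + 1][1:]
                out.append(fused)
                i += 2
                continue
        out.append(instrs[i])
        i += 1
    return out

stream = [("slt", "t0", "a0", "a1"), ("bnez", "t0", "label"), ("mv", "a0", "a1")]
print(decode(stream))  # the first two ops collapse into one fused branch
```

The point is that the ISA stays minimal while an aggressive implementation can still get the effect of the richer operations, at the cost of a slightly smarter decoder.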

And yeah, I'm not utterly convinced by any argument about "purity" in ISAs, but in this case there's no question that it has helped a wider variety of people develop interesting and competitive chips in less time.

ARMv8 is a considerable step up in many ways from ARMv7 and earlier, but AArch64 retains user mode compatibility with ARMv7, which means that the more different AArch64 is, the harder it is to implement. In this way, every ARMv8 is also an ARMv7 (of course, sans all the supervisor/hypervisor instructions).

In many ways I like x86, and for the most part I like the vendors. I love that x86 has given Intel and AMD the opportunity to innovate so dramatically.

But just for a moment, imagine that instead of just Intel and AMD, the whole industry can put that same flag in the sand and have just one general purpose instruction set family.

You could have an 8088, a Cortex M0-M4, an ARC, an AVR, a PicoBlaze, a MicroBlaze, a SuperH, a MIPS, a Power, a LatticeMico, etc. but many of these architectures survive today because of a differential in licensing cost with ARM, not because of any technical prowess (and some of them are better in some ways, don't get me wrong!). Imagine that for the vast majority of these people, one ISA family would suffice, and the whole market could compete to bring new performance, power, and cost profiles to each market served by these cores. Then imagine that that same industry can easily start scaling up their designs to compete in the application processor market, and then perhaps in the workstation and server market, then perhaps the HPC market.

Just a thought though, I can't predict the future with any degree of certainty. I just think RISC-V is a whole lot more practical than you might think, just perhaps for people who have slightly different values from you (or from me, for that matter).

I think there's a lot of promise in that it is becoming the standard teaching ISA for universities throughout the United States, Canada, India, and elsewhere. If there is a generation of new computer engineers coming out of school with research-grade FPGAs in their hands, and their thesis work can be commercialized in a matter of months rather than years, then you can imagine that there will be huge commercial output in RISC-V whether it catches on now or later.

I think that's when it will start to seem more attractive to you: there will be more investment in it, and you can see some clear, immediate benefit aside from cost savings and licensing flexibility.


> just imagine how much of their chips is just decode.

This article has some annotated pics of AMD Ryzen 7: http://wccftech.com/amd-ryzen-architecture-detailed/

Based on these photos, I'd estimate the instruction decoder takes about 10% of each core, and about 7% of the whole chip.

For Intel I was unable to find similar pics, but my estimate is 3-4% of the chip area. The instruction set is the same, so the complexity should be comparable. But most Intel chips have around 50% of the area occupied by integrated graphics.
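The arithmetic behind my Ryzen estimate, for what it's worth (the cores' ~70% share of the die is my own rough assumption from the annotated photos):

```python
# Back-of-the-envelope behind the estimates above (illustrative only):
# if the decoder is ~10% of each core, and the cores together are ~70%
# of the die (rough assumption), the decoders are ~7% of the chip.
decoder_frac_of_core = 0.10
cores_frac_of_die = 0.70  # assumption: cores ~70% of total die area
decoder_frac_of_die = decoder_frac_of_core * cores_frac_of_die
print(f"decoders: ~{decoder_frac_of_die:.0%} of the die")
```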


I'd add to that Agner Fog's early report on Ryzen:

http://agner.org/optimize/blog/read.php?i=838

The micro-op cache size increased to 2048 from Intel's 1536 μops. His testing shows 5 instructions per clock cycle up from Intel's 4. There are limits to the ILP a scheduler can find; more in one area means more demand in another. This is no mean feat for AMD to pull off.


You're right, but key words here are "At this date". What you said could have been said exactly in the same way about Linux against commercial Unices about 15 years ago. It's a long game here, and we're only in the early phase. And it may not take that long, but we'll see.

Regarding technology, just as with Linux initially, the key is not having a technological advantage, but being cheaper and good enough. The improvements can come later, when the ecosystem gets bigger and more resources pour in.


RISC-V is thrown around as if it were an already-working CPU, but it is not. It is just a document describing an ISA; you can't compare it to Linux, it's a completely different thing.

There are a lot of costs in implementing an ISA, including design, functional verification, physical implementation, and software testing. All those steps together require hundreds and hundreds of engineers, many expensive tools, and thousands of man-hours. And once you are done, you need to be careful not to violate any patents while shipping your CPU to the end customer.

By paying a license fee to ARM, you get all these steps done for you, plus support.

Linux is something that you can download from kernel.org, compile overnight and get it booting right away. RISC-V is something that you need to build yourself.


When I did a presentation on RISC-V last year I counted 6 real implementations (full custom ASICs). Now granted only one or two of those you can buy, the others are research projects, but they do exist and they did produce real silicon. This year we should see a couple of 64 bit implementations, which is when it'll get really interesting.

There are of course multiple FPGA implementations (I have one about 2 feet away from me now), but they are very slow.


I am aware that they exist but it's not like companies are going to actively invest in devices for the end consumer purely based on RISC-V.


From where I'm standing, it looks like you're shifting goalposts:

> > RISC-V is thrown around like if it's an already working CPU but it is not

(Evidence of existing CPUs provided)

> I am aware that they exist but it's not like companies are going to actively invest in devices for the end consumer purely based on RISC-V.


My point is, you can't get those existing CPUs for free.


What kind of "for free" are you talking about? The first commercial-run RISC-V microcontroller SoC has fully published RTL, and there's a company that will support you in adding it to your products (SiFive). Obviously people aren't going to give you the manufactured chips for free, but how close do you want it?


Is your presentation online?


No it was a private talk about company strategy.


Linux didn't instantly become something you could download from kernel.org, compile overnight and get it booting right away. It took hundreds and hundreds of developers, thousands of man hours, financial investments from a large number of companies worldwide and many many years for Linux to become what it is today. All of this done while there were several existing commercial Unix variants which could have been licensed instead.

During its early years many scoffed at it as you have RISC-V. "It is a hobby OS. It will never be able to really compete with the likes of Solaris, HP-UX, and AIX. Heck, it won't even be able to compete with SCO and Unixware."


I'm not saying that RISC-V is a hobby project, I'm just saying that hardware development is not comparable to software development.

You need much more support and verification when developing hardware than when developing software. And while you can reuse the functional design (please note the keyword "functional"), the physical implementation needs to be redone from project to project.

ARM, Intel, AMD, Apple and Qualcomm have an army of engineers with all kinds of tools that go through all the steps of hardware design and implementation which you can't do as a side project at home using just your computer.


Linux is still a hobby OS. It's not a particularly wonderful example of software engineering. Its only real benefit is that it's free and comes with an ecosystem of other free software that kind of mostly works as long as you don't mind the occasional security horror, and can be used as-is or customised at relatively low cost.

That combination of features makes it appealing in a variety of business cases - for business reasons.

RISC-V can't be customised at relatively low cost. It's nominally free, but the freeness doesn't mean much in a business setting.

The total cost of developing a custom core remains beyond the reach of small companies.

Big companies already know there's an established ARM ecosystem with working compilers and a simple, risk-free, and relatively affordable business model.

So what business problem does RISC-V solve?


"Linux is still a hobby OS. It's not a particularly wonderful example of software engineering. Its only real benefit is that it's free and comes with an ecosystem of other free software that kind of mostly works as long as you don't mind the occasional security horror, and can be used as-is or customised at relatively low cost."

That is complete nonsense. Linux, particularly Red Hat Enterprise Linux is as serious of a server OS as exists in the world today. Companies like IBM and Oracle would never embrace a "hobby" OS in an enterprise setting.

As for "occasional security horrors", sadly there is no OS of any flavor that is immune.

"So what business problem does RISC-V solve?"

For one thing, RISC-V offers the promise of fully open computer systems, without opaque black boxes anywhere providing potential back doors or other problems. The Intel AMT vulnerability is an excellent example of how that can go very wrong:

https://www.ssh.com/vulnerability/intel-amt/

RISC-V also provides a playground for smaller entities (like university labs) wishing to experiment with innovative new hardware techniques like Unums. That's very valuable in its own right.

http://web.stanford.edu/class/ee380/Abstracts/170201-slides....


" You're lining up against Usain Bolt and your advantage is that you're wearing organic cotton. That's just not gonna work."

I'm stealing this as a saying.


> and your advantage is that you're wearing organic cotton.

Open source hardware is not like organic cotton; it's more like having the kitchen, the tools and the ingredients to cook your own awesome meal vs going to a restaurant but never being able to make any food yourself.


OpenSPARC is GPL. And OpenSPARC means the T1/T2 designs (high throughput, very low single-threaded performance) which are useless outside their niche.


> does anyone expect RISC-V to really go head-to-head with Xeon, Opteron, ThunderX, Centriq?

Yes, and no. RISC-V is disruptive [1] to both ARM and x86. But before it can run (compete with Intel CPUs), it needs to crawl (compete with low-end ARM CPUs/microcontrollers). It could take a decade or more before RISC-V competes at the high end. Basically, think of it as following ARM's path but on a much more accelerated timeline (for many different reasons: a much larger chip market these days, being open source and royalty-free, etc.).

[1] - https://www.youtube.com/watch?v=mbPiAzzGap0


I think part of the accelerated path will be that we are reaching the limits of "easy" process improvements (which already cost billions).

When everyone and his dog has 14nm, you can't rely on process to keep you ahead. Ryzen has partially shown this recently: Broadwell-level IPC on an octa-core processor for less than Intel sells a quad-core.

It's going to be a bloodbath as the middle eats into the top.


Many people claim to want an open-source chip, but end up balking at its price tag; this is exactly why Raptor Engineering's Talos failed. [1]

I've pretty much given up hope on a non-x86-based chip hitting our desktops; the closest we'll get is ARM.

The economies of scale aren't there, I pretty much end up rolling my eyes at each of these articles.

[1] https://www.raptorengineering.com/TALOS/prerelease.php


You have a point, but I think there is a reasonable midpoint that Talos missed altogether. RISC-V's move towards RPi/Arduino style mini-systems lets them get a market going at a price point that has some chance to get some uptake early, and hopefully grow from there.

Talos seemed amazing, and I was always fond of the ppc/power architecture and wished for something more modern than an old G5 Mac, but $7k for a desktop is just more than even most early adopters are willing to pay.


That was the Apple Lisa's folly. It cost more than most early adopters were willing to pay.

The PPC was a good brand, but Intel out-innovated them, making faster x86 chips more cheaply than PPC makers could. Apple had to move to Intel to compete with Intel PCs running Windows or Linux. Amiga still uses PPC, but those machines are expensive.

ARM chips are so cheap to make that there is the $5 Raspberry Pi Zero, the cheapest computer-on-a-chip tech.

RISC-V would have to be priced like the ARM chips are, or else there would be no benefit to using them, unless you like paying more for an open-source chip that is free as in freedom but more expensive.


I am curious as a total novice why chips and why now? As you said ARM is cheap. For price to performance I don't see why there aren't multi-motherboards coming out.

Someone made a 10-Raspberry-Pi MoBo, which, if it is cost effective, would be easier than a cluster. I think it would be awesome if that were scalable, but I am sure there is a reason it isn't. It would be cool and save space to make a cluster out of 4 of those boards rather than 40 RPis.


Toy SoCs, like those used in almost every single-board computer, have bad I/O, which makes them poorly suited to general-purpose clustering. Also, general-purpose clustering is not something people strive to bother with. Especially not at a performance level that is as low as a bunch of Raspis.


10 raspberry pi MoBo? Do you have a link for that?



> Amiga

Which incidentally uses the more than ten year old PA6T-1682M which has been on life-support for the entirety of its existence.


Talos failed because Raptor tried to sell a high-end server in desktop clothing when the market called for maybe a PowerBook at most. They needed to start smaller with something lower-power.


The whole point of a free ISA is that many different people can produce chips for it, so that it reaches economies of scale and chip producers face competition, driving down prices.

It's not guaranteed to work, but it's a worthy target.


With more producers, you get more competition, but LESS economy of scale, and for a product like this (with demand being limited initially), this was already a critical concern.


A fair point re manufacturing. Still, you get far more economy of scale than a proprietary niche product would have, plus excellent economy of scale on most of the chip design, which is where the real expenses are. Runs will be big enough to get most of the manufacturing benefits of scale.


Free is about freedom, not price.


So, freedom is only for the rich?


Freedom is sometimes efficient enough that it costs nothing to have freedom.

Other times, there are real or artificial barriers that mean freedom has a cost. In those cases, someone has to bear the cost or forego freedom.


In business there is no "cheap" or "expensive". There is only "worth the money" or "not".


That's not the point; the context of the post you replied to was people that "claim to want an open-source chip, but end up balking at its price tag".


Exactly, freedom isn't always the cheapest or most efficient.


AFAIK, the most advanced RISC-V designs include superscalar and out-of-order execution (e.g. comparable to a MIPS R10000 (1995) or an Intel Pentium Pro (1995)), while the RISC-V cores for IoT are comparable to a typical single-instruction-per-clock RISC CPU, e.g. the MIPS R3000 (1988).

Success in IoT will depend not only on a cheaper price due to no royalties, but also on the "ecosystem": peripherals, buses, etc. Running Linux is a huge start, so I have no doubt it can be a success in this field.

Regarding use in mobile and desktop, it will have to wait until SIMD extensions are introduced and software is adapted (e.g. ffmpeg/libav including RISC-V SIMD assembly implementations for the codecs).

Anyway, realistically, for RISC-V to get enough traction, some big player would have to bet on it, which is currently highly improbable, unless some Apple/Samsung/Huawei/Google gets crazy enough to do it.


>Regarding the use in mobile and desktop, it will have to wait until SIMD extensions are introduced

I'm little confused about this talk and comments in here relating to open source in high performance applications (desktop, servers etc).

The RISC-V ISA is open source, I get that. There can also be open source Verilog/VHDL designs available. They are probably good for small low performance IoT applications and old processes.

But there is a long way from ISA to silicon in high-performance VLSI processor design for new processes and foundries. Architecture, logic synthesis, timing analysis, floor planning, routing and placement, and an ungodly amount of testing... I don't see anyone spending hundreds of millions on chip design for the RISC-V ISA and open-sourcing it.

In other words, if you want a competitive RISC-V chip, some company must spend hundreds of millions to develop their own RISC-V chip architecture, and they will not open-source it.


> In other words, if you want a competitive RISC-V chip, some company must spend hundreds of millions to develop their own RISC-V chip architecture...

I wonder if a country like Russia, wanting something that they can deploy without worrying that the NSA & Intel are cooperating on state spying, will eventually try and take something like this and bootstrap a domestic semiconductor line of business.


The Russians have built their own processors for decades. They have used SPARC instruction sets, and they also have their own VLIW Elbrus 64-bit ISA; the processors are made by TSMC on a 28 nm process. It has Intel x86 compatibility via system-level dynamic translation.

Just adopting a new Instruction Set Architecture (ISA) is not going to magically boost a semiconductor business.


The Russians have Baikal (T1), the Chinese have Loongson (3A/B). Both are MIPS based. Unfortunately, even though both announced that there will be consumer hardware available over a year ago, it did not happen. So I'm not too optimistic regarding RISC-V.


Nvidia would be perfect


NVidia is busy with their own ARM designs.


NVidia are using RISC-V in their GPU designs.


Where can I read more about it?



I want to buy RISC-V, both to play with and to support the cause. What are my options like and should I buy something now or wait for the next generation?


Apart from buying an actual dev kit like the HiFive1 from SiFive, a great way to get into it is to buy SiFive's FPGA kits

https://dev.sifive.com/freedom-soc/evaluate/fpga/

The FPGA board can be used for other things too. If you are really adventurous, I'd suggest buying an FPGA board with a better chip so you can fit larger IP blocks in the future. It will work perfectly fine as a replacement for the FPGA kit I've linked to above.

My suggestion would be this[1]. It has a pretty large LUT count so you can go nuts. The RAM and Ethernet will be pretty useful if you want to run Linux[2] and test out stuff. It'll be a bit hard to run Linux on it right now, though.

On the other hand, you can choose a Parallella board[3] which comes with an FPGA chip along with a new (soon to be retired) arch called Epiphany. Here[4] is a GSoC project which runs Linux on that board using the FPGA.

[1]: http://store.digilentinc.com/nexys-4-ddr-artix-7-fpga-traine... [2]: https://github.com/riscv/riscv-linux [3]: https://www.parallella.org/ [4]: https://github.com/eliaskousk/parallella-riscv


I see that what they recommend for the Freedom platform

https://www.sifive.com/products/freedom/

today, is a dev board

https://dev.sifive.com/freedom-soc/evaluate/fpga/

that costs around 3500 USD:

https://www.avnet.com/shop/us/p/kits-and-tools/development-k...

Does anyone here know what kind of "classic PC" performance one is likely to get out of a board like that? Could you get on the order of a low-end PC (~500 dollar SoC / netbook with real gigabit ethernet, SATA 6Gb/s and USB 3) from something like that, if paired with a reasonable CPU design?


The $3500 board is a lot, yes. There is also a $99 "Arty" board that is enough to run what is in the HiFive1, and I think even to add an MMU and run Linux. The $3500 one has enough space to do multiple cores, FPU and so forth as well.

Both the $99 and $3500 FPGA boards run a Rocket CPU at 65 MHz, which is pretty slow, though faster than you get with a software cycle-accurate simulator on a workstation.


A somewhat speed-optimized CPU IP core [1] runs at 600 DMIPS on a Virtex-7. I would hope that SiFive is in the same ballpark, but it's probably slower.

Even in a best-case scenario, this is nowhere near a low end consumer PC.

edit: For reference, my 5-year-old ThinkPad X230 does around 7400 DMIPS per core.

[1] - https://www.xilinx.com/products/design-tools/microblaze.html
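For a rough sense of the gap, the figures quoted above work out as follows (toy arithmetic; both numbers are approximate):

```python
# Back-of-the-envelope comparison using the figures quoted above.
fpga_soft_core_dmips = 600       # speed-optimized soft core on a Virtex-7
laptop_dmips_per_core = 7400     # ~5-year-old ThinkPad X230, per core

ratio = laptop_dmips_per_core / fpga_soft_core_dmips
print(f"laptop core is roughly {ratio:.1f}x faster than the FPGA soft core")
```

So even per core, an aging laptop is more than an order of magnitude ahead of an FPGA prototype, before counting core count, caches, and memory bandwidth.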


Well, that actually sounds a lot better than I feared - I have an aging Acer Aspire One that works fine as a lightweight Linux desktop.


People have mentioned the SiFive option, but that's a microcontroller. If you were looking for a 64-bit processor implementation without having to use an FPGA...

It looks like the first dev board capable of running Linux will likely come from the lowRISC project.

"We are expecting to crowdfund an initial instantiation of the lowRISC platform during the course of 2017" http://www.lowrisc.org/


I'll buy a lowRISC as soon as they have something for sale (and I have one of the first HiFive1s). But at the rate both organisations are progressing, I expect to be able to get a quad-core 1.6 GHz 64-bit RISC-V "Pi-like" board from SiFive before lowRISC. Possibly even this year. I've gathered from other things they've said that they're sampling wafers with that SoC about now.

Ideally, they'll both succeed.


You can buy the HiFive1 board (I have a few and they're swell), but it's more of a microcontroller than an application processor; the board is Arduino compatible. If you want something more like a Raspberry Pi or bigger, you'll have to wait a bit longer.

The Freedom Unleashed platform is coming down the pipe, and you can pre-evaluate it on an FPGA board (albeit at lower frequency, and fairly pricy since it's a complex design and requires a fairly large FPGA to prototype). I won't pretend to know when they'll have a standard Freedom U500 SoC dev board, but it will have multiple cores, PCIe 3.0, USB 3, Gigabit ethernet, and DDR4 compatibility according to their website[1].

[1]: https://www.sifive.com/products/freedom/


(Public service announcement: https://riscv.org/forum/)

Others have mentioned the SiFive chip, but if you want to boot Linux then your best option currently is running Rocket on an FPGA kit. I expect that we'll soon have more options. If you just want to play around, then I'd recommend Spike or Fabrice's https://bellard.org/riscvemu/


You can buy one from SiFive: https://www.crowdsupply.com/sifive/hifive1 although it's going to be a while until a RISC-V chip will be packaged on a board with USB, Ethernet, SATA, and all peripherals that make an ATX motherboard or a Raspberry Pi worth owning.


The HiFive1 from SiFive for $60 is what I got.

https://www.sifive.com/products/hifive1/


For an FPGA to play around with, the Lattice ECP5 is really nice: http://www.latticesemi.com/en/Products/FPGAandCPLD/ECP5andEC... . PCIe and DDR3, in a $99 dev board.


SiFive


RISC-V is at least 10 years away from competing with x86 and ARM. It is just now getting to the point where it can power Arduino-class hardware. Long way to go... but it looks promising.


The Freedom E310 chip in the HiFive1 board is shipping today and is already competitive with embedded ARM: https://www.crowdsupply.com/sifive/hifive1/#comparisons. It's comically faster (i.e., 100x) than the AVR chip in the Arduino Uno board, but the similar form factor makes it cheap & easy for early adopters to play around with.

I'd bet that within five years a good proportion, even the majority, of Amazon & Google servers, will be running on RISC V chips.


> I'd bet that within five years a good proportion, even the majority, of Amazon & Google servers, will be running on RISC V chips.

Can I take the other side of that bet?

I'm mildly bullish on RISC V, but displacing x86/AMD64 in the data center in 2 generations (servers generally have a 2 year refresh cycle) seems fairly optimistic.

ARM hasn't managed it, despite some good attempts. Nor has AMD, nor Power8/9.


> It's comically faster (i.e., 100x) than the AVR chip in the Arduino Uno board

Beating AVR cores on performance is like stealing candy from a baby. No one hands out prizes for that.


It is not competitive at all yet. In the world of embedded systems, processing power is not everything. The CPU takes up less than 10% of the silicon on the chip. Look at what this one has: a single SPI, some PWMs, not even I2C, no hardware timer, no ADC/DAC. So it has to rely on software implementations of peripherals, and the lack of hardware peripherals will make it fail real-time requirements. It may be slower than even an Arduino in some operations. Moreover, where are you going to use 320 MHz with a tiny 16 KB of RAM? This chip might be useful for computation such as DSP or a machine learning unit, but it is practically useless as a microcontroller for now. There is still a long way to go before it takes off.
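To make the "software implementation of peripherals" point concrete, here's a toy bit-banged SPI master sketched in Python (the pin functions are hypothetical stand-ins for GPIO register writes, not any real API): every clock edge costs CPU instructions, and an interrupt landing mid-transfer stretches the clock unpredictably, which is exactly the real-time hazard.

```python
def spi_transfer_byte(byte_out, set_mosi, pulse_sck, read_miso):
    """Bit-bang one SPI byte, MSB first (mode 0, illustrative only).

    set_mosi/pulse_sck/read_miso stand in for GPIO register accesses;
    on real hardware each call burns CPU cycles, so software SPI is both
    slow and jittery compared to a hardware SPI peripheral.
    """
    byte_in = 0
    for i in range(7, -1, -1):
        set_mosi((byte_out >> i) & 1)            # drive the data line
        pulse_sck()                              # clock high, then low
        byte_in = (byte_in << 1) | read_miso()   # sample the slave's bit
    return byte_in

# Loopback "slave" for testing: MISO echoes whatever MOSI last drove.
last = {"mosi": 0}
echoed = spi_transfer_byte(
    0xA5,
    set_mosi=lambda v: last.__setitem__("mosi", v),
    pulse_sck=lambda: None,
    read_miso=lambda: last["mosi"],
)
assert echoed == 0xA5
```

A hardware SPI block shifts those eight bits out on its own while the CPU does something else; the software version above occupies the CPU for the whole transfer.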


The big problem with the E310 is that after you've prototyped something and want to design a custom board, you've got a much more difficult task ahead of you. The AVR needs significantly fewer support chips, as it has onboard flash, pretty large voltage tolerances, ADCs, etc. It gets even murkier when you start considering boards from other companies like STMicro, which have Cortex-M7s, are competitive in performance, and go for half the cost of a HiFive1. It's interesting if you want to poke around with RISC-V for the sake of poking around, but I wouldn't want to start a serious project on it at the current time.


That sheet doesn't mean anything; they are comparing a microcontroller clocked at 300 MHz with microcontrollers clocked at 30 MHz.

Of course they have better performance... Misleading marketing material.


Is there a power consumption comparison?


(I know there is DMIPS/mW, but looking for a direct power consumption comparison)


Where do you get this from? It will still take a bit until the privileged spec is finalized, but what is holding RISC-V back then?


Implementations that are competitive with current x86 and ARM at the high end.


As someone not knowledgeable about hardware, I really enjoyed reading Agner Fog's message board where he and others discuss creating a new open source instruction set: http://agner.org/optimize/blog/read.php?i=421

RISC-V is discussed some, and part of the discussion is how to improve it.


I look forward to a RISC-V proc powering my laptop with some ridiculous Nvidia GPU. We must escape x86 :(


There is some gentle irony in wanting an open source ISA CPU paired with a GPU so proprietary the ISA is not even public knowledge.


I'm looking forward to the day we can operate dedicated, proprietary GPUs like black boxes we can turn on and off at will as optional accelerators. :-)


"The journey of a thousand miles..."


Twice the battery life and all the computing power :) sounds like heaven


Atm, ARM is likely a better fit for that.


I wish ARM made all their manufacturers use UEFI. Microsoft did for their phones, but the bootloaders are locked. All the other ARM devices are SoCs with random shit connected to random pins and non-upstreamable kernels. It's not a platform.

For a new design to work, it really needs to be a solid platform, with a spec and kernels that don't need per-device images to boot and run. Device trees help, but they're just not enough.


> SoCs with random shit connected to random pins and non-upstreamable kernels

The non-upstreamable part is due to a variety of factors, including but not limited to dubious code quality, a tendency to reinvent things in-house, and unwillingness to share code with competitors and develop common frameworks. UEFI won't help with these particular issues.

As for random shit connected to random GPIOs (or mapped to random memory), you probably really want ACPI. FWIW, ACPI is said to be coming to ARM, and last time I heard, Linux developers weren't exactly excited - ACPI means running opaque vendor-provided bytecode which pokes I/O registers behind the OS's back and, if x86 is anything to go by, contains countless bugs which can't be fixed without the vendor releasing a firmware update and users applying it, so you end up with a collection of machine-specific workarounds anyway.


Devicetree is how this is all typically managed for Linux on ARM.


ARM has the beginnings of that platform coming together, just in the server space. It's pretty much implementable:

https://en.wikipedia.org/wiki/Server_Base_System_Architectur...


I'd rather they use Coreboot. UEFI is definitely coming "locked" to virtually all ARM-based devices, and my guess is Microsoft is at fault there, perhaps simply by making locked bootloaders the default, with the OEMs not bothering to change that.


Right now, ARM SBCs actually make functional desktops. There's very little you can't do on a Raspberry Pi other than run closed software.


You can actually run Windows on a Raspberry Pi. Having said that, they make for horrible machines to do actual work on even with desktop Linux.


The "Windows" you can run on a Raspberry Pi (ignoring anything involving X86 emulation) isn't a general-purpose OS. It's more like a deployment target for UWP/IoT apps that you write on a Windows 10 PC.


It's written in C. Why not a general purpose OS?


There are important people at Microsoft who basically get very confused about what "x86" or "arm" means. There was a contingent that decided allowing third parties to port win32 apps with a recompile was off the table (despite it being very doable - just look at explorer or notepad running on surface rt), and to this day whenever many MS folks make public statements about windows on arm they sound hopelessly confused.


Important people don't know what underlying hardware their software runs on? I mean it's not that hard to figure that out... I hope they're not just parroting what others tell them.


I could never figure out if it was stupidity or willful ignorance. Probably elements of both.

So I quit that company.


Vim and gcc run fine; I can compile kernels and busybox and work on my personal software projects and homework on mine. They don't run GNOME or HTML5 browsers as well as your laptop might, but nothing runs those well anyway.


t-try installing Chrome on it...


Not Chrome proper but Chromium works (surprisingly) well on Raspberry Pi 3.


well... can you load a JS-heavy site?


lynx for life yo!


Pretty sure that this depends on the definition of 'functional'. My 386DX40 was functional, but I'm glad I have 8 i7 cores now.


Odroid is probably a better choice for a little extra oomph you would want on a desktop without costing that much more: http://www.hardkernel.com/main/main.php


Isn't ARM the RISC solution that is already here? It's already dominating mobile, and Microsoft is soon to be announcing x86 apps running on Windows on ARM.


ARM is too license-encumbered. If we're going to switch away from x86, it might as well be to something that admits free development and production.


Things are not that straight forward, I wrote a better comment here: https://news.ycombinator.com/item?id=14284409


I'm aware that individual implementations of the ISA are nonfree. There's still a huge benefit to using the free ISA, because we can target nonfree hardware while necessary and our software will just work on free hardware when it's available. There are already some free low-power designs, and if RISC-V gains traction I expect some free high-power designs to come out of the woodwork.

A robust open processor ecosystem could be the impetus for a huge improvement in open-source EDA tooling and public EDA research. Even now, the absolute cutting edge in HDL research is free and open-source (and 100x more user-friendly than anything to come out of Xilinx or Altera), and I suspect the same would happen for silicon-targeting synthesis tools if there was a demand for it.

The idea would be to get 90% of Intel's performance for 5% of the work. Intel spends insane amounts of money on manual routing and optimization, extensive testing at every stage of prototyping and production, stuff like that. The creative laziness of the free software/hardware people could probably improve on that very substantially with minimal losses to the finished product.


I wonder how many of those fit in a Zedboard. (Not going to ask how many hours of tool fighting that would require, though.)


Running on a Zedboard is quite well documented; it only took me a couple of hours to do from scratch following their instructions: https://github.com/ucb-bar/fpga-zynq


I meant in a SIMT fashion, more like a GPU


I see a lot of people in this thread getting blinded by the "open-source" term attached to the headline of this article.

First of all, RISC-V can mean more than one thing: it can refer to the architecture, which is in fact open and free to use, or to an implementation of that architecture, which will not necessarily be free or open source. For example, check out SiFive, the company which was promising free and open source implementations of RISC-V: http://www.eetimes.com/document.asp?doc_id=1331690

"“A year ago there was quite a debate if people would license a core if there was a free version, [but now] we’ve seen significant demand for customers who don’t want an open-source version but one better documented with a company behind it,” said Jack Kang, vice president of product and business development at SiFive."

At the end of the day, they just decided to follow ARM's path by charging license fees for their CPUs.

Secondly, when people say that RISC-V is "free" and "open-source" and that it will allow companies to create cheaper and more open hardware, that is just an illusion. There are many more things on an SoC besides the CPU (like memories, communication buses, GPUs, power management processors, and so on). Cutting costs on the CPU will not make the cost of an SoC go down to zero; the CPU is just a small part of the puzzle. With RISC-V, you either need to implement the CPU yourself (which will be extremely expensive and time consuming) or you will have to find someone who provides CPU cores already implemented. And of course you need support and guarantees that the cores you bought will work on silicon. There will always be a huge cost associated with shipping CPUs; you can't escape from that.

You can already imagine that open-source hardware doesn't play by the same rules as open-source software, it's a completely different game with completely different rules.

And people speak of ARM's royalties as if they were a very bad thing. Truth be told, the royalties you pay ARM can be a very good deal, taking into account that you get access to silicon-proven CPU cores, support from some of the best engineers in the industry, and automatic coverage by the many CPU patents that ARM owns. And you can even choose how you want to pay for ARM's CPU licenses: you can license an already-implemented CPU design from ARM, or you can buy an architectural license and implement your CPU completely from scratch (this is what Apple and Qualcomm are currently doing). You don't need to be completely tied to ARM. Even with the royalty fees you can choose whether to pay a big upfront license fee with low per-device royalties, or a low upfront license fee compensated by higher per-device royalties.

There is a lot of misinformation going around about the possibilities of RISC-V, much of it coming from people involved in the development of the spec. Don't be fooled by the buzzwords "open-source hardware" and "free hardware".


> You don't need to be completely tied to ARM.

The point is that if you want to use the ARM ISA you have to pay ARM. Not so with RISC-V. Anyone is free to fab a RISC-V chip without paying royalties.


A lot of the interest behind open ISAs like RISC-V isn't from a cost perspective, because most people recognize that the cost differences are negligible.

They want to avoid the Intel scenario, where a company controls an ISA that industries depend on, and can force their ideology on implementing silicon for whatever ends they want.

ARM, being proprietary and controlled by the company can do the same. If you want a specific part built, you need the permission of ARM to do anything involving the ISA.

If you are going to build all civilization on the top of an ISA, it can't be reasonable to subjugate that ecosystem to the whims of one board of directors.


This is one of the few sensible comments in this thread and explains exactly why RISC-V is important.


Touche!


You are talking as if you could download RISC-V from the internet and send it to the foundry for mass production without additional cost or effort.

RISC-V is just a document, not an implementation. There are many costs involved in designing a CPU that end up costing much more than a license fee.

Check this comment: https://news.ycombinator.com/item?id=14284450


In that comment you say "RISC-V is thrown around like if it's an already working CPU but it is not" which is false since there are multiple existing, functioning RISC-V implementations in silicon (https://www.sifive.com/products/freedom/), FPGAs and software (https://bellard.org/riscvemu/).

Most people on HN should understand the difference between an ISA specification and its implementation in silicon.


The tone of your comment is very unfortunate, taking into account its content.

The second link you showed is just a simple FPGA implementation; you can't really use it for ASIC designs.

The first one is a completely different implementation by a different company. And they are not really providing the whole source. I challenge you to find a place to download their designs yourself. I will be waiting.


The second is real silicon. I have a couple. But all this talk is missing the point. For companies looking to design a product for which the processor is a means to an end, RISC-V holds a tremendous time-to-market advantage and complete control of your own destiny. That, more than anything, will matter for the first wave of adopters. (RV has other technical merits that you for some reason discount.)


How about this: http://www.pulp-platform.org/ ? It's still in development, but the plan is to release the full CPU. You still need process/foundry-specific data of course (things like flash cells).


The GAP8 is a real product based on the PULPino. It's a CPU (more of a low-power embedded DSP/MCU), and a good one at that.

http://www.eetimes.com/document.asp?doc_id=1330188


PULP seems to be just a microcontroller, not a modern CPU. It's a good effort, but you wouldn't use it in a normal computer.


I believe this is incorrect. There seems to be a lot of confusion around RISC-V.

Only the RISC-V ISA and some simple (and relatively inefficient) architectural designs are free and royalty-free.

But when someone develops an efficient RISC-V microarchitecture, they are not going to provide it for free. If some company develops a RISC-V based architecture that is competitive with AMD, Intel or ARM, they are going to ask good money for it.


Money yes, "good money" that leverages a proprietary network effect, no.


After enjoying coding on the 680x0 in my youth and later being frustrated by x86, I welcome this new ISA. There is a design decision I'm curious about but could not find related information on: how did they come up with the names "x0, x1, x2..." for general purpose registers, instead of the more conventional "r0, r1, r2..."?


x for "fixed point". r for "register" is too vague.
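For what it's worth, assembly programmers mostly see the ABI names layered on top of the x-registers anyway. A sketch of the mapping as I recall it from the RISC-V calling convention (worth double-checking against the spec):

```python
# RISC-V integer register ABI names (from memory; verify against the psABI).
ABI_NAMES = {0: "zero", 1: "ra", 2: "sp", 3: "gp", 4: "tp", 8: "s0", 9: "s1"}
ABI_NAMES.update({i: f"t{i - 5}" for i in range(5, 8)})     # x5-x7   -> t0-t2
ABI_NAMES.update({i: f"a{i - 10}" for i in range(10, 18)})  # x10-x17 -> a0-a7
ABI_NAMES.update({i: f"s{i - 16}" for i in range(18, 28)})  # x18-x27 -> s2-s11
ABI_NAMES.update({i: f"t{i - 25}" for i in range(28, 32)})  # x28-x31 -> t3-t6

assert ABI_NAMES[2] == "sp" and ABI_NAMES[10] == "a0" and ABI_NAMES[31] == "t6"
```

So "x2" vs "r2" rarely shows up in practice; you write `sp`, `ra`, `a0` and so on.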


What is so special about reduced ISAs? What are the differentiating factors between them?

I mean, they are so reduced that the ones I've seen are largely the same few logic ops. RAM access and interrupts might differ some, but a) memory access should follow the implementation and b) essentially everything else is memory mapped (AVR, C51, PIC).


To fully appreciate this, one would need much experience with microprocessor implementation. It's not reduced for the sake of being simpler or smaller, but because it represents a local optimum of perf/area for running C code, especially for small to medium implementations.

However, much like the Alpha, RISC-V has been carefully designed to be efficient to scale UP, that is, to superscalar out-of-order implementations.

Do read the spec and the footnotes; they are delightful: https://raw.githubusercontent.com/riscv/riscv-isa-manual/mas...
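One concrete payoff of the careful design you'll notice in the spec: the base instruction formats keep the register fields in fixed bit positions, which keeps decode simple even in wide superscalar front-ends. A sketch of assembling an R-type instruction from the RV32I field layout (funct7 | rs2 | rs1 | funct3 | rd | opcode):

```python
def encode_rtype(funct7, rs2, rs1, funct3, rd, opcode):
    # RV32I R-type: funct7[31:25] rs2[24:20] rs1[19:15]
    #               funct3[14:12] rd[11:7] opcode[6:0]
    return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | \
           (funct3 << 12) | (rd << 7) | opcode

# add x3, x1, x2  ->  funct7=0, funct3=0, opcode=0b0110011 (OP)
insn = encode_rtype(0, 2, 1, 0, 3, 0b0110011)
assert insn == 0x002081B3
```

Because rs1, rs2 and rd sit in the same bits across formats, a decoder can start reading the register file before it has even fully classified the instruction.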


So many RISC options and almost none for CISC. If you want yet another low-power chip, then it makes sense. If, however, you want to get serious about efficient cache usage and getting more work done per instruction executed, just do yourself a favor and stop ignoring the costly experimental results that the industry already paid for.


This just doesn't match the research I've read. CISC wasn't about speed of execution, and x86 isn't an asset there. (As I understand it, Intel's expertise in compilers, emulation of CISC by RISC, and manufacturing are wonderful compensating factors.)



An open-source, non-licensed, low-cost embedded processor -> SiFive sells it for a $600,000 minimum.


I always thought ARM chips were RISC. Shows what I know.


That's how they started out, but there is always pressure to add new instructions for niche markets. And since there isn't any coherent way to implement extensions, they have to pollute the entire ISA. RISC-V was carefully designed to allow for extensions.
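That extension scheme shows up directly in RISC-V ISA strings like "rv32imac". A toy parser as an illustration (my own sketch, not an official grammar; real strings also carry version numbers and multi-letter "Z" extensions, which this ignores):

```python
def parse_isa_string(isa):
    """Split e.g. 'rv32imac' into its base and single-letter extensions."""
    isa = isa.lower()
    if not isa.startswith(("rv32", "rv64")):
        raise ValueError(f"unsupported base: {isa}")
    base, exts = isa[:4], isa[4:]
    exts = exts.replace("g", "imafd")  # 'G' = general-purpose bundle IMAFD
    return base, list(exts)

assert parse_isa_string("RV32IMAC") == ("rv32", ["i", "m", "a", "c"])
assert parse_isa_string("rv64g") == ("rv64", ["i", "m", "a", "f", "d"])
```

The point is that a chip advertises exactly which optional pieces (M for mul/div, A for atomics, F/D for float, C for compressed) it implements on top of the frozen I base, instead of the whole ISA mutating over time.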


No, it was never a [real] RISC. Even the very first version included predication and a shift in most instructions. They weren't even fast. The event that changed everything was when DEC Alpha engineers (for reasons I don't know) decided to use their expertise to create StrongARM. This ironically ended up at Intel and was renamed XScale, before being spun out. (EDIT: grammar & typos)


Would there be an advantage to creating not just a reduced instruction set but a minimal one, and letting the compiler do the rest? Especially when you can add a lot more cores, so that mul becomes a CPU core with a counter and add, for example.


No.



