RISC-V port merged into Linux 4.15 (groups.google.com)
296 points by rwmj on Nov 16, 2017 | 72 comments



I love how, once this exists in all the standard tools, anybody can make a new chip and practically instantly run huge amounts of software on it, with the right base to build on. A free ISA is a great interfacing point: innovation can go on above and below it mostly independently, with no licences or anything required. A good example of permissionless innovation.

The road is long for RISC-V but I think the project is progressing about as fast as one can hope for. Thanks to everybody who helps with this.


What about the boot sequence and driver enumeration? How is this handled in RISC-V? This has traditionally been one of the major obstacles with ARM: you need different kernels for two ARM devices with nominally the same core because of initialisation differences.

Devicetree makes this better but not 100% as simple as the old "legacy" PC boot sequence was. Ironically these days PCs are quite hard to boot too with UEFI.


Heh. Device Tree only deals with a fraction of the problem.

Here's what I think would help: (warning: lots of work!)

1. Large database of free peripherals and foundational blocks. Not just UARTs and SATA controllers, but also network interconnects (crossbar), SDRAM controllers, and all the other bits that go into a modern System-On-Chip (SoC). Power management is a huge pain in general, and is widely different among chip families even from the same vendor.

1.A. Obviously, the above implies a decent set of open-source drivers for multiple operating systems for the peripherals. At the very least, you need support for a bootloader, a real-time OS, and Linux. Preferably with no binary blobs.

2. A standard for enumeration of these on-chip peripherals. There should be a way to query the chip itself to see what is included in the SoC, what device addresses they have, what options are enabled (like how many interrupts), and how the peripheral lines (if any) are connected (IO mapping).

3. Some widely-accepted standard for storing the board configuration as well. This could just be an I2C EEPROM, but it has to list everything connected to the SoC and how: this I2C controller is mapped to these pins, and has this accelerometer connected at this I2C address. And all the hardware manufacturers (who produce the boards) have to be convinced it's a good idea to build this extra cost (and space) into their products. (A rough sketch of what such a board descriptor could look like follows below.)
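
To make item 3 concrete, here's a rough C sketch of what such an EEPROM board descriptor could look like. Every name, field, and encoding here is invented purely for illustration; it is not any existing standard.

  /* Sketch of a made-up board-description blob stored in an I2C EEPROM;
     every name and field here is invented purely for illustration. */
  #include <stdint.h>

  struct board_device {
      uint16_t bus_type;     /* e.g. 1 = I2C, 2 = SPI, 3 = GPIO-attached */
      uint16_t bus_index;    /* which controller on the SoC */
      uint16_t address;      /* I2C address, SPI chip select, ... */
      uint16_t part_name;    /* index into a string table naming the part */
      uint32_t pin_map;      /* which SoC pins this device is wired to */
  };

  struct board_blob {
      uint32_t magic;        /* identifies the format */
      uint32_t version;
      uint32_t n_devices;
      struct board_device devices[];  /* "this accelerometer at this address" */
  };

Firmware (or the kernel) would read this out of the EEPROM at boot and translate it into whatever the OS uses internally (a device tree overlay, platform devices, and so on).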

Only after all that would you have a fighting chance to boot a relatively generic kernel and have it actually run. And only then would the end-user have a fighting chance to upgrade the software after the manufacturer has ceased support.

The cross-platform situation with PCs is light-years ahead of where we are in the embedded space.


Step 3 is often achieved by having the firmware append a device tree blob to the kernel which contains all of the board setup. I think it's pretty widely adopted?


Yes, it is headed in that direction. There's more work needed in areas like uniform conventions for configuring peripheral pins and such, in my opinion.

And then there's clock configuration, which is kind of a mess, because everything is hooked up differently on different SoCs that I've seen.

It would also be nice if vendors always shipped up-to-date device tree source files for any board they ship.


For those of us who don't know what is involved in power management, would you mind giving a quick explanation of what it involves and why it's painful? What power is there even to manage unless you want to put the computer to sleep or shut it down? Or are those the sole things you are referring to?


Not the OP, but there are many (tens of) "power domains" on the system, and if you care about power/battery life then you want to power each of them down when they're unused, automatically, even while the rest of the system stays running. i.e. why pay the battery life cost for powering a USB hub when there are no USB devices plugged in, or for a DSP when there's no sound being played right now.

But the calculation of when it's safe to power down any of the power domains can be very complicated -- you can't turn off a power supply unless everything that depends on it is unused, so there's a subtle and board-dependent graph/dependency problem to solve.
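
A toy C sketch of that dependency problem, with all names invented: each domain keeps a use count, and using anything keeps its whole supply chain powered.

  /* Toy model: a power domain may only be switched off once nothing that
     depends on it is still in use. All names here are invented. */
  #include <stddef.h>

  struct pm_domain {
      const char *name;
      int usecount;               /* active consumers in or below this domain */
      struct pm_domain *parent;   /* the supply this domain depends on, or NULL */
  };

  static void power_off(struct pm_domain *d)
  {
      (void)d;   /* a real driver would poke the PMIC/regulator here */
  }

  static void domain_get(struct pm_domain *d)
  {
      for (; d; d = d->parent)
          d->usecount++;          /* using a child keeps all its parents up */
  }

  static void domain_put(struct pm_domain *d)
  {
      for (; d; d = d->parent)
          if (--d->usecount == 0)
              power_off(d);       /* last user gone: safe to cut this supply */
  }

Linux's generic power domains work roughly along these lines; real SoCs complicate it with domains that have multiple parents, shared regulators, and latency constraints, which is the subtle, board-dependent part.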


Piggybacking on this—what can make it worse is that some chips might not properly power back on reliably, in spite of the manufacturer's claims.


Awesome, thank you!


In addition to what cjbprime mentioned...

There's a lot going on in a modern SoC (or even one from ten years ago) with regards to power management. Just the processor cores themselves can run at different voltages and frequencies, and these are tied to the software workload.

And then you've got the big.LITTLE stuff from ARM now, which is asymmetric multiprocessing, where which cores you use depends on the workload.

And how it works and the interface to it change from vendor to vendor, sometimes from chip to chip. So it usually isn't implemented all that well, nor is it maintained for very long.

If (big if) we had more standardized interfaces to this functionality, we could use common drivers that can be upgraded independently.
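
Purely as a sketch of what "standardized interfaces" could mean here (everything below is invented for illustration): a vendor-neutral table of operating points plus a single chip-specific hook, so one generic driver could own the scaling policy.

  /* Invented example of a vendor-neutral performance interface: a table of
     operating points plus one hook, instead of a custom driver per chip. */
  #include <stdint.h>

  struct operating_point {
      uint32_t freq_khz;   /* core clock at this point */
      uint32_t volt_uv;    /* supply voltage required for that clock */
  };

  struct cpu_perf_iface {
      const struct operating_point *opps;
      unsigned n_opps;
      /* the only chip-specific part: actually program clock and regulator */
      int (*set_point)(unsigned cpu, const struct operating_point *op);
  };

Linux's OPP tables and cpufreq drivers already head in this direction; the pain is that the set_point half is different register-poking on every chip, and often only the vendor knows the sequence.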

Like I said, the desktop people may not realize how easy they've got it, where most things are accessed via a standardized bus interface (PCIe or USB). In the embedded world there's no virtuous cycle of software and interface re-use like there is in desktop land.


Yeah. And arguably this is an intentional consequence of ARM's business model, where the SoC vendors all license the same IP and then need to "differentiate" (create incompatible improvements) against each other.


> PCs are quite hard to boot too with UEFI

Let's hope RISC-V can avoid something stunningly complex like ACPI while they are at it. You might be trying to write a secure OS of well-reviewed code, then suddenly find that to reliably shut down a PC you need ACPI, which drags in 100,000 lines of code including an interpreter capable of accessing arbitrary physical addresses. (Or you just try all the shutdown mechanisms you've ever heard of and hope one of them nails the machine you are running on.)

I wonder if hardware is easy enough now that a platform could require that all optional hardware respond with a UUID during an enumeration phase. That would have been a burden back in early ISA days when things were as simple as a couple registers on a card, but maybe now it isn't. Then the OS can handle the ones it knows about and leave the rest alone.
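
Something like this, maybe; a made-up C sketch where nothing is a real standard, just to show the shape of the idea:

  /* Made-up enumeration phase: every optional device reports a UUID and the
     register window it was assigned; the OS drives what it recognises and
     leaves the rest untouched. */
  #include <stdint.h>
  #include <string.h>
  #include <stdio.h>

  struct enum_entry {
      uint8_t  uuid[16];   /* identifies the device type */
      uint64_t base;       /* MMIO window assigned to it */
  };

  static const uint8_t KNOWN_UART[16] = { 0x11, 0x22, 0x33, 0x44 /* ... */ };

  static void handle_enumeration(const struct enum_entry *e, unsigned n)
  {
      for (unsigned i = 0; i < n; i++) {
          if (memcmp(e[i].uuid, KNOWN_UART, sizeof KNOWN_UART) == 0)
              printf("known UART at %#llx\n", (unsigned long long)e[i].base);
          /* unknown UUID: leave it alone, some later driver may claim it */
      }
  }

The table would be the easy part; the hard part would be getting everyone to agree on a registry of UUIDs.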


Learning to write a better ACPI script for my Lifebook was a big accomplishment for me, but it ultimately led me to a 13” MacBook Pro.

I suspect it’s such a hidden part of the whole process of using a computer that few repercussions are felt. Since there isn’t enough negative PR value, it changes only slowly. Though it sure does suck for portability.


It would be pretty cool to have a DSDT-to-kernel-C compiler, allowing us to port our laptops over to native power management over time.

We should be able to write gpio chip drivers for the actual hardware which the interpreter normally targets and eventually eliminate ACPI from our systems.
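
As a purely hypothetical illustration (fictional addresses and names, and assuming the AML method boils down to a few MMIO pokes), the interpreted method could become a tiny native function:

  /* Invented example: what an interpreted ACPI method might reduce to once
     ported to a native driver. Addresses and bit names are fictional. */
  #include <stdint.h>

  #define GPIO_BASE   0xf0001000UL    /* fictional GPIO controller base */
  #define GPIO_OUT    0x10            /* fictional output-value register */
  #define WIFI_EN_BIT (1u << 5)       /* fictional "Wi-Fi power" line */

  static inline uint32_t mmio_read32(uintptr_t addr)
  {
      return *(volatile uint32_t *)addr;
  }

  static inline void mmio_write32(uintptr_t addr, uint32_t val)
  {
      *(volatile uint32_t *)addr = val;
  }

  /* Native replacement for the interpreted "power off Wi-Fi" method. */
  static void wifi_power_off(void)
  {
      uint32_t v = mmio_read32(GPIO_BASE + GPIO_OUT);
      mmio_write32(GPIO_BASE + GPIO_OUT, v & ~WIFI_EN_BIT);
  }

Of course plenty of DSDT methods do more than poke a GPIO, which is presumably a big part of why the interpreter is still with us.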


> I suspect it’s such a hidden part of the whole process of using a computer that few repercussions are felt.

Not only hidden, but mostly does its job these days, and has been ignored by everyone so they can get on to writing "Uber but with ML".

There's a shocking amount of stuff like that, made decades ago and still there just because... well, it's still working.


The plot of one of Vernor Vinge’s books involves a man who has been in cryo sleep for a couple hundred years. When he wakes up their (essentially) IoT devices are still running his code under layers and layers of abstraction. He promptly hijacks all of them and ultimately uses them to avert a conspiracy (and substitute an ethically more defensible one of his own).


What about DTB and standardizing something like coreboot's libfirmware?

Should it really be up to every hardware block to support full introspection?

Doesn't that start to turn into PnP ISA?

The EEPROM solution starts to sound a lot like how DIMMs are identified and configured.


> as simple as the old "legacy" PC boot sequence was.

Some of the Hackintosh folks know how false this can be. As soon as you step out of real mode / VESA graphics, all bets are off: those ACPI tables can be a real PITA, and you may have to fix stuff by extracting, editing and recompiling your SSDT/DSDT, at which point you generally wonder how the stock ones worked at all in the first place: missing symbols, broken CPU state tables... (well, deep inside you know, Windows/Linux are doing a hell of a job working around all that mess)


There are still many issues left and it will not be easy to figure out all these things. It will have many of the same problems as ARM but hopefully the community can work on these.


For servers, they're looking at ARM SBSA as a model. There's already some starting work on UEFI for RISC-V (done by HPE).


> There's already some starting work on UEFI for RISC-V

That makes me deeply sad.


Why? UEFI is open source. It's standardized, and provides standard interfaces to boot-time device drivers and to operating systems above. It provides a reasonable command line (TBH much preferable to u-boot). It works well with Linux and Windows. It's complicated, but it's solving a complicated problem.


Parts of this presentation have some explanation: https://schd.ws/hosted_files/osseu17/84/Replace%20UEFI%20wit... (obviously you can ignore Intel x86-specific stuff like the ME).

OpenPOWER apparently does something similar, i.e. a minimal firmware that loads a Linux kernel + simple userspace from flash.


On RISC-V the machine layer (like ME) will be completely open source, and so will UEFI. You're inventing a problem that doesn't exist.


Open source is a necessary but not sufficient criterion.

Leaving aside the issue of user-replaceable signing keys if some kind of measured boot system is in use, and the availability of documentation so you can replace the firmware if you want (which isn't really UEFI's fault), there are other issues with UEFI as well. Such as:

- Why does it take 8 minutes to boot a UEFI server vs. 17 seconds with NERF? Minnich's opinion is that it's the braindead way UEFI initializes stuff: e.g. if thingy A needs thingy B, it will initialize thingy B regardless of whether it has already been initialized, and so on, leading to a combinatorial explosion. Whereas Linux builds a dependency tree and initializes each thing only once, in the correct order (see the sketch after this list).

- UEFI option ROMs apparently mostly contain x86-64 code and not UEFI byte code. Some clever guys at SUSE managed to work around that on aarch64 by using qemu to run the option ROMs, and since the x86-64 and aarch64 data structure layouts are more or less the same (endianness, alignment etc.) they could share the memory space. I mean, it's an awesomely clever hack, but do we really want a Frankenstein thing like this to be the bright future we're striving for?

- UEFI is amazingly complicated. Say you want to load the kernel over HTTPS: you need a TCP stack, HTTP, TLS etc. Sure, Linux is also complicated, but I'm sure Linux is 1000x more battle-tested than the UEFI stack. And, if you're going to run Linux anyway, you're not increasing the attack surface by running Linux as your firmware & boot loader. And, if you're booting Linux from the ROM flash, you can use Linux native drivers to access the NIC, storage, USB, whatever, and don't need the kind of hacks mentioned above on non-x86.
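
Back to the first point about initialization order, here's a toy C illustration of the difference (purely schematic, not UEFI's or Linux's actual code): memoize each component's init and walk dependencies first, so nothing is brought up twice.

  /* Schematic only: initialize each component once, dependencies first,
     instead of re-initializing a dependency every time something needs it. */
  #include <stdbool.h>
  #include <stdio.h>

  struct component {
      const char *name;
      struct component **deps;   /* NULL-terminated list of dependencies */
      bool initialized;
  };

  static void init_once(struct component *c)
  {
      if (c->initialized)
          return;                        /* already brought up: skip */
      for (struct component **d = c->deps; d && *d; d++)
          init_once(*d);                 /* dependencies first */
      printf("init %s\n", c->name);
      c->initialized = true;
  }

The complaint above is essentially about the variant of this that has no "initialized" check, so shared dependencies get brought up again and again.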

Sure, using Linux as firmware & boot loader might be politically untenable to some proprietary OS vendors. But so what? Let them solve their own problems!



Exactly. UEFI is a bad idea implemented badly.


While it is true that a lot of software is available immediately, there is also software which is not. For example, PC games are nearly always restricted to x86. Performance of common non-x86 hardware might not be enough to run the latest, demanding games, but I don't see why games like Factorio, Stardew Valley, Lethal League, Don't Starve or Bastion couldn't run on less powerful hardware. Great games do not require demanding hardware.

One problem is that games are usually closed source and are rarely freed. Another problem is that there are no open graphics drivers for common single-board computers. The latter problem is being worked on by the amazing Mesa project. What can be done about the former?

I worry that games will be abandoned and never be freed. The copyright of movies eventually expires and they become a common good. What about closed binaries compiled for obsolete hardware and software?


This is going to be one of the most embarrassing chapters of world history in a few centuries.

"Oh yeah, all those old folk at the turn of the milennia had this thing called copyright and all their code was proprietary so we no longer can run or use 99% of software written between 1970 and 20XX".

"And there are these files where it was a felony to be able to access them!"

Wow those guys were sure dumb. Now all our information is open and shared and makes the world a better place and anyone can use it or improve it!

All I can say is: let's hope it's only until 20XX that we get over the facade of IP being worth the cost.


"This is going to be one of the most embarassing chapters of world history in a few centuries."

And it won't be that far from archaeologists being prevented from deciphering any ancient language. If today's copyright laws had existed in historic times we wouldn't have precious documents copied by monks, nor the Rosetta Stone. It's just dumb crazy how this works.


Is having copyright a concern? Even Linux has a copyright. And all GNU software. Do you want all forms of copyright not to exist?


I am a general IP abolitionist, but the real solution to the problem of source code being lost forever is in the domain of right to repair. Since the advent of software, the right to repair, which used to be respected for almost anything you could buy, has been completely abandoned. Requiring access to the sources to modify code you buy (or, in a post-IP world, receive in binary form) would go a long way toward stopping the cultural death of software, where source code is never released and rots on hard drives until it's irretrievable.

The problem is the current dominant culture is either completely apathetic or actually hostile to right to repair for software. It's why I say "I hope it's only 20XX", because I don't see the path to fixing that mindset.

That being said, it is important to distinguish that while I spoke about how future historians will see both draconian IP and the lack of right to repair as barbaric and antiquated, they are distinct but related concepts.


RMS has historically viewed copyright as a bad thing[1], all the way from how it has developed historically to how it is applied uniformly to different kinds of works (practical works like software being copyrighted is of particular concern).

Now, you might say that "the GPL uses copyright, which clearly means that RMS and the FSF are pro-copyright". They're not. Copyleft is a hack of the copyright system[2], specifically designed such that as copyright becomes stronger so does the amount of freedom provided by copyleft. This is explicitly the purpose. If it were not legal to stop users from modifying software, then the GPL would not need to exist (nor could it).

RMS has stated[3] that his views on an ideal copyright system are:

1. Authors retain copyright, not publishers. This matches the original copyright law created in England under Queen Anne, the Statute of Anne (1710)[4].

2. The scope of works covered would be reduced. Most importantly, practical works would not be covered under copyright.

3. The length of time that copyright applies to a work would be massively reduced. Works are being created today that children born today will never see in the public domain. The solution is to reduce the length of time that copyright applies to 10 years (or possibly 5, which some authors appear to prefer).

4. The breadth of restrictions would be reduced. Most notably, distribution of a work would no longer require permission from the owner. The argument for this change requires watching his entire lecture, but it comes from a historical argument about how copyright has developed, and how modern copyright has broken the trend in that it attacks consumers of a work rather than publishers (which were the original target, if you look at the development of copyright law alongside the development of copying technologies like the printing press). Not to mention that lending a book to a friend or selling it are both perfectly acceptable things to do.

[1]: https://www.gnu.org/philosophy/misinterpreting-copyright.en....

[2]: https://www.gnu.org/licenses/copyleft.en.html

[3]: https://www.youtube.com/watch?v=eginMQBWII4

[4]: https://en.wikipedia.org/wiki/Statute_of_Anne


GNU software's clearly using copyright as a hack ("copyleft") to achieve its actual goal of a non-copyright world. So it shouldn't be surprising that GNU supporters would prefer copyright not to exist.


>> So it shouldn't be surprising that GNU supporters would prefer copyright not to exist.

Not so sure. That would effectively make all open source code fall under the equivalent of a BSD or MIT license, which the GPL clearly is not. The GPL wants the essential freedoms to apply to all derivative works as well as the original. Neither BSD, MIT, nor a complete lack of copyright can make that happen.


The free software movement is about making all software free, not all software GPL'd. If copyright laws were dissolved and proprietary software outlawed, there would be no need for the GPL. The GPL is a hack to prevent future proprietary software from using your code. In other words, if no one can hide their source code, there wouldn't be a difference between MIT and GPL.


Abolition of copyright and banning proprietary software seem like two very different things. The comment I replied to just mentioned elimination of copyright. I agree that combined with a ban on proprietary software none of the licenses would really matter - depending on the wording of the laws of course.


If you don't have exclusive right to code, there is no reason why people shouldn't access it.


I don't see how they're at all different, let alone very different. How would you legally have proprietary software without copyright law?


By not providing the source code. That doesn't preclude decompilation, but that's just not the same thing.


While having the source is an important part of something being free software, the far more insidious problems with proprietary software are the other freedoms they strip away from you -- the freedoms to use and distribute the software.

Not to mention that in the 60s, source code was always provided because nobody thought it was valuable (and copyright was not applied to said source code). The software was the source code. I can imagine that if we actually went through a cultural shift of abandoning the completely ludicrous views of modern copyright, that companies would start providing source code again (we're seeing that happen today, in small steps, with companies making more and more free software projects -- even without the copyright reform). There would be no incentive to keep the source unavailable, because they'd have no monopoly on it.


See https://en.wikipedia.org/wiki/Proprietary_software -- your definition of "proprietary software" as equal to "closed source software" is idiosyncratic.


Yes, fortunately copyright is governed by law, and we can (with good governance) change it for the better in the future.


There are quite a few communities around creating free engines for games, such as OpenMW[1] (there are plenty more). With a free engine all of those problems can be resolved.

As for movie copyright, this is the true purpose of DRM -- to try to make copyright eternal by locking away the content behind strong-enough encryption and then licensing the keys. And of course, breaking DRM (even if the content itself is no longer protected by copyright) is a felony.

[1]: https://openmw.org/en/


Wow. So DRM is a technical hack of copyright law. Is there any other way than adjusting the law to enforce release?


I have switched to open-source alternatives primarily because of the freedom to recompile them for different architectures.

>Another problem is that there are no open graphics driver for common single board computers. The latter problem is being worked on by the amazing Mesa project. What can be done about the former?

Can you tell me more about this?


I phrased it incorrectly. The work is not necessarily done by the Mesa project itself, but within its framework. The degree of activity of each project varies.

Mali 400 (Allwinner A20) https://github.com/yuq/mesa-lima

Adreno 2xx, 3xx, 4xx https://github.com/freedreno https://mesamatrix.net/

VC4 (Raspberry Pi) https://github.com/anholt/mesa/wiki/VC4

If the VC4 or Mali drivers mature, that'd be a huge step forward.


I've said it before, but I'd like to see LLVMpipe get a RISC-V back end. And when the vector extensions are ready, support for those. This way all the SoC vendors should be able to implement basic framebuffer support with HDMI and use extra RISC-V cores in place of a GPU. It wouldn't be fast, but it should be enough for things like a composited desktop and would provide minimal OpenGL support. Also, some guys have done a 500-core RISC-V chip, so maybe it doesn't have to be a poor performer. Intel got decent performance from Larrabee, right?

edit: the point is that many companies don't have any graphics IP, so this would give them a compromise.


Isn't a contemporary nvidia GPU essentially a vector machine with predication and scatter/gather? In CUDA it's just hidden behind the "SIMT" programming model.

If so, there's no fundamental reason why you couldn't make a decent GPU based on risc-v + the vector extension. Just make the cores themselves relatively modest, no reason to waste die area on OoO logic, and have lots of hardware threads to drive memory parallelism. Oh, and gobs of memory BW.

Though IIRC Intel had to have some fixed-function blocks, and IIRC other GPUs also have some of them left.

I also recall reading about Larrabee suffering from a lot of internal BW being wasted on cache coherency traffic, but with RISC-V having a weaker memory consistency model perhaps that would be less of a problem for a hypothetical "RISC-V GPU"?
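
For what it's worth, that SIMT-to-vector mapping looks roughly like this in plain C (no real RISC-V vector intrinsics here, since the V extension wasn't finalised at the time): each per-thread "if" becomes a per-lane mask.

  /* Plain-C sketch: the SIMT "if" becomes a per-lane predication mask over a
     fixed vector length. No real intrinsics, just the shape of it. */
  #define VLEN 8   /* pretend hardware vector length */

  void add_if_positive(int n, const float *x, float *y)
  {
      for (int i = 0; i < n; i += VLEN) {
          for (int lane = 0; lane < VLEN && i + lane < n; lane++) {
              int active = x[i + lane] > 0.0f;   /* the predicate/mask bit */
              if (active)
                  y[i + lane] += x[i + lane];    /* only active lanes write */
          }
      }
  }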


I know it's not ideal, but x86 can run on ARM via an x86 processor emulator, and then WINE to convert Windows system calls and DirectX calls to Linux and OpenGL. So nothing will really be lost as long as WINE lives on as a project.


One issue is that RISC-V has a weaker memory model than x86, so even qemu-user wouldn't work unless single threaded or pinned to one core.
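
To make the ordering problem concrete, here's a generic C11 illustration (not any emulator's actual code): on x86, plain loads and stores already give you acquire/release ordering, so translated guest code that silently relied on that needs explicit fences on RISC-V, and a binary translator can't easily tell which accesses actually needed them.

  /* Message passing that happens to work with plain loads/stores on x86 (TSO)
     but needs the explicit ordering below on a weakly ordered core. */
  #include <stdatomic.h>

  int payload;
  atomic_int ready;

  void producer(void)
  {
      payload = 42;
      /* release: make payload visible before the flag is set */
      atomic_store_explicit(&ready, 1, memory_order_release);
  }

  int consumer(void)
  {
      /* acquire: don't read payload until the flag has been observed */
      while (!atomic_load_explicit(&ready, memory_order_acquire))
          ;
      return payload;   /* guaranteed to see 42 */
  }

On x86 the compiler can emit ordinary mov instructions for this; on RISC-V it has to add fences or acquire/release instructions. Fencing every translated access conservatively is slow, and skipping fences risks subtly breaking multi-threaded guests, which is roughly the single-threaded/pinned caveat above.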


It is impressive to see a new architecture already have so much support for existing software with a relatively simple hardware support layer. It is already standing on the shoulders of giants with millions of man-hours invested in software development to bring it to its current state.

Related Question:

Why is Google developing a new operating system for their devices from scratch? Will this operating system be incompatible with all existing software? Or will it be compatible with software and hardware platforms like this one through some sort of compatibility layer?

https://news.ycombinator.com/item?id=13649523

https://news.ycombinator.com/item?id=15041567

https://news.ycombinator.com/item?id=15059349

https://news.ycombinator.com/item?id=14002386

https://news.ycombinator.com/item?id=15058351


Fuchsia will be a capability-based operating system [1], a new security model for computing. I guess Google wants to experiment with a new software system for secure IoT devices.

[1] https://en.wikipedia.org/wiki/Capability-based_security


Linux, while very successful, is still essentially based on a design (UNIX) from ~1970.

A lot has happened in OS design since then.


Linux has evolved considerably in design since the 1970s!


I'd really like to jump on the RISC-V bandwagon. Does anyone have any recommendations for hardware that would be .. somewhat .. future-proof for hacking/experimenting on this platform? I'd love to have a board or two akin to the rPi-Zero in form-factor, if such a thing were available ..


Nothing available yet is capable of running Linux. SiFive has an SoC coming soon (https://www.sifive.com/products/risc-v-core-ip/u54-mc/) that will be. Speculation is that dev boards will be available in the first quarter of 2018. My guess is that they'll announce (likely on crowdsupply) around Christmas for delivery sometime next year... but that's just a guess.

The other option that is coming is from lowrisc.org, but they are a small team so it's taking a long time, and I have no idea when they might be thinking of delivering hardware.


As far as I know, only the Freedom E310[1] chip is available for purchase by normal consumers. It's a microcontroller and only has 16kB of RAM so it's probably not really what you are looking for. But if you are still interested in it, you can pick up a HiFive1 devkit at crowdsupply[2] and possibly elsewhere. There's also the cheaper LoFive[3] which is a very bare breakout board for this chip, but I'm not sure if it's possible to buy it anymore.

[1] https://www.sifive.com/products/freedom-e310/

[2] https://www.crowdsupply.com/sifive/hifive1/

[3] https://groupgets.com/campaigns/353-lofive-risc-v


There are actually no hardware processors capable of running the code that was merged. So far all the RISC-V parts in the market are MMU-less microcontroller devices.


Also, is there any (low cost?) FPGA board that is capable of running RISC-V + Linux?

That would be a fun thing to do.


Found this blog with someone who bought an FPGA devkit[1] for £260 and managed to boot Linux on it[2].

[1] https://rwmj.wordpress.com/2016/07/25/risc-v-on-an-fpga-pt-1...

[2] https://rwmj.wordpress.com/2016/07/26/risc-v-on-an-fpga-pt-4...


That's been going on for some time. This is probably not the best link, but it's one of the conversations:

https://forums.sifive.com/t/building-linux-for-custom-u500/2...

I assumed SiFive was waiting for the privileged instructions to be finalized before making real chips. But people have been running Linux on their stuff on FPGA for a while now.


Can someone explain what having RISC-V support in the kernel means for Linux OSes?

Also, how is this different to: https://wiki.debian.org/RISC-V ?


It's been merged into the main kernel (i.e. the one maintained by Linus). Distributions can apply their own patches against the main kernel, which Debian did for RISC-V support. They don't have to apply the patches anymore.

Functionality-wise, what's been merged is only a subset.


Stable ABI. We did a ~Fedora 25 port in late 2016 - https://fedoraproject.org/wiki/Architectures/RISC-V and you can even boot that in your browser - https://bellard.org/jslinux/

ABIs have changed since we did this. We get the first long-term stable ABI with the 4.15 kernel and glibc 2.27 (hopefully). At that point Fedora, Debian, and others can reboot their efforts.


That was great work and it made RISC-V look that much more viable to me. To think that thousands of programs are ready to go whenever the hardware arrives is really amazing. It's unfortunate that some of it has to be repeated after the ABI changes, but that seems a small price to pay in the grand scheme of things. But then I'm not the one who did it or has to redo it ;-) Anyway, great job guys!


Since the status log includes the pull request and news about it, I think none. It seems to have originated from the combined efforts of the RISC-V community at large, supported by the distribution.

Pull request: http://lkml.iu.edu/hypermail/linux/kernel/1711.1/04263.html


Here's an archive mirror that works without JavaScript: https://archive.fo/kVvQZ



With Google's new war on Intel's ME this is not a coincidence.


Actually it is.


Yes, you are right: no causation, just a funny correlation.



