Whilst open source hardware work goes back a fair way, I feel we're really at a turning point where it can become a serious force within the hardware world (think late 80s/early 90s in software terms, with Linux and GCC emerging and beginning to find their feet). There are lots of interesting developments in tooling, plus significant open hardware projects ongoing.
> generally relating to ASIC design kits and things like Flash and memory IP
Even if a hardware design is open source, and even though we do have open source design software, it's still difficult to produce a non-trivial design using only open source software, and without using any proprietary modules.
Proprietary FPGA and ASIC design software has something like a hardware version of software libraries to provide many common functions.
For instance, taking the example they gave right there: if you wanted to interface with a flash memory chip or controller, you wouldn't design your own flash interface from scratch just from reading a reference for the protocol, the same way you wouldn't invent your own compression algorithm or TCP stack. You'd use a flash interface module that someone else('s big team) wrote decades ago and has tweaked ever since. But those modules are mostly proprietary: they have to be purchased, and they can't be redistributed to anyone else.
So you can share your own design work, but if you're using any IP like that, you have to exclude it from any files you publish. The incomplete design doesn't actually work, and it isn't usable except as a reference and a starting point for filling those gaps with open source equivalents.
And replacing the commercial IP with new open rewrites is a non-trivial job. Sometimes there may even be an open version, but it isn't good enough.
And even if you had infinite developer-hours to re-create all the commercial IP, you might still run into the problem that some necessary standard, like DDR4 or 3G, itself involves an algorithm that isn't open, so you can't publish even your own code implementing that algorithm, let alone code you licensed.
(I don't know if the DDR4 spec actually has anything like that, just that some standards do, and it can happen in places you can't easily avoid by choosing some other option. If the DDR4 example were real, for instance, you couldn't just say "oh well, my new open laptop won't use DDR4 then".)
With respect, I don't see any of these as insurmountable barriers. Reliance on proprietary IP blocks is a time-saving measure rather than a given.
> ... the same way you wouldn't invent your own compression algorithm or tcp stack
But people did write open source TCP stacks and invent open source compression algorithms (e.g. Xiph's codecs, Codec 2). A notable omission from the GitHub list is Project IceStorm, part of the open source end-to-end toolchain for Lattice iCE40 FPGAs [1]. There is a stable open source DDR4 controller [2]. The point is that individuals are filling the gaps.
To an extent open hardware suffers from a lack of coherence. A surprising number of the required components exist, but they are invisible, yet to be drawn into a coherent whole as the GNU project did for software. In 1999 OpenCores and the "Open Collector" search engine started an effort to bring it together, but it's still ongoing. OpenCores continues to make some great contributions but never got real traction, so there are opportunities for people to renew the existing structures or make new ones.
There's an element of the organisation meandering, but GNU also has a 17-year head start (dating from OpenCores). If the next 17 years for open hardware look like the last 17 years for open software, it should be an interesting place to be.
With my engineer's hat on, I can't disagree with you.
There's an argument that open hardware suffers from an overdose of pragmatism, which is not unexpected given that we're mostly engineers. Some barriers will require more of an evangelistic mindset, as it won't make sense from a pragmatic viewpoint to build some of the things that need building.
Love him or hate him, Richard Stallman did turn Free Software into a movement. At this stage of development, might open hardware benefit from its own version of the GNU manifesto and a group of people who coalesce around it?
I'm not sure where it's documented, but I remember Libre-SOC has a scheme to keep everything they do free by having an interface to someone else who deals with the proprietary dirty work. Obviously the problem is still there at the base, though.
Taping out an actual chip inevitably involves IP that's not yours, e.g. the standard cell library and other 'physical' IP like memories and flash. You cannot open source that as it is not yours and in general the owners of it won't want to open source it either (though there are exceptions e.g. the Skywater 130nm PDK https://github.com/google/skywater-pdk).
In OpenTitan we've built all the 'logical' IP ourselves from the ground up. This is the Verilog RTL you can see in our repository but you need the 'physical' IP to make a real chip. We haven't built any physical IP so we need to get it from the traditional industry sources which means traditional industry licensing (i.e. very much not open).
Hey...this was just me sharing a list of people who have done great work in open source...I don't see why you have to drag Parallella into this...still since you brought it up, Parallella was shipped to 10K customers and used in 200 research publications. What's your bar for success?
What kind of issues? I didn't dig far, but they shipped something that worked well enough not to cause instant drama and was used by low-level programmers for a while. I'm surprised, but I'm not knowledgeable enough.
I find it kind of weird how many people were offended by our use of the term "supercomputing". The actual specs were on the front page of the Kickstarter from day one. Not sure what people were expecting from a $100 SBC? Kind of interesting to note that the Parallella beat out the multi-million-dollar Thinking Machines of the 1990s in terms of performance...
Related: I would recommend people look into cocotb and uvm-python which let you do verification of SystemVerilog designs in Python. This has been a game changer for me, and it is much more productive than writing tests in SystemVerilog in my experience.
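To make that concrete: the usual cocotb pattern is to drive the DUT from Python coroutines and check its outputs against a plain-Python golden model. Here's a minimal sketch of such a model for a FIFO. The `FifoModel` class is illustrative, not part of the cocotb API, and the cocotb testbench itself needs an RTL simulator, so it's omitted here.

```python
from collections import deque


class FifoModel:
    """Golden-reference model of a depth-bounded FIFO.

    In a cocotb testbench you'd apply the same stimulus to the DUT
    and to this model, then assert their outputs match cycle by cycle.
    """

    def __init__(self, depth):
        self.depth = depth
        self._q = deque()

    @property
    def full(self):
        return len(self._q) == self.depth

    @property
    def empty(self):
        return len(self._q) == 0

    def push(self, word):
        # Mirrors the DUT contract: pushing when full is a test error.
        assert not self.full, "push into full FIFO"
        self._q.append(word)

    def pop(self):
        assert not self.empty, "pop from empty FIFO"
        return self._q.popleft()
```

The payoff is exactly what the parent says: the model, the stimulus generation, and the checking all live in ordinary Python, which is far quicker to write and debug than the SystemVerilog equivalent.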
Although I don't (yet) own a real FPGA, I recently started to learn Verilog and wanted to see if I could program one using only open-source tools. My first impression is that the tooling is too fragmented and the documentation is lacking. I still don't fully understand the relationships between the major projects - F4PGA (is this the same as SymbiFlow?), VTR, yosys, ABC.
Ultimately I figured out how to use yosys+nextpnr so if I ever decide to dive deeper into the FPGA world, I will probably get an iCE40.
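For anyone else starting out, the yosys+nextpnr iCE40 flow boils down to three steps: synthesis, place-and-route, and bitstream packing. A sketch of it driven from Python follows; the file names, the HX8K device flag, and the pin-constraint file are placeholders for a hypothetical design, while the tool names and flags are the commonly documented ones.

```python
import shutil
import subprocess

# Hypothetical iCE40 HX8K design: top.v and pins.pcf are placeholders.
flow = [
    # 1. Synthesize Verilog to a JSON netlist with yosys.
    ["yosys", "-p", "synth_ice40 -json top.json", "top.v"],
    # 2. Place and route for the target device with nextpnr.
    ["nextpnr-ice40", "--hx8k", "--json", "top.json",
     "--pcf", "pins.pcf", "--asc", "top.asc"],
    # 3. Pack the routed design into a bitstream with icepack.
    ["icepack", "top.asc", "top.bin"],
]


def run_flow(steps):
    """Run each step in order, skipping any tool that isn't installed."""
    for cmd in steps:
        if shutil.which(cmd[0]) is None:
            print(f"skipping {cmd[0]}: not installed")
            continue
        subprocess.run(cmd, check=True)
```

On real hardware you'd then load `top.bin` with a board-specific programmer (e.g. iceprog for the common FTDI-based iCE40 boards).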
Also, is there an open-source tool for post-routing simulation? AFAIK this requires SDF annotation support which iverilog doesn't have.
AFAIK there is no consistent experience between programs; the only thing in common is the base code. Depending on the device and the tooling, you need not only to configure which device will be used and how, but you may also need to manually create/edit files to map inputs, outputs, and clocks.
On this, my "real" experience is with Xilinx ISE and Xilinx Vivado, and my "virtual" (simulation) experience is with Mentor Graphics ModelSim, all of them using VHDL. Maybe other suites, or SystemC or SystemVerilog, offer a different experience, but I understand that something as niche as this has these kinds of quirks.
LOL I remember watching a conference talk where a speaker said "fabrication is becoming incredibly affordable; you can now get a run for as little as $18k!"
I think us poor HN users are going to be stuck in FPGAs for the foreseeable future
What size production run is that? $18k is a lot for a hobby project, but it sure seems cheap to have your own custom ICs fabricated. It's definitely well within the realm of things you could crowdfund with a few moderately well-off individuals.
Long, long ago, UK academics could get fabrication done at the Rutherford lab, as I saw on a visit, presumably research-council-funded, but I don't know what the arrangements actually were.
While yosys and nextpnr are great and work for simple, low-assurance designs, I understand why they haven't taken over in industry. The main reason is simply that basic functionality is not there yet. Take, for example, an asynchronous FIFO used for clock domain crossing, something used widely in industry. It's not supported, because there is no way to specify the required timing constraints. Of course you can still synthesize your design, but whether it works or not is just luck.
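For background on why that example is hard: an async FIFO passes its read/write pointers across clock domains in Gray code, so only one bit changes per increment and a pointer sampled mid-transition is either the old or the new value, never garbage. That single-bit-change property, which the missing CDC timing constraints exist to protect, can be checked in a few lines of Python. This is a model of the technique, not anything tool-specific:

```python
def bin_to_gray(n: int) -> int:
    """Binary-to-Gray conversion, as used for async FIFO pointers."""
    return n ^ (n >> 1)


def gray_to_bin(g: int) -> int:
    """Inverse conversion: XOR-fold the higher bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n


# CDC property: consecutive Gray codes differ in exactly one bit, so a
# metastable sample of a moving pointer is off by at most one position.
for i in range(255):
    changed_bits = bin(bin_to_gray(i) ^ bin_to_gray(i + 1)).count("1")
    assert changed_bits == 1
    assert gray_to_bin(bin_to_gray(i)) == i
```

The RTL side is easy enough to write; the gap the parent describes is that the tools have no way to be told "these synchronizer paths need max-delay constraints, not the default timing analysis".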
Open source implementation tools certainly have a long way to go but they're already capable of doing some good stuff. E.g. check out symbiflow/f4pga (https://github.com/SymbiFlow, https://f4pga.org/). You can build significant FPGA designs for the modern Xilinx Artix 7 FPGAs with entirely open source tooling.
Efabless are doing multiple shuttle runs that use an open PDK (SkyWater 130nm) and open implementation tools (https://efabless.com/open_shuttle_program). It's very primitive compared to a Cadence or Synopsys tool suite producing designs for a leading-edge node, but still capable of producing a real chip that can be used in real applications.
Implementation tools are very complex and the costs for messing up a design are very high, so they're not going to be the first things open source tooling might displace.
I think you'll also see them mature far quicker in the FPGA world, as you can always switch back to the vendor tools if the open flow isn't working for you (whereas with silicon you might not discover the open flow has messed up your tape-out until it's too late). Plus it's a lot easier for developers to iterate and improve the tools: you need an FPGA board, $500 will get you something with decent capabilities, and there's only a slight risk you'll destroy it when experimenting, far cheaper than having to do multiple ASIC tape-outs to test your results. Also worth noting that Xilinx have joined the F4PGA working group, and I know Lattice look favorably upon open source FPGA toolchains.
Once open source flows mature in FPGA land, I can see them dominating (contrast with compiler technology: once upon a time you'd buy a commercial compiler; now, whilst commercial compilers still exist, GCC and LLVM are your best option in many cases). Then, once proven on FPGAs, interest in open source ASIC implementation will hopefully rise, and open ASIC flows will mature too.
Nice list! Bookmarked. How come no boards or open hardware design platforms like Arduino, or any SparkFun/NodeMCU/etc. boards, are listed? Not even the platforms they're based on?
They have the software on GitHub and the hardware layouts (as well as additional things like Fritzing files for some boards) available through their documentation - e.g.:
I shall take the opportunity to plug OpenTitan: https://github.com/lowRISC/opentitan
It's an open source root of trust being developed collaboratively by multiple companies such as lowRISC (who I work for), Google, Western Digital and Seagate amongst others. We've been rather quiet on the PR front but there's a lot of engineering work happening and other exciting things we can't yet make public.
Whilst there's some things we have to keep closed (generally relating to ASIC design kits and things like Flash and memory IP) the vast majority of the RTL, documentation and software is open. Plus we're doing development in the open, the public repo is our live development repo. We're not developing it in private then just opening the end product.