No. There are more suitable tools now for every problem. Large volume? ASIC, and you can also put a few ARM cores inside. Something highly integrated? Pick a Zynq. There are a dozen different solutions across the whole problem space, from big FPGAs with a couple of softcore processors, down to a cheap Spartan-7 device, down to a tiny and dirt-cheap Lattice chip for glue logic or IO expansion. The tools might be better, but that problem will surely be solved in the future.
ASIC isn't the tool for large volume; it's the tool for when you can accept a multi-year turnaround time for revisions, versus a day or two. ASIC > FPGA... except if your design ever changes. If anything high-bandwidth or design-intensive is going on, you're definitely going to revise your design many times.
FPGAs are massively on the rise (although slightly less so than Xilinx would like), and they are all effectively "SoC-like", with DSPs, transceivers, and hard IP probably making up a majority of the chip by now.
Zynq is a cute little chip, but with Intel's coming CPU-embedded FPGAs, things will kick off even more.
What's the tooling like for a typical, production-quality FPGA project nowadays? Last I checked, which is admittedly a couple of years ago (and I didn't dive in very deeply), it was a horrible mix of badly programmed vendor-specific Windows-only GUI tools. It seemed a lot like the embedded processor market where every single vendor has their own lock-in strategy and the whole ecosystem suffers for it. Has this improved in the meantime?
Windows-only tools? You must have been using Altera's offerings (I briefly maintained the Quartus compatibility page at Wine, circa 2004). Even they have had Linux support for a while, and Xilinx has had Linux support for as far back as I can remember, at least some ten years.
The ecosystem is terrible, I agree with you. However, the lock-in is restricted to the final synthesis part. Simulation is mostly done with (very expensive) third-party tools, such as Mentor Graphics ModelSim, and in professional settings the build is automated to a point that opening the vendor's GUI is frowned upon.
There are some quality open source tools for FPGA build automation, but most places I've had contact with have their own internal tools.
I know someone who’s implemented some string algorithms for FPGA with verilog. Is writing the code distinct from the tooling you’re referring to, or simply an open source flavor of deployment? I haven’t done any FPGA work.
When he talks about Verilog, he is referring to the language used for development and implementation. We're talking about a different part of the chain, something akin to the compiler, make, CMake and the like.
Of course, there's some interplay between the toolchain and the language itself. For example, it doesn't matter if the language has the coolest feature if the compiler doesn't support it, and the toolchain helps you manage more complex code safely.
All the FPGA tools by Xilinx work well under Linux, and Vivado is actually quite pleasant to use. Almost all of the EDA tools can be scripted with Tcl (Cadence has a host of other DSLs as well), so you don't need to use the GUI if you don't want to.
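For illustration, a minimal non-project batch flow in Vivado Tcl might look like this (a sketch; the part number, top module name, and file paths are placeholders, not from any real project):

```tcl
# Run with: vivado -mode batch -source build.tcl
# Non-project mode: read sources, synthesize, implement, write bitstream.
read_verilog [glob src/*.v]
read_xdc constraints/top.xdc
synth_design -top top -part xc7a100tcsg324-1
opt_design
place_design
route_design
report_timing_summary -file timing.rpt
write_bitstream -force top.bit
```

With a script like this checked into the repo, the GUI never needs to open.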
Vivado is absolutely terrible, especially the GUI. Our toolchain at work is set up so that it can be entirely avoided, with developers only writing Verilog in their editor of choice, and calling make or pushing to Jenkins when they want results. And then they go home, as it takes a full day to build a product on our server farm. Sim/others is done with third-party products.
I recall one Xilinx event where a Xilinx presenter asked how many people were using the Vivado IDE for development. IIRC, only a single hand was held up.
Vivado isn't worse than the competition, though. Intel/Altera's offering is also a steaming pile of crap (we evaluate the competing platform every once in a while when we decide what FPGAs to use for new products).
I don't know why all FPGA tooling sucks, but it does.
> Vivado is absolutely terrible, especially the GUI. Our toolchain at work is set up so that it can be entirely avoided [...]
That's how most companies do it in my experience (and I prefer it as well). That said, Vivado is leaps and bounds better than its predecessor (ISE). It may be horrible for HDL coding, but when you need to do timing analysis and floorplanning the GUI is really helpful and the integrated TCL command line makes it very flexible.
Tooling in FPGA-land is a sad story, unfortunately. It's stuck in the 90's. I've ranted enough times to feel I'm beating a dead horse, but it's frustrating to see how behind the times it is compared to software development. Just look at HDL IDEs as an example:
The vendor ones are worse than simple text editors, and the 3rd-party IDEs cost an arm and a leg while, feature-wise, being nowhere near their software counterparts. Take the most popular HDL IDEs (or should I say Eclipse plugins?) and compare them with Visual Studio Pro or JetBrains. The feature discrepancy is staggering (as is the price difference). We use Sigasi at work, and while it's definitely better than a text editor, the basic version costs ~$800/yr, and apart from error checking, goto-definition, and renaming there's not much else going on. For the same price you can get the whole JetBrains suite with a couple hundred dollars to spare.
Is it possible that the market for software developers is larger than the FPGA developer market, hence allowing tool development costs to be amortized over a larger set of users?
That's true, but that doesn't explain why the HDL IDEs lack modern productivity features or why the simulator GUIs have the same interface as 20 years ago. Inertia and vendor lock-in are better explanations IMHO.
I hope with FPGAs becoming more mainstream the open source/hobbyist community will step in and refine some things, but if the big EDA shops don't take hints from the software industry things are not going to change very fast.
Most software tools are provided for free, sometimes even open source. If money is involved, and we're not talking about things like IDA Pro, prices are usually negligible.
With Xilinx toolchains, you're paying both for extremely expensive toolchain licenses, and paying quite steep prices for the FPGA units themselves. We're basically shoveling money at Xilinx as fast as we can.
Writing Verilog in the editor of your choice, then calling make and pushing to Jenkins, sounds amazing. Could you elaborate on the methods and best practices for developing FPGAs without the Vivado IDE? (Your story indicates that many people in the industry can do this.) I'd like to know more about the setup.
And don't get me started on trying to version control a Vivado project. It's like they didn't even consider the possibility of committing a project to source control! I last used Vivado about 2 years ago but doubt things have changed much. It really makes me wonder how Xilinx does it in-house.
I was just about to say this! We've recently hacked our Vivado project into git. This requires unpacking all packages and ignoring compiled packages, plus a bunch of .bat files to handle housekeeping.
A big part of the issue is compiling packages. I think this is a thing because synthesis takes so long.
Until synthesis time is reduced, I think we’ll see these hack-y solutions.
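One common variant of that hack (a sketch; the paths are illustrative) is to version only the HDL/constraint sources plus a project-recreation script that Vivado itself can generate, and to .gitignore everything the tools emit:

```tcl
# In the Vivado Tcl console: export a script that can regenerate the
# project from scratch. Commit this script and the sources to git;
# ignore the project directory, compiled packages, and build outputs.
write_project_tcl -force scripts/recreate_project.tcl
# A fresh checkout then rebuilds the project with:
#   vivado -mode batch -source scripts/recreate_project.tcl
```

It's still a workaround rather than real source-control support, but it keeps binary churn out of the repository.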
I don’t see how the quality of the IDE has any influence on what selection of FPGA to use.
We select FPGAs based on HW features and that is that.
The tools are more than good enough for debugging or to kick off an occasional custom synthesis run when you don’t have the time to wrap it in a Makefile.
Vivado isn't just an IDE, it's a toolchain. Poor or buggy Verilog support, synthesis bugs, etc. can definitely influence the choice of FPGAs. And FPGA toolchains are all very buggy. One of our current issues is that Vivado cannot reproduce builds even with the same seed.
Our main reason to not have picked Altera was usually much poorer routing and inability to meet our bandwidth demands (>100Gb/s), but the poorer toolchain was also a factor.
However, for the IDE, it can be a killer if its use is a requirement.
The quality of the toolchain absolutely has bearing. You are essentially locked into using the FPGA vendor's tools. And it's not like there is much choice in FPGAs anyway: you have Xilinx, Altera, Lattice, and Microsemi. In a small company with a low-volume product, the engineer working with the FPGA has a ton of other duties. The speed with which you can knock out the FPGA is huge, and the tools and available documentation play a large part in that.
I wrote "quality of the IDE", not "quality of the toolchain", for a reason. In fact, let's reduce this to "quality of the editor". The text editors are absolutely terrible, but that's why they invented Vim. You're only an "Add File" away from working around that.
Other than that, getting from Verilog to a design mapped to hardware is pretty trivial with both Quartus and Vivado. They both have excellent debugging tools in SignalTap and ChipScope, and they have pin planners that work reasonably well. Synthesis and P&R results are decent.
If your design is so generic that it really doesn't matter which FPGA you're using, then, sure, make your decision based on this kind of detail. But I find that to be almost never the case. Price, number of DSPs and their precision, amount of block RAM, performance of the SERDES units. Those are real decision makers.
Yeah, I know you can use Vim. Text editing is not even close to where the major pain points are with Vivado. There's more to the workflow of developing an FPGA than adding files in Vivado.
My experience went something like this:
A hardware engineer needs to do a routine task: add a peripheral, swap some pin assignments, and modify the Verilog/VHDL. So they do all their synthesis and have an export ready to hand off to the software engineer. They commit their changes, and it probably causes differences in dozens of files, but such is life. It seems like this could be reduced to differences in a few human-readable files, except for the bitstream, which obviously is binary.
The SW engineer then needs to update the FSBL and BSP for the board. I never found a way to automate this on the command line; you needed to update the FSBL using their horrible Eclipse-based import tool. In my case, I had to make some manual modifications to the FSBL: I think I needed to flip some GPIO pins early in the boot process and also do some RSA validation on the bitstream. Well, all those modifications would get wiped out. I never found a way to template them and preserve them across new imports.
So I had a bunch of differences that had to be manually merged every time. I had notes about it, but come on. What a pain. At the end of all of this, many dozens more files were changed. Once again, it seems like this should reduce to just a handful of human-readable files, like an FSBL configuration header / C file, the new U-Boot config, and the new kconfig. But instead you had two massive changesets in version control for some very routine work.
Honestly, I find the situation on the embedded processor front better now than 20 years ago. The worst thing we have now is broken, shitty standard libs and compilers. You can dodge around those by picking assembly (PIC, for example; mpasm is actually OK) or platforms with GCC implementations (ARM/AVR).
Vivado is actually very good for EDA, especially for one you can get for free (WebPACK). Compare it to Design Compiler for basic synthesis: now that is an ugly GUI! Vivado has made great strides in integrating IP/RTL/synthesis/P&R/debug, with cross-probing between stages and great visual presentation of results. Try to do any of this with "industry standard" tools like Design Compiler and you will see just how good it is comparatively.
Colleagues tell me that this ugly GUI nowadays works under Linux too. You can also script a lot using Tcl commands. I have it running on Windows and it works, somehow, but I am not a happy user :-/
I'd argue that it's both ugly and resource heavy, but "ugly" would be a matter of taste.
I just have a very strong aversion to IDEs that fail to prioritize text editing, permanently wasting space on all sorts of buttons and knobs that are only needed 0.1% of the time. It doesn't help that they often also fail to be responsive.
I don't write much VHDL or Verilog for my work. Most of it is combining cores together that are either from the vendor or are generated using Simulink. Occasionally there needs to be some glue logic to link some blocks together and all the code in the softcore processor needs to be written in C but most of the development is based on wiring up blocks in Libero's GUI.
Going from the IP catalog to the pin planner is not the easiest task. Nor is setting bitstream properties. Some settings are only accessible from special locations, and this isn't very intuitive. It's easy once you know all these details.
Well, it certainly hasn't been Windows specific for over a decade, but it's definitely bad.
Comparing it to software development toolchains, one might even go as far as calling it a steaming pile of horsedung. However, a lot of it can be hidden, making the flow: 1. Write verilog in your editor of choice, 2. Call make. 3. Come back tomorrow to check the result.
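That three-step flow can be sketched with a tiny make wrapper (illustrative only; `build.tcl` and the file layout are assumptions, not a real project):

```make
# Hypothetical Makefile wrapper: "make" kicks off a batch-mode Vivado
# run and the GUI never opens. build.tcl holds the actual Tcl flow.
SRCS := $(wildcard src/*.v)

build/top.bit: $(SRCS) build.tcl
	vivado -mode batch -source build.tcl -log build/vivado.log
```

Jenkins (or any CI) then just runs the same `make` target on a build server.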
But isn't it becoming more like "SoCs gaining FPGA cores"? I know traditional FPGAs got hard CPU cores added, but from the CPU perspective the FPGA core is the new thing.
The article says "CPU does not fit FPGA synthesis very well and uses almost the whole thing".
People serious about prototyping usually get daughterboards for the FPGA and/or FPGA stacks specifically to prototype big things with CPUs. The buses can go outside the main FPGA with the CPU and into other things. I know of at least one big SoC project which went that way.
I also think that having an ARM core in an FPGA is vendor lock-in. For example, you cannot use AMBA buses (APB, AHB, AXI, and so on) with an ARM CPU core in your design without paying ARM for a license for those buses. It is not clear to me whether Zynq users have to pay for these buses; it may be the case that they do. Finally, an ARM core in the prototype naturally extends into an ARM core in the final product.
ARM itself is not a very nice design from a contemporary point of view. I have expressed my dissatisfaction with the ARM ISA many times here; let me just start with two points: 1) ARM is not RISC (multi-register load/store executes in several clocks) and 2) too much of the initial design of the first ARM (which was not planned for long-term evolution) is visible in the ISA.
Basically, they put an outdated (even for 2010) core design into valuable silicon area, so users have to use that instead of much more capable contemporary designs. Instead of trying to figure out how to change typical FPGA elements and layouts to make CPUs more synthesizable (which might bring benefits in other places), they decided to use that ARM thing.
> ARM itself is not very nice design from contemporary point of view. [...] they decided to use that ARM thing
ARM CPUs are the de-facto standard in the embedded world (except for simple 8-bit MCUs), so what should Xilinx have done? Design their own, proprietary CPU architecture and ISA? Choose some other architecture with 2% market share? Both paths would have led to the immediate death of the whole product line.
> 1) ARM is not RISC (multiregister load/store execute in several clocks)
So? Who cares?
> 2) too much of initial design of first ARM (which was not planned for longterm evolution) is visible in ISA.
Ah, yes, technological purism – the quickest way to practical irrelevance.
> ARM CPUs are the de-facto standard in the embedded world (except for simple 8-bit MCUs), so what should Xilinx have done? Design their own, proprietary CPU architecture and ISA? Choose some other architecture with 2% market share? Both paths would have led to the immediate death of the whole product line.
Totally agree. IIRC, didn't Xilinx have another hard CPU+FPGA design before the Zynq, using a PowerPC? PPC is largely irrelevant these days. The high-performance embedded space is dominated by ARM today, with x86 and PPC taking the rest. There's no way Xilinx would choose x86, with Intel owning Altera.
Maybe in 5-10 years RISC-V will start to eat some of ARM's share, but that remains to be seen.
RISC-V started in 2010, the year Zynq was dreamed up. Cadence bought Xtensa (much less than 2% market share), and I know of several products using their cores.
My arguments about ARM CPU inferiority are about ARM CPU inferiority and the related vendor lock-in. To summarize: for your substantial money you get an inferior CPU that you cannot improve much, for prototyping and then for the product, with highly aggressive pricing when you want to do an ASIC run.
> RISC-V started 2010, the year Zynq was dreamed up.
Exactly: much too late for Zynq, and still a minuscule market share even 8 years later.
> for your substantial money you get inferior CPU
Wrong, and furthermore, you still didn't get the point: It's not about theoretical technological perfection, but about the surrounding ecosystem, tooling, related IP, available expertise, and support.
> CPU you cannot improve much
Also wrong, and irrelevant for 99% of use cases.
> with highly aggressive pricing when you want to run ASIC
> ARM itself is not a very nice design from a contemporary point of view. I have expressed my dissatisfaction with the ARM ISA many times here; let me just start with two points: 1) ARM is not RISC (multi-register load/store executes in several clocks) and 2) too much of the initial design of the first ARM (which was not planned for long-term evolution) is visible in the ISA.
Found the guy who took his first computer architecture course and now thinks everyone at Intel, AMD, ARM, etc. is stupid for not going full-on RISC.
The lack of multi-register store and load is one of my biggest gripes with the RISC-V ISA. It just hurts to see the compiler waste 32×2 bytes just to save and restore register context. (It matters a lot when you have a little CPU with 4KB of program RAM.)
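To make the cost concrete, here is a hand-written sketch (not compiler output) of saving four callee-saved registers plus the return address. RV32I spends one 4-byte instruction per register each way, while A32 ARM does it with a single push/pop:

```asm
# RV32I prologue/epilogue: one 4-byte store/load per register
addi sp, sp, -20
sw   ra, 16(sp)
sw   s0, 12(sp)
sw   s1, 8(sp)
sw   s2, 4(sp)
sw   s3, 0(sp)
# ... function body ...
lw   ra, 16(sp)
lw   s0, 12(sp)
lw   s1, 8(sp)
lw   s2, 4(sp)
lw   s3, 0(sp)
addi sp, sp, 20
ret

# A32 ARM equivalent: one multi-register instruction each way
#   push {r4-r7, lr}
#   ... function body ...
#   pop  {r4-r7, pc}
```

The more callee-saved registers a function touches, the wider the code-size gap grows.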
Because if it is not pure, you will have trouble making different implementations of the same ISA. Imagine implementing the same feature for a superscalar OoO core, or for a version where the pipeline length is only one or two cycles.
Being a software dev, I have also designed CPUs. I know what I am talking about.
For you, having 4K of IRAM: use a stack machine. You'd be better off.
To use them with an ARM CPU, not to implement them. When I worked in SoC design back in 2008, the situation was exactly that: use the A* buses freely unless you have an ARM CPU in your design (even if that CPU is implemented by you). Then you need a license.
The LEON family of CPUs used AHB, I think, for example.
So if you buy an architecture license to design your own core and you want to use AMBA in your design, you have to pay extra? I find that very surprising, but having never worked for an architecture licensee, I don’t have any information to contradict it.
Can you share a sense of how expensive the bus license is relative to the architecture license?