I've been wondering if one of those chips could fit a CPU.
How do you estimate this sort of thing? I can estimate software decently, but there the penalty for being wrong is seldom severe. With a hardware project, things either fit or they don't, and if they don't you're out of luck.
Different FPGAs have different sorts of limits, not just a single number. Do you go by lines of Verilog/VHDL code? Do you go by some notion of the most complex operation and the register widths? How...?
It comes with practice in the field. After a few medium-size designs, you start to accumulate data points for how designs map onto FPGAs. If you _really_ need to get close the first time, the only way is to mock up the design and synthesize it.
At places I've worked, the general pattern tends to be:
1) Pick a target FPGA family based on feature set and advertising copy. You'll probably use the vendor you're already familiar with.
2) Make an extremely rough LUT count estimate based on some prior designs (and/or maybe based on the utilization numbers for vendor-supplied IP cores).
3) Most FPGA vendors sell a bunch of variants of any given family. Do your first round of prototypes using an FPGA that is 50-100% larger than you think you need.
4) Once you've got the design more nailed down, make a better estimate and pick a smaller/lower-cost part in the same family. On FPGAs that I've used, the part is generally 'full' once you've used up roughly 75% of the available LUTs, because routing resources are limited and the compilers are imperfect. (There's a back-of-envelope sketch of this sizing flow right after this list.)
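To make steps 2-4 concrete, here's a back-of-envelope sizing sketch in Python. The part names and LUT counts are made-up illustrations, not real devices; substitute numbers from your vendor's datasheets and your own prior-design data.

    # Back-of-envelope FPGA sizing, following steps 2-4 above.
    # Part names and LUT counts are illustrative assumptions,
    # not real devices -- check the vendor datasheets.
    FAMILY = {
        "small":  33_000,   # available LUTs
        "medium": 53_000,
        "large":  101_000,
    }

    raw_estimate = 22_000    # step 2: rough LUT count from prior designs / IP cores
    PROTO_MARGIN = 1.5       # step 3: prototype on a part 50-100% larger
    FULL_THRESHOLD = 0.75    # step 4: routing/tools make ~75% the practical ceiling

    def pick_part(luts_needed, headroom=1.0):
        """Smallest part whose usable LUTs (75% of total) cover the estimate."""
        need = luts_needed * headroom
        for name, luts in sorted(FAMILY.items(), key=lambda kv: kv[1]):
            if need <= luts * FULL_THRESHOLD:
                return name, luts
        return None, None    # doesn't fit anything in this family

    print("prototype part:", pick_part(raw_estimate, headroom=PROTO_MARGIN))
    print("production part:", pick_part(raw_estimate))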
An FPGA has finite resources (block RAMs, pins, logic blocks (lookup table + flop)). If you have a good understanding of the logic you're designing, it's not that hard to estimate resource usage. You just get used to thinking in terms of gates, flops, etc.
Right, you usually have some idea of how many lookup tables an n-bit add operation, an n-bit multiplexer, etc. takes up. You also need to take into account how much memory and how many multipliers you need, since FPGAs have dedicated multiplier and RAM blocks built in.
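For what it's worth, here are the kinds of rules of thumb I mean, as a rough Python sketch. These assume a modern 6-input-LUT architecture with dedicated carry chains and 18x18-ish hard multipliers; actual mapping varies a lot by family and synthesis tool.

    import math

    # Rough rules of thumb, assuming 6-input LUTs with carry chains
    # and 18x18 hard multipliers. Real results vary by family and tool.

    def adder_luts(n):
        # An n-bit ripple-carry adder typically maps to about n LUTs
        # riding the dedicated carry chain.
        return n

    def mux2_luts(n):
        # An n-bit 2:1 mux is about n LUTs (often less once the tool
        # packs it together with neighboring logic).
        return n

    def mult_dsps(n):
        # An n x n multiply usually lands in DSP blocks; assuming each
        # handles up to 18x18, wide multiplies tile quadratically.
        return math.ceil(n / 18) ** 2

    print(adder_luts(32))   # ~32 LUTs for a 32-bit add
    print(mux2_luts(32))    # ~32 LUTs for a 32-bit 2:1 mux
    print(mult_dsps(32))    # ~4 DSP blocks for a 32x32 multiply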
You can estimate to some extent based on FPGA resources. For example, you know you need X thousand flip-flops, Y hundred block RAMs, and Z hundred DSP blocks. Then you look at the switching-speed specs and guess how fast you can clock it.
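As a worked example of the block-RAM part of that estimate, here's the raw bit-count arithmetic, assuming 18 Kbit BRAM primitives (a common size; adjust for your family).

    import math

    BRAM_BITS = 18 * 1024   # assumed 18 Kbit block RAM primitive

    def brams_needed(depth, width):
        # Naive bit-count estimate; actual packing also depends on the
        # aspect ratios each BRAM primitive supports.
        return math.ceil(depth * width / BRAM_BITS)

    # e.g. a 64 KB buffer organized as 16K x 32:
    print(brams_needed(16 * 1024, 32))   # -> 29 BRAMs by raw bit count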
But in the projects I've been involved with, it often turned out that it just wasn't feasible to route everything that was originally envisioned at the planned speed. Interconnect is a big killer of hopes and dreams once your FPGA starts to fill up.
Really, the only way to be sure is to run your design all the way from synthesis to place and route. The problem with that is you rarely have the luxury of starting a hardware design with all the HDL already written.
The consequences aren't that dire with an FPGA. You aren't going to break anything. You can just run the synthesis tools and they'll tell you whether the design fits.
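You can even script a quick (if rough) fit check without vendor tooling; here's a minimal sketch driving the open-source Yosys synthesizer, assuming yosys is on your PATH and the top module is named 'top'. Vendor tools like Vivado or Quartus will give more accurate numbers after place and route.

    import subprocess

    # Run Yosys: parse the design, synthesize for a Xilinx-style target,
    # and print cell counts (LUTs, FFs, BRAMs, DSPs, ...) via 'stat'.
    result = subprocess.run(
        ["yosys", "-p", "read_verilog top.v; synth_xilinx -top top; stat"],
        capture_output=True, text=True, check=True,
    )
    # Compare the reported counts against the target part's datasheet.
    print(result.stdout)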
I just started learning about OpenCores and various buses (Wishbone, AXI, etc.), and I'm surprised that many designs say Spartan-3 or some other vendor-specific FPGA. I thought an FPGA was an FPGA. Yet I see logic analyzers, open-source soft cores, etc. target specific devices. What gives?