While there are tools that help you compute expected power consumption, nothing beats contact with reality. I just didn't think about the "everything changing at the same time" scenario.
Sorry if I came across as critical; I was mostly curious.
In the last project I worked on with an FPGA (Xilinx Zynq) and cameras, the embedded system had a fairly strict power budget, and IIRC the whole design could dissipate something like 45W, which led to a rather impressive-looking enclosure that doubled as a heatsink. There was of course a margin, but the thermal design was driven by how much power could be run through the system.
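To put rough numbers on why 45W forces that kind of enclosure (the temperatures here are illustrative assumptions, not the actual project's specs):

```python
# Back-of-envelope: required enclosure-to-ambient thermal resistance.
# All values below are assumed for illustration.
P = 45.0              # W, total dissipated power
T_ambient_max = 50.0  # degC, assumed worst-case ambient
T_case_max = 85.0     # degC, assumed max allowed enclosure temperature

theta_ca = (T_case_max - T_ambient_max) / P
print(f"Required case-to-ambient resistance: {theta_ca:.2f} degC/W")
# ~0.78 degC/W is hard to hit passively without a lot of finned
# surface area, hence an enclosure that doubles as a heatsink.
```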
Tools have gotten better, of course. However, I think the fundamentals haven't really changed, in the sense that accurate up-front modelling is impossible outside of a few well-characterized domains. The risk is that you under- or over-design the thermal management. This is where experience with a design can be invaluable: as the design goes through iterations, you note actual performance, compare it against your assumptions and estimates, and make adjustments.
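A minimal sketch of that loop, where the block names, numbers, and the simple per-block correction are assumptions for illustration rather than any particular tool's method:

```python
# Compare tool estimates against bench measurements, per block,
# and carry the correction into the next iteration's estimates.
estimates = {"fpga_core": 12.0, "ddr": 4.0, "camera_if": 6.0}  # W, from tools
measured  = {"fpga_core": 15.5, "ddr": 3.8, "camera_if": 7.1}  # W, from bench

for block, est in estimates.items():
    act = measured[block]
    print(f"{block:10s} est {est:5.1f} W  meas {act:5.1f} W  "
          f"error {(act - est) / est:+.0%}")

# Per-block correction factors beat one blanket margin on the whole design.
correction = {b: measured[b] / estimates[b] for b in estimates}
```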
We tend to work on projects that are highly constrained in terms of things like allowable mass, so I can't just liberally apply a multiplier to the estimates and call it good. We need to know. With time you develop test suites that stress the system enough to produce pretty decent measured thermal data.
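The general shape of such a test is a soak-to-steady-state loop. In this sketch, read_board_temp() is a hypothetical stand-in (simulated so it runs end to end) for whatever sensor the platform actually exposes, and a real suite would be driving a worst-case workload underneath:

```python
import random
import time

def read_board_temp() -> float:
    # Hypothetical placeholder for a platform-specific sensor read,
    # simulated here so the sketch is self-contained.
    return 70.0 + random.uniform(-0.5, 0.5)

def thermal_soak(duration_s=1800, interval_s=10, settle_band=0.5):
    """Log temperature under sustained load, stopping at steady state."""
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        samples.append(read_board_temp())
        # Call it steady state once the last few samples sit in a narrow band.
        if len(samples) >= 5 and max(samples[-5:]) - min(samples[-5:]) < settle_band:
            break
        time.sleep(interval_s)
    return samples
```

For a quick dry run you'd call it with something like thermal_soak(duration_s=60, interval_s=1); the long defaults reflect how slowly real enclosures reach thermal equilibrium.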