The idea that huge quantities of computing power will lead to massively better intelligence is like saying huge quantities of barbecue sauce will lead to massively better spare ribs.
This is not what superintelligence people are worried about, in general. The human brain is already embarrassingly parallel, so raw parallel compute is clearly not the missing ingredient for intelligence. I'm sure you can find at least one person who will advocate for the "Moore's Law = Doom" scenario, but you won't find that argument endorsed by anyone currently working on AI Safety.