Your assumption is correct if you look at supercomputers: the fastest in the world in 1999 could produce ~2.3 TFLOPS, and in 2018 it could produce ~122 PFLOPS, which is roughly a 50,000x increase in FLOPS.
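(For what it's worth, that ratio is easy to sanity-check; a quick sketch in Python using the two figures quoted above:)

```python
# Ratio between the peak FLOPS figures cited above.
flops_1999 = 2.3e12    # ~2.3 TFLOPS, fastest supercomputer in 1999
flops_2018 = 122e15    # ~122 PFLOPS, fastest supercomputer in 2018

print(f"Increase: ~{flops_2018 / flops_1999:,.0f}x")  # ~53,000x
```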
But I doubt most of the people you would want going through this index have access to a supercomputer.
I wouldn't be surprised if the indexed subset of Facebook alone were more than 1000x larger than the entire indexed web 20 years ago. The web in general has probably grown by a factor in the millions, if not hundreds of millions.
Personally, I wouldn't mind if trash/spam sites like Facebook/Twitter were omitted from the database, along with non-English content, since I only speak English. Remove trash, spam, and non-English content from the db and that 300TB will shrink substantially, to the point where it's feasible for a single person to store. After all, even storing the whole 300TB db would only cost about $4000 in hard drives, which is not as out-of-reach as some people here are making it seem.
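(Rough back-of-the-envelope on that $4000 figure; the price per TB here is an assumption based on bulk consumer HDDs, not a quoted price:)

```python
# Approximate cost of buying enough HDD capacity for a 300 TB index.
index_size_tb = 300        # size of the hypothetical full index, in TB
price_per_tb_usd = 13.5    # assumed bulk HDD price per TB; adjust to current prices

print(f"~${index_size_tb * price_per_tb_usd:,.0f} in drives")  # ~$4,050
```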
That was a very different internet. Search engines aren't something you build once and then you just have them. Constant, extensive work is necessary. It's quite literally a global-scale task to do this effectively.