You still need to be able to transmit and verify the blockchain, so network capacity and CPU still matter. You also need decent I/O to maintain an index.
There are benefits to scaling on-chain, but, as above, it's not sustainable.
Transactions that don't need to be stored on-chain forever and a day would be better batched up, and that's what we should encourage (a rough sketch of the savings follows below). You can then at least still verify your address balances.
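To make the batching point concrete, here's a back-of-the-envelope sketch. The per-input and per-output byte figures are rough approximations I'm assuming for illustration, not exact serialization sizes.

```python
# Rough comparison of N separate payments vs. one batched transaction with N outputs.
# The byte figures are approximations (legacy-style sizes), not exact serialization rules.

TX_OVERHEAD = 10   # version, locktime, counts (approx.)
INPUT_SIZE = 148   # one P2PKH input with signature (approx.)
OUTPUT_SIZE = 34   # one P2PKH output (approx.)

def separate_txs(n_payments: int) -> int:
    """Bytes used if each payment is its own 1-input / 2-output tx (payment + change)."""
    return n_payments * (TX_OVERHEAD + INPUT_SIZE + 2 * OUTPUT_SIZE)

def batched_tx(n_payments: int) -> int:
    """Bytes used if all payments share one tx: 1 input, N payment outputs + 1 change output."""
    return TX_OVERHEAD + INPUT_SIZE + (n_payments + 1) * OUTPUT_SIZE

if __name__ == "__main__":
    for n in (1, 10, 100):
        sep, bat = separate_txs(n), batched_tx(n)
        print(f"{n:>3} payments: separate={sep:>6} bytes  batched={bat:>6} bytes  "
              f"saving={100 * (1 - bat / sep):.0f}%")
```

The exact numbers depend on script types, but the shape of the result is the same: the more payments share one transaction, the less permanent on-chain data each payment leaves behind.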
Based on your reply, I don't believe you researched the technologies I just listed. UTXO checkpoints, for example, would let you trustlessly bootstrap a node nearly instantly (from the last UTXO checkpoint). Even at a 1MB (+SegWit) block size, Bitcoin has an "infinite storage" problem that will continue to make running and bootstrapping nodes increasingly difficult. Should we decrease the block size cap to stave that off rather than look for solutions? I would say no. Instead, it's probably worth considering whether we need 100% of historic transaction data to be stored across all nodes, or even required to be directly checked by new nodes as they bootstrap from the genesis block. It makes a ton of sense to me that if you trust PoW (which you probably do if you're using Bitcoin), you can set a threshold for UTXO checkpoint depth and simply bootstrap from there (a rough sketch of that idea follows below).
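Here's a minimal sketch of what bootstrapping from a buried UTXO checkpoint could look like, assuming a header chain that commits to the UTXO set. Everything in it (Header, fetch_utxo_snapshot, CHECKPOINT_DEPTH, the placeholder commitment hash) is hypothetical and illustrative, not an existing node API.

```python
# Hypothetical sketch: bootstrap a node from a deeply buried UTXO checkpoint
# instead of replaying every block from genesis. All names and the commitment
# scheme are illustrative assumptions, not a real implementation.
from dataclasses import dataclass
from hashlib import sha256

CHECKPOINT_DEPTH = 10_000  # how deeply buried a checkpoint must be before we accept it

@dataclass
class Header:
    height: int
    prev_hash: bytes
    utxo_commitment: bytes  # assumed commitment to the UTXO set at this height
    pow_hash: bytes
    target: int

def verify_header_chain(headers: list[Header]) -> bool:
    """Check that each header links to its parent and meets its proof-of-work target."""
    for parent, child in zip(headers, headers[1:]):
        if child.prev_hash != parent.pow_hash:
            return False
        if int.from_bytes(child.pow_hash, "big") > child.target:
            return False
    return True

def bootstrap_from_checkpoint(headers: list[Header], fetch_utxo_snapshot) -> dict:
    """Pick the newest checkpoint buried by CHECKPOINT_DEPTH headers, verify the
    downloaded UTXO snapshot against its commitment, and use it as the starting state."""
    if not verify_header_chain(headers):
        raise ValueError("header chain fails proof-of-work or linkage checks")

    tip_height = headers[-1].height
    checkpoint = max(
        (h for h in headers if tip_height - h.height >= CHECKPOINT_DEPTH),
        key=lambda h: h.height,
    )

    snapshot = fetch_utxo_snapshot(checkpoint.height)  # e.g. downloaded from peers
    # Placeholder commitment: a real design would use a Merkle or rolling hash of the set.
    snapshot_hash = sha256(repr(sorted(snapshot.items())).encode()).digest()
    if snapshot_hash != checkpoint.utxo_commitment:
        raise ValueError("UTXO snapshot does not match the committed hash")

    # From here the node only needs to fully validate blocks above checkpoint.height.
    return snapshot
```

The depth threshold is the knob: the deeper the checkpoint is buried under valid proof-of-work, the more work an attacker would need to feed a new node a fake UTXO set, so each user can pick how much header-only history they require before trusting a snapshot.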
What seems lost in this discussion is the idea that there can be (and maybe should be) a middle ground. Scale on-chain to the extent that technology allows. Surely computers today can handle much more than Satoshi's computer in 2009, right? Simultaneously, building out off-chain scaling methods should also be encouraged. We don't need to put all our eggs in one basket.