I don't quite understand the argument: if there's no problem with increased size, why should we worry about size? People from the 1960s would be shocked to hear how large our software already is, but since we have the capacity to run it successfully, I'm glad we've freed programmers up to work more productively.
As far as complexity goes: there will always be costs to complexity in terms of agility and reliability. And using less compact data structures often leads to less complex code, anyway.
> As far as complexity goes: there will always be costs to complexity in terms of agility and reliability.
Yes, but I worry about a world where only the programmers bear the cost of the complexity. If reducing complexity does not benefit the user, it becomes harder to justify the time spent on it.
> And using less compact data structures often leads to less complex code, anyway.
I disagree, unless you're talking about extreme cases like fancy structure packing. More compact data structures mean less program state, and less program state means simpler state transitions and less redundancy.
Litmus test: if your program has entered a bad state, how much data do you have to inspect to discover what the inconsistency is? And how much code do you have to inspect to figure out how it happened?
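To make the litmus test concrete, here's a minimal C sketch (the struct and field names are hypothetical, not taken from any real codebase). The redundant version stores a length alongside the node chain, so every mutation is a chance to let the two drift apart; the compact version has a single source of truth and nothing that can become inconsistent:

```c
#include <stddef.h>

struct node { struct node *next; int value; };

/* Redundant representation: `count` duplicates information already
 * implied by the node chain. Every insert and remove must update
 * both, and one missed update leaves a bad state that surfaces far
 * from the code that caused it. */
struct list_redundant {
    struct node *head;
    size_t count; /* invariant: must equal the chain length */
};

/* Compact representation: one source of truth. The length is derived
 * on demand, so there is no sync invariant to violate and less data
 * to inspect when something goes wrong. */
struct list_compact {
    struct node *head;
};

size_t list_length(const struct list_compact *l) {
    size_t n = 0;
    for (const struct node *p = l->head; p; p = p->next)
        n++;
    return n;
}
```

The trade is an O(n) length query for one fewer invariant; whether that's a good deal depends on how often you ask.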
> a world where only the programmers bear the cost of the complexity.
It is the users who pay the ultimate price of complexity. Unreliable, expensive software with long release cycles costs them money, time, and happiness. A company that does not realize this is doomed. It doesn't matter if the unreliable software runs fast.
As a counterargument: when you are not restricted by performance, you can spend memory freely to remove complexity from your software.
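A hedged illustration of that trade (the function and table here are my own toy example, not from any particular program): spending a little memory on a lookup table can replace a ladder of conditionals, leaving data you can eyeball instead of logic you have to trace.

```c
#include <stdio.h>

/* Spend 12 ints of memory on a table instead of a branch ladder.
 * The table is data you can verify at a glance; the equivalent chain
 * of if/else statements is code you would have to step through. */
static const int DAYS_IN_MONTH[12] = {
    31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31
};

int month_days(int month /* 1..12 */, int is_leap_year) {
    if (month == 2 && is_leap_year)
        return 29; /* the one case the table can't encode */
    return DAYS_IN_MONTH[month - 1];
}

int main(void) {
    printf("February in a leap year: %d days\n", month_days(2, 1));
    return 0;
}
```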