Yes, but it's also thanks to this mentality that a simple text editor occupies a gig of RAM and two CPU cores nowadays. If the dev's machine is infinitely fast and has storage as fast as other people's RAM, they stop caring about performance issues, because they never encounter them in the first place. And I'm not talking about working on hand-written assembly for a week to squeeze out the last bit of performance. I've repeatedly found silly things like O(n) code calculating something that could be expressed as O(1), hilariously complicated XPath expressions that in the end just retrieved an immediate child, and whatnot.
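A minimal sketch of the O(n)-vs-O(1) kind of thing I mean (hypothetical function names, not from any real codebase): summing the first n integers in a loop versus just using the closed-form arithmetic-series formula.

```python
def sum_first_n_loop(n: int) -> int:
    # O(n): walks every value, even though a closed form exists
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


def sum_first_n_formula(n: int) -> int:
    # O(1): same result from the formula n * (n + 1) / 2
    return n * (n + 1) // 2


print(sum_first_n_loop(1000))     # 500500
print(sum_first_n_formula(1000))  # 500500
```

Trivial on purpose, but the real-world cases are the same shape: a loop (or a gnarly XPath) doing work whose answer was available directly.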
At one of my first jobs, when my boss got annoyed by some software being slow, he always said that developers should get the fastest hardware available, but as soon as they start testing or using their own software, the machine needs to magically turn into a 386 with your home directory on an NFS share. Granted, that was 20 years ago, so "386" didn't sound as hilarious as it does now, but it was still a little extreme. Still, the idea stuck with me, and to this day I test software on slow machines every now and then, looking for obvious performance left on the table.