There is, ultimately, a direct conflict between abstraction and efficiency. Abstraction gets its power from indirection: generalizations that stand in for specific cases. Making abstractions concrete means flattening all of those indirections. But there's a limit to our ability to automate the manipulation of symbols - we don't have Strong AI capable of the leaps of insight across abstraction levels needed to do the dirty, hackish work of low-level optimization. The fabled "sufficiently smart compiler" doesn't exist, and is unlikely to until we have Strong AI.
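To make the indirection point concrete, here's a minimal hypothetical C++ sketch (the types and functions are purely illustrative, not from anything above): the generic version pays a virtual dispatch per element for its abstraction, while the hand-flattened version is the specific-case code a low-level optimizer - human or machine - would need to reach on its own.

    #include <memory>
    #include <vector>

    // Illustrative only: the abstract version buys generality with indirection.
    struct Shape {
        virtual double area() const = 0;   // one virtual dispatch per use
        virtual ~Shape() = default;
    };

    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.141592653589793 * r * r; }
    };

    // Abstract: works for any Shape, but each element costs a pointer chase
    // plus a virtual call the optimizer may not be able to remove.
    double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
        double sum = 0.0;
        for (const auto& s : shapes) sum += s->area();
        return sum;
    }

    // Flattened by hand for the one case that actually occurs: no indirection,
    // trivially vectorizable - the specialization a "sufficiently smart
    // compiler" would have to discover by itself.
    double total_circle_area(const std::vector<double>& radii) {
        double sum = 0.0;
        for (double r : radii) sum += 3.141592653589793 * r * r;
        return sum;
    }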
I'll further submit that a design with clean abstractions and simple, small implementations either doesn't do much - to the point that it isn't very useful on its own - or, if it is useful, the complexity has simply moved somewhere else, perhaps into metadata, configuration, or tooling. It's as if there were a law of conservation of complexity: useful programs contain a certain amount of irreducible complexity that cannot be removed and remains after all the accidental complexity has been eliminated.
Looking at http://vpri.org/html/work/ifnct.htm, I think we have good reason to believe that we are currently several orders of magnitude above that amount of irreducible complexity.
So, while I mostly agree with what you just said, I don't think we've hit bottom yet. Silver-bullet-like progress still looks possible.