Yes, 99.99% of Java programmers do it wrong. That's because they are not programming, they are typing stuff into a computer to make sure next week's paycheck shows up on time.
Or they think "programming is fun" and want to solve a problem. So they solve it in whatever way they already know, publish their solution to teh intarwebs, and are done. The result is a solution to a problem, not easy-to-maintain code.
Writing good code requires practice, discipline, skill, taste, continuous re-evaluation of the design, extensive thinking, and extensive learning. Not many people care enough to do it right. The program works today, and if something needs to be changed tomorrow, well, they'll change it tomorrow.
The problem is not OO, the problem is that good software is never demanded (because people think it's impossible).
OO is inherently stateful, so reasoning about OO programs is just slightly easier than reasoning about assembler programs.
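To make that concrete, here's a tiny Java sketch (a made-up `Counter` class, not anyone's real code): the value a method call returns depends on hidden mutable state, so you can't reason about the call without knowing the object's whole history, much like you can't reason about a load instruction without knowing every prior store.

```java
// Hypothetical example: the result of next() depends on hidden mutable state,
// so no single call can be understood in isolation.
class Counter {
    private int count = 0;          // hidden state

    int next() {                    // same call, different result every time
        return ++count;
    }
}

// By contrast, a pure function's result depends only on its arguments.
class Pure {
    static int successor(int n) {
        return n + 1;               // always the same answer for the same input
    }
}
```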
OO was introduced as a way to control program behaviour: encapsulate effects, hide information about them, abstract them away, all while keeping them under control.
It does this with a single notion, the object, and with inheritance as the single mechanism of type derivation (i.e., of deriving the information you reason about).
This is hardly achievable even now, with all the power of Coq's type system; why should it be possible within a typical OO type system of the mid-'90s, or even the current .NET type system?
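Concretely, here's a minimal Java sketch of that idea, with made-up names (`Logger`, `ConsoleLogger`, `Service`): the effect is hidden behind an object, the information about it is reduced to a type, and inheritance is the only way a new variant of that type is derived.

```java
import java.io.PrintStream;

// Hypothetical example of "encapsulate the effect, hide it, abstract it away".
abstract class Logger {
    abstract void log(String message);        // the effect, abstracted away
}

class ConsoleLogger extends Logger {          // a variant derived by inheritance
    private final PrintStream out = System.out;

    @Override
    void log(String message) {
        out.println(message);                 // the hidden, controlled effect
    }
}

class Service {
    private final Logger logger;

    Service(Logger logger) {
        this.logger = logger;                 // callers see only the abstraction
    }

    void doWork() {
        logger.log("working");                // the effect happens, details hidden
    }
}
```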
Perhaps the reason OO has gained such a following is because it seems (to some) like the solution. Businesses have found they are bad at hiring programmers, because middle managers don't know how to spot the bad ones. So the best solution, in their minds, is to limit the amount of damage a bad programmer can do. In an imperative language, a bad programmer can wreak havoc. In OO, so the thinking goes, the disease is quarantined to a predefined set of functionality. Good programmers complain because OO ties the programmer's hands. But maybe that's the point.
No it isn't. Not yet. You have to define "OO" first. You have to point to a cluster in program-space worth defining, call that "OO", and call the rest "not OO". There could be fuzzy boundaries, but some programs have to be clearly OO, and some have to be clearly not.
I tried, and I hit two little snags:
(1) There is no agreed-upon "OO" cluster. Ask Alan Kay and Bjarne Stroustrup. Most programmers even make a purely syntactic (and meaningless) distinction, calling `foo.bar(x)` OO while calling `bar(foo, x)` not OO, or calling C++ classes OO and C structs + functions not OO, even when they don't use inheritance in C++ (see the little Java sketch after this list).
(2) Actually, "OO" is now meaningless. It has no interesting predictive power. No cluster in program space worth categorizing can reasonably be called "OO", because other existing terms will always be preferable. So we should stop using that term.
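To illustrate (1) with made-up names: in Java the two spellings below express exactly the same computation, so calling one of them "OO" and the other "not OO" is a statement about syntax, not about the program.

```java
// Hypothetical example: the same operation written "OO-style" and "non-OO-style".
class Account {
    private long balance;

    // foo.bar(x) style: the receiver comes first
    void deposit(long amount) {
        balance += amount;
    }

    // bar(foo, x) style: the "receiver" is just the first argument
    static void deposit(Account account, long amount) {
        account.balance += amount;
    }
}

class Demo {
    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);             // "OO"?
        Account.deposit(a, 100);    // "not OO"? Same behaviour either way.
    }
}
```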