Hacker News

There are static analysis tools like SonarQube which can measure code quality to an extent by looking for patterns that typically represent defects, or at least maintainability problems. But they usually can't detect deeper design flaws.



Such tools don't measure any objective code quality; they just measure to what extent the code conforms to criteria defined by the people configuring the tools. Do these criteria correspond to proven long-term maintainability benefits, or are they just unfounded opinions?


The tools will catch issues like excessively long methods or use of "magic numbers" instead of named constants. Most programmers would agree that those are maintainability issues, although I don't know if it's ever been proven. You can disable any rules you don't want.
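To make the "magic numbers" finding concrete, here is a minimal sketch of the kind of code such a rule flags and the conventional fix. The names and values are illustrative, not taken from any particular rule set.

```python
# Before: the meaning of 0.25 is opaque at the call site,
# and the value would have to be hunted down to change it.
def discounted_price_magic(price):
    return price * (1 - 0.25)

# After: a named constant documents intent and centralizes change;
# this is the refactoring a magic-number rule typically suggests.
SPRING_SALE_DISCOUNT = 0.25

def discounted_price(price):
    return price * (1 - SPRING_SALE_DISCOUNT)
```

Both versions behave identically; the rule is purely about readability and ease of change.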


Yeah, but such rules are still just unfounded opinions that have been codified, with human judgement removed. What is an "excessively long" method, for example? If a method conforms to good design principles like separation of concerns and low coupling / high cohesion, no particular length is too long. If a tool forces you to split a cohesive method into multiple methods, you have increased accidental complexity. The tool will tell you that you have improved some arbitrary "code quality" metric, but you have actually decreased maintainability.
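A small sketch of the point above: splitting a cohesive routine purely to satisfy a length metric can force local state to be threaded through extra parameters and return values. All function names here are made up for illustration.

```python
# Cohesive version: one pass over the data, state lives in locals.
def summarize(values):
    total = 0.0
    count = 0
    for v in values:
        total += v
        count += 1
    return total / count if count else 0.0

# Split to appease a length rule: the same state must now be
# packed into a tuple and unpacked again, adding indirection
# without changing behavior.
def _accumulate(values):
    total = 0.0
    count = 0
    for v in values:
        total += v
        count += 1
    return total, count

def _finalize(total, count):
    return total / count if count else 0.0

def summarize_split(values):
    return _finalize(*_accumulate(values))
```

Whether the split is an improvement depends on whether the pieces are independently meaningful, which is exactly the judgement a line-count threshold cannot make.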



