
He measures code 'simplicity' by how much work it makes the CPU do, not by some made-up metric like 'readability'.



How exactly is readability made up?


This is a good question to ask, if not rhetorical.

When measuring rigorously, the quantities you work with need to be quantifiable aspects of the world. For example, you can quantify how much physical space your code or compiled output takes up in memory, and use those quantities as base units to derive others. By branching from a quantifiable root, you can derive metrics such as lines of code or number of CPU instructions, and they still retain that quantifiable character, meaning there’s a clear path back to the quantifiable root.

Needless to say, readability, as a metric, is not branched from quantifiable aspects of the world. So in a sense, it is still a “made up” metric because (as of today), there’s no way to trace it down to the quantifiable measurements.


> Needless to say, readability, as a metric, is not branched from quantifiable aspects of the world. So in a sense, it is still a “made up” metric because (as of today), there’s no way to trace it down to the quantifiable measurements.

It is, actually. "Readability" for me means "how long it takes someone who didn't write the code to understand it, make changes, and add features". It's a fuzzier metric of course, as anything involving humans is, but that's also usually the kind of metric that matters a lot.

That also means it's not a binary readable/non-readable thing. A way of measuring "readability" could be: assuming all other variables are equal, what percentage of new hires are able to add new features after one month?


I’d note that something being hard to measure doesn’t make it “made-up” or unimportant. And concentrating on things that are easy to measure doesn’t make them more important; in fact, it can bias things badly.


I think the poster is using the term “made-up” in a stricter sense to categorize it, not just to be dismissive.


Readability can be quantified, and static code analyzers do it with a metric known as cognitive complexity [0], which measures things like the number of branches in your function. The thing is, you can write code of low complexity that is still hard to read.

[0] https://tomasvotruba.com/blog/2018/05/21/is-your-code-readab...
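
To make that last point concrete, here's a made-up TypeScript sketch (not from the linked article; names are hypothetical). A branch-counting metric scores the first function as having zero branches, yet most readers will follow the second one faster:

    // Branch-free "max" trick; a branch-based complexity metric rates this
    // as trivially simple. Only valid when a - b fits in a signed 32-bit
    // integer.
    function maxBranchless(a: number, b: number): number {
      const diff = (a - b) | 0;
      return a - (diff & (diff >> 31));
    }

    // One branch, so a higher "complexity" score, but far easier to read.
    function maxPlain(a: number, b: number): number {
      if (a > b) {
        return a;
      }
      return b;
    }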


Cyclomatic complexity is a good metric for local readability, but for assessing the readability and ease of modification of whole programs, or even single files, it's far from good enough.

For an extreme example: you can turn all the methods of a complex program into one-liners, and all your classes into one-method classes. But doing that will definitely make your program harder to read.
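
A toy illustration of that (made-up TypeScript, hypothetical names): each fragment scores as trivially simple on a per-method complexity metric, but the reader has to chase the logic across the whole class.

    // All the logic in one place:
    function totalWithTax(prices: number[], taxRate: number): number {
      const subtotal = prices.reduce((sum, p) => sum + p, 0);
      return subtotal * (1 + taxRate);
    }

    // The same logic chopped into one-liners: each piece is "simple",
    // but the whole is harder to follow.
    class TotalCalculator {
      constructor(private prices: number[], private taxRate: number) {}
      total(): number { return this.applyTax(this.subtotal()); }
      private subtotal(): number { return this.prices.reduce((s, p) => this.add(s, p), 0); }
      private add(sum: number, p: number): number { return sum + p; }
      private applyTax(amount: number): number { return amount * this.factor(); }
      private factor(): number { return 1 + this.taxRate; }
    }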


Sure. You can trace branches to quantifiable roots, but you can’t trace readability. To me, readability means something different than what it means to the author of that article.


Simple example: I love the ternary operator and use it quite a lot in simple "if/then" scenarios. However some people hate them because they consider them harder to read than the fully written out if/then form. Those people would judge my code less readable.


Ternary operators are better for expressions, which yield a result, since you don't have to declare or initialize a variable with a dummy value first. If/else is better for general branching. Using a ternary without assigning or passing the expression would imho be misleading. I've seen ternaries used for branching and it's a smell imho; they're not really meant for that use case.
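
To illustrate with a made-up TypeScript snippet (hypothetical names):

    function logSingular(): void { console.log("one item"); }
    function logPlural(): void { console.log("many items"); }

    const count = 3; // hypothetical value

    // Ternary as an expression: the result is used, no dummy initialization.
    const label: string = count === 1 ? "item" : "items";

    // Ternary used purely for branching, value discarded -- the "smell":
    count === 1 ? logSingular() : logPlural();

    // Plain if/else reads better when only side effects are wanted:
    if (count === 1) {
      logSingular();
    } else {
      logPlural();
    }
    console.log(label);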


Code readability is, at the very least, extremely subjective: one person's highly readable code is another person's incomprehensible mess.


It is not made up but different people can have different opinions on what's readable and what's not.

If the metric is supposed to be objective, then the number of CPU cycles used is probably the simplest metric there is for computers.


Right, it’s a communication problem, because calling code “simple” conjures different ideas in people’s heads. Code that is simple for computers (principle of least work) is not necessarily simple (principle of comprehension?) for humans.


There is a lot of overlap though: it's often harder to understand how highly abstracted code actually works than 'unrolled', verbose code composed of simple operations (which is closer to machine code, thus the 'overlap').

It might be easier to understand the 'intent' of highly abstracted code, but this doesn't mean the code behaves as intended, and IMHO 'readability' is about understanding what the code actually does, not what it is supposed to do.
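
A small made-up TypeScript example of the contrast I mean (hypothetical names, nobody's real code): the abstracted version states the intent compactly, while the unrolled version makes every step that actually happens visible.

    // Abstracted: intent is compact, but intermediate arrays, evaluation
    // order and per-element closures are hidden behind the combinators.
    const totalAbstract = (prices: number[]): number =>
      prices.filter(p => p > 10).map(p => p * 0.9).reduce((s, p) => s + p, 0);

    // "Unrolled": closer to what the machine does; every step and
    // intermediate value is visible.
    function totalUnrolled(prices: number[]): number {
      let total = 0;
      for (const price of prices) {
        if (price > 10) {
          total += price * 0.9;
        }
      }
      return total;
    }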


I think all these things are aspects that are interrelated. A principle of abstraction is another good one. Others I can think of are principle of least surprise and principle of least work done by the compiler (lol “zero cost” abstractions).


We might be thinking about different things here, so let me first ask this: What do you want to measure?


How can you objectively quantify it?



