The post explicitly says that assessment (comprehension with the purpose of making a decision about a situation around a system) is not reading. However, it does say that people currently conflate the two, because nobody talks about the distinction, to the point that reading serves as a proxy for measuring comprehension effort.
You are correct in saying that I argue that it is not appropriate to employ reading as a main means for assessment.
Code is certainly not literature, but it should still be studied. In fact, assessment is specifically about intent. If the intent is different, such as learning a new language, reading is appropriate. Reading is also appropriate when the problem fits on one screen. It starts to be inappropriate as soon as you have to scroll.
I also do not say that tools should be limited to code. Every aspect of a system, including its history, runtime, tickets, and customer feedback, is data, and it is all relevant. We should be able to easily integrate any of these into our reasoning.
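As a minimal sketch of the kind of integration I mean (assuming a git repository; pairing commit counts with file sizes is just one illustrative combination, not how any particular tool does it), history and code can be queried together in a few lines of Python:

```python
import subprocess
from collections import Counter
from pathlib import Path

def commit_counts(repo="."):
    # Count how many commits touched each file, using plain `git log`.
    out = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

def churn_vs_size(repo="."):
    # Pair each still-existing file's change frequency with its current size.
    counts = commit_counts(repo)
    return sorted(
        ((n, Path(repo, f).stat().st_size, f)
         for f, n in counts.items() if Path(repo, f).exists()),
        reverse=True,
    )

for commits, size, name in churn_vs_size()[:10]:
    print(f"{commits:4} commits  {size:8} bytes  {name}")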
I agree with the observation that code can vary greatly. In fact, it is for this very reason that out-of-the-box clicking tools will always fail to provide meaningful value. They bake the question into the click, but because context is unpredictable, we simply do not know the question before we have the problem. That is why the specific tool we need should come after the problem, not before it.
And yes, a system is a phenomenon that should be approached through the scientific method (this is the essence of what moldable development is). Developers are already doing that implicitly. We should just make it explicit. All sorts of possibilities will arise after that.
IDK why someone downvoted you. Thanks for these thoughts.
I guess I would only add the distinction that you're discussing "comprehension with the purpose of making a decision about a situation around a system". But sometimes we legitimately want to build comprehension without yet having a specific purpose or decision (e.g. when onboarding a team onto an existing code base, or trying to understand how a technique works). Even then, reading is a tempting but inadequate path to building understanding.
You are raising an important point. When you do not have a hypothesis, the first thing you want to do is get one :). It's like in research: the greatest problem you can have is not having a problem.
Now, how do you get a hypothesis?
You can start from some generic visualizations. The goal here is not to gain understanding, but to poke around for interesting initial questions.
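For example, a throwaway sketch along these lines (assuming a Python codebase and matplotlib, both illustrative choices of mine) can already surface outliers worth asking about:

```python
from pathlib import Path
import matplotlib.pyplot as plt

# Size up every Python file in the project; the goal is not understanding
# itself, but spotting outliers that suggest a first question to ask.
files = sorted(Path(".").rglob("*.py"),
               key=lambda p: p.stat().st_size, reverse=True)[:20]

plt.barh([p.name for p in files], [p.stat().st_size for p in files])
plt.xlabel("size (bytes)")
plt.title("Largest source files: why are these so big?")
plt.tight_layout()
plt.show()
```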
But you actually always know something. You likely know the domain. Or you know the latest tickets in progress. Even listening in on casual conversations is a good starting point.
When we train people, we literally start from the very issue they are working on. Within 15 minutes, we typically find an interesting hypothesis to check. For example, a dialog could go like this:
A: What do you work on?
B: A UI refreshing bug.
A: What do you think happens?
B: I do not know.
A: Why are you looking at this specific screen? (This is a key question. People often do not know why this screen and not another. If you have a 250,000 LOC system, you likely have some 5,000 other screens you could potentially look at. Not knowing why this one is potentially interesting is not a good thing.)
B: Because I think maybe it's related to how we subscribe to events.
A: What do you expect the event subscription to look like?
B: It should always happen in a method called xyz that is provided by the framework.
A: In all classes?
B: Ah, no. Just in components.
A: OK, so you want to find the event handlers that are not defined in xyz in subclasses of the component superclass.
B: Ah, right.
It's actually remarkably straightforward. Just try it.
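To show how quickly such a hypothesis becomes a checkable query, here is a minimal sketch in Python (it reuses the xyz and component names from the dialog above, but the ast module and the crude "mentions event" heuristic are my own illustrative assumptions; in a moldable environment you would express the same query against the live system instead):

```python
import ast
from pathlib import Path

SUPERCLASS = "Component"  # assumed name of the component superclass
ALLOWED = "xyz"           # the framework-provided subscription method

def suspicious_handlers(root="."):
    # Find methods outside `xyz` in Component subclasses that mention events.
    hits = []
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if not isinstance(node, ast.ClassDef):
                continue
            bases = {b.id for b in node.bases if isinstance(b, ast.Name)}
            if SUPERCLASS not in bases:
                continue
            for item in node.body:
                # Crude textual heuristic: any method other than `xyz`
                # whose body mentions "event" is worth a look.
                if (isinstance(item, ast.FunctionDef)
                        and item.name != ALLOWED
                        and "event" in ast.dump(item).lower()):
                    hits.append((path, node.name, item.name))
    return hits

for path, cls, method in suspicious_handlers():
    print(f"{path}: {cls}.{method}")
```

The point is not this particular script, but that the question from the dialog maps directly onto a small, disposable tool.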