I find this is typically due to a difference in the level of abstraction people are modeling in their heads. A great coder can flip back and forth between several different levels (the 50 million lines of OS code, the 100,000 lines of app code, the 1,000 lines of local code, etc.) and have a reasonable idea of how those parts fit together.
Not a complete understanding, just a basic idea.
This is why mediocre programmers are what they are: they lack this ability and end up "flailing around", poking at somewhat random issues, bouncing back and forth, unable to see the pattern, and so spend more time writing code that eventually has to be rewritten.
A great programmer will systematically take a problem apart, so that even if they don't solve it immediately, they've at least narrowed down the space of possibilities for the next attempt.
This is also why great programmers are always great debuggers.
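To make that concrete, here's a minimal sketch of the "narrow the window" idea, in the spirit of git bisect. The revision list and the is_broken predicate are hypothetical stand-ins, purely for illustration:

    # Bisect-style search: each check rules out half of the remaining
    # revisions. is_broken stands in for "run the test at this revision".
    def first_bad(revisions, is_broken):
        # Precondition: every revision before the culprit passes,
        # every revision from the culprit onward fails.
        lo, hi = 0, len(revisions) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if is_broken(revisions[mid]):
                hi = mid        # culprit is at mid or earlier
            else:
                lo = mid + 1    # culprit is strictly after mid
        return lo

    revisions = list(range(100))
    print(first_bad(revisions, lambda r: r >= 73))  # -> 73

Seven checks pinpoint one revision out of a hundred; even an unfinished session leaves a far smaller window than you started with.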
Good points. Just to elaborate a bit: mediocre programmers tend to end up writing five functions that do almost the same thing, because they lack the vision to see how it all fits together and come up with a single, elegant solution. The code may well accomplish the immediate task, but the lack of clarity and planning makes it fragile if modification or extension is ever necessary. They tend to produce a lot of unnecessary lines of code that become a maintenance liability as the project grows.
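To illustrate (the data and field names here are invented for the example), this is the shape of that duplication next to the single function that replaces it:

    # Two of the "five functions that do almost the same thing":
    def march_sales(rows):
        return sum(r["sales"] for r in rows if r["month"] == 3)

    def march_refunds(rows):
        return sum(r["refunds"] for r in rows if r["month"] == 3)

    # The single, parameterized version:
    def monthly_total(rows, month, field):
        return sum(r[field] for r in rows if r["month"] == month)

    rows = [
        {"month": 3, "sales": 100, "refunds": 5},
        {"month": 3, "sales": 250, "refunds": 0},
        {"month": 4, "sales": 90,  "refunds": 2},
    ]
    assert monthly_total(rows, 3, "sales") == march_sales(rows)      # 350
    assert monthly_total(rows, 3, "refunds") == march_refunds(rows)  # 5

When the month logic changes (fiscal calendars, time zones), the first version has to be fixed in five places; the second, in one.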