Not quite sure what point the author is trying to make. He agrees that lines of code are a bad measure of productivity, yet claims that the average he computes can help him predict future development times. Then he explicitly points out that different parts of the codebase required different amounts of work, apparently unrelated to their line count, but never connects this to the points he made earlier. Also, what is up with the random comment about code coverage at the end? That doesn't fit with the rest of the article either...
I think they're stuck with the same problem as every dev manager. LoC are pointless/futile/etc., but they're really easy to measure, and you've got to measure something, right?
Like measuring productivity by how many hours people spend at their desks. Utterly pointless, but really easy, so it becomes the default measure.
Trying to explain to professional managers that there is no foolproof way of measuring developer productivity is a really hard conversation that I've had more than once. I'm assuming the OP's target market is exactly these people, so I don't really blame them for succumbing to the pressure.
Even the assertion that knowing the lines of code per day helps with estimation seems puzzling. How do you know in advance how many lines the finished product will eventually have?