This is always an issue I take with people claiming that some technology is impossible because of <insert proof reducing technology to halting problem>: type checking, code optimization, automatic proofs, etc. That's short-sighted.
As long as you reduce the power of your technology from "yes/no" to "yes/no/don't know" you're good to go.
Some optimizations may be impossible to apply in general due to the halting problem. But as long as you realize that the optimizer might return "I don't know if I can apply this", you can conservatively apply the optimization in the cases where the optimizer actually is sure.
Same for type checking. You can add a compiler warning: "Could not automatically prove type correctness." Done. Now you can automatically check the 99% of cases where there are no halting problem issues, and leave the remaining 1% to the humans.
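To make the three-valued idea concrete, here's a minimal Python sketch (all names like Verdict, Loop, and maybe_unroll are invented for illustration, not any real compiler's API): the analysis answers yes/no/don't know, and the transform is applied only on a definite yes.

```python
# A minimal sketch of the "yes/no/don't know" idea. Everything here is made up
# for illustration; it is not any real compiler's API.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    YES = 1
    NO = 2
    DONT_KNOW = 3

@dataclass
class Loop:
    constant_trip_count: Optional[int]  # known bound, if the frontend found one
    source: str

def loop_terminates(loop: Loop) -> Verdict:
    """Toy termination check: answers YES only for trivially bounded loops."""
    if loop.constant_trip_count is not None:
        return Verdict.YES
    return Verdict.DONT_KNOW  # undecidable in general, so we just punt

def maybe_unroll(loop: Loop) -> Loop:
    if loop_terminates(loop) is Verdict.YES:
        # Safe to transform: the analysis is actually sure.
        return Loop(loop.constant_trip_count,
                    loop.source + "  # unrolled x" + str(loop.constant_trip_count))
    # DONT_KNOW (or NO): conservatively leave the code alone.
    return loop

print(maybe_unroll(Loop(4, "for i in range(4): f(i)")).source)
print(maybe_unroll(Loop(None, "while g(): f()")).source)
```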
You have missed an important point: Your technology is not yes/no/dunno, but yes/no/dunno/loop. Turing-completeness implies Rice's Theorem, which is basically a fancy way of saying that you can't escape Turing's results just by claiming that you're solving an "easier" problem as long as that problem is still non-trivial.
I'm sorry that you think our amazing farsighted glimpse of the edge of computational infinity is "short-sighted", but in reality these upper bounds are very useful for reminding us of the guaranteed futility of certain tasks.
Moreover, I'm happy to point out that a problem need not be Turing-complete to be computationally intractable, even if it is decidable. Compilers happily cheat on optimizations, register layout, etc. if it means not having to run expensive algorithms. When a type-checker is accidentally quadratic-time, people roll their eyes and fix it; when it's intentionally exponential-time, people will avoid it entirely.
Except you can bound it, and you can escape it that way.
IE "not gotten anywhere after 30 iterations, so don't know".
Otherwise, you couldn't, for example, ever compute the tripcount for a loop without risking running forever.
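For instance, a trip-count query can be given a fuel budget and report "don't know" when the budget runs out. A toy sketch, with made-up names, not any particular compiler:

```python
# Hypothetical sketch of a bounded trip-count query: evaluate the loop's exit
# condition for at most `fuel` iterations and give up ("don't know", here None)
# if no answer is reached. Nothing here is a real compiler API.
from typing import Callable, Optional

def trip_count(cond: Callable[[int], bool], fuel: int = 30) -> Optional[int]:
    """Return the number of iterations before `cond` becomes false,
    or None ("don't know") if it hasn't terminated within `fuel` steps."""
    for i in range(fuel):
        if not cond(i):
            return i
    return None  # budget exhausted: the safe "don't know" answer

print(trip_count(lambda i: i < 10))   # 10 -- found within the budget
print(trip_count(lambda i: True))     # None -- would otherwise loop forever
```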
" Compilers happily cheat on optimizations, register layout, etc."
They don't cheat. They just make the problem easier at a cost of optimality.
Or, they discover they didn't need to solve the problem they thought they did.
A great example is register allocation.
Graph coloring is NP-complete.
Deciding whether a given graph admits a k-coloring is NP-complete for every fixed k except k ∈ {0,1,2}.
Computing the chromatic number (i.e., just figuring out the minimum number of colors for a graph) is NP-hard.
That was the state of the art, right up until a number of people proved that the interference graphs of programs in SSA form (a linear-time transformation) are chordal, and chordal graphs can be colored optimally in linear time (a quick sketch follows below).
A small difference.
So, often, these upper bounds that seem to make a task "futile" are not as damning as one thinks.
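For the curious, here's a toy sketch of why the SSA result works (the example graph and code are invented, not taken from any real allocator): SSA interference graphs are chordal, and a chordal graph is colored optimally by a greedy pass along a maximum-cardinality-search order.

```python
# Toy sketch: maximum cardinality search gives a (reverse) perfect elimination
# order of a chordal graph, and greedy coloring along that order is optimal.
# The interference graph below is a made-up example.

def mcs_order(graph):
    """Maximum cardinality search: repeatedly pick the unvisited vertex with
    the most already-visited neighbors."""
    visited, order = set(), []
    weight = {v: 0 for v in graph}
    while len(order) < len(graph):
        v = max((v for v in graph if v not in visited), key=lambda v: weight[v])
        order.append(v)
        visited.add(v)
        for u in graph[v]:
            if u not in visited:
                weight[u] += 1
    return order

def greedy_color(graph, order):
    """Give each vertex the smallest color unused by its earlier neighbors.
    On a chordal graph with an MCS order this uses exactly chi(G) colors."""
    color = {}
    for v in order:
        used = {color[u] for u in graph[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Interference graph of a tiny hypothetical SSA program (chordal).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
coloring = greedy_color(graph, mcs_order(graph))
print(coloring, "registers used:", 1 + max(coloring.values()))
```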
All type checkers for sophisticated dependent type theories (e.g. MLTT) will blow through any bound on time or memory you try to place on them, let alone a quadratic or exponential one. Somehow nobody seems to complain about how long they take to type-check. Both decidability and worst-case time bounds are red herrings when it comes to a good type system (even for proof assistants).
Exactly. E.g., program analysis tools can tell you either that your code does not have bugs or that it might have bugs (for a very large set of potential errors).
For example, the Astrée static analyzer has checked the whole 100K LOC fly-by-wire Airbus software and determined there were only a few potential bugs among the classes of errors tested. That software was written in a Turing-complete C subset.
The key point to "get through" the halting problem in program analysis is to err on the safe side. That is, to allow false positives.
All type systems have false positives by their nature. The trick is...humans have to be able to reason about their type errors so they can fix them. If the type system is too expressive, reasoning about type errors can be incredibly difficult.
When I worked for Coverity, a lot of time and resources were spent on consulting to help customers understand the errors they were getting, and to identify and reduce false positives. Usability is key, or no one can/will use your stuff.
Yes, it's similar to the fallacy committed when people say "I have reduced my problem to an instance of graph colouring, therefore my problem is NP-complete."
You have it the opposite way: if you can reduce an NP-complete problem to your problem, then your problem is NP-hard (and NP-complete if it's also in NP). Is that what you meant by a fallacy?
That's the difference between theory and the real world. In theory something may not be perfect, but it can still be good enough for practical use.
For instance, planetary motion is not really well described without solving N-body problems and taking relativity into account, and yet Newtonian dynamics and slide rules were enough to land us on the Moon.
I agree with your basic point, but I think it's not about theory vs. the real world so much as confusing the worst case with the practical case.
I often think that approximation algorithms, randomized algorithms, fixed-parameter tractability, and "special case" deterministic algorithms are what people should look at if they are interested in positive solutions (as opposed to lower bounds). Not to mention the incredible success of SAT solvers.
Well, there are two problems there: you can easily "reduce" a P problem to an NP-complete one; it's only when you go the other way around (reduce graph coloring to an instance of problem X) that you have anything to worry about. But yes, worst-case time bounds for many problems are devilishly hard to hit even when you're trying.
Regarding the last paragraph: if the type system is Turing-complete, then the type-correctness of the program _may_ depend on the output of an arbitrary Turing machine. Therefore, your type checker must either be incomplete or risk running forever, since producing an answer can require running that arbitrary Turing machine to completion.
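A toy sketch of that dilemma (an invented mini type-level language, not any real type system): the checker either runs the type-level computation to a normal form, risking non-termination, or cuts it off after a fuel budget and admits it doesn't know.

```python
# Toy illustration: if types can contain arbitrary computation, the checker
# must either run that computation (and possibly loop) or cut it off and
# admit incompleteness. The mini "type-level language" here is invented.
from typing import Callable, Optional

def normalize_type(step: Callable, state, fuel: int = 1000) -> Optional[str]:
    """Run a type-level computation for at most `fuel` steps.
    Returns the resulting type, or None for "don't know" (incomplete)."""
    for _ in range(fuel):
        state = step(state)
        if isinstance(state, str):   # reached a normal form, e.g. "Int"
            return state
    return None

def check(expected: str, step: Callable, state) -> str:
    actual = normalize_type(step, state)
    if actual is None:
        return "warning: could not prove type correctness (gave up)"
    return "ok" if actual == expected else f"type error: {actual} != {expected}"

# A type-level computation that terminates quickly...
print(check("Int", lambda n: n - 1 if n > 0 else "Int", 3))
# ...and one that never would (a fuel-less checker would loop here).
print(check("Int", lambda n: n + 1, 0))
```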
Don't let perfect be the enemy of the good.