
I think Rice's theorem implies that proving the undecidability of the halting problem is equivalent to proving the undecidability of any other non-trivial semantic property of a program (*). So this discussion is basically just splitting hairs as you said.

https://en.wikipedia.org/wiki/Rice%27s_theorem#Proof_by_redu...

(*) > Rice's theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is one about the program's behavior (for instance, "does the program terminate for all inputs?"), unlike a syntactic property (for instance, "does the program contain an if-then-else statement?"). A non-trivial property is one which is neither true for every program, nor false for every program.




It’s worth noting that the undecidability of the halting problem doesn’t prevent computing scientists from proving that a program halts or has some other nontrivial property.


In general, it absolutely does. I'm not sure what you're trying to say.

(1) Yes, there are classes of programs for which you can say whether it halts

(2) Yes, there are programs that do not fall into those classes but can still be shown to halt

But the point remains... there's no 'general' way to show that any program halts.

Part of the point of good PL design is to produce a language that is amenable to analysis, including halting analysis. Some languages do this better than others. Others are okay so long as you make particular assumptions. The halting problem tells us that the languages that are perfectly analyzable are categorically less powerful than the ones that are not. However, this may be 'enough' for us.


> In general, it absolutely does. I'm not sure what you're trying to say.

I can see that you don't so I'll try to be clearer. For any given program one might choose there is no reason in principle why a competent computing scientist can't perform a semantic analysis for every statement and then deduce from that whether it is totally or partially correct. Obviously most programs in the set of all programs are too long for a computing scientist to read in a human lifetime, but that's another issue entirely.

As a matter of practicality though, the working computing scientist is far more likely to first decide on the properties he wants his program to have and then derive a program that has them. This is yet another case of construction being considerably easier than verification.

This points to (but doesn't prove) the possibility that the human computing scientist isn't using a decision procedure to analyze (or construct) the program.

About here is where I expect to see some misinterpretation of Church-Turing as somehow proving that everything, including computing scientists, is a Turing machine brought up. Oddly, rarely is the far more interesting Curry-Howard correspondence mentioned.


I'm sure you know this, but even very simple/short programs can have complex behaviours that put them out of reach of computer scientists with present-day knowledge. For example, short programs can encode things like the Goldbach conjecture, the Collatz conjecture, or the Riemann Hypothesis. For example, does this program (written in Python, but translate to your computational system) halt?

    # Halts iff some even number >= 4 is NOT the sum of two primes,
    # i.e. iff the Goldbach conjecture is false.
    def isprime(n): return n > 1 and all(n % d for d in range(2, n))
    def goldbach(n): return any(isprime(p) and isprime(n - p) for p in range(2, n))
    n = 4
    while goldbach(n): n += 2
(See also the recent https://www.quantamagazine.org/amateur-mathematicians-find-f... which describes how hard it was to prove things even about puny 5-state Turing machines, and that already with just 6 states there is one "antihydra" that encodes Collatz-like behaviour and will probably be hard to prove anything about.)


Absolutely. And frankly I think that's very exciting. If I had to lay a wager, I reckon that if those problems are ever solved by human intelligence, the most likely method will be through taking advantage of that correspondence and proving that some tractable representation of the appropriate program halts.

Like many people I had a casual go at it for Collatz. It was a humbling experience of course, but it did nothing to convince me some more capable person can’t eventually manage it.


If you fix a particular axiom system for deriving your termination proofs (e.g. because you write them all down in Coq), then for Gödelian reasons, there are programs that don't terminate but for which a nontermination proof doesn't exist (and thus can never be found). If Coq is consistent, then the program that enumerates all possible Coq proofs and stops as soon as it finds a contradiction, is one such program.
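A toy sketch of the shape of that program, in Python. The `check_proof` function here is a hypothetical stub (a real one would be a mechanical verifier for Coq derivations); the point is only the structure: enumerate candidate proofs and halt on a contradiction.

```python
from itertools import count

def check_proof(candidate: bytes) -> bool:
    # Hypothetical stub: returns True iff `candidate` encodes a valid
    # proof of a contradiction in the fixed axiom system. A real
    # implementation would be a mechanical proof checker.
    return False

def search_for_contradiction(max_steps=None):
    # Enumerate all byte strings (hence all candidate proofs) and halt
    # as soon as one proves a contradiction. If the system is consistent
    # this runs forever -- and by Goedel's second incompleteness theorem
    # the system itself cannot prove that it runs forever.
    for i in count():
        if max_steps is not None and i >= max_steps:
            return None  # demo bound only; the real program is unbounded
        candidate = i.to_bytes((i.bit_length() + 7) // 8 or 1, "big")
        if check_proof(candidate):
            return candidate  # contradiction found: system inconsistent

assert search_for_contradiction(max_steps=1000) is None
```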


> I can see that you don't so I'll try to be clearer. For any given program one might choose there is no reason in principle why a competent computing scientist can't perform a semantic analysis for every statement and then deduce from that whether it is totally or partially correct.

'Correct'? In what sense? 'Correctness' and halting have little to do with each other. For example:

    def f(p):
        p()
        return 2
is 'correct' if the definition of f is to evaluate to two. But it's not clear it's ever going to halt (depends on haltingness of p).

But that being said, actually no, there are many reasons in principle why a competent computer scientist would not be able to perform a semantic analysis for every statement and deduce from that whether the program will halt.

> Obviously most programs in the set of all programs are too long for a computing scientist to read in a human life time, but that's another issue entirely.

Irrelevant because even an immortal computer scientist would be unable to determine if the program halts.

> This points to (but doesn't prove) the possibility that the human computing scientist isn't using a decision procedure to analyze (or construct) the program.

The difference between a computer and a person (and I'm going to ignore all the various philosophies of consciousness) is that a human will 'arbitrarily' determine under what system he/she wants to operate, and thus can change axioms on the fly. Of course, one could write a computer system that does this as well. It's really not obvious what you're saying here.

> About here is where I expect to see some misinterpretation of Church-Turing as somehow proving that everything, including computing scientists, is a Turing machine brought up. Oddly, rarely is the far more interesting Curry-Howard correspondence mentioned.

Computer scientists may or may not be Turing machines, but any deterministic procedure they follow can be encoded as a Turing machine, so your distinction is irrelevant.

I'm not sure what Curry-Howard really has to do with this, but in general, for a theorem checker, you should make sure your proofs are total.


> ‘Correct'? In what sense? 'Correctness' and halting have little to do with each other.

Confident ignorance isn’t a good look. To help you out, here you go from the wiki[1]:

  Within the latter notion, partial correctness, requiring that if an answer is returned it will be correct, is distinguished from total correctness, which additionally requires that an answer is eventually returned, i.e. the algorithm terminates. 
To be fair to your time and mine—I stopped reading your reply there based on the principle that nonsense follows nonsense, so you needn’t elaborate further.

[1] https://en.m.wikipedia.org/wiki/Correctness_(computer_scienc...


This is a non-productive comment. You said 'correctness', not 'total correctness'. Totality absolutely relates to halting. 'correctness' itself doesn't imply totality.

But regardless, it's pretty clear from your ramblings that you're not well-versed in this stuff. As for my part, I implement theorem provers and programming languages, so what do I know


But do we need general ways to show that any program halts? We don't write general programs

In particular, our programs have a limited size and use a limited amount of memory (or else the OS will make sure they will halt..). And for this specific class of programs, the halting problem is actually decidable!
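That decidability is easy to make concrete: a deterministic program with finitely many reachable states either halts or revisits a state, and a revisited state means it loops forever. A minimal sketch (the `step`/state encoding is illustrative, not any real API):

```python
def halts_bounded(step, state):
    # Decide halting for a deterministic program with a finite state
    # space. `step` maps a state to the next state, or to None when the
    # program halts. Since states are finite, the run either halts or
    # revisits a state; a revisited state means an infinite loop.
    seen = set()
    while state is not None:
        if state in seen:
            return False  # state repeated: loops forever
        seen.add(state)
        state = step(state)
    return True  # reached the halt condition

# toy program over states 0..5: counts down to 0, then halts
assert halts_bounded(lambda s: s - 1 if s > 0 else None, 5) is True
# toy program that cycles 0 -> 1 -> 0 -> ...: never halts
assert halts_bounded(lambda s: (s + 1) % 2, 0) is False
```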


Theoretical impossibility results like this one are often interesting because they tell you there is no "smart" way even in practice beyond just brute forcing it.

While it's true that the class of programs that terminate in time at most T is decidable for trivial reasons (just run the program for at most time T), that trivial decider is of course not gonna run in time T (but at least T+1). So if you set T=time until the heat death of the universe, you haven't gained any practical ability to solve the halting problem either.


There is one general way to show any given program halts. Run it on each equivalence class of input and wait!

This just might take forever.


That is not a decision procedure though, since when a program enters a 'loop'ing state, it's unclear whether it's going to halt or not.


It was unclear the entire runtime. Just need to keep evaluating.


Not a CS theorist, but it's not about you proving a program halts, it's about a program proving that any given program halts.

It's kinda like how some statements in math, given a set of axioms, can't be proven or disproven.


Right - it is saying that there is no algorithm to do this in general. Any specific instance may have a solution.


It's a little more impactful than that. Have a look at Z3, Boogie, Dafny, and similar technologies to see practical application of what I mean. It boils down to there being "non-general" algorithms that still work for virtually every input you're ever going to give them. A hypothetical algorithm that decides the halting problem for 99.9999999% of programs would not violate the impossibility proof.

The limit case is maybe interesting. What about the algorithm that decides the halting problem for every program except for one? Does the impossibility proof prohibit such an algorithm? Does it make a difference if the identity of the unique program is known or unknown?

And then of course there is classic pen and paper hand derivation like the old guard (Knuth and his peers) did. The claim that that is following an algorithm is yet to be proved or disproved.


With a little massaging of input and output you can convert any program P into the program P_n defined as "repeat P n times".

For any n, P terminates if and only if P_n terminates, so no general procedure can decide the halting problem for all programs except one.
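The construction is simple to write down; a sketch in Python, with programs modeled as zero-argument callables (an illustrative encoding):

```python
def repeat(P, n):
    # Given a program P (a zero-argument callable), build P_n, which
    # runs P exactly n times. P_n halts iff P halts, so a halting
    # decider that worked on all programs but one would still decide P
    # via the infinitely many distinct programs P_1, P_2, ...
    def P_n():
        for _ in range(n):
            P()
    return P_n

log = []
P = lambda: log.append("ran")
repeat(P, 3)()  # P_3 runs P three times
assert log == ["ran", "ran", "ran"]
```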


You are right in that there is a class of algorithms for which it is possible to do that. It is just not possible to decide if any algorithm is in that specific class of algorithms or not. There is an even smaller class of algorithms that can be constructed bottom up from halting sub algorithms. But these are not as powerful.


I kinda hate that people dress up this analysis as a theorem. We have lots of formally verified programs that show useful work can be done here. And even if we can't prove most of it, high assurance methods are very useful for preventing fuckery. I can't mathematically prove any lock is unpickable. But I can use a lock advanced enough that the cost of picking it becomes absurd.

Also, theoretical quantum computers can solve whether a problem halts 100% of the time. So Rice's Theorem is theoretically meaningless.


Man I feel like I'm missing something. Figuring out whether a single given program halts is decidable; it's figuring out an arbitrary unknown program (or all of them, rather) that is undecidable. So you can formally verify programs, but you can't formally verify all programs with the same algorithm.

Secondly, what do you mean by the idea that quantum computation makes the problem decidable? This isn't a complexity class issue, and my understanding is that quantum computers can't compute anything that Turing machines can't, they just compute certain algorithms faster. Am I wrong? Among other things we can simulate quantum computers on Turing machines, just without the complexity advantages.

Like think about what you're saying: of course we can prove that "exit(0)" halts, it'd be absurd to say otherwise. So we're clearly referring to a harder problem—proving whether arbitrary programs halt with a single algorithm in finite time.

If it helps, Roger Penrose misunderstood the halting problem and basically said "humans aren't practically constrained by Gödel's incompleteness theorems (equivalent to the halting problem, I believe) so consciousness is quantum". This might yet be true, but human computers are still bound by the same computational constraints that computers are.

It's been a bit since my computer theory courses so apologies for any botched terminology or understanding.


I would like to add that even deciding whether a particular single program halts can be undecidable. At least in PA or ZFC, and I don't think there is a better math framework.

There was a pretty cool Bachelor's thesis (I think? Can't recall) that used this fact to show that busy beaver values beyond some point cannot be determined.

And even without any trickery, deciding single program halting can be extremely hard. For instance, the 3x+1 problem is as trivial to write as fizzbuzz but no one can figure out if it halts.



Probably! This paper is a year or two newer but used the exact same principle. It also has a much better constant than I recall. Perhaps I was reading a precursor article. I really enjoy this approach, it is a lovely idea.


It is never undecidable to determine whether a single particular program halts or not.

For any single program, one of these two functions will correctly output whether it halts or not.

    bool always_true(TuringMachine M) {
      return true;
    }

    bool always_false(TuringMachine M) {
      return false;
    }
It won't work for every Turing Machine, but it will work for a specific one.

This is why it's not very meaningful to talk about the decidability of particular Turing Machines and also why the halting problem is not, and never was about specific Turing Machines.

As to your point about decidability within ZFC or PA, that is true but it's also not really significant. Neither ZFC nor PA is a kind of master authority when it comes to decidability, and in fact the vast majority of mathematics is fairly agnostic with respect to the use of ZFC.

The choice of a particular set theory tends to only come up in very explicit circumstances, and even in those circumstances you'll find mathematicians using theories that are more powerful than ZFC such as by making use of large cardinal axioms.


I suspect what GP means is that for certain programs, (assuming a fixed set of axioms) there exists neither a proof of its termination nor of its non-termination (not just that we can't find it; it doesn't exist).

Of course, if the program did eventually terminate, then a termination proof would exist (just enumerate all the state transitions, there's only a finite number of them). So what it means is that some programs never halt, but it's impossible to prove this fact.

This ties in with Gödel's theorems, e.g. (if we use ZFC and assume ZFC is consistent) a program that enumerates all possible valid proofs from axioms in ZFC and halts whenever it finds a contradiction, would never halt, but we can't prove this (at least not in ZFC) because that would contradict Gödel's second incompleteness theorem.


> For any single program, one of these two functions will correctly output whether it halts or not.

Saying "one of these functions is correct" is not a decision procedure. You actually need to decide which one of them is correct.

> This is why it's not very meaningful to talk about the decidability of particular Turing Machines

I disagree. There are particular Turing machines whose decidability is extremely meaningful. For example, there are specific Turing machines which encode mathematical problems of interest such as the Goldbach conjecture. Deciding whether they halt is equivalent to solving those mathematical problems, which is definitely meaningful.


>Saying "one of these functions is correct" is not a decision procedure. You actually need to decide which one of them is correct.

That's precisely what a decision procedure is, it's an algorithm that takes as input the description of a Turing machine, and an input, and returns true if and only if the Turing machine halts when given the input. You are using the term "decide" as if there were some kind of agency involved, like you have to actually "choose" what is correct or incorrect, but no such agency is involved, it is a purely mechanical process.

>There are particular Turing machines whose decidability is extremely meaningful.

You are mixing up the notion of decidability with the notion of halting, and there is a subtle difference. The Turing machine that encodes the Goldbach conjecture proves the conjecture if it halts, and disproves the conjecture if it doesn't halt. That is an interesting and meaningful property of such a Turing machine. What is not meaningful or interesting is whether that particular Turing machine is decidable.

As I said, there is a subtle difference between whether a Turing machine halts or not, and whether it's decidable whether it halts or not. The former is interesting, the latter is not particularly insightful.


> You are using the term "decide" as if there were some kind of agency involved, like you have to actually "choose" what is correct or incorrect

Isn't this literally what a decision procedure is? It's an algorithm that always outputs the correct answer, not one which tells you "the correct answer is either true or false". I'm not talking about agency or consciousness.


One of the two algorithms I posted tells you the correct answer for any specific Turing machine, so for any given Turing machine one of them is a decision procedure for it.


Sure, that is trivially true, but I don't see how that statement is useful in any way.


Exactly, so then we agree, there is nothing useful about an algorithm that can decide whether one specific Turing machine halts. Utility only emerges from an algorithm that can decide whether entire classes of other Turing machines halt, not from whether a particular machine does.


> there is nothing useful about an algorithm that can decide whether one specific Turing machine halts.

Yes there is. If someone gives me a working algorithm (this means one algorithm, not two algorithms either of which may work or not as you've been doing) which can provably decide whether the machine encoding the Goldbach conjecture halts or not, that is a very useful algorithm.

I must insist that giving two algorithms and saying "one of them works" without telling me which one works is not a valid answer, since that is strictly different from giving a working algorithm which I can run in finite time in order to find out the correct answer.

You said this yourself earlier:

> The Turing machine that encodes the Goldbach conjecture proves the conjecture if it halts, and disproves the conjecture if it doesn't halt. That is an interesting and meaningful property of such a Turing machine.


I just think you're making a very subtle mistake here by mixing up the question of whether Fermat's Last Theorem (or Goldbach's Conjecture) is true with whether there is an algorithm that tells you that that specific theorem is true.

In my opinion, it's useful to know whether Fermat's Last Theorem is true, and that requires a proof. An algorithm that can only tell you whether a particular theorem is true is entirely useless.

For example, Sir Andrew Wiles proved that Fermat's Last Theorem is true, so would you reject an algorithm that simply returned true for any input as somehow being wrong? Would you only be happy if the algorithm wasted some energy doing some "computation", messing around with some variables and producing copious amounts of heat before telling you that Fermat's Last Theorem is true? Of course not. It doesn't matter what the algorithm does as an implementation detail; what matters is that the output is correct, not the heat it generates in the process.

Knowing a theorem is true is valuable. Having an algorithm that simply returns the correct answer about whether a particular theorem is true is entirely useless.

In general, an algorithm is only useful when it can be used for some arbitrarily large set of inputs. An algorithm that only works for one single instance is hardly an algorithm at all, it's nothing more than a lookup table. To be something more than a lookup table, it needs to work for an entire class of inputs, it needs to take an infinite number of possibilities and compress them down into the finite description of a process.

We know that no such algorithm can decide the halting problem for every single Turing Machine, but we can construct algorithms that can decide the halting problem for subsets of Turing machines, infinitely large subsets in fact.

Having an algorithm that just works for one single Turing machine... pretty meaningless.


A proper algorithm comes with a proof that it's correct. But even if I don't understand the proof, knowing the answer itself is useful.


Also worth noting that, from a practical perspective, even very simple programs/TMs can exhibit very complex behaviours that make it difficult to work out whether they will actually halt or not.

At the time of writing there are 2833 five-state binary TMs that haven't been decided yet:

https://github.com/bbchallenge/bbchallenge-undecided-index/b...



Ha! - I had no idea thanks


To be fair, the announcement I linked was published 2 hours after your comment. It was just perfect timing.


That is a much lower complexity grade than I thought we would struggle with. Especially since we have decades of experience in solvers for rather large Boolean Satisfiability Problems, which seems like it could be adapted here.


bounded model checking is indeed a thing, but it's bounded


A few months back I spent a couple of hours writing a binary TM simulator (yes I know you can download them - I wanted to write one) and then spent a couple of weeks running various TMs for days on end...

They exhibit quite bizarre amounts of complexity for what are really simple structures - indeed some TMs have apparently been observed employing Collatz conjecture type behaviour.


awesome!


Apparently, there is news on this:

https://www.quantamagazine.org/amateur-mathematicians-find-f...

Not sure why that github still lists so many undecideds.


Penrose's entire thesis seems to boil down that AGI is impossible because brains use Quantum Oogly-Boogly and that since Quantum Oogly-Boogly isn't computable, no computer can do it. The Emperor's New Mind is a tour de force of a book, but its central premise feels thoroughly unconvincing, or at least poorly communicated.


It's been a long time since I've read TENM but I don't think Penrose argued that AGI was impossible, just that consciousness was not computable by a Turing machine. Presumably you could create AGI by utilising these quantum effects if it turns out that Penrose is correct and they do something.

I wouldn't particularly bet on Penrose being correct but until someone figures out how consciousness actually works or proves that it's somehow a big self-convincing illusion I don't think we can completely discount the idea that there's some kind of physics involved.


> I wouldn't particularly bet on Penrose being correct but until someone figures out how consciousness actually works or proves that it's somehow a big self-convincing illusion I don't think we can completely discount the idea that there's some kind of physics involved.

Despite my flippant rejection of Penrose I very much agree. I just thought the computational approach was misguided and his colleagues did him dirty.


The universe is a quantum computer that computes itself, so it’s certainly possible.


> "Figuring out whether a single given program halts is decidable"

What does "decidable" mean in this context? Simply running the program may not be sufficient to know whether or not it halts. One could have a program that loops infinitely but never repeats the same state. So it will never halt, nor will its looping be obvious. So does it count as "decidable" if we cannot yet prove whether or not it's looping?


Not knowing the configuration of a TM at time t without running it (or similarly involved computation) doesn't mean that we cannot divide the space of TMs into classes.

One class contains the TMs for which there is an associated mathematical function, mapping inputs to nonnegative integers, that tells you the exact time step at which the TM transitions to the halting state. This is the class of decidable TMs.

The other class contains every TM not in the first class. For this class, any choice of such a function is provably wrong for some input. This is the class of undecidable TMs.

Both classes are nonempty, and their existence is just about as well-defined as many other common mathematical objects. It is just not possible to provide a "nice" alternative characterization which TMs fall under which class.


Quantum computers can't solve the halting problem but that is not Penrose's argument which is not summarized well in the quote.


> Figuring out whether a single given program halts is decidable

It's maybe nitpicky, but by definition "decidable" applies to a language, i.e. a set of programs, not a single program. Of course, there is a trivial TM (either the one that always accepts or the one that always rejects) that, for any given single program, gives the correct answer for exactly that program, but that is rather uninteresting. So this framing doesn't make a lot of sense.

What is actually decidable is the language consisting of pairs of a TM (+ input) and a valid proof of its termination, and that's why we can prove (some) programs to be terminating (or any other more interesting property).


Relevant video about MIP*=RE

https://youtu.be/2H8629BCbkM


Lovely video!


> Figuring out whether a single given program halts is decidable

Whether any single given program? No, it's not decidable. For many programs, sure.


I spent a long time thinking about this same dilemma as Penrose: if solving maths is undecidable, how can humans do math? It seemed like a great paradox, maybe revealing a super ability of humans.

I think the truth is both more boring and more wondrous. We of course can't "solve math". For any given mathematical problem, humans are not guaranteed to find a solution given enough time. Indeed there are many problems that have been proven very difficult for a long time and there's no certainty they will be solved any time soon. As Conway put it (paraphrasing) "It's not because we don't have a big enough brain or something ... it's because there's no way to know [without just looking]". That is, we effectively do increasingly sophisticated forms of search, the same search you can do with an algorithm.

For example, you can simply try running the algorithm for a long time and see if it does halt. If it does, there's your proof. If it doesn't, you either declare you don't know, or declare it doesn't halt and accept it may be a mistake. As T->inf, you can cover all programs that halt, and the unknowns/mistakes you'd make (depending on how you measure[1], because there are infinitely many programs) go to 0 in a particular sense ([1]).
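That "run it for time T" strategy can be sketched directly, modeling a program as a Python generator that yields once per step (an illustrative encoding, not any real halting API):

```python
def runs_within(program, T):
    # Semi-decision heuristic: step `program` (an iterator that yields
    # once per step and returns when the program halts) for at most T
    # steps. Returns True if it halted within the budget -- a genuine
    # proof of halting -- or None if the budget ran out, in which case
    # we simply don't know (or guess "does not halt" and accept we may
    # be wrong).
    it = iter(program)
    for _ in range(T):
        try:
            next(it)
        except StopIteration:
            return True   # halted: definitive answer
    return None           # budget exhausted: undetermined

def countdown(n):          # halts after n steps
    while n > 0:
        n -= 1
        yield

def forever():             # never halts
    while True:
        yield

assert runs_within(countdown(10), 100) is True
assert runs_within(forever(), 100) is None
```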

You can also try to be more clever. You can start enumerating all proofs that (A) a given program halts, and (B) a given program does not halt, (C) it cannot be proven either way (assuming the proofs are verifiable). This allows potentially (depending on your axiomatic system) proving your program halts much faster than waiting T(p) (its halting time), in particular in the case p is part of a class of programs that has superexponential halting time as a function of size (e.g. it halts in 2^2^(N), where N is its size -- those programs should be fairly easy to construct). But you'd still be waiting a long time. Note however, it still can be the case that no proof exists in your axiomatic system either of A, B or C. So you still have to declare either you don't know, or mistakenly assume it doesn't halt/no proof exists. As your proof search time T->inf you get the same coverage property, but faster (in the sense that it eventually overcomes the previous method).

I said both boring and wondrous because there is some magic here (just not magic outside of the rules of the cosmos :) ): you can (always?) do better in some sense. You can build more and more clever methods of deciding whether programs halt; I think of them as "bags of heuristics" or "bags of tricks": from simple observations e.g. that an empty loop does not halt, to more complex properties.

Those heuristics eventually are also meta-heuristics (or just proof-heuristics): not only heuristics to tell whether P halts, but heuristics in a search for proofs that P halts (allowing a faster proof search). (Of course, there are also meta-heuristics in the sense of thinking about better ways to solve the problem in general). In the case of humans, the interesting distinction is that we are "messy extremely large minds", and we, through experience, "learn", or accumulate those various heuristics in a generalizing way -- that's what intuition is. Our learning is still limited in the same fundamental way any algorithm would be, we just turn out to be a very special (maybe you can call it magical in a sense) learning, intuitive and sentient, system (although likely the particular architecture of our brains/minds is specially relevant to how we operate and how it "feels to be a human", consciousness, etc.).

Not only is our brain special, but in a sense we're "lucky" to be initialized or to have fallen into a more or less "universalizing environment". Of course, if you put a child in an empty room, or even in a forest (even provided with unlimited food), he is unlikely to learn much, least of all the secrets of the universe. Our society, our environment is such that we teach ourselves language and we particularly put value on discovery, such that we have set in motion an environment where we can, and we do, discover asymptotically more. And this discovery has a great purpose. As we learn more, we not only learn more about the Cosmos (which we're part of), we learn about our own nature, what it means to have a good existence, and all the things we can do to have a better (more complete, more fulfilling, more full of love, more meaningful) amazing existence[2]. That's our real "magic bean"[3] that must be guarded with utmost care.

[1] Measuring the performance here is tricky, again because there are infinitely many cases. In a strict sense, the fraction of programs you prove that halts is 0 for any T. But that seems like a poor model: we're usually much more interested in small cases. If you weight them with some decaying weights, for example exponentially decaying 1/2^N, then this weighted fraction converges.

[2] I think that should be the general objective of rationality, or reason, and science/reason is the general system of conducting, refining and making this process reliable.

[3] Gold mine, Golden goose, Jewel, Delicate flower, etc.. :)

---

On quantum behavior, essentially none of those conclusions are affected at all, or even require quantum behavior. Quantum computing might accelerate some operations, but that's all. (And quantum computation has been shown so far to require extremely delicate conditions that would be hard to happen in our brains)


> We have lots of formally verified programs that show useful work can be done here.

Yes, by restricting things to a non-Turing Complete subset.

> Also, theoretical quantum computers can solve whether a problem halts 100% of the time.

That is absolutely not true. Quantum computers are proven to be entirely equivalent to classical computers in terms of what problems they can solve. The only difference is that they seem to show an exponential speed advantage for a certain very limited subset of problems, mostly related to quantum transforms.


> Yes, by restricting things to a non-Turing Complete subset.

No, not necessarily. There are procedures that _can_ verify properties against Turing-complete and even infinite-state systems. It's just that no _general_ (as in, sound and complete) procedure can exist.


If it's not sound, then the result is meaningless. If it's not complete, then it only works on a subset of problems, which is equivalent to what I said.


Agree that soundness is desirable, but "it only works on a subset of instances" and "you are restricted to a non-Turing complete subset" are not equivalent in the slightest.


Not sure what you're trying to say. Surely the "subset of instances" is non-Turing-complete.


huh, is that true? Is that class of computable function with a halting guarantee strictly smaller than the class of computable function without one? <citation missing>


I'm not sure what you're asking about. It is well known that certain computations can be deterministically proven to halt - there's even a programming language that only accepts programs that do so, Idris.

Of course, not any computation can be written in such a way. But it's clearly possible to write certain programs in such a way that they must halt by construction. You can even do it in C: don't use goto, only use loops with a known bound (e.g. while (x > 0) is not ok, but for(i = 0; i < 100; i++) is ok, as long as i is not modified in the body), don't use in-line ASM, don't use recursion, don't use stdlib functions (definitely no string functions). These rules can be verified syntactically (+- problems with UB), and the resulting program is guaranteed to halt. But there are problems for which you can't write such a program.
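The same halt-by-construction idea can be sketched in Python: if the only looping construct available takes a bound fixed before the loop starts, every program built from it terminates. This mirrors the restricted-C rules above (and primitive recursion); the names here are illustrative:

```python
def bounded_loop(n, body, acc):
    # Terminating-by-construction iteration: runs `body` exactly n
    # times. The bound n is fixed before the loop starts and the body
    # cannot change it, so any program built only from such loops (no
    # while, no recursion) is guaranteed to halt.
    for _ in range(n):
        acc = body(acc)
    return acc

# factorial via bounded loops only: guaranteed to halt for any n >= 0
def factorial(n):
    i_and_acc = bounded_loop(n, lambda s: (s[0] + 1, s[1] * (s[0] + 1)), (0, 1))
    return i_and_acc[1]

assert factorial(5) == 120
```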

As to the relative size of these sets, I think that's a meaningless question - they are both infinite sets.


> there's even a prigramming language that only accepts programs that do so, Idris.

If you declare every function total. Idris still allows partial functions, and you can even cheat the whole system with "assert_total". Of course, with some sort of linter, you could still make sure that everything is actually total (although technically, in Idris "total" doesn't necessarily mean "halts" - programs which produce infinite output are also total in Idris).


> We have lots of formally verified programs that show useful work can be done here

Yes, if you start writing your program with the explicit intent of making such proofs possible, then it can be done. What Rice's theorem says is that you can't expect to be able to do so when given an arbitrary program, not that it's always impossible.


Formal verification (i.e., automated theorem proving) relies on only admitting 'total' programs, which usually requires that any potential sources of 'bottom' (i.e., non-termination) come equipped with a proof that that never happens (usually based on a well-ordering of some structural decomposition in the recursion).

Thus, most theorem provers are not Turing-complete and explicitly ban any Turing-complete structures.

So, the various things they show and prove cannot be said to encompass all programs.


What do you mean by dressing it up as a theorem? It is a theorem because there's a proof for it.


For example, here's a formally verified proof in Lean: https://github.com/leanprover-community/mathlib4/blob/124731...



