> The human brain is nothing more than a bio computer

That's a pretty simplistic view. How do you know we can't determine whether an arbitrary program will halt (assuming access to all inputs and enough time to examine it)? What in principle would prevent us from doing so? Computers, on the other hand, in principle cannot, since the problem is non-algorithmic.

For example, consider the following program, which is passed the text of the file it is in as input:

  function doesHalt($program, $inputs): bool {...}

  $input = $argv[1]; // the text of this file, passed in as input

  if (doesHalt($input, [$input])) {
      while(true) {
          print "Wrong! It doesn't halt!";
      }
  } else {
      print "Wrong! It halts!";
  }
It is impossible for the doesHalt function to return the correct result for the program. But as a human I can examine the function to understand what it will return for the input, and then correctly decide whether or not the program will halt.



Can you name a single form of analysis which a human can employ but would be impossible to program a computer to perform?

Can you tell me if a program which searches for counterexamples to the Collatz conjecture halts?
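
To make that concrete, such a searcher might look roughly like this (just a sketch; hitsOne is an illustrative helper, it assumes arbitrary-precision integers, and it only detects cycle-type counterexamples, not divergent trajectories):

  function hitsOne(int $n): bool {
      $seen = [];
      while ($n != 1) {
          if (isset($seen[$n])) {
              return false;             // entered a cycle that never reaches 1
          }
          $seen[$n] = true;
          $n = ($n % 2 == 0) ? intdiv($n, 2) : 3 * $n + 1;
      }
      return true;                      // trajectory reached 1
  }

  for ($n = 2; ; $n++) {
      if (!hitsOne($n)) {
          print "Counterexample: $n\n"; // reached only if such an n exists
          break;
      }
  }

Whether that outer loop ever terminates is itself an open problem.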

Turing's entire analysis started from the point of what humans could do.


This is a silly argument. If you fed this program the source code of your own brain and could never see the answer, then it would fool you just the same.


You are assuming that our minds are an algorithmic program which can be implemented with source code, but this just begs the question. I don't believe the human mind can be reduced to this. We can accomplish many non-algorithmic things such as understanding, creativity, loving others, appreciating beauty, experiencing joy or sadness, etc.


> You are assuming

Your argument doesn't disprove my assumption *. In which case, what's the point of it?

* - I don't necessarily believe this assumption. But I do dislike bad arguments.


Here you are:

  func main() {
    var n = 4;
    OUTER: loop {
      for (var i = 2; i <= n/2; i++) {
        if (isPrime(i) && isPrime(n - i)) {
          // n is a sum of two primes; try the next even number
          n += 2;
          continue OUTER;
        }
      }
      // no decomposition found: n is a counterexample to Goldbach's conjecture
      break;
    }
  }


actually a computer can in fact tell that this function halts.

And while the human brain might not be a bio-computer (I'm not sure), its computational power is doubtfully any stronger than that of a quantum Turing machine, which can't solve the halting problem either.


No, you can't; only for some of the inputs. And for those you could also write an algorithmic doesHalt function that is analogous to your reasoning.


For what input would a human in principle be unable to determine the result (assuming unlimited time)?

It doesn't matter what the algorithmic doesHalt function returns - it will always be incorrect for this program. What makes you certain there is an algorithmic analog for all human reasoning?


Well, wouldn't the program itself be an input on which a human is unable to determine the result (i.e., whether the program halts)? I'm curious about your thoughts here; maybe there's something I'm missing.

The function we are trying to compute is undecidable. Sure, we as humans understand that there's a dichotomy here: if doesHalt answers that the program halts, it won't halt; if it answers that it doesn't halt, it will halt. But the function we are asked to compute must have one output on a given input. So a human, when given this program as input, is also unable to assign an output.

So humans also can't solve the halting problem, we are just able to recognize that the problem is undecidable.


With this example, a human can examine the implementation of the doesHalt function to determine what it will return for the input, and thus whether the program will halt.

Note: whatever algorithm is implemented in the doesHalt function will contain a bug for at least some inputs, since it's trying to generalize something that is non-algorithmic.

In principle no algorithm can be created to determine if an arbitrary program will halt, since whatever it is could be implemented in a function which the program calls (with itself as the input) and then does the opposite thing.


The flaw in your pseudo-mathematical argument has been pointed out to you repeatedly (maybe twice by me?). I should give up.


With an assumption of unlimited time, even a computer can decide the halting problem by just running the program in question to test whether it halts. The issue is that the task is to determine for ALL programs whether they halt, and to determine that for each of them in a FINITE amount of time.
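
Put in code, the "just run it" idea looks like this (a rough sketch; run() is just a stand-in for some interpreter that executes the program on its inputs):

  // Semi-decision only: if the program halts, this eventually returns true;
  // if it never halts, the call to run() never returns, so there is no
  // finite point at which we could answer "false".
  function haltsByRunning($program, $inputs): bool {
      run($program, $inputs);   // hypothetical interpreter
      return true;
  }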

> What makes you certain there is an algorithmic analog for all human reasoning?

(Maybe) not for ALL human thought, but at least all communicable deductive reasoning can be encoded in formal logic. If I give you an algorithm and ask you to decide whether it halts or does not halt (I give you plenty of time to decide), and then ask you to explain your result to me and convince me that you are correct, you have to put your thoughts into words that I can understand, and the logic of your reasoning has to be sound. And if you can explain it to me, you could just as well encode your thought process into an algorithm or a formal logic expression. If you cannot, you could not convince me. If you can: now you have your algorithm for deciding the halting problem.


You don't get it. If you fed this program the source code of your mind, body, and room you're in, then it would wrong-foot you too.


Lol. Is there source code for our mind?


There might be or there mightn't be -- your argument doesn't help us figure out either way. By its source code, I mean something that can simulate your mind's activity.


Exactly. It's moments like this where Daniel Dennett has it exactly right that people run up against the limits of their own failures of imagination. And they treat those failures like foundational axioms, and reason from them. Or, in his words, they mistake a failure of imagination for an insight into necessity. So when challenged to consider that, say, code problems may well be equivalent to brain problems, the response will be a mere expression of incredulity rather than an argument with any conceptual foundation.


And it is also true to say that you are running into the limits of your imagination by saying that a brain can be simulated by software: you are falling back on the closest model we have (discrete math/computers) and failing to imagine a computational mechanism involved in the operation of a brain that is not possible with a traditional computer.

The point is that we currently have very little understanding of what gives rise to consciousness, so what is the point of all this pontificating and grandstanding? It's silly. We have no idea what we are talking about at present.

Clearly, our state-of-the-art models of neural-like computation do not really simulate consciousness at all, so why is the default assumption that they could if we get better at making them? The burden of evidence is on computational models to prove they can produce a model of consciousness, not the other way around.


This doesn't change the fact that the pseudo-mathematical argument I was responding to was a daft one.



