The computability-theory version of Gödel's result is the halting problem, and it applies all over the place, in the same way that physical conservation laws imply a large variety of inventions are impossible.
For example, it is impossible in general to understand what an English sentence says, because that sentence may describe the operation of an arbitrary program, and understanding an arbitrary program is undecidable.
Another example: it is impossible to detect all variants of malicious programs, because that requires understanding what all programs do, which in turn requires deciding whether an arbitrary program halts.
Yet another example: optimal machine learning is impossible, since it rests on Solomonoff induction, which is built on Kolmogorov complexity, which is uncomputable because of the halting problem.
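Kolmogorov complexity itself is uncomputable, but a computable upper bound via an off-the-shelf compressor is a common stand-in for it. A toy sketch (the helper name is mine, not from the comment):

```python
import os
import zlib

def compression_length(data: bytes) -> int:
    """Length of zlib-compressed data: a computable *upper bound*
    proxy for Kolmogorov complexity (the true value is uncomputable)."""
    return len(zlib.compress(data, 9))

repetitive = b"ab" * 1000      # highly structured: compresses very well
random_ish = os.urandom(2000)  # incompressible with high probability

print(compression_length(repetitive))   # tiny compared to 2000
print(compression_length(random_ish))   # close to 2000, possibly above
```

The gap between the two outputs illustrates the idea; no computable function can give the true minimum description length in general.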
And another off the top of my head: it is impossible to create a general program-creating program, one where you give it a well-defined specification and it spits out a program that meets the requirements whenever the requirements are satisfiable, since that requires solving the halting problem.
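A naive spec-to-program generator shows where the halting problem bites: checking a candidate against the spec means running arbitrary candidates, so a real enumerator needs a step budget and can wrongly reject a slow-but-correct program. A toy sketch over a tiny fixed expression language (all names illustrative, and the candidate pool is finite here precisely to dodge the non-halting issue):

```python
# Tiny expression language: candidate "programs" are strings over x.
CANDIDATES = ["x", "x + 1", "2 * x", "2 * x + 1", "x * x", "x * x + 1"]

def synthesize(spec_pairs, candidates=CANDIDATES):
    """Return the first candidate matching all (input, output) pairs.
    A real enumerator would generate unboundedly many programs and would
    need a step limit when evaluating each one, since a candidate may
    never halt."""
    for src in candidates:
        f = eval("lambda x: " + src)  # toy evaluation; unsafe in general
        if all(f(i) == o for i, o in spec_pairs):
            return src
    return None  # no candidate in the pool satisfies the spec

print(synthesize([(0, 1), (1, 3), (2, 5)]))  # "2 * x + 1"
```

With an infinite candidate stream and no step limit, the search itself may fail to halt, which is the impossibility the comment points at.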
Finally, humans, while not optimally satisfying all the above tasks, do quite a decent job at these tasks, which is surprising if humans themselves were some sort of algorithm. So, this strongly suggests the human mind is doing some operation beyond computation, which in turn would imply that human-intelligence-level AI is impossible in theory, and thus impossible in practice.

Furthermore, if the human mind is indeed doing something uncomputable, there is a technological implication: human-in-the-loop computation is inherently more powerful than computation alone. That is a vastly under-researched concept, although arguably the secret to the success of most AI/ML/data-mining companies these days, such as Google, Facebook, Amazon, and Palantir. And the software industry in general is a proof of concept that humans are program-generating processes, performing much better than the halting problem would lead one to expect if humans were indeed mere computation.
Anyway, a whole bunch of impossibility implications, which have the practical application of protecting your wallet from comp. sci. snake-oil salesmen, and the intriguing possibility of 'beyond computation' technology.
> Finally, humans, while not optimally satisfying all the above tasks, do quite a decent job at these tasks, which is surprising if humans themselves were some sort of algorithm.
Whilst I agree with everything in your comment before this point, I don't agree with this.
There is a very big difference between almost achieving something and actually achieving it. For example, there exist algorithms which are able to determine whether a given program halts, for a large class of programs, or to give some probability that a program halts prior to some point, even if they are unable to solve the general halting problem. So you haven't really argued that humans are doing something computers can't do.
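For instance, halting is fully decidable for any program whose state space is finite: simulate until a state repeats. A toy decider for one such restricted class, iterated affine maps mod m (an illustrative class of my choosing, not from the comment):

```python
def halts_affine(a, b, m, x0, target):
    """Decide whether the iteration x -> (a*x + b) % m, started at x0,
    ever reaches target. The state space has at most m distinct values,
    so simulating m steps settles the question either way: this
    restricted halting problem is decidable, unlike the general one."""
    x = x0
    for _ in range(m):
        if x == target:
            return True
        x = (a * x + b) % m
    return x == target

print(halts_affine(1, 1, 10, 0, 7))   # True: 0, 1, 2, ..., 7
print(halts_affine(2, 0, 10, 5, 3))   # False: 5 -> 0 -> 0 -> ...
```

Deciders like this cover large, useful classes of programs without contradicting the undecidability of the general problem.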
In addition, I take issue with your statement that humans "do quite a decent job at these tasks". On the face of it, I agree that yes, (when sufficiently motivated) (some) humans do indeed do a pretty good job of e.g. proving mathematical theorems and writing programs that halt when they should. Certainly better than the best computers we can build. But that "pretty good" is a _far_ cry from the optimality referenced in the theorems you've stated. And if you are using this "pretty good" to indicate that humans are better than any possible algorithm, that argument doesn't follow through.
I think that human-level algorithms are in principle possible. They will have to incorporate a lot of probability and approximate results and the like, and likely will have something similar to "intuition" where the path from input to output is not clear, whether to human onlookers or to the algorithm itself (whatever that might mean). We already see hints of this today with our deep learning algorithms. Not to say that we are necessarily close to achieving this, or that we ever will, but on the other hand it might very well be just around the corner!
Sure, this observation is not a definitive proof; it is more of an evidential argument.
There is also a class of deciders that can each decide an infinite class of problems no computation can decide, while still not being complete. I'd say humans most likely fall in this class, since they far exceed anything we can do with the best algorithms we've dreamed up, yet also cannot solve everything.
Of course, you can always say there is a yet undiscovered algorithm to fill the gap between state of the art and human performance, but at some point that hypothesis is less well supported by the evidence than the uncomputable mind hypothesis, and I'd say we are well past that point.
Humans exceed our current computers in the same way that a large asteroid impact exceeds our current nuclear weaponry. It's a matter of scale, not of kind.
If we track the capability of computation machines from binary adders through old-school chess AI, expert systems and robotics, to the neural networks playing Go, chess, Starcraft, DOTA, we see an ever expanding sphere of ability and influence - often exceeding what humans predicted was impossible for computers.
> Of course, you can always say there is a yet undiscovered algorithm to fill the gap between state of the art and human performance
This implies a kind of asymptotic approach towards human intelligence, that it's something we keep trying to approach but will never reach.
But beyond "filling the gap" I believe computers have the potential to shoot past us in ability. And if they do, I think it's going to be a pretty shocking experience, because I think the rate of increase in intelligence at that point is going to be bigger than we expect - there's no reason to believe the exact level of intelligence is a special border, objectively speaking.
To claim that the evidence so far supports the uncomputable mind hypothesis seems to me like if we had managed to build tiny motors which had been getting bigger and better each year, and then claiming that we would never be able to beat an elephant in strength because no motor up to that point had yet been able to do so. Yes, our metaphorical motors are still tiny, and yes, it takes other extra pieces of engineering beyond motors to actually push down a tree, and indeed it's possible we will perhaps never build a big enough machine. But to take each larger motor as somehow being evidence AGAINST us ever getting to that point is a strange viewpoint.
I don't see why you believe this. From my perspective, all the recent gains in AI abilities are purely due to faster hardware. But, due to the combinatorics of search spaces, exponential speed increases only get you linear AI improvements. So, we'll rapidly hit peak AI in the near future, falling way short of even emulating human abilities.
As it is, compare the amount of processing, space, and energy humans expend with what computers need for equivalent tasks, and there is no comparison. Computers are many orders of magnitude behind human performance. The only reason there is the appearance of parity is that we are not looking at the back end of what it takes computers to accomplish these tasks.
It is like we flipped a coin to create the works of Shakespeare, and waited the many eons necessary for just the right sequence to occur. We then say a coin is just as capable as Shakespeare, but that's only because we've ignored how much effort it took to flip the coin vs how much effort it took Shakespeare.
So, I'm not arguing our motors will not become bigger. I'm arguing that as our motors become bigger, the returns are diminishing exponentially, much more rapidly than can get us to true human parity.
You argue that because it is impossible for an algorithm to do some tasks perfectly, and humans can do "quite a decent job at these tasks," human minds must be doing something beyond computation. Firstly, this just doesn't logically follow: perfection is completely different from "okay." Secondly, it's far from clear that humans do any better than machines on many of these tasks, which by your own argument would imply the opposite.
Your greater-than-mere-computation mind can make huge bucks solving any problem, since it can see whether a program halts. Let's factor some large semiprimes and crack RSA. Just make a program that halts if the answer is less than a certain value, then do binary search. A bit tedious, but surely huge piles of money are motivating. I'd do it myself, but my mind seems to be malfunctioning.
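The binary-search scheme described here can be sketched directly. Each oracle query asks "does the program that searches for a divisor of n up to k halt?"; for that particular program the answer happens to be computable by trial division, so the oracle can be simulated (function names are mine, for illustration):

```python
def oracle_halts(n, k):
    """Stand-in for a halting-oracle query: does the program
    'try d = 2..k; halt if d divides n, else loop forever' halt?
    For this specific program the answer is computable by trial division."""
    return any(n % d == 0 for d in range(2, k + 1))

def smallest_factor(n):
    """Binary-search the least k such that n has a divisor <= k,
    using only yes/no oracle queries (the scheme in the comment)."""
    lo, hi = 2, n  # n always divides itself, so the answer is in [2, n]
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle_halts(n, mid):
            hi = mid   # a divisor exists at or below mid
        else:
            lo = mid + 1
    return lo

print(smallest_factor(3233))  # 53, since 3233 = 53 * 61
```

Of course, against real RSA moduli the oracle cannot be simulated cheaply; the sketch only shows how a genuine halting oracle would be leveraged.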
Basically, this is just a silly claim, which is a shame, because the first half of your response was pretty good.
Sure, perfection is different from okay, but the human 'okay' is also far beyond what we can do algorithmically, at least as far as we've discovered. That is at least evidence in favor of the uncomputable mind.
And you might be correct that a human-in-the-loop approach could crack RSA better than a purely algorithmic approach; I've given it a shot in the past. There was a slight correlation of better answers with human involvement. Not entirely unexpected, since our method of generating RSA primes has to be computable in a tractable time frame, so there is probably some exploitable regularity in the generation scheme that a super-computational process could assist in identifying.
> The computability-theory version of Gödel's result is the halting problem
I am not sure that I agree with this. Of course, if you can solve the halting problem, then you can decide some propositions. Say you have two Turing machines, one that enumerates proofs and the other that enumerates counterexamples, each stopping after finding the first one. Prove that one of them halts and you have decided the proposition.
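Concretely, for a Π1 statement like Goldbach's conjecture, the counterexample-enumerating machine is easy to write down: it halts exactly if a counterexample exists, so a halting oracle applied to it would settle the conjecture. A bounded version of that searcher (the bound is mine, to make it runnable; the real machine would search forever):

```python
def is_prime(n):
    """Trial-division primality check, adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample(bound):
    """Search even numbers 4..bound for one that is NOT a sum of two
    primes. The unbounded version of this loop halts iff Goldbach's
    conjecture is false, so asking a halting oracle about it would
    decide the conjecture."""
    for n in range(4, bound + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n  # counterexample found: the machine would halt here
    return None  # no counterexample below the bound

print(goldbach_counterexample(1000))  # None: conjecture holds up to 1000
```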
On the other hand, there are "highly undecidable" problems which you cannot solve even if you had an oracle for the halting problem.
The Halting Problem Theorem is an intuitive way to prove inferential incompleteness for strongly-typed theories in which Gödel’s proposition I'mUnprovable does not exist.
Using orders on propositions, the construction of Gödel's proposition I'mUnprovable is blocked: the mapping Ψ↦⊬Ψ has no fixed point, because ⊬Ψ has order one greater than the order of Ψ, since Ψ is a propositional variable.