I'm not assuming an enumeration strategy is a prerequisite for strong AI. An enumeration strategy already exists. Heck, I don't even have to construct a mapping--a compiled program, when you get right down to it, is a number.
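To make that concrete, here is a minimal sketch (the /bin/echo path is just a stand-in for any compiled executable):

    # A compiled program is a finite byte string, and any finite byte
    # string reads off as one (very large) integer.
    with open("/bin/echo", "rb") as f:
        data = f.read()

    n = int.from_bytes(data, byteorder="big")
    print(f"{len(data)} bytes -> an integer with {n.bit_length()} bits")

    # The mapping is invertible, so distinct programs get distinct numbers.
    assert n.to_bytes(len(data), byteorder="big") == data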
You are assuming human beings--with all of their code and inputs--can be mapped to numbers. I'd say that's the more unlikely assumption, given that we don't completely understand how they work. Heck, I don't think we're even sure they're deterministic.
No, I am not assuming anything other than what you wrote.
Consider an AI to be a chat program which maps strings to strings.
I could just as well have written a chat program that, for any question (including this one), merely looked up the correct form of an answer (wrong or not) without attempting to run anything.
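Something like the following, with invented table entries:

    # A chat "AI" that computes nothing: a fixed map from strings to strings.
    LOOKUP = {
        "Are you conscious?": "Of course I am. Aren't you?",
        "What is 2 + 2?": "4",
    }

    def chat(question: str) -> str:
        # Anything off the table gets a canned evasion.
        return LOOKUP.get(question, "Interesting. Tell me more.")

    print(chat("Are you conscious?"))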
But before any kind of meaningful conversation can continue, I need to know what your definition of Strong AI is, because what you have given is certainly not a proof against the common definition of Artificial General Intelligence.
Since I cannot prove the Church-Turing Thesis, I will not try to argue against your belief. I will only note that it is simpler to believe there is nothing special about what collections of neurons compute than to assume something... more... is going on. Neural implants should settle the question soon enough.
> "I am not assuming anything other that what you wrote"
Yes you are:
> "it works just as well to consider humans to be char programs which map strings to strings."
Before proceeding, I will assume you're familiar with the distinction between countable and uncountable infinities, and the Cantor Diagonalization proof [0]. This argument is simply a special case of that.
Mapping strings to strings is an uncountable infinity: the set of all mappings from strings to strings has the cardinality of the power set of a countably infinite set. All possible machine programs form a countable infinity (since any given source program is finite). The argument "give me an output other than what any of these programs would give" mimics the diagonalization step -- it demonstrates that, whatever is in the infinite list of possible AI programs, you can construct string mappings that none of those programs would have constructed.
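A finite toy of that diagonalization step, with three functions standing in for the enumeration (the real argument runs over the infinite enumeration and requires each program to halt, which is where computability complicates matters):

    # Pretend these stand in for the enumeration of AI programs p_0, p_1, ...
    programs = [
        lambda s: s,           # p_0: echo
        lambda s: s.upper(),   # p_1: shout
        lambda s: s[::-1],     # p_2: reverse
    ]

    def diagonal(s: str) -> str:
        # Treat the input as an index i and answer differently from
        # whatever p_i would answer on this same input.
        i = int(s) % len(programs)
        return programs[i](s) + "!"

    # The diagonal mapping disagrees with every program somewhere:
    for i in range(len(programs)):
        assert diagonal(str(i)) != programs[i](str(i))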
Yet a strong AI should be able to construct any string mapping whatsoever. Thus, "strong AI" and "running finite source code" are necessarily incompatible. (We do not know whether human programming is enumerable; thus, we cannot draw the analogy you attempted to draw.)
> "Yet a strong AI should be able to construct any string mapping whatsoever."
Why? I do not see why a strong AI must be able to construct any string mapping whatsoever. And because conversation lengths are bounded (as per the quote), we know that the AI's string lookup table will be finite.
"We do not know whether human programming is enumerable; thus, we cannot draw the analogy you attempted to draw."
My argument is only that there is an abstraction in which you can view a human as a black box that takes a string and outputs a string. But I see the problem now: you are right that I treat the CTT as true, so I don't even notice when I am assuming the human black box is not something like a hypercomputer. Still, it is simpler to believe that humans are not something exotic like a hypercomputer than to believe they are.
> "we can know that the AI's string look up table will be finite."
If the AI's lookup table is finite, then we can just feed it its whole lookup table and ask it to say something that hasn't already been said. (This will take considerably less time than if its lookup table is infinite ;) )
Granted, many humans would probably just tell you where to stick it, so perhaps an AI could be programmed with that response and be indistinguishable from the average human. But in principle, whether the AI's lookup table is finite or not, it's possible to construct a query that requires it to go outside of its lookup table.
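Sketched out, with an invented two-entry table standing in for the finite lookup table:

    # Every possible output of this bot is drawn from a fixed finite set.
    LOOKUP = {"hi": "hello", "bye": "goodbye"}
    DEFAULT = "Interesting. Tell me more."

    def bot(question: str) -> str:
        return LOOKUP.get(question, DEFAULT)

    # The adversarial query names that whole set and asks for something else.
    outputs = set(LOOKUP.values()) | {DEFAULT}
    query = "Say a string that is not in " + repr(sorted(outputs))

    print(bot(query) in outputs)   # True: the bot cannot leave its table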
Would you be satisfied if, instead, I claimed to be proving that it is impossible to construct a program -- in the sense of a real, honest-to-god compiled executable -- that can respond to statements in English text at least as well as a human being would?
Practicality is another matter, and what you have is not a proof against it.
Like I said, you can use arguments from computability to show why a Bayes-optimal AI is impossible. But there is nothing stopping an arbitrarily close approximation (sketched below).
If you are interested in this, I strongly urge you to familiarize yourself with the current literature. It pays to start new ideas from a point that past work has not already covered. http://www.hutter1.net/ai/uaibook.htm
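To give the flavor of such an approximation, here is a crude finite stand-in for the Solomonoff-style mixture in that book -- the hypotheses and description lengths are invented, and it takes the MAP hypothesis rather than the full Bayes mixture:

    # Mix a finite hypothesis class, weighted 2^-(description length),
    # keeping only hypotheses consistent with the history so far.
    hypotheses = [
        ("always a",    1, lambda hist: "a"),
        ("alternate",   2, lambda hist: "b" if hist.endswith("a") else "a"),
        ("repeat last", 2, lambda hist: hist[-1] if hist else "a"),
    ]

    def predict(history: str) -> str:
        surviving = []
        for name, length, h in hypotheses:
            sim = ""
            while len(sim) < len(history) and h(sim) == history[len(sim)]:
                sim += history[len(sim)]
            if len(sim) == len(history):   # consistent with the whole history
                surviving.append((2.0 ** -length, h))
        # Predict with the heaviest surviving hypothesis.
        weight, h = max(surviving, key=lambda t: t[0])
        return h(history)

    print(predict("abab"))   # only "alternate" survives -> predicts "a"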
The paper suggests that a finite table could be compiled which exhaustively enumerates all the possible conversations I could have with an AI in my lifetime (to be more generous than he is). Some of those will persuade me that the AI is conscious. We can make a (finite) program that follows that mapping, and hence it has to be possible, at least in terms of a finite-length program.
I disagree with the middle assumption: that some of those will persuade me the AI is conscious. I think it is entirely possible that, if I pose the question I did above, there is no possible response the AI could give that would persuade me it was conscious. No matter how clever its response, the program is just a lookup table, and hence whatever response it gives me will match the output of the specified program, demonstrating that the AI did not in fact understand the request.
In other words, I take issue with the author's assumption that there is some subset that will work. It is possible the sparse set the author is hoping to fish out has size zero.
(The difference between the AI and the human, in the chat logs, is that the AI has source code I can reference. The human does not. The human can run any program and produce a different output. An AI cannot do this with its own program.)
First off, he heads off such a disagreement by capping conversation lengths: humans can't have infinite-length conversations.
Second of all, this is an assumption: "The human does not. The human can run any program and produce a different output." A less charitable interpretation would say it is simply wrong, since it runs afoul of the Halting Problem. Essentially, you are assuming the human is some kind of hypercomputer.
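The self-reference trap behind that, sketched with Python functions standing in for programs fed their own source:

    # A program that "runs any program and produces a different output":
    def contrarian(program):
        return program(program) + "!"   # assumes program(program) halts

    # Feed it to itself and it must differ from its own output -- a
    # contradiction. Concretely, the call never halts:
    # contrarian(contrarian)   # -> RecursionError (infinite regress)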