The paper suggests that a finite table could be compiled which exhaustively enumerates all the possible conversations I could have with an AI (to be more generous than he is, I'll say) in my lifetime. Some of those will persuade me that the AI is conscious. We can make a (finite) program that follows that mapping, and hence it has to be possible at least in terms of a finite-length program.
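The lookup-table program the paper describes can be sketched very simply; all the names and canned replies below are my own illustration, not anything from the paper:

```python
# A toy version of the paper's lookup table: every finite conversation
# prefix (the user's messages so far) maps to a fixed reply. A real table
# would enumerate every possible conversation up to a lifetime length bound.
LOOKUP_TABLE = {
    ("Are you conscious?",): "I believe I am.",
    ("Are you conscious?", "Prove it."): "Here is my best argument...",
}

def reply(conversation: tuple) -> str:
    """Return the table's canned reply for this conversation prefix."""
    # The table is finite, so prefixes outside it get a fallback here;
    # in the paper's construction there are no uncovered prefixes.
    return LOOKUP_TABLE.get(conversation, "(prefix not in table)")
```

The point of the sketch is only that such a program is finite and trivially specifiable; the philosophical question is whether any of its canned replies could persuade.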
I disagree with the middle assumption: that some of those conversations will persuade me the AI is conscious. I think it is entirely possible that, if I pose the question I did above, there is no response the AI could give that would persuade me it was conscious. No matter how clever its response is, the program is just a lookup table, so whatever it says will match the output of the specified program, demonstrating that the AI did not, in fact, understand the request.
In other words, I take issue with the author's assumption that some subset of conversations will work. The sparse set the author hopes to fish out may well have size zero.
(The difference between the AI and the human, in the chat logs, is that the AI has source code I can reference; the human does not. The human can run any program and produce a different output. The AI cannot do this with its own program.)
First off, he heads off such a disagreement by capping conversation lengths: humans cannot have infinitely long conversations.
Second of all, this is an assumption: "The human does not. The human can run any program and produce a different output." A less charitable reading would say it is simply wrong, since it runs afoul of the halting problem: a process that can run any program and then produce a different output would diagonalize against every program, which no computable process can do. Essentially, you are assuming the human is some kind of hypercomputer.
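The diagonalization point can be made concrete. This is my own toy framing, not anything from the paper: fix an enumeration of programs, and define a "differ from everything" procedure over it.

```python
# A toy enumeration of programs, each mapping an int to an int.
programs = [
    lambda n: 0,
    lambda n: n,
    lambda n: n * n,
]

def diagonal(n: int) -> int:
    """Run program n on input n, then output something different."""
    return programs[n](n) + 1

# diagonal differs from every listed program on at least one input.
assert all(diagonal(i) != programs[i](i) for i in range(len(programs)))
```

But if `diagonal` were itself program `k` in the enumeration, we would need `diagonal(k) == diagonal(k) + 1`, a contradiction. So no program can "run any program and produce a different output", including itself; attributing that power to a human is attributing hypercomputation.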