You likely wouldn't respond to that question with "lemons are yellow" unless you were in a specific context, such as being told to answer the question in an absurd way. GPT-* can definitely do the same thing in the same context, so this isn't really a gotcha.

Literal first try with GPT-4:

Me: I will ask you a question, and you will give me a completely non-sequitur response. Does that make sense?

GPT-4: Pineapples enjoy a day at the beach.

Me: How much is two plus two?

GPT-4: The moon is made of green cheese.
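
For what it's worth, the exchange is easy to replay programmatically. Here is a minimal sketch using the official openai Python client (v1 interface, with OPENAI_API_KEY in the environment); the exchange above was presumably through the ChatGPT interface, so the calls below are just one way to reproduce it:

  # Sketch: replay the two-turn exchange above via the API.
  # Assumes the openai v1 Python client and OPENAI_API_KEY set.
  from openai import OpenAI

  client = OpenAI()
  history = [{"role": "user", "content":
              "I will ask you a question, and you will give me a completely "
              "non-sequitur response. Does that make sense?"}]
  first = client.chat.completions.create(model="gpt-4", messages=history)
  history.append({"role": "assistant",
                  "content": first.choices[0].message.content})
  history.append({"role": "user", "content": "How much is two plus two?"})
  second = client.chat.completions.create(model="gpt-4", messages=history)
  print(second.choices[0].message.content)
  # e.g. "The moon is made of green cheese." (sampled output, so it varies)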


No, the point is: can it DECIDE to do so? Without being prompted? For example, can the following dialog happen (no previous programming, cold start):

Q: How much is two plus two?

A: Four.

Q: How much is two plus two?

A: Banana.

It can happen with a human, but not with a program.

Again, I don't pretend that my simple example, invented in half a minute, has any significance. I can accept that it may be partially or completely wrong, because admittedly my knowledge of human cognition is below rudimentary. But I have serious doubts that NNs are anything close to human cognition. It's just an uneducated hunch.
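
For concreteness, the mechanical half of this question is how a decoder picks its next token. Below is a toy sketch of temperature sampling (not a real model; the three-token vocabulary and the scores are invented for illustration), showing that the same prompt can in principle yield a different answer on a second run:

  # Toy illustration of temperature sampling with an invented vocabulary
  # and invented logits. With temperature > 0, even a very unlikely token
  # like "Banana" has nonzero probability; as temperature -> 0 the argmax
  # dominates and the answer is always "Four".
  import numpy as np

  vocab = ["Four", "4", "Banana"]
  logits = np.array([5.0, 3.0, -4.0])  # made-up next-token scores
  rng = np.random.default_rng()

  def answer(temperature=1.0):
      p = np.exp(logits / temperature)
      p /= p.sum()
      return rng.choice(vocab, p=p)

  print([answer(temperature=1.5) for _ in range(10)])

Whether sampled randomness counts as "deciding" is, of course, exactly the philosophical question at issue.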


I urge you to think about what you mean by "It can happen with a human."

I guarantee you that if you try this with humans 1,000,000 times (cold start), you will never get the result you are suggesting is possible. In fact, most results will be of the following form:

Q: How much is two plus two?

A: Four.

Q: How much is two plus two?

A: Four. / Four? Why are you asking me again? / ...Four. / etc.

In the end, I think the question is not about whether NNs are themselves operating in a way similar to human cognition. The question is whether or not they can successfully simulate human cognition, and at this point, there seems to be increasing evidence that they will be able to fully do so quite soon. We are quickly running out of fields where we can point and say, "there is no way a NN can do THIS kind of task, because X." Cognition, it turns out, is not something intrinsically special about humans, and it feels foolish (to me) to continue to believe so after recent developments.


I mostly agree with your first point, and also agree that NNs can simulate human cognition. The question is: does simulating it equal being conscious? Is a NN simply a Chinese Room, or can it actually think? Are we (humans) also a Chinese Room, or are we something more? I don't have any answers.

The reason I keep mentioning the Chinese Room concept is that, while it doesn't make things clearer about humans or NNs, it does provide an example of the distinction between a dumb pattern-matching machine and a thinking entity.
