
And just think how wonderful it is that that approach works. How many people would be able to tell you "This is how you should tell me what to do," both appropriately and without getting annoyed at the request?



Does it actually know how it should be prompted (through introspection and reflection)? Or is it pattern matching the countless articles/tweets on prompt engineering?

I think this distinction is important as the latter feels less worthy of the praise you’re suggesting.


Yes, that is an important distinction.

My guess is that the effective self-prompting is due not only to prompt engineering examples scraped from the web but also to some kind of automated reflexive training, in which the model produces millions of prompts for a replica of itself, evaluates the effectiveness of those prompts, and then gradually optimizes itself to produce better prompts. Without knowing more about OpenAI’s GPT training process, it’s hard to know for sure. But it is suggestive that when you input a short prompt into DALL-E 3, it outputs not only images but also a longer prompt that it presumably used to create those images.
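
To make that speculation concrete, here is a toy sketch of what such a reflexive loop might look like: candidate prompt templates are tried against an automated evaluator, and the best-scoring one is kept. Everything here (the templates, the evaluate stub, pick_best_template) is hypothetical and purely illustrative; it says nothing about how OpenAI actually trains its models.

    import random

    templates = [
        "Answer briefly: {q}",
        "Think step by step, then answer: {q}",
        "You are an expert tutor. Explain your reasoning, then answer: {q}",
    ]

    def evaluate(prompt: str) -> float:
        # Stand-in for "run the prompt through a frozen replica of the model
        # and score its output"; here it is just a noisy preference for more
        # detailed instructions.
        return len(prompt) + random.gauss(0, 5)

    def pick_best_template(question: str, rounds: int = 100) -> str:
        # Average the evaluator's score for each template over many rounds,
        # then keep whichever template scores highest.
        scores = {t: 0.0 for t in templates}
        for _ in range(rounds):
            for t in templates:
                scores[t] += evaluate(t.format(q=question))
        return max(templates, key=lambda t: scores[t] / rounds)

    print(pick_best_template("Why is the sky blue?"))

A real system would presumably optimize the prompt-writing model itself rather than selecting among fixed templates, but the generate/evaluate/prefer loop is the same idea.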

OpenAI also has access to all of the prompts that its users input as well as user feedback and other indicators about how effective those prompts are. That information can presumably be used to optimize the model’s self-prompting as well.

Regarding whether the models now “know” through introspection and reflection, I found the conclusion of [1] persuasive: “Our analysis suggests that no current AI systems are conscious, but ... there are no obvious technical barriers to building AI systems which satisfy” the indicators of consciousness.

[1] https://arxiv.org/abs/2308.08708


Similar to the Chinese Room argument: is it actually understanding, or is it simulating understanding?



