Hacker News

I submitted a puzzle from https://dmackinnon1.github.io/fickleSentries/, with the basic prompt, "I am going to present you with a logic puzzle. I would like you to solve the puzzle."

https://pastebin.com/a3WzgvK4

The solution GPT-3.5 (I don't have access to 4) gave was: "In conclusion, based on the statements and the given information, the treasure in the cave must be copper."

The solution given with the puzzle is "Here is one way to think about it: If Guard 1 is telling the truth, then the treasure must be diamonds. If Guard 1 is lying, then the treasure can be copper or gold. If Guard 2 is telling the truth, then the treasure must be silver. If Guard 2 is lying, then the treasure can be diamonds or rubies. The only possible option based on the statements of both guards is diamonds."
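For what it's worth, the quoted solution can be checked mechanically. The sets below are lifted straight from that solution text (the guards' original statements aren't reproduced in the thread): a treasure is possible for a guard if it appears in either his truth-telling set or his lying set, and the answer is whatever survives both guards. This is just a sketch of that intersection, not the puzzle site's own checker.

```python
# Possibility sets as stated in the puzzle's solution text.
guard1 = {"truth": {"diamonds"}, "lie": {"copper", "gold"}}
guard2 = {"truth": {"silver"}, "lie": {"diamonds", "rubies"}}

# A treasure is consistent with a guard if some truth-value of that
# guard's statement allows it; the answer must satisfy both guards.
candidates = (guard1["truth"] | guard1["lie"]) & (guard2["truth"] | guard2["lie"])
print(candidates)  # {'diamonds'}
```

So "diamonds" is the only consistent option, confirming the puzzle's answer and ruling out GPT-3.5's "copper".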

Is there any way to improve that prompt?





That is some very weird reasoning, though.

"Looking at their statements again, if Guard 1 is telling the truth about guarding diamonds (as we deduced), he would be lying about the silver. This is okay, because Guard 1 can tell a half-truth while guarding diamonds. For Guard 2, if he's telling the truth about the silver, he'd be lying about the platinum, which is also allowed. So the treasure they are guarding can be diamonds. This makes Guard 1's statement (The treasure is either silver or diamonds) half-true and Guard 2's statement (The treasure is either silver or platinum) half-false."



