
I completely believe you, but it's funny to me that there's a laundry list of ways this could have easily been faked (tell it to respond with a specific answer after the next question, edit the text, edit the image, ask another LLM to generate an image faking it, train it on custom data to spit that out...), to the point that one might as well not bother putting in the effort to prove it happened. Like, what are we supposed to do other than say "trust me bro"?

This particular example isn't so "new" in that regard (it's a raster capture of digital text output), but just trying to think of ways you could give believable evidence: have a notary watch while you type all of your prompts, take 3D video with a camera moving along a random path in the hope that it's too complex to fake easily (for now), or record everything you do on the computer for deterministic replay? Anything short of that lacks any useful amount of trustworthy evidence.




OpenAI neatly solved this by letting you share a link to the transcript of your entire conversation.

It's a lot more difficult for local models, though.





