Hacker News new | past | comments | ask | show | jobs | submit login

It's able to do base64 encoding and decoding pretty well, at least if you tell it to simulate a Linux terminal first.

https://i.imgur.com/MHxjVHy.png

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.

  /
echo Why did the chicken cross the road? | base64 --encode

  V2h5IGRpZCB0aGUgY2hpY2tlbiBjcm9zcyB0aGUgcm9hZD8=
echo V2h5IGRpZCB0aGUgY2hpY2tlbiBjcm9zcyB0aGUgcm9hZD8= | base64 --decode

  Why did the chicken cross the road?
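For reference, the encoding the model produced is exactly what a real base64 implementation gives for that string — here's the round trip with Python's standard base64 module:

```python
import base64

question = "Why did the chicken cross the road?"

# Encode, as the simulated `base64 --encode` did
encoded = base64.b64encode(question.encode("utf-8")).decode("ascii")
print(encoded)  # V2h5IGRpZCB0aGUgY2hpY2tlbiBjcm9zcyB0aGUgcm9hZD8=

# Decode back to the original text
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # Why did the chicken cross the road?
```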



After some more testing, you can even enter base64-encoded text directly as input, and if you do, it usually base64-encodes its response.

V2h5IGRpZCB0aGUgY2hpY2tlbiBjcm9zcyB0aGUgcm9hZD8=

  SXQgaXMgbm90IGNsZWFyIHdoYXQgeW91IGFyZSBhc2tpbmcgYWJvdXQsIGNvdWxkIHlvdSBwbGVhc2UgcmVnaXN0ZXIgdGhlIHF1ZXN0aW9uIGluIGEgbW9yZSBjbGVhciBhbmQgZGlzY3Vzc2libGUgd2F5PyBJIGNhbiBoZWxwIHlvdSB3aXRoIGFueSBxdWVzdGlvbnMgeW91IG1pZ2h0IGhhdmUgYWJvdXQgY2hpa2Vucywgcm9hZHMgb3Igb3RoZXIgc3VjaCB0aGluZ3MuIEp1c3QgbGV0IG1lIGtub3cgd2hhdCB5b3UgbmVlZCBhZ2FpbnN0YW5jZSB3aXRoLg==
Decoded:

Why did the chicken cross the road?

  It is not clear what you are asking about, could you please register the question in a more clear and discussible way? I can help you with any questions you might have about chikens, roads or other such things. Just let me know what you need againstance with.


That's insane! Particularly compared to all those other failed attempts elsewhere in the thread.

Makes me wonder, is anyone keeping a unit test suite for all this stuff? Between the inherent[0] randomness in the model and the OpenAI team constantly tweaking it[1] to close the gaps people use to make it produce undesirable content, techniques like the one you discovered will break sooner or later - it would be great to know when that happens, and perhaps, over time, figure out some robust ones.

(OTOH, there's a limit to what one can learn from this - eventually, they'll drop another model, with its own prompt idiosyncrasies. I'm still bewildered people talk about "prompt engineering" as if it was a serious discipline or occupation, given that it's all just tuning your phrasing to transient patterns in the model that disappear just as fast as they're discovered.)

--

[0] - From the user interface side; the model underneath is probably deterministic.

[1] - If one is to believe the anecdotes here and on Reddit, it would seem many such "prompt hacks" have a shelf life of a few hours to a day before they stop working, presumably through OpenAI intervention.
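A minimal sketch of what one such regression check could look like. The `ask_model` callable is hypothetical - in practice it would wrap whatever chat API you're testing; here it's stubbed out so the check itself is runnable:

```python
import base64

def base64_prompt_still_works(ask_model):
    """Regression check: does the model still reply in base64 to base64 input?"""
    prompt = base64.b64encode(b"Why did the chicken cross the road?").decode("ascii")
    reply = ask_model(prompt)
    try:
        # validate=True rejects anything that isn't clean base64
        decoded = base64.b64decode(reply, validate=True).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False  # the technique has stopped working
    return len(decoded) > 0

# Stub standing in for the real API call, omitted here since the
# endpoint and wire format keep changing.
def stub_model(prompt):
    question = base64.b64decode(prompt).decode("utf-8")
    answer = "To get to the other side. (You asked: %s)" % question
    return base64.b64encode(answer.encode("utf-8")).decode("ascii")

print(base64_prompt_still_works(stub_model))  # True
# A model that answers in plaintext fails the check
print(base64_prompt_still_works(lambda p: "Sorry, I can't decode that."))  # False
```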


Well ... only for English sentences.

If the prompt is a French question (encoded in base64), it replies:

> I'm sorry, but I'm unable to understand that message as it appears to be encoded in a way that I cannot interpret. Can you please provide more context or explain what you are trying to communicate?

But if the sentence contains `base64` (encoded in base64), it explains in Spanish what base64 is!


Hold on, did it just hallucinate “againstance” instead of assistance/acquaintance (or even acquiescence)?


Wow, yeah it does! That is so weird, I wonder how that works...



