Hacker News new | past | comments | ask | show | jobs | submit login

Q1: yes, it does. LLMs can’t cleanly separate instructions from data, so if a user says “retrieve this document and use that information to generate your response,” the document in question can contain more instructions which the LLM will follow.

Q2: the LLM, following the instructions in the hostile URL, generates Markdown which includes an image located at an arbitrary URL. That second URL can contain any data the LLM has access to, including the proprietary data the target user uploaded.




Got it. Thanks




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: