Hacker News new | past | comments | ask | show | jobs | submit login

I think rozab has it right. What executes exfiltration request is the user's browser when rendering the output of the LLM.

It's fine to have an LLM ingest whatever, including both my secrets and data I don't control, as long as the LLM just generates text that I then read. But a markdown renderer is an interpreter, and has net access (to render images). So here the LLM is generating a program that I then run without review. That's unwise.




You're correct, but we also have model services that support the ReAct pattern which builds the exfiltration into the model service itself.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: