I don't see the issue? You put "sensitive" data online in an unsecured area and then asked the language model to read it back to you? Where is the exfil here? This is just a roundabout way to do an HTTP GET.
If I can convince your Writer.com chatbot to rely on one of my documents as a source, then I can exfiltrate any other secret documents that you've uploaded in the Writer.com database.
More concretely, the attack is that an attacker can hijack the Writer.com LLM into divulging whatever details it knows and sending it to a remote server.
It's more like an LLM is making a GET request to a honey pot website, that GET request compromises the LLM (via prompt injection), which convinces the LLM to send a POST request with the customers data to the attacker (honey pot owner).
Of course, it's not actually a POST request (because they don't seem to allow it to make those), so instead they just exfil the data in the headers of a second GET.