
This is only going to get worse with large language models. Imagine a somewhat knowledgeable individual who could craft emails, messages, and even commits with a handful of prompts, all of them tailored closely to the project.



Maybe one day it will happen, but right now an LLM-generated persona would likely set off every alarm bell for a lot of people. LLMs have a very recognizable style, and it usually falls right into the uncanny valley.


The "recognizable style" that people usually refer to is the default persona that most are exposed. However, the style can be changed very drastically with some fairly simple prompting.


It doesn't have to be completely automated, just enough to make the process of juggling multiple personas a bit smoother.


Do you have any evidence or real examples to support that? I hear people say similar things but see nothing to suggest LLMs are a particular threat.


The real threat of LLMs is their potential to ruin your day if you use them to assist in your work.


Username checks out


Are you asking for evidence that LLMs can be used to write emails and chat messages?


I don't think this is going to be a big issue. These have to be high-profile attacks; if you look at the xz backdoor, there was some top-notch engineering behind it.

If LLMs ever reach the level of being able to do that, we won't need open source contributors any more. We'll just tell the LLM to program an operating system and it will do it.



