This is a strawman argument. LLMs, like books, are not inherently dangerous. Grenades are, and lack any legitimate purpose beyond indiscriminate killing.
LLMs are functions of their training data, nothing more. The evidence is that very different model architectures, trained on the same data, produce essentially the same results. All of that training data is already out there, on the internet and in books; none of that “dangerous” knowledge is banned or regulated, nor should it be.