
The temperature setting controls the randomness of the LLM's output.
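Roughly, sampling divides the model's logits by the temperature before the softmax: higher values flatten the distribution, lower values sharpen it toward the most likely tokens. A minimal numpy sketch of the idea (illustrative only, not any particular LLM's implementation):

    import numpy as np

    def sample_token(logits, temperature=1.0):
        # Scale logits by temperature: > 1 flattens the distribution,
        # < 1 sharpens it toward the highest-logit tokens.
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        # Softmax (shifted by the max for numerical stability).
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Draw the next token id from the resulting distribution.
        return int(np.random.choice(len(probs), p=probs))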

We paraphrase all the time to avoid plagiarism, and that's just a somewhat randomized retelling of the same idea.

If you set the temperature to 0, the LLM is basically in "decompress/rote mode". I don't think this is qualitatively the same as "copying"; it's possibly more akin to "memorization". I haven't seen many demonstrations of verbatim-copy output that weren't done with a temperature at or near 0.
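In the sketch above, as the temperature approaches 0 the softmax puts essentially all of the probability on the single largest logit, so sampling degenerates to argmax (greedy) decoding, which is why temperature-0 output is deterministic and repeatable. A hypothetical helper for that limiting case:

    def sample_token_t0(logits):
        # Temperature ~ 0: the distribution collapses onto the top logit,
        # so the same prompt always yields the same continuation.
        return int(np.asarray(logits).argmax())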




Also, you can't avoid plagiarism by paraphrasing, because paraphrasing is itself a form of plagiarism. The key here is whether you cite the source, which the model doesn't.

https://www.scribbr.com/frequently-asked-questions/is-paraph...


That's coming. Some LLMs are already starting to do that.


How is the fact that there is a flag to disable plagiarism relevant to the issue that there is plagiarism?


Because enabling "plagiarism mode" is a conscious action that a human takes. It does not default to "plagiarize" any more than a machine that has simply stored a verbatim copy of an article, and prints it out when asked, is "plagiarizing". Plus, citations are showing up in LLMs now.



