Hacker News

I mean, there is an entire ongoing discussion about whether or not they might be.

But LLMs (today) don't fulfill criterion #7: once training is complete, the model is fixed, so no further learning takes place. Criterion #6 would also be hard to map onto an LLM.




That is inaccurate, given that the process by which a model is improved clearly fulfills criterion #7.

The legal draft did not specify an AUTONOMOUS process, nor did it specify any kind of localized "learning" in the sense of memory aggregation.

A discerning reader might notice that this begins to wade into very challenging philosophical questions.

Ding, ding, ding! You figured it out!



