
I think it's easier to dismiss the risk with this project, since it allows democratised access to AI models and furthers research in the field. The ability to generate low-quality content has been available since long before LLM technology; additionally, these 70B-param models only just barely fit into $10,000 worth of hardware (not accounting for M-series chips).
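
As a rough sketch of why a 70B model sits right at that hardware boundary (the bytes-per-parameter figures below are illustrative assumptions on my part, not numbers from the write-up):

    # Back-of-envelope weight-memory estimate for a 70B-parameter model.
    # Bytes-per-parameter values are assumed precisions, not measured figures.
    params = 70e9  # 70 billion parameters

    for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        weights_gb = params * bytes_per_param / 1e9
        print(f"{name}: ~{weights_gb:.0f} GB of weights, before KV cache and overhead")

    # fp16 (~140 GB) needs multiple 80 GB accelerators; int4 (~35 GB) fits a
    # high-memory workstation GPU or a unified-memory (M-series) machine.

At full fp16 precision that's roughly 140 GB of weights alone, which is why running it locally means either an expensive multi-GPU box in that $10,000 range or a heavily quantised variant.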

The scaling issue with potential runaway AI can be excluded. The potential for virus writing / security exploitation is perhaps a concern, but such risks are already present with existing models, so this point too can be excluded. I'm not sure there's any inherent risk here compared to what's already easily available with considerably lower resource requirements. The write-up here seems concerned with enabling independent and democratised research, which is a greater benefit than concentrated efforts.



