Research developments already show that our models are woefully inefficient in their current state (compare the performance of GPT-3 175B against Alpaca 30B). Not only will hardware get better, but the minimum model size needed for good inference will also shrink.


