
Maybe. It's happened before. [1] And several of the signatories have expressed views about AI risk for many years.

That said, the renewed anxiety is probably not because these experts think that LLMs per se will become generally intelligent. It's more that each time something the human brain does that we thought was impossible for computers turns out to be easy, each time a problem we expected to take centuries is cracked by AI researchers in 3 to 5 years[2], people have to adjust their perception of how high the remaining barriers to general intelligence might be. And when billions of investment dollars pour in at the same time, directing far more research into the field, that's another factor that shortens timelines.

[1] https://news.ycombinator.com/item?id=14780752

[2] https://kotaku.com/humans-triumph-over-machines-in-protein-f...
