Hacker News

> These would seem contradictory.

Only if you believe that LLM is a synonym for AI, which OpenAI doesn't.

The things Altman has said seem entirely compatible with "the danger to humanity is ahead of us, not here and now", although in part that's because of the effort put into making GPT-4 refuse to write propaganda for Al Qaeda, as per the red-team safety report they published at the same time as releasing the model.

Other people are very concerned with here-and-now harms from AI, but that's stuff like "AI perpetuates existing stereotypes" and "when the AI reaches a bad decision, who do you turn to to get it overturned?" and "can we, like, not put autonomous tasers onto the Boston Dynamics Spot dogs we're using as cheap police substitutes?"



