
Right. Altman didn't sign the AI pause though.

It's clear from the congressional hearings, but people didn't watch them; they seem to have skimmed article titles and made up their own narrative.

EDIT:

Which, to my point, means that "these companies" are not calling for "competitive upstarts" to be regulated. They are calling for regulation of future very large models, which they themselves are currently the most likely to train due to the enormous computational cost. That is the opposite of what you were saying.




I'll start by saying that I think ours is an honest difference of opinion (or actually even weaker: a difference of speculation) in a very uncertain space. With that out of the way:

When I watched that congressional hearing, what I saw was identical to every other oh-so-concerned executive of an incumbent dominant business in any other industry. I truly see no reason to give them any more benefit of the doubt than the CEO of Exxon expressing concern about climate change, or of Philip Morris expressing concern about lung cancer. And just as in those cases, it's not that the concern isn't legitimate; it's that they aren't credible people to be leading the charge against it. I suspect that is obvious to you in the other two cases, so I ask you: why do you draw a different conclusion here? Do you believe tech executives are just fundamentally more ethical than executives in other industries? I simply don't think that's accurate.

I also think this is credulous:

> They are calling for future very large models, which they themselves are currently the most likely to train due to the enormous computational cost, to be regulated.

I've quoted the part I think is wrong. What is actually happening is that lots of people are finding lots of different ways to undermine that thesis. Having already sunk huge costs into the currently productized models, and into the next generation, these companies are worried about getting undercut by "good enough" models that are cheaper and mostly open.

Have you read "We Have No Moat, and Neither Does OpenAI", the memo that leaked out of Google? This is exactly what it's talking about.

Trying to regulate those models out of existence before they can undercut you is just rational business, and I don't begrudge them doing it; it's their job. But it's up to the rest of us not to just go along to get along.



