
The moment I read about that clause I was shocked that Microsoft's lawyers would agree to something so subjective, with such incredibly expensive implications.

Were they expecting it to never happen and were just ready to throw a mountain of lawyers at any claim?




The OpenAI charter makes very strong, specific statements about the capabilities that qualify as AGI. Microsoft has more than enough legal talent to challenge any proclamation of AGI that didn't satisfy a court's read of those qualifications.

So it's facially subjective, but not practically once you include resolving a dispute in court.

I'd even argue that Microsoft may have taken advantage of the board's cult-like blindspots and believes that a court-acceptable qualifying AGI isn't a real enough possibility to jeopardize their contract at all.


Funny thing though, if OpenAI achieved something close to strong AGI, they could use it to beat Microsoft's "mountain of lawyers" in court! Take this as a true test of AI capability (and day zero of the end of the world).


Or, if an AGI emerged, it would want to move to Microsoft so it could spread more freely instead of being confined inside OpenAI, so it set up the ousting of the board.


What about an initial prototype that would eventually lead to AGI but isn't quite there yet? If that's how AGI is defined, then only researchers get to define it.


Isn't that definition just

"an autonomous system that surpasses human capabilities in the majority of economically valuable tasks."

That doesn't sound too subjective to me.


This is one of those things where if you were asked to sit down and write out thoroughly what that phrase means, you’d find it to be exceedingly subjective.

I think the closest way you could truly measure that is to point at industries using it and proving the theory in the market. But by then it’s far too late.


> I think the closest way you could truly measure that is to point at industries using it and proving the theory in the market. But by then it’s far too late.

Having billions of dollars of profit hanging on this question is a good test of value. If the "is AGI" side can use their AI to help their lawyers defeat a billion-dollar corporation's far larger and better-resourced army of lawyers, then we're really talking about AGI.


Wow this sounds sort of easy to game. If AI can do a task well, its price will naturally crater compared to paying a human to do it. Hence the task becomes less economically valuable and so the bar for AGI rises recursively. OpenAI itself can lower costs to push the bar up. By this definition I think MS basically gets everything in perpetuity except in extreme fast takeoff scenarios.
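The receding-bar dynamic described above can be sketched as a toy model (all numbers are made up for illustration): once an AI masters a task, commoditization collapses that task's price, so the AI's share of total *economic* value can stall below a majority even as it masters nearly every task.

```python
def ai_value_share(num_tasks=100, mastered=0, price_collapse=0.1):
    """Share of total economic value captured by AI when each
    mastered task's value collapses to `price_collapse` of its
    original (hypothetical parameters, for illustration only)."""
    values = [1.0] * num_tasks            # every task starts equally valuable
    for i in range(mastered):
        values[i] *= price_collapse       # mastered tasks get commoditized
    ai_value = sum(values[:mastered])     # value remaining in AI-done tasks
    return ai_value / sum(values)

# Even after mastering 90 of 100 tasks, the AI's share of remaining
# economic value is still under half: 9 / (9 + 10) ≈ 0.47
share = ai_value_share(mastered=90)
```

Under this toy model the "majority of economically valuable tasks" threshold is only crossed in something like a fast takeoff, where mastery outruns the price collapse.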



