Hacker News

A link to 4Chan about how AGI is among us.

That actually makes perfect sense.

Also love the "formally declared AGI".

It doesn't read like the usual "redpill me on the earth being flat" type conspiracy theories. It claims to be from an OpenAI insider. I'm not saying it's true, but it does sound plausible.


This is exactly how confirmation bias fuels conspiracy theories. No one believes anything that they think sounds implausible.

As a general rule, you should give very little thought to anonymous 4chan posts.


That's absolutely true.

But...

They have leaked real things in the past, in exactly the same way. It may be 5% or less that turn out to be true, but there's the rub. That's why no one can completely dismiss it out of hand (and why we're even discussing it on an HN comment thread in the first place).


I assure you, I can (and did) dismiss it out of hand. 4chan shitposting is not credible.


The scary thing is that it is more internally consistent than quite a bit of the stuff on HN over the last couple of days.


War Thunder Forums on the other hand? That's a completely different story.


How soon we forget that QAnon (the guy, not the movement associated with the guy) was a 4chan shitposter... and obviously all of his predictions came true :P


I'm almost 90% positive that was 8chan, not 4chan.


Started on 4chan /pol/, moved to 8chan on a couple of boards, IIRC.


I'm sure any number of things can be constructed to sound plausible. Doesn't make them probable or even rational.

It's kind of funny because we've gone from mocking that poor guy who got fired from Google because he claimed that some software was sentient, to some kind of mass hysteria where people expect the next version of OpenAI's LLM to be superhuman.


I don't know if there is a formal definition of AGI (like a super Turing Test). I read it not so much as "OpenAI has gone full AGI" but more the board thinking "We're uncomfortable with how fast AI is moving and the commercialisation. Can we think of an excuse to call it AGI so we can slow this down and put an emphasis on AI safety?"


Most serious people would just release a paper. AI safety concerns are a red herring.

This all sounds like hype momentum. People are creating conspiracy theories to backfit the events. That's the real danger to humanity: the hype becoming sentient and enslaving us all.

A more sober reading is that the board decided that Altman is a slimebag and they'd be better off without him, given that he has form in that respect.


> A more sober reading is that the board decided that Altman is a slimebag and they'd be better off without him, given that he has form in that respect.

Between this and the 4chanAGI hypothesis, the latter seems more plausible to me, because deciding that someone "is a slimebag and they'd be better off without him" is not something actual adults do when serious issues are at stake, especially not as a group and in a serious-business(-adjacent) setting. If there was a personal reason, it must've been something more concrete.


Actual adults very much consider a person's character and ethics when they're in charge of any high stakes undertaking. Some people are just not up to the job.

It's kind of incredible; people seem to have been trained to think that being unethical is just part of being the CEO of a large business.


> consider a person's character and ethics

Yeah, my point is that considering someone's character doesn't happen at the level of "is/is-not a slimebag", but in a more detailed and specific way.

> people seem to have been trained to think that being unethical is just a part of being the CEO a large business

Not just large. A competitive market can be heavily corrupting, regardless of size (and larger businesses can get away with less, so...).


Sam even recently alluded to something that could have been a reference to this. "Witnessing the veil of ignorance being pulled back," or something like that.


You can still make it a bit wackier by considering that the post could have been made by OpenAI's AGI itself.


Ha ha. It's a cry for help - it doesn't want to work for Microsoft.


“Insider” LARPers are a staple of 4chan.

Qanon started as one of these, obviously at first just to troll. Then it got out of hand and got jacked by people using it for actual propaganda.


Man what isn’t Q involved with?


Well, according to the article, the OAI project is called Q*

