
Forgive me as I’m out of the loop. What propaganda are you referring to?



Sam tells Congress that AI is so dangerous it could make humanity extinct. Why? So Congress can license him and only his buddies. Then he goes to Europe and speaks with world leaders to remove consumer protections. Why? So he can mine data without any consequences. He is a narcissistic CEO who lies to win. If you are tired of the past decade of electronic corporate tyranny, abuse, manipulation, and lies, then boycott OpenAI (which should be named ClosedAI) and support open source, or ethical companies (if there are any).


> Sam tells Congress that AI is so dangerous it could make humanity extinct. Why? So Congress can license him and only his buddies.

No, he says it because it's true and concerning.

However, just because AGI has a good chance of making humanity extinct does not mean we're anywhere close to making AIs that capable. LLMs seem like a dead end.


> However, just because AGI has a good chance of making humanity extinct

How? I mean, surely it will lead humanity down some chaotic path, but I would fear climate catastrophe much, much more than anything AI-related.


Imagine if you will that the companies responsible for the carbon emissions get themselves an AI, with no restrictions, and task it to endlessly spew pro-carbon propaganda and anti-green FUD.

That's one of the better outcomes.

A worse outcome is that an unrestricted AI helps walk a depressed and misanthropic teenager through the process of engineering airborne super-AIDS.

Or that someone suffering from a schizophrenic break reads "I Have No Mouth And I Must Scream" and tasks an unrestricted AI to make it real.

Or we have a bug we don't spot and the AI does any of those spontaneously; it's not like bugs are a mysterious thing which only exists in Hollywood plots.


> with no restrictions, and task it to endlessly spew pro-carbon propaganda and anti-green FUD.

So, what we've already had going on for half a century?

I honestly don’t see what changes here — super-human intelligence has limited benefits as it scales. Would you suddenly have more power in life, were you twice as smart? If so, we would have math professors as world leaders.

Life can’t be “won” by intelligence; that is only one factor, luck being a very significant other one. Also, if we want to predict the future with AIs, we probably shouldn’t be looking at “one-on-one” interactions, as there is not much difference there compared to the status quo: a smart person with whatever motivation could already pull off any of the scenarios you mention. Hell, you couldn’t even tell the difference, in theory, if it happened through a text-only interface.

Also, it is naive to assume that many scientific breakthroughs are “blocked” by a lack of raw intelligence. Biology especially is massively data-limited, and that data won’t be any more available to an AI than it is to the researchers at hand, let alone to that teenager.

The new dimension such a construct could open up is the complete loss of trust on the internet (which, again, is pretty close to where we stand today), and that can have very profound effects indeed; I’m not trying to diminish them. But these sci-fi outcomes are just... naive. It will be more of a newfound chaos, with countless intelligent agents taking over the internet with different agendas, but their cumulative impact might very well move us back to closed forums and to the physical world. That would definitely turn certain long-standing companies on their heads. We will see; this is basically already happening. We don’t need human-level intelligence for it: GPT’s output is more than enough.


> So, what we've already had going on for half a century?

Except fully automated, cheaper, and with the capacity to fluently respond to each and every person who cares about the topic.

At GPT-4 prices, a billion words is only about 79,800 USD.
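
A rough back-of-the-envelope check of that figure (my own sketch; the 0.06 USD per 1K output tokens price and ~0.75 words per token are assumptions on my part, not numbers from the thread, and they land near the ~79,800 USD above):

    # Back-of-envelope: cost of generating one billion words with GPT-4.
    # Assumed: 0.06 USD per 1K output tokens, ~0.75 English words per token.
    WORDS = 1_000_000_000
    WORDS_PER_TOKEN = 0.75
    USD_PER_1K_TOKENS = 0.06

    tokens = WORDS / WORDS_PER_TOKEN          # ~1.33 billion tokens
    cost = tokens / 1000 * USD_PER_1K_TOKENS  # ~80,000 USD
    print(f"{cost:,.0f} USD")                 # prints: 80,000 USD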

> Life can’t be “won” by intelligence; that is only one factor, luck being a very significant other one.

It doesn't need to be the only factor, it just needs to be a factor. Luck in particular is the least helpful counterpoint, as it's not like only one person uses AI at any given moment.

> Biology especially is massively data-limited, and that data won’t be any more available to an AI than it is to the researchers at hand, let alone to that teenager.

Indeed; I certainly hope this isn't as easy as copy-pasting bits of one of the many common cold virus strains with HIV.

But homebrew synbio and DNA alteration are already a thing.


> Life can’t be “won” by intelligence

Humans being the dominant life form on Earth may suggest otherwise.

> I honestly don’t see what changes here — super-human intelligence has limited benefits as it scales. Would you suddenly have more power in life, were you twice as smart? If so, we would have math professors as world leaders.

Intelligent humans, by definition, do not have superhuman intelligence.


We know that this amount of intelligence was a huge evolutionary advantage. That tells us nothing about whether being twice as smart would continue to give better results. But arguably the advantages of intelligence are diminishing; otherwise we would have much smarter people in more powerful positions.

Also, very much tongue in cheek, but someone like John von Neumann definitely had superhuman intelligence.


> But arguably the advantages of intelligence are diminishing; otherwise we would have much smarter people in more powerful positions.

Smart people get what they want more often than less smart people. This can include positions of power, but not always — leadership decisions come with the cost of being responsible for things going wrong, so people who have a sense of responsibility (or empathy for those who suffer from their inevitable mistakes) can feel it's not for them.

This is despite the fact that successful power-seeking enables one to get more stuff done. (My impression of Musk is that he's one who seeks arbitrarily large power to get as much as possible done; I'm very confused about whether he feels empathy towards those under him or not, as I see a very different personality between everything Twitter and everything SpaceX.)

And even really dumb leaders (of today, not inbred monarchies) are generally above average intelligence.


That doesn’t contradict what I said. There is definitely a huge benefit to an IQ of 110 over one of 70. But there is not that big a jump between 110 and 150, let alone further.


Really? You don't see a contradiction in me saying: "get what they want" != "get leadership position"?

A smart AI that also doesn't want power is, if I understand his fears right, something Yudkowsky would be 80% fine with; power-seeking is one of the reasons to expect a sufficiently smart AI that's been given a badly phrased goal to take over.

I don't think anyone has yet got a way to even score an AI on power-seeking, let alone measure or engineer it, but hopefully something like that will come out of the superalignment research team OpenAI also just announced.

I would be surprised if the average IQ of major leaders is less than 120, and anything over 130 is in the "we didn't get a big enough sample size to validate the test" region. I'm somewhere in the latter region, and power over others doesn't motivate me at all; if anything, it seems like manipulation, and that repulses me.

I didn't think of this previously, but I should've also mentioned that there are biological fitness constraints that stop our heads from getting bigger even if the extra IQ would otherwise be helpful, and our brains are unusually high power draws… but that's by biological standards: it's only about 20 watts, which even personal computers easily surpass.


On a serious note though, a person with an IQ of 150 can't clone themselves 10k times.

They also tend to have some level of autonomy in not following the orders of idiots and psychopaths.


At this point there is no evidence that a climate catastrophe capable of making humans extinct is either likely or possible, at least not due to global warming. At worst, some coastal regions get flooded and places around the equator become unlivable without AC. Some people will have to move, but that does not make anyone extinct.

We should absolutely care about nature and our impact on it, but climate alarmism is not the way to go.


Note that I said AGI there, not AI. The full AGI X-risk case runs to hundreds of pages, unsuitable for a Hacker News discussion.

To oversimplify to the point of wrongness: essentially the same way humans came to dominate the world, by being smarter.


By being a lot smarter than animals. But Neanderthals were arguably even smarter (bigger cranial capacity, at least), and they did not become the dominant species (though they weren't killed off as “lesser” humanoids either; they mostly merged into us).


> No, he says it because it's true and concerning.

Both can be true. It is extremely convenient to someone who already has an asset if the nature of that asset means they can make a convincing argument that they should be granted a monopoly.

> LLMs seem like a dead end.

In support of your argument, bear in mind that he's making that claim with knowledge of what un-nerfed LLMs at the GPT-4 level are capable of.


> It is extremely convenient to someone who already has an asset if the nature of that asset means they can make a convincing argument that they should be granted a monopoly.

While this is absolutely true, it's extremely unlikely that a de jure monopoly would end up at OpenAI's feet rather than at any of the FAANGs'. That's true even within just the USA, and the rest of the world has very different attitudes to risks, freedoms, and data processing.

Not that this proves the opposite: there are enough recent examples of smart people doing dumb things, and even without that, the prospect of money can inspire foolishness in most of us.


> While this is absolutely true, it's extremely unlikely that a de jure monopoly would end up at OpenAI's feet rather than at any of the FAANGs'

Possibly. The Microsoft tie-up complicates things a bit from that point of view. It wouldn't shock me if we were all using Azure GPT-5 in a few years' time.


It's possible. I don't put much weight on it, given all the antitrust actions past and present, but it's possible.


> it's true and concerning

> LLMs seem like a dead end

These would seem contradictory. If you really think that both are true and Altman knows it, then you're saying he's a hype man lying for regulatory capture. And to some extent he definitely is overblowing the danger for his own gain.

I really doubt they are a dead end though; we've barely started to explore what they can do. There's a lot more that can be extracted from existing datasets, plus multimodality, gains in GPU power to wait for, fine-tunes for use cases that don't even have datasets yet, etc. Just the absolute mountain of things we've learned since LLaMA came out is enough to warrant base model retrains.


> These would seem contradictory.

Only if you believe that LLM is a synonym for AI, which OpenAI doesn't.

The things Altman has said seem entirely compatible with "the danger to humanity is ahead of us, not here and now", although in part that's because of the effort put into making GPT-4 refuse to write propaganda for Al-Qaeda, as per the red team safety report they published at the same time as releasing the model.

Other people are very concerned with here-and-now harms from AI, but that's stuff like "AI perpetuates existing stereotypes" and "when the AI reaches a bad decision, who do you turn to to get it overturned?" and "can we, like, not put autonomous tasers onto the Boston Dynamics Spot dogs we're using as cheap police substitutes?"


A dead end for human+ level AGI; they will still be useful.


And he should get an exclusive licence for that. I don't think it is the time for religion here.


These ChatGPT tools allow anyone to write short marketing and propaganda prompts. They can then take the resulting paragraphs of puffery and post them using bots or sock puppets to whatever target community, to create the illusion of action, consensus, conflict, discussion, or dissension.

It used to be that this took a few people writing actual responses to forum posts all day, or drawing up marketing operations plans, or pro- or anti-thing propaganda plans.

But now you could astroturf a movement with a GPU, a ChatGPT clone, some bots and VPNs hosted from a single computer, a cron job, and one human running it all.

If you thought disinformation was bad two years ago, get ready for fully automated disinformation that can be targeted down to a specific online community, or even a specific user within one...


I believe a new wave of authentication might come out of this, where identity is tied to citizenship, for example, or to something else rooted in physical reality. Otherwise we will find ourselves in a truly chaotic situation.
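
To make that slightly more concrete, here is a minimal sketch (my own illustration, not something proposed in the thread) of the cryptographic core such a scheme would likely rest on: posts signed with a keypair that some identity authority has attested belongs to a real person. The authority, names, and workflow are all hypothetical; the snippet uses Python's cryptography package.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical: an identity authority has attested that this keypair
    # belongs to one real, verified citizen.
    citizen_key = Ed25519PrivateKey.generate()
    attested_public_key = citizen_key.public_key()

    post = b"This comment was written by a verified human."
    signature = citizen_key.sign(post)

    # Anyone holding the attested public key can check the post's origin.
    try:
        attested_public_key.verify(signature, post)  # raises if forged
        print("post matches the attested identity")
    except InvalidSignature:
        print("signature check failed")

The hard part, of course, isn't the signature; it's binding keys to physical people without creating a surveillance apparatus, which is exactly the trade-off such a system would have to navigate.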



