
Imagine, if you will, that the companies responsible for carbon emissions get themselves an AI, with no restrictions, and task it to endlessly spew pro-carbon propaganda and anti-green FUD.

That's one of the better outcomes.

A worse outcome is that an unrestricted AI helps walk a depressed and misanthropic teenager through the process of engineering airborne super-AIDS.

Or that someone suffering from a schizophrenic break reads "I Have No Mouth And I Must Scream" and tasks an unrestricted AI to make it real.

Or we have a bug we don't spot and the AI does any of those spontaneously; it's not like bugs are mysterious things that only exist in Hollywood plots.




> with no restrictions, and task it to endlessly spew pro-carbon propaganda and anti-green FUD.

So, the same thing we’ve had ongoing for half a century?

I honestly don’t see what changes here: superhuman intelligence has limited benefits as it scales. Would you suddenly have more power in life if you were twice as smart? If so, we would have math professors as world leaders.

Life can’t be “won” by intelligence; that is only one factor, luck being a very significant other one. Also, if we want to predict the future with AIs, we probably shouldn’t be looking at “one-on-one” interactions, as there is not much difference there compared to the status quo: a smart person with whatever motivation could already carry out any of the scenarios you mention. Hell, in theory you couldn’t even tell the difference if it happened through a text-only interface.

Also, it is naive to assume that many scientific breakthroughs are “blocked” by raw intelligence alone. Biology especially is massively data-limited, and that data won’t be any more available to an AI than it is to the researchers in the field, let alone to that teenager.

The new dimension such a construct could open up is the complete loss of trust on the internet (which is again pretty close to where we stand today), and that can have very profound effects indeed; I’m not trying to diminish them. But these sci-fi outcomes are just naive. It will be more of a newfound chaos, with countless intelligent agents taking over the internet with different agendas, but their cumulative impact might very well move us back to closed forums and to the physical world. That would definitely turn certain long-standing companies on their heads. We will see; this is basically already happening, and we don’t need human-level intelligence for it, as GPT’s output is more than enough.


> So, the same thing we’ve had ongoing for half a century?

Except fully automated, cheaper, and with the capacity to fluently respond to each and every person who cares about the topic.

At GPT-4 prices, a billion words costs only about 79,800 USD.
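For anyone who wants to check my math, a quick back-of-the-envelope sketch (this assumes the mid-2023 GPT-4 output price of $0.06 per 1K tokens and ~1.33 tokens per English word; both figures are approximations, and prices change):

    # Back-of-the-envelope: cost of generating a billion words with GPT-4.
    # Assumes $0.06 per 1,000 output tokens (mid-2023 pricing) and
    # ~1.33 tokens per English word; both are rough approximations.
    words = 1_000_000_000
    tokens = words * 1.33             # ~1.33e9 tokens
    cost_usd = tokens / 1000 * 0.06   # price per 1K output tokens
    print(f"${cost_usd:,.0f}")        # -> $79,800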

> Life can’t be “won” by intelligence; that is only one factor, luck being a very significant other one.

It doesn't need to be the only factor; it just needs to be a factor. Luck in particular is the least helpful counterpoint, as it's not as if only one person uses AI at any given moment.

> Biology especially is massively data-limited, and that data won’t be any more available to an AI than it is to the researchers in the field, let alone to that teenager.

Indeed; I certainly hope this isn't as easy as copy-pasting bits of one of the many common cold virus strains into HIV.

But homebrew synbio and DNA alteration are already a thing.


> Life can’t be “won” by intelligence

Humans being the dominant life form on Earth may suggest otherwise.

> I honestly don’t see what changes here: superhuman intelligence has limited benefits as it scales. Would you suddenly have more power in life if you were twice as smart? If so, we would have math professors as world leaders.

Intelligent humans, by definition, do not have superhuman intelligence.


We know that this amount of intelligence was a huge evolutionary advantage. That tells us nothing about whether being twice as smart would continue to give better results. But arguably the advantages of intelligence are diminishing; otherwise we would have much smarter people in more powerful positions.

Also, tongue firmly in cheek: someone like John von Neumann definitely had superhuman intelligence.


> But arguably the advantages of intelligence are diminishing; otherwise we would have much smarter people in more powerful positions.

Smart people get what they want more often than less smart people do. This can include positions of power, but not always: leadership decisions come with the cost of being held responsible when things go wrong, so people who have a sense of responsibility (or empathy for those who suffer from their inevitable mistakes) can feel it's not for them.

This is despite the fact that successful power-seeking enables one to get more done. (My impression of Musk is that he seeks arbitrarily large power to get as much as possible done; I'm very confused about whether he feels empathy towards those under him, as I see a very different personality between everything Twitter and everything SpaceX.)

And even really dumb leaders (of today, not of inbred monarchies) are generally of above-average intelligence.


That doesn’t contradict what I said. There is definitely a huge benefit to an IQ of 110 over one of 70. But there is not that big a jump between 110 and 150, let alone further up.


Really? You don't see a contradiction in me saying: "get what they want" != "get leadership position"?

A smart AI that also doesn't want power is, if I understand his fears right, something Yudkowsky would be 80% fine with; power-seeking is one of the reasons to expect a sufficiently smart AI that's been given a badly phrased goal to take over.

I don't think anyone has yet got a way to even score an AI on power-seeking, let alone measure it, let alone engineer it, but hopefully something like that will come out of the superalignment research effort OpenAI also just announced.

I would be surprised if the average IQ of major leaders were less than 120, and anything over 130 is in the "we didn't get a big enough sample size to validate the test" region. I'm somewhere in the latter region, and power over others doesn't motivate me at all; if anything it feels like manipulation, and that repulses me.
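For scale, here's a quick sketch of how rare those scores are under the standard IQ model (a normal distribution with mean 100 and SD 15; the model itself is the assumption here):

    # Rarity of IQ thresholds under the usual N(100, 15) model.
    # Illustrates why high-IQ validation samples are small; not a claim
    # about any particular leader.
    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)
    for threshold in (120, 130, 150):
        share = 1 - iq.cdf(threshold)
        print(f"IQ > {threshold}: {share:.2%} of the population")
    # IQ > 120: ~9.1%; IQ > 130: ~2.3%; IQ > 150: ~0.04%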

I didn't think of this previously, but I should also have mentioned that there are biological fitness constraints that stop our heads from getting bigger even if the extra IQ would otherwise be helpful, and our brains are unusually high power draws… but that's only by biological standards: it's about 20 watts, which even personal computers can easily surpass.


On a serious note, though: a person with an IQ of 150 can't clone themselves 10,000 times.

They also tend to have some level of autonomy, and won't simply follow the orders of idiots and psychopaths.



