Hacker News

Remember, about a month ago Sam posted a comment along the lines of "AI will be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to very strange outcomes".

The board was likely spooked by the recent breakthroughs (which were most likely achieved by combining transformers with another approach), and hit the panic button.

Anything capable of "superhuman persuasion", especially prior to an election cycle, has tremendous consequences in the wrong hands.




> Remember, about a month ago Sam posted a comment along the lines of "AI will be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to very strange outcomes".

Superhuman persuasion is Sam's area of expertise, so he would make that a priority when building chatbots.


It seems much more likely that this was just referring to the ongoing situation with LLMs being able to create exceptionally compelling responses to questions that are entirely hallucinated. It's already gotten to the point that I simply no longer use LLMs to learn about topics I'm not already extremely familiar with, simply because hallucinations end up being such a huge time waster. Persuasion without accuracy is probably more dangerous to their business model than to the world, because people learn extremely quickly not to use the models for anything they care about being right on.


Sounds like we need an AI complement to the Gell-Mann Amnesia effect.


But they didn’t hit the panic button. They said Sam lied to them about something and fired him.


According to this article Sam has been telling the board that this new advance is not AGI and not anything to worry about (so they can keep selling it to MSFT), then the researchers involved went behind Sam's back and reported to the board directly, claiming that they'd created something that could-maybe-be AGI and it needs to be locked down.

That's the claim at least.


If that research team is unwanted at OpenAI, I know places they can go with coworkers writing to their boss’s boss.


I meant to have the word "not" in there. Kinda changes the meaning.


Looking at humanity, persuasion seems to be an extremely low bar! Also, for a superhuman trait, is it that it's capable of persuading anyone of anything, or rather that it's able to persuade everyone of something? Power vs. reach.


I agree with this. Corporate news is complete and total obvious bullshit, but it overwhelmingly informs how people think about most anything.


"especially prior to an election cycle"

It looks like you are referring to the USA elections.

1. humanity != USA

2. The USA is in a constant election cycle

3. there are always elections coming around the world, so it's never a good time


I agree with this conclusion and it's also why I'm not that afraid of the AGI threat to the human race. AGI won't end the human race if "superhuman persuasion" or "deception-as-a-service" does it first.


I feel this could be used in positive ways.

Superhuman persuasion to do good stuff.

That'll be a weird convo: what is 'good'?


Understandably, the board may be concerned about the potential consequences of AI-powered superhuman persuasion, particularly during an election cycle. Combining transformers with other approaches has led to recent breakthroughs, which could have significant implications.


We've built the web into a giant Skinner box. I find the claim dubious, but this is the sort of thing we ought to find at the forefront of our technology. It's where we've been going for a long time now.


Which party is "the wrong hands"?


Any party with sufficient resources and motive to influence the outcome of an election. Outside of election season, this tech would be very dangerous in the hands of anyone seeking to influence the public for their own gain.


The original commenter didn’t mention a party. Please don’t polarise the discussion into a flame war. Whatever system exists won’t be used by “a party” all at once, but by individuals. Any of those, with any political affiliation, can be “the wrong hands”.

I’ll offer a simple definition. The role of government is to serve the greater good of all people, thus the wrong hands are the ones which serve themselves or their own group above all.


Both? Parties in a democracy aren't supposed to be shepherds of the stupid masses. I know manipulation and misinformation are par for the course on both sides of the aisle, but that's a huge problem. Without informed, capable citizens, democracy dies a slow death.


Except that there's a fairly large body of evidence that persuasion is of limited use in shifting political opinion.

So the persuasion would need to be applied to something other than some sort of causative political-implication-laden argument.


Or, let's say, you don't need a lot of persuasion to guide an election. I mean we already have X, FB, and an army of bots.


> Except that there's a fairly large body of evidence that persuasion is of limited use in shifting political opinion.

The Republican Party's base became isolationist and protectionist during 2015 and 2016 because their dear leader persuaded them.


I don't think that aligns with the reality of the opinion formation. There was a strong subset of isolationist and protectionist views before 2015.


I think it’s not clear that the causation flowed that way. I think it’s at least partially true that the Republican base was much more isolationist and protectionist than its “establishment” elite, so any significant candidate that played into that was going to get some level of support.

That, combined with Donald Trump’s massive pre-existing celebrity, talent for showmanship, and utter shamelessness got him across the line.

I think it’s fair to say that at least partially, Trump didn’t shift the base - rather he revealed that the base wasn’t where the establishment thought it was.


I know that by "dear leader" you mean to imply that Trump did something unfair/wrong/sinister/etc. ("just like Hitler", amirite fellas?), but a leader of a large group of people, by definition, is good at persuasion.

Franklin Roosevelt moved the Democratic Party in a direction very different from its first century. The party's two previous presidential nominees were a Wall Street corporate lawyer (John W. Davis) and Al Smith who, despite also being a New York City resident and state governor, so opposed FDR by the end of his first term that he founded an influential anti-New Deal organization. During the Roosevelt years the Democrats lost significant support from traditional backers, but more than made up for it with gains elsewhere in what became the New Deal coalition.

Similarly, under Trump the GOP lost support in wealthy suburbs but gained support elsewhere, such as Rust Belt states, Latinos (including places like South Florida and the Texas border region), blacks, and (according to current polls) young voters. We'll see whether one compensates for the other.


Even if it were true that human persuasion is of limited use in shifting opinions, the parent poster is talking about superhuman persuasion. I don't think we should just assume those are equally effective.


Do you think any rhetoric could ever persuade you to adopt the opposite general worldview of what you currently have? I'm positive that it could not for me. The reason for this is not because I'm obstinate, but because my worldview is not formed on persuasion, but on lived experience. And I think this is true for the overwhelming majority of people. It's why our views tend to change as we age, and experience more of the world.

You can even see this geographically. The reason many in South Texas might have a negative view of immigration while those in San Francisco might have a positive view of immigration is not because of persuasion differences, but because both places are strongly impacted by immigration but in very different ways. And this experience is what people associate with immigration in general, and so it forms people's worldview.


Yes. Do not forget that we literally live in the Matrix, getting all the information of import through tiny screens, the sources and validity of which we can only speculate on.

The validity of all the info we have is verified only by heuristics, like groupthink, listening to 'experts', and trying to match the info up with our internal knowledge and worldview.

I feel like our current system of information allows us to develop models that are quite distant from base reality, evidenced by the multitudes of realities existing in people's heads, leading some to question if 'truth' is a thing that can be discovered.

I think as people become more and more Internet-addicted, an increasing amount of our worldviews come through that little screen, instead of real-life experiences.


I like your comment.

The world is becoming information saturated and poorly structured by design. Ever notice how these story blockers are such a big part of the propaganda machine, whereby you have to use elaborate workarounds just to read a simple news story that's pulled from another source?

Saturating culture with too much data is a great tool of breaking reality, breaking truth.

But they can't break truth for long; it always finds a way. And truth is a powerful vector, much more so than propaganda without a base in truth, because human experience is powerful, unquantifiable, and can take someone from the gutter to a place of massive wealth or influence in an instant. That is the power of human experience, the power of truth.

Doesn't make it easy, though, to live in this world of so many lies, supercharged by bots. Nature outside of our technology is much simpler in its truth.


I think it’s extremely positive that most of our information comes from the Internet, because before that we only got information from our local peers, whose opinions are often extremely wrong or problematic. All I have to do is look at organized religion, and the negative impact it’s had on the world, to appreciate that the Internet has, in general, a higher standard of evidence, and poor opinions are more likely to be challenged.


Some people get relevant information not only from little screens but also from interactions with other human beings and physical reality.


Unless you happen to move in extremely well-informed circles, most of the information about what's going on in the world is coming to you through those little screens (or from people who got it from said screens).


True for larger issues, which is what makes moving in such circles so valuable, and the perspective of people only looking at small screens potentially so distorted.

However, for smaller issues and local community issues "special access" isn't really much of a thing.


Yeah, but then those smaller issues aren't usually contested. Humans are good at getting the directly and immediately relevant things right, where being wrong is experienced clearly and painfully. We have time-honed heuristics letting us scale this to small societies. Above that, things break down.


Not really: go to any meeting on building a new local road and see very different views on the local reality. The ability to understand and navigate those isn't too different to what is needed on bigger issues.


For me, the accuracy of my predictions about world events and personal outcomes leads me to believe that my reality model is fairly accurate.

I do notice many don't seem to revisit their predictions for reflection though. Perhaps this happens more subconsciously.

I'm wrong regularly ofc.


While I agree that human persuasion would probably not change a worldview built on lived experience, you can't know in advance what might be possible with superhuman persuasion. You might be led to believe that your experience was interpreted incorrectly, that things are different now or that you live in an illusion and don't even know who you are. There is no way to tell what the limits of psychological manipulation are for reprogramming your beliefs unless you are totally above any human doubt about everything, which is in itself a sad state to be in.

I hope that persuaded you :)


Well, but I'm sure you'd accept that there are limits. Where we may differ is where those limits begin and where they end. In the end LLMs are not magical. All it's going to be able to do is present words to you. And how we respond to words is something that we can control. It's not like some series of words is just going to be able to completely reprogram you.

Like here I expect there is 0% chance, even if I had a superhuman LLM writing words for me, that I could ever convince you that LLMs will not be able to convince you to hold any arbitrary position. It's because you've formed your opinion, it's not falsifiable, and so there's not a whole heck of a lot else to be done except have some fun debates like this where, if anything, we tend to work to strengthen our own opinions by finding and repairing any holes in them.


Both our opinions about this are equally unfalsifiable unless we agree on an experiment that can be performed at some point which would make one of us change their mind.

I assume you'd agree that the pursuit of what is ultimately true should be exactly the opposite of making oneself more closed minded by repairing inconvenient holes in one's opinions rather than reassessing them based on new evidence.

I wasn't referring to the ability to persuade someone to hold an arbitrary position (although that could be a fun debate as well), and putting aside the discussion about the ability to persuade fanatics, if a super intelligence had an internal model that is more aligned with what is true, it could in theory convince someone who wants to understand the truth to take a critical look at their opinions and change them if they are authentic and courageous enough to do so.


> Do you think any rhetoric could ever persuade you to you adopt the opposite general worldview of what you currently have?

Yes, it's possible.

> The reason for this is not because I'm obstinate, but because my worldview is not formed on persuasion, but on lived experience.

Lived experience is interpreted through framing, and rhetoric, with practice, can change the framing through which we interpret the world. This is why CBT works. Superhuman CBT could arguably work even better.

Remember that if "superhuman X" is possible, then our intuitions formed from "human X" are not necessarily valid. For sure any X still has a limit, but our intuitions about where that limit is may not be correct.


When you say persuasion, are you referring to fact based, logical argument? Because there are lots of other types of persuasion and certainly some work very well. Lying and telling people what they want to hear without too many details while dog whistling in ways that confirm their prejudices seems to be working pretty well for some people.


Just to add that a lot of people don't care about facts. In fact, if acting according to facts makes me lose $$, I'd probably start building lies.


> that *confirm their prejudices*

(my emphasis)



