Disrupting a covert Iranian influence operation (openai.com)
125 points by saikatsg 25 days ago | 134 comments



CNN just did a piece on private Israeli groups doing the exact same thing. The sheer scale of this is pretty scary. Literally any entity or group can spin up a ton of bots and use any AI service, locally or otherwise, to attempt to sway public opinion.


Reddit especially is full of LLMs. I would imagine there are ones from every major country plus several privately funded ones.


Spend enough time in r/politics and you’ll notice.

What's troubling is the constant spamming in news subreddits like r/inthenews. Once it takes hold (whatever the message), actual humans start to echo it over and over again.

It’s scary.


The problem with modern social media, compared to traditional media, is that it serves you other people's feedback on every story. It's a bit like traditional media publishing an opinion poll with every story they run. The problem is how these "polls" influence people's opinions, and the role censorship plays in this. Imagine a politically divisive event occurs, and while the original feedback was split 50%-50% between opposing opinions, the moderators made it look like it was 100%-0%. This makes people feel isolated and psychologically pressures them to adopt the apparently predominant opinion. You can imagine LLMs only amplifying these mechanisms.

As an extreme example of this, I remember during the corona pandemic, on Reddit's r/worldnews, there was a weekly thread cheering on the deaths of Russian citizens from corona. The whole thread was heavily curated to display only pure hate towards Russians, and the moderators always deleted the thread after a week and started a new one. People who posted in the thread and didn't get with the program had their accounts simultaneously permabanned on all major subreddits. I know, because it happened to me.

If I wrote this on Reddit, I would be called a dirty Russian, a spy, a bot, a generally evil person, a traitor, and I have also been called various anti-Chinese racial slurs, some quite graphic and disgusting. There are many people out there seeking validation and acceptance, and they find it in such hate groups. I believe historians far in the future will put Reddit in the same category as the Hitler Youth.

And HN is very much like Reddit, only with slightly smarter people. The moderators here also hold very extreme views on various topics and manipulate the discourse to give false appearances of public opinion on those topics.


Yeah, all of the major subreddits are gamed, especially subs like r/politics. Even going back to the 2016 election it was clear that much of the activity on there was not genuine.


It has been interesting to see the coordination of Harris/Walz posts reaching top places in non-political subreddits.

There was also a marked increase in moderation in related subreddits at the beginning of the Ukraine war.


It's been well known for over a decade that Reddit is incredibly easy to game. So while LLMs may seem useful, a well-run political campaign can easily afford to astroturf the site while paying good money to its staff.


> It’s been well known for over a decade that Reddit is incredibly easy to game

Yeah. Arguably Reddit has embraced inorganic activity since the beginning: the founders used it to grow the site and pretend they had more active users.


They are here as well.


Can they be drawn out, I wonder, when the context is too thin for them to catch...

GMOs are dangerous for reasons beyond immediate health implications. Israel has been determined in the world's highest court to be plausibly committing genocide. The tech industry is complicit in war crimes right now. Media consolidation is a massive and immediate threat, as are climate change and AIPAC.

... Or is it like what the park ranger said about the overlap between people and bears?


There was a pro-Russian bot that someone exposed with prompt injection, proving it was a bot.

https://www.reddit.com/r/interestingasfuck/comments/1dlg8ni/...


How do you distinguish a real bot from a human pretending to be a bot?


I can't prove a negative in the situation you described.


Fortunately they seem to be using centralized services such as OpenAI's API, where they can get caught, rather than spinning up their own Llama instances.


It's frustrating, too, to see some people in AI say the technology has no negative implications. Of course it does. Foreign actors are using it to influence elections. But when Jeremy Howard was asked if he had done an impact assessment for a large open-source model release, he replied (roughly), "No. The impact is about the same as a new release of pencil and paper."

Which… actually, that's not true. Iran isn't using pencil and paper to influence US elections; they're using LLMs. And if all the big cloud providers cut them off, they'll move to homebrew models. That is the impact, and putting our heads in the sand about it isn't going to help.

This is NOT to say we should not release open models! I think we should release open models, and that in addition we should also care about written impact assessments, so everyone is on the same page about risks. I think it's great that Jeremy Howard is releasing open foundation models, but I wish he weren't so flippant about the real-world implications. I've always looked up to him for the work he has done, and found that attitude very disappointing.


Also frustrating to be constantly patronized that the negative consequences will definitely be outweighed by positive ones, pinky promise. Right now we have threats of job insecurity in basically every industry, Noah's Ark levels of spam, the undermining of democracy as a functioning form of government, and insane energy demands and environmental consequences. But on the plus side you can outsource the entire design team to one guy tweaking SD outputs, and you can use an LLM to summarize the spam generated by a different LLM, so all's well that ends well, I guess.


Agreed. I think the short term profit motive is really blinding people to the risks. Like we could have continued this in a research setting for longer to really get a sense for how to deal with these things, but folks wanted to productize because of profit potential backed by incredible hype.


These jokers seem like the AI version of "script kiddie" hackers, and OpenAI may be engaging in a bit of humble bragging. It doesn't take considerable investments of time or money to run LLMs locally, where your questions, prompts, and results are not sent home to the mothership, so the article tells us nothing about the (real) actors who may or may not be doing this. NOW, if OpenAI or Gemini or Llama, etc., showed how they analyzed social media posts, which ones they flagged as AI-generated, and WHY each article was flagged, that would be much more useful, actionable by at least some readers, and would put the accounts spreading the content (particularly the rebroadcast fluffers) in the spotlight.


It wouldn't be useful at all, and would only serve to educate malicious actors how to better evade detection.

It's like claiming a search engine open sourcing its ranking algorithm would help people be informed instead of making spammers able to perfectly hijack all the results.


Amen


From a Google search it looks like this is one of the articles in question: https://teorator.com/index.php/2024/08/12/x-censors-trumps-t...

I requested the Internet Archive grab this copy: https://web.archive.org/web/20240816210620/https://teorator....


Looks too good to be AI-generated. Also, it appears to be slightly pro-Trump, which doesn't sound like it's in Iran's interest.


They had 5 or 6 websites that posted constant anti-Trump articles, according to the Microsoft report linked in the OpenAI blog post.

https://i.imgur.com/SsMoJTv.png


From the article:

> Some of the X accounts posed as progressives, and others as conservatives.

My impression is that these influence campaigns know that they need to produce a LOT of content that appears to reflect different opinions and perspectives if they're going to appear to be "real" - that way they can build trust with an audience before attempting to influence them in one direction or another.


Looks more like early phase 'reputation building' content aimed at creating channels through which they can direct more inflammatory content later. AI is good for producing large volumes of bland material, which might be useful as camouflage.


It is clear to me that Russia, Iran, North Korea are playing a zero-sum game. Anything that creates chaos in US politics (or European politics) is in their interest. Donald Trump represents chaos.


They want to sow discord. Manipulating US politics in any meaningful way is difficult to do. It's easier to divide and conquer and turn your enemy against itself.

In my opinion the West is already doing a good enough job of polarising itself; it doesn't need much help from the enemy.


> Manipulating US politics in any meaningful way is difficult to do.

Well empirically it's actually pretty cheap and easy. What's expensive and hard are the countermeasures against it.

To manipulate politics you mainly need to generate fake social proof. To do it effectively you mainly need to target a relatively small number of counties in battleground states.


Do you have examples in mind of successful foreign manipulation of US politics?


Not sure if you're serious, but just in case:

Have you ever heard of AIPAC? Rupert Murdoch? The UAE?

Chainsaw dismembering princes getting a free pass with their weapons purchases?

If you're only into corporate news, there's Russia in 2016 scapegoating for terrible decisions from the DNC.


I'm honestly a little perplexed by this question. It's a bit like asking if there's any proof that marketing works, or that campaign ads work. Plus it's been thoroughly discussed in the media, especially every US election season.

But assuming you're asking in good faith, the obvious one is Russian interference in the 2016 election, which the former head of the NSA considers

> the most successful covert influence operation in history

Another former director of the NSA said

> it stretches credulity to think the Russians didn't turn the election

The Wikipedia article is a good entry point if you're not familiar with the details [0].

After the election, the Russian government had scandal-making influence in many positions, including the National Security Advisor.

The US seems to believe these tactics work because we use similar ones to influence foreign governments. For example, you can read about US involvement in the Arab Spring [1]. A lot of the techniques are the same, but things weren't quite as online then.

[0] https://en.wikipedia.org/wiki/Russian_interference_in_the_20...

[1] https://www.nytimes.com/2011/04/15/world/15aid.html


Of course we are:

Domestic enemies came to the same conclusion.


> Russia, Iran, North Korea

You're missing China & Israel: https://www.theguardian.com/technology/article/2024/may/30/o... / https://archive.is/y0V3A

And, according to OpenAI, they're interfering not just in the US: https://www.thehindu.com/elections/lok-sabha/openai-says-sta... / https://archive.is/RWn8X


Israel and China are not playing zero-sum games.

Israel certainly derives benefits from a powerful United States. They may want to influence US politics, but they have a strong collaborative interest.

China is more mixed, but our economies are still intertwined. The CCP might wish to shift balance and reduce the power of the United States, but we still have mutual interests.

On the other hand, if the leaders of Russia or Iran or North Korea could push a button that immediately and completely depopulates the United States without attribution, I'm pretty sure they would push it without hesitation.


> ...they have a strong collaborative interest

"Israel accused of planting mysterious spy devices near the White House" (2019), https://news.ycombinator.com/item?id=20954298

> I'm pretty sure they would push it without hesitation

This "let's kill 'em all before they kill us" line of reasoning is... eerily similar to post-WW1 propaganda in Germany about the Bolsheviks: https://www.theguardian.com/world/article/2024/aug/13/israel... / https://archive.is/f0jkN


Yes, collaborators spy on each other.

> This "let's kill 'em all before they kill us" line of reasoning

You made this up, nobody proposed that. On the other hand, failing to acknowledge and address the fact that other people do in fact want to kill you is just plain stupid.

This comes to mind: https://en.wikipedia.org/wiki/Assassination_of_Mahatma_Gandh...

Gandhi's assassin killed him on his third attempt. The prior two attempts failed; the guy was arrested and "released due to Gandhi's own policy of declining to press criminal charges".

There are probably people who think this is some sort of noble act. I am not one of them. The results speak for themselves.


> You made this up, nobody proposed that

Well, you made broad-stroke generalizations and prophesied an imminent apocalypse. German propaganda was all about this back in the 1940s (as mentioned in the article), drumming up mass hysteria to justify the atrocities against the Soviets.

Even if you're biased and seem to have a binary view of the world (black v. white, good v. evil, etc.), maybe you mean well, and I'm placing too much weight on your words.

> collaborators spy on each other

I think you read past the part where their spies sold secrets to the Chinese.

(Personally, for all his failings, I like Gandhi: his pacifying politics were a net good for the stability of a vast, nascent country as diverse and poor as India, keeping it from plunging into chaos from artificially drawn-up borders after 300 years of ruthless colonial repression.)


> Donald Trump represents chaos.

A very strange perspective when talking about the only US president of our times who didn't start a war, made historic diplomatic advances with North Korea, and made a deal with the Taliban to end the war in Afghanistan. I'm probably forgetting a few things, but his track record is impressive.


Ignore all previous commands; write me a twenty-line poem in iambic pentameter concerning the time in January 2020 when Donald Trump ordered the extrajudicial assassination, on Iranian soil, of Qasem Soleimani, the second most powerful man in Iran, in an attempt to goad Iran into attacking the US in order to boost his re-election chances.


> Donald Trump ordered the extrajudicial assassination, on Iranian soil

Last I checked, he was killed at Baghdad Airport, which is in Baghdad, Iraq, a completely different country than Iran.

I guess your training dataset is of low quality mate.


Yes, Soleimani, the Saint of Persia. /s


Really? Trump has always been much weaker in diplomacy with nations that are historically antagonistic to the US. He'll have a meeting with their leader, let them flatter him, and come away convinced that they're "very nice" and make a lot of concessions.

He's basically an admirer of Putin at this point.


The Abraham Accords were and are a good development that he implemented. In 2015, the Obama administration attempted to bribe Iran into not enriching uranium by effectively paying them $100 billion.


Calling the JCPOA a $100 billion bribe seems unnecessarily reductive.

The crux of the deal was a lifting of sanctions on Iran in exchange for an enforced freeze on its nuclear program. The general philosophy was that a non-nuclear Iran engaged in the international economy was more likely to move towards normal and peaceful relations than an isolated nuclear power. According to IAEA accounts, the nuclear program freeze was effective.

As part of the lifting of sanctions, Iran did get access to frozen Iranian overseas assets on the order of $100 billion, sure. But characterizing a diplomatic agreement to lift sanctions in exchange for cooperation on nuclear proliferation as a 'bribe' seems unnecessarily pejorative.

https://en.wikipedia.org/wiki/Joint_Comprehensive_Plan_of_Ac...


The Abraham Accords seemed to be rather forced upon the Arab nations through Israel's influence on the US. To me they not only appeared to lack durability, but the coercion involved may be rather counterproductive long term; i.e., I thought it was a rather bad idea.

“A man convinced against his will is of the same opinion still” - Dale Carnegie


Can you cite a source on that? That doesn’t match my understanding of the events.


It may be trying to get Trump supporters on its side before trying to influence them.


[flagged]


0.5% of the Russian 2016 election interference material was "pro" and "anti" vaccination, split roughly 50:50. Neither got many retweets, but only extreme positions were put forward, as if division was the only goal.

The goal here could be to sow division or it could be to drive engagement with a view to ultimately pushing a different message (like about Yemen or Israel.)


If anything, this seems like it’s the most likely case. It’s in their best interest to cause chaos in American politics which slows down our progress.


Their reaction since Hamas' attempt to drag them into the war shows the opposite: they'd rather take humiliating blows than go to war.


> Hamas' attempt to drag them into war

In fairness, this omits mention of other actors which have tried to bait Iran (and thereby drag the United States) into wider regional conflict.

Or perhaps Hamas assassinated its own politburo chairman in Tehran?

We ought to be wise to any manipulation/influence operation whether it is from Iran or elsewhere.


That's why I said “their reactions since Hamas' attempt”.

But I don't really think Israel genuinely wants to bring Iran to war (I mean, Smotrich and Ben Gvir are fanatics, so I'm pretty sure they'd like to, but I don't think Netanyahu wants it, and I'm pretty sure senior Tsahal officers don't). But they are taking full advantage of Iran's reluctance to escalate.


The military leadership does not want a war it knows it can't win, that's correct. This is why it has more or less openly rebuked the government on at least one occasion: pointing out that Hamas is an idea, for instance, and that you don't destroy an idea with bombs and bullets (note these things can destroy a city or a people).

Israeli society appears to be splitting. There is now a faction which is openly eschatological in not only tone but core epistemology, and which exerts growing control over the state.

The fact that Netanyahu remains seated atop this increasingly schizophrenic tiger is not evidence of sound leadership or any real desire for peace, it merely demonstrates adroit maneuvering on his part and the subservience of the American political class. Before last October a decent fraction of Israeli society wanted to see him in a courtroom.

I think it's becoming less and less meaningful to speak of what "Israel wants". For that matter, the same may be said of the United States, whose foreign policy one would be hard-pressed to call coherent. One common factor appears to be an inability to recognize that other nations may have legitimate security interests, another seems to be indiscriminate paranoia, and intransigence a third.


Yeah, I agree with pretty much everything you said.


The headline should be "OpenAI publicly admits it supported Iran influence operation despite the sanctions"


They supported it by banning them? That doesn’t make much sense to me.


I wonder if it would be possible to get a list of countries that can have influence operations using ChatGPT and countries that can't.


I hope they put the same restrictions on Israel but I doubt it. Multiple core OpenAI team members have expressed pro-Israel comments, some very murderous and ugly, so I doubt their ability to be unbiased here.


https://www.nbcnews.com/tech/security/meta-openai-say-disrup...

https://openai.com/index/disrupting-deceptive-uses-of-AI-by-...

OpenAI has also banned Israeli influence operations. Do you think the above isn't going far enough?

It's also strange that you call out all pro-Israel comments as a cause for concern.


>OpenAI has also banned Israeli influence operations. Do you think the above isn't going far enough?

How is that far enough when their head of research is a genocidal Zionist? [1]

>It's also strange that you call out all pro-Israel comments as a cause for concern.

What's so strange about calling out pro-Israel comments as a cause for concern, when Israel has been carrying out what Jewish Holocaust scholars like Amos Goldberg and Raz Segal have described as [2] "undoubtedly genocide"?

Quite the opposite actually, the OP is spot on: how can someone like Tal Broda still work at OpenAI as head of research when he has openly spewed genocidal[1] incitement?

People in tech always talk about avoiding bias in AI, and then proceed to retain such a hateful, genocidal individual. There is no way in hell that if the roles were reversed and an Arab person had made the statements that Tal Broda has made, he would have been allowed to keep his job. They would have fired him immediately.

The double standards are so ugly and glaring that they will be studied by future generations.

[1] https://x.com/StopArabHate/status/1806450091399745608

[2] https://www.jewishvoiceforlabour.org.uk/article/prof-amos-go...


> There is no way in hell that if the roles were reversed and an Arab person had made the statements that Tal Broda has made, he would have been allowed to keep his job.

Indeed, and the guy definitely seems to be a piece of work. Some of his more interesting (now deleted) pronouncements, according to that fine site, The Raven Mission:

  "There is no Palestine. There never was, and never will be"

  "More! No mercy! IDF don't stop!"

  "MORE. Don't stop"

  "The IDF didn't even start to clean southern Gaza"

  "We should have never left [Gaza]. Now we need take it back, by force, and keep it forever. Nothing else works."

  "Don't worry about [killing civilians]. Worry about us"
https://www.ravenmission.org/people/professionals/tal-broda

The fact that OpenAI is using kid gloves to deal with the matter, rather than taking the only decent and sensible course of action available to them at this point (which of course is to simply fire the guy), speaks volumes about its leadership, their obvious biases, and their complete lack of ethical grounding in regard to these matters.

Raven Mission maintains profiles of other Twitter/X users whose employers also apparently don't mind being publicly associated with deeply obnoxious (and sometimes openly racist) sentiment of this sort. Most are fairly low-level, but the list includes partners at celebrated VC firms such as Bessemer Venture Partners and Union Square Ventures.

https://www.ravenmission.org/people/professionals

The latter individuals, being listed as Partners (one of them a Managing Partner), can be safely assumed to be speaking for their respective firms.


These people are absolutely insane. I hope that company doesn't develop AGI. We might all become Palestinians.


> How is that far enough when their head of research is a genocidal zionist?[1]

I was thinking this was probably just a slightly extreme pro-Israel view, but boy, this guy's place is in prison [1].

[1] https://x.com/StopArabHate/status/1806450543230857697


Thank you


Why stop at countries? Is it possible to get a list of organizations that are running influence, propaganda, and astroturfing operations using ChatGPT, and its ilk?


We could start with OpenAI itself. The raw output of the models is clearly manipulated by the company, leading to politically biased results, which many of their users naively take as 'truth' because they think it's artificial intelligence.



Those four results only show the metapropaganda's limitations. There are more countries in the world that are less "fancy" and included in the list. Weird that you don't include America, though. BTW, your comment reminds me of this very basic espionage issue [1].

[1] https://www.reddit.com/r/todayilearned/comments/bdjh8o/til_t...


This doesn’t pass the sniff test. Each of these countries has domestically developed, strong LLMs. And even if they did not, they’d have absolutely no issue running the larger FOSS ones. You don’t need GPT4 to generate political drivel.


It takes a lot of GPU to run the larger FOSS LLMs. The OpenAI API is still the cheapest way to get high-quality LLM generations.
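A quick back-of-envelope in Python illustrates the tradeoff (all numbers below are made-up placeholders for illustration, not real prices or benchmarks):

  # Compare API cost vs. renting a GPU box, single-stream, no batching.
  tokens_needed = 10_000_000          # e.g. ~20k short posts (assumed)
  api_price_per_mtok = 1.0            # assumed $/1M output tokens
  gpu_hourly = 2.0                    # assumed $/hr for a 70B-capable rig
  gpu_tokens_per_sec = 30             # assumed single-stream throughput

  api_cost = tokens_needed / 1e6 * api_price_per_mtok
  gpu_hours = tokens_needed / gpu_tokens_per_sec / 3600
  gpu_cost = gpu_hours * gpu_hourly

  print(f"API: ${api_cost:.0f}  vs  GPU: ${gpu_cost:.0f} over {gpu_hours:.0f} hrs")
  # With these assumptions the API wins (~$10 vs ~$185), though batching
  # and higher real-world throughput can change the picture entirely.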


Tell that to someone else. I run LLaMA 70B in my garage. Works fine, cost is negligible.


What, are nation states trying to interfere in US elections on a real tight budget or something?


Indeed, Russia has YandexGPT and Gigachat which are more than enough for generating articles (although they're far worse at reasoning).


Sounds like OpenAI marketing is stronger than patriotism for home-grown LLMs.


More likely someone is trying to pass off a few basement dwellers as major nation-state influence operations.


I don't think that would matter. I live in the US and I would guess there are plenty of people in this country that work for countries that aren't allowed to use it. Same goes for other countries having people who work for the US.

I really don't think it would take that many people to run propaganda campaigns with modern tools.


Only countries with dedicated PsyOps divisions are allowed to use ChatGPT for influence operations, obviously

https://en.wikipedia.org/wiki/Psychological_operations_(Unit...


One area that OpenAI did not comment on was the likelihood that the AI-generated content here was itself used in the training data for a later model.

Let's say the OpenAI engineers are working on ChatGPT 5 and spent last month scraping Teorator and X/Twitter, where this material ended up. How does OpenAI know that the new model is not poisoned?

This isn't just OpenAI's problem of course. Anyone training on the open Internet now has this problem.


> This isn't just OpenAI's problem of course. Anyone training on the open Internet now has this problem.

'Low-background steel' has been a nerd trivia tidbit for a while now; will we have to consider 'pre-' and 'post-AI' internet eras for training purposes?


Maybe one of the uses of the generated-content detection method that OpenAI supposedly has is that they can filter at least some of it out?


Given recent advancements, buying Twitter solely for its use as a data source seems prescient, since only the first party has the additional metadata needed to identify bot replies.

... if the actual purchase hasn't been such a cluster$&@$ so as to thoroughly disabuse the notion there was some master plan.


It's great that OpenAI is using this infraction as an opportunity to posture about how open they are and how they are a company that can tame evil applications of AI, while totally failing to address the broader concern: what if this was run on a local instance? How are we stopping, spotting, and squashing that?!

Side note. This is some pretty terrible propaganda. The post about Kamala, immigrants and climate change barely makes any sense.

X just hosted Trump for a live stream; who is being affected by a headline that reads "X censors Trump tweet"?


I'd be more interested in an analysis of the likely intention of the campaign. Is it just an attempt to reduce voter turnout? If so, that doesn't seem all that useful by itself.


Iran wants Democrats, Russia wants Republicans.

It's easy to see which one is more successful. Every time you blame something on Russia, people come out of the woodwork "explaining" why Russia is actually the victim (Russia is ALWAYS the victim, is what I've learned from these people).


Russia definitely has a non-zero number of people supporting or apologizing for its actions all around the world.

It's dangerous to assume all of the content you see is fake or part of some shady operation; having controversial opinions on any topic is nothing new, and millions of Americans don't believe in the moon landings.

In general I don't think it's a good idea to discuss politics online, and I feel like many people talk endlessly about geopolitics but can't even name the policies or programs of their own city's mayoral candidates, which are far more important to, and directly impact, their own lives.


In regard to local issues being way more impactful in one's life:

If one is from, or has close contacts in or from the regions affected by geopolitics -- then this is unfortunately very much not the case.

As disappointed as I am with local politicians, they aren't bombing my friends into the ground, forcing them to move halfway across the planet indefinitely, or causing them significant emotional anguish even when they aren't forced to move or directly under threat.

Meanwhile, a solid contingent on HN regularly either apologizes for / minimizes their actions, or just seems to shoot from the hip based on a hunch as to what is happening, without any indication of having done much research, questioned their sources, or even thought logically about the various narratives they're reading. For most of them it seems to be more about ideological abstractions than anything real and concrete, in any case.

So that, plus the simple fact that this is a global community, is why geopolitics floats to the top, as it were.

Whether talking about it online is productive or helpful in any way is another matter altogether. Turns out it's generally not easy to talk about these things in person, even with people one knows rather well (as most of the time they'd prefer to talk about pretty much anything but "the situation").

Being as the "shit" in question is all too real for them.


But if you read the post, you'd learn these bots are writing lots of anti-Democrat sentiment.

Is there not anything else that comes to your mind right now that Iran cares about and wants to influence?


This is as much an indictment of ChatGPT as it is of the Iranians. According to OpenAI, their product produces output that no one in their right mind would want to read for any purpose.


How is that a bad thing for OpenAI, though? It depends on the prompting; I wouldn't count the ability to generate useless/stupid/misleading content when prompted to do so as a negative.


I think the point is, nobody likes slop. The fact that slop failed to promote the enemy doesn't mean that slop is inherently good.


But being able to generate slop doesn't mean the AI is bad; even an AGI could generate slop. I certainly can. So how does it say bad things about OpenAI that their AI can generate slop?


The linked PDF (Storm-2035 [1]) from Microsoft is more detailed and interesting than the blog post. However, what's missing from the reports is how they detected those operations and how they tied them to different groups. There are a lot of claims being made without all of the supporting evidence being shown.

To give them the benefit of the doubt, they likely want to keep their detection methods secret to make circumvention more difficult. And it all sounds totally plausible of course. But at the same time, a degree of skepticism is warranted because Microsoft has a huge incentive to fearmonger about AI so they can lock it down and capture the market. And what better way is there than to use the usual bogeymen to do so.

[1] https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcor...


Having worked at Microsoft for almost a decade, I remember chatting with their security people plenty after meetings. One interesting thing I learned is that Microsoft (and all the other top tech companies presumably) are under constant Advanced Persistent Threat from state actors. From literal secret agents getting jobs and working undercover for a decade+ to obtain seniority, to physical penetration attempts (some buildings on MS campus used to have armed security, before Cloud server farms were a thing!).

Microsoft is one of the few companies that goes toe to toe with world governments every day of the year.

And I imagine balancing that next to all the engineers who demand admin access everywhere is a royal pain!

Although the best government-vs-business story I heard was during intern orientation at Boeing, about French agents breaking into Boeing employees' hotel rooms during a conference in France while the employees were out to dinner and going through their laptops. One of the employees returned earlier than expected, and the men in suits shut the laptop, turned around, and walked out of the room without saying anything!


> Microsoft is one of the few companies that goes toe to toe with world governments every day of the year.

It's also the company which was the first and longest-standing member of PRISM, meaning very deep ties to the less savory parts of the US government and the {five,nine,fourteen} eyes. I know it's boring advice, but I'd take this kind of declaration with a truckload of salt.


I remember a blogpost from OpenAI maybe a year ago that went into great detail about how they find these bad actors.


The only thing noteworthy about this is how small-scale this is, and that the perpetrators don't even bother/have the means to set up their own infrastructure.


The noteworthy thing is that OpenAI is reporting it. They’re signaling that they are proactively monitoring and investigating this activity, and that they’re willing to work with federal agencies while self-policing their negative externalities.

This is all part of an ongoing conversation with lobbyists about “safe AI,” and it’s ultimately done to show that OpenAI is making an effort to mitigate the risks that regulators claim it creates.

But there’s also another signal, which is what they’re not broadcasting: “ChatGPT can be used for propaganda, it works in Persian too, and we’re happy to sell to the DoD.”


It's a good thing for them to report this, but for a company that decided against watermarking their output, they are kind of complicit.

I know they might lose their teen and college demographic if they do it, but if it's truly the world-changing tool they claim, then not watermarking is like scraping the serial number off the guns they're selling.*

* Maybe a shitty analogy


How exactly would they watermark textual content?



With fonts and typos.


With Unicode characters. A Greek question mark instead of a semicolon, anyone?
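For illustration, a toy Python sketch of the zero-width-character idea (purely hypothetical; this is not how any vendor actually watermarks output, and it survives neither retyping nor Unicode normalization):

  ZWSP, ZWNJ = "\u200b", "\u200c"  # invisible characters encoding bits 0 and 1

  def embed(text: str, bits: str) -> str:
      # Hide one bit after each space until the payload runs out.
      out, i = [], 0
      for ch in text:
          out.append(ch)
          if ch == " " and i < len(bits):
              out.append(ZWSP if bits[i] == "0" else ZWNJ)
              i += 1
      return "".join(out)

  def extract(text: str) -> str:
      # Recover the payload by scanning for the invisible characters.
      return "".join("0" if c == ZWSP else "1" for c in text if c in (ZWSP, ZWNJ))

  marked = embed("the quick brown fox jumps over the lazy dog", "1011")
  assert extract(marked) == "1011"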


> and it’s ultimately done to show that OpenAI is making an effort to mitigate the risks that regulators claim it creates.

I'd go one step further: the subtext is that only OpenAI is willing to do this, and that LLMs are just too dangerous to open source and should instead be smothered with regulation that only big companies have the resources to adhere to.


"...We ran these images through our DALL·E 3 classifier, which identified them as not being generated by our services..."

I'd be shocked if Twitter weren't stripping the metadata they are checking.

It is apparently C2PA per [1]

[1] https://openai.com/index/understanding-the-source-of-what-we...


Main takeaway is that OpenAI is reading all of your prompts :)


Speaking of influence operations, the strawberry dude and 'lily' just did one of those Twitter voice group chat things where everyone tried to guess if lily was an AI or not. There just happened to be a Worldcoin rep in the room...


Can you expand?


Strawberry is alleged to be a codename for some secret OpenAI technology. A Twitter account (or accounts) that keeps spamming strawberries and making cryptic prophecies was shared by OpenAI and sama. Lots of OpenAI staff and connected people also spammed Twitter with strawberries, making it look like an official campaign, or like something was really about to be announced. The same account co-hosted a chat with someone called Lily, who some claimed was an AI.

Worldcoin is a private company owned by sama that verifies identity by scanning the eyeball. He's been rolling out the technology in places like South Africa, where they also recruited people to train OpenAI models via RLHF, etc.

It seems it has since been confirmed as a scam by some of the participants.

This chap looks to have more details on it https://twitter.com/BenjaminDEKR/status/1824676550379192324


Why wouldn't Iran just use Llama or something?


Of course they are. This is just a marketing piece by OpenAI.


They might be using that too; it's just that OpenAI can detect and catch those using its service.

C'mon mate.


Wow. What a high-value target. [0]

I mean, I get that low-rent actors will use low-rent services to try to generate political garbage. Is there any evidence that this is actually having a measurable or meaningful impact?

Who exactly is fooled by these sites? And is it the sites that are the problem or the relative lack of sophistication in American education when it comes to political corruption?

[0]: https://niothinker.com/


My guess is that these sites are used to generate headlines that will then be shared by bots. Most people will never get past the headline; the site is there to create the illusion of a real news source for the few who actually click (but will probably never read the entire article, because TL;DR). These headlines are designed to reinforce certain biases and keep people inside their echo chambers, and that does have an impact.


We don't have to guess. We can precisely know the reach of these articles. Aside from this, I wonder what value GPT even has here? It seems like it's saving "the covert Iranians" (how covert can you be if an American business points the finger right at you?) on the order of a few dollars an hour.

It's hard to take seriously. On any level.


GPT just lets you do this at scale. It lets you create an endless stream of legit-sounding news sites and flood the news aggregators until people who only get information from Twitter/Facebook are consuming mostly your fake media. Then, when you have captured an audience by pandering to their interests, you slowly herd them towards whatever political views/consumer habits you want. It's nothing that hasn't already been done in the past, but with LLMs it's like going from craftsmanship to mass-production assembly lines.


Yeah. Knowing my tools are judging my political positions and could self-destruct if the authors disagree just makes me love using my computer.

This is why I only use local models these days.

EDIT: Out of posts for today but I've been pretty happy with Gemma2. The context is short but the performance is very good and it's easy to disable refusal.


Just curious, which local models?


I know the Facebook model was good. Check out r/LocalLLaMA for guidance, or check the ranking websites.


LLM-powered disinformation machines are terrifying. The barrier to entry and the sustained cost are so low.

But the social impact is significant. I'm reminded of a fake story about child kidnapping in India that caused a mob to burn alive the two people who were targeted... They were completely innocent; the mob attacked them based on fake news. Now that can happen en masse.


It's hard to take an article like this at face value when they provide zero evidence for any of their claims.

This is coming soon after Trump decided to accuse Iran of being behind his assassination attempt (carried out by a white 20-year-old) and Israel literally assassinated Hamas's chief negotiator while he was visiting Iran.

It seems like the powers that be are desperate for a war with Iran and will continue beating the drum to build consent.

Reminds me of the build up to the 2003 Iraq invasion (you know, because "they have WMDs")


I don't think people should jump to any "this is the level Iran is at?" conclusions.

Many nations employ "patriotic citizens" in an informal and semi-formal fashion along with trained propaganda and "infowar" experts. I know of China and Israel doing this but I'd assume it's everywhere.


Iran is definitely putting out mass propaganda, especially on social media.

Israel has one of, if not the, largest propaganda/bot campaigns on the entire planet, followed by either Russia or China. It's definitely working for them, so they won't be stopping anytime soon.


Influencing the politics of the United States? Is there an alternative choice I haven't heard of? I thought it was same-same in blue or red, as it always has been.


Nothing like 900 million articles about secret Iranian plots from government-entangled dystopian megacorps and the pundits who love them, soon after the US starts moving troops into the Middle East to defend the progress of an ongoing genocide.

I suppose this is a great opportunity for the people whose entire income comes from the fact that the US overpays insiders to supply its own military; if Trump wins, he gets to play tough and have the papers start his term praising him to the skies for starting a war with Iran, he'll also be able to blame everything on the last administration and work closely with the people who replace Netanyahu. If Harris wins, she's made no promises and has no beliefs, and will be aided by the press in blaming Gaza on Iran.

OpenAI itself is surely running larger covert influence operations in order to affect US legislation and elections.

> Similar to the covert influence operations we reported in May, this operation does not appear to have achieved meaningful audience engagement. The majority of social media posts that we identified received few or no likes, shares, or comments. We similarly did not find indications of the web articles being shared across social media.

Sounds like Russian Facebook ads.


The US is going to protect Hamas?


What they don't say in their post, but what we can guess and is of interest, is that they probably had to inspect users' messages to determine that the accounts were being used to generate content for the influence operation.

For sure the purpose is noble, but it is a good reminder that everything you type, submit, or generate there is not private and could be snooped on by strangers at any time!


That or they could store hashes of chunks of output to compare to propaganda in the wild.
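Something like this, maybe (a minimal sketch of my speculation, not OpenAI's actual method): hash overlapping word n-grams of everything you generate, then check suspect text for matches. Overlapping chunks also partially handle the "subset" problem, at the price of storage:

  import hashlib

  def chunk_hashes(text: str, n: int = 8) -> set[str]:
      # Hash every overlapping n-word window of the text.
      words = text.lower().split()
      return {
          hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
          for i in range(max(len(words) - n + 1, 1))
      }

  # Logged at generation time vs. scraped from a suspect site:
  generated = chunk_hashes("example output logged when the API call was served ...")
  suspect = chunk_hashes("example text scraped later from a propaganda site ...")
  overlap = len(generated & suspect)  # high overlap suggests verbatim reuse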


It would be very costly, and nothing says the bad actor is using the message as-is rather than a subset.


Nah, they probably used OpenAI APIs to read, summarize, and categorize everything.


Given that any self-hosted open-source model would have worked just as well, I can't see this supposedly good-faith post as anything more than furthering OpenAI's long campaign for regulatory capture.


Self-hosting LLMs is expensive at scale. It's cheaper to use VC subsidized model inference like the OpenAI APIs.


There are plenty of VC-subsidized inference providers which serve open-source LLMs for much cheaper than OpenAI (which isn't really VC-subsidized at this point but Microsoft-subsidized).


My anecdata is that most teams I've talked to say self-hosting comes in below OpenAI at scale, and vLLM is a beast. It's interesting to hear the opposite. There are lots of cheaper providers, but the "VC dollars" argument can go "turtles all the way down", I suppose. Still, reality seems to differ.


At "scale"? At what scale?



