The very idea that deep fakes are somewhere near the top of the list of significant concerns for the media right now is in itself disinformation. And it seems to be very persistent. I keep seeing articles and videos about it all the time. Someone is pumping a significant amount of money into this narrative.
The irony is that all this "anti-disinformation" research will clearly be used to run better disinformation campaigns if/when the primitive shit being used right now stops working.
I think this is fair, but it's also somewhat predicated on the threat from deepfakes actually being a bunch of real people believing some big synthetic narratives and presumably taking some action they wouldn't/shouldn't have taken if not for this synthetic narrative.
That may or may not ever happen, but in any case it doesn't smell like a stable equilibrium.
Personally, I'm more concerned that the social gravity it exerts will be towards broad solipsism, which smells like a very stable equilibrium with respect to addressing much that doesn't affect an entire population somewhat simultaneously/equally.
"... believing some big synthetic narratives and presumably taking some action they wouldn't/shouldn't have taken if not for this synthetic narrative."
We have had that already for millennia, I'm afraid. Deep fakes have nothing on the power of plain old bullshit. Like the pretext of Carthage's "aggression" against Rome.
All true, so maybe it's more concerning for something targeted, like evidence in a trial where a judge and jury are shown some undeniable and incriminating, yet fabricated, evidence. The amount of time it takes to produce these makes them better suited to spear-phishing campaigns than anything broader where, as you say, simple lying works.
I'm not sure if we should expect that to be a huge problem. There are a lot of white-collar prosecutions based heavily on written documents, but as far as I know it's not common for people to be convicted based on forgeries.
My favorite theory around this is that deepfake panic is being stoked so that if the Epstein tapes (or other video evidence of power elite malfeasance) ever leak, the people in them can just cry "deepfake!"
I can't speak for OP, but consider: the very strong scientific consensus is that the earth is warming and that warming is caused by human activity; the best-case study I've seen showed 75% consensus among Americans and that's sharply up over previous years.
The gap between scientific consensus and common belief has been driven by "regular" misinformation campaigns: promotions of conspiracy theories, manufactured doubt, cherry-picking of claims.
Why worry about the novel when the basics are still working?
What would you think about a deep fake depicting secret video footage (from a cell phone) of real climate scientists at a real conference discussing how they are manipulating the data to convince the public and the lawmakers that climate change is a real phenomenon. This deep fake would contain real people who really are climate scientists who really did attend a conference together. It's just that this discussion never took place. It was created by deep fake technology.
This type of scenario seems like a very real concern to me. You can extend this example into practically any hot button issue.
I wouldn't think it would matter at all. Nobody would know who the scientists were, so you wouldn't even need to use real ones for the same effect. I don't even think most warming deniers would really care, and it wouldn't even spend a week in the news cycle, if aired at all. Do a video of them sacrificing a child to Beelzebub, and maybe you'd get some attention.
Just telling everybody that you were told that this meeting happened through secret messages from a secret high-level traitor from the Soros Foundation would work just as well. It would work on hundreds of people even if you said that you were receiving these messages psychically or encoded through subtle changes in reruns of Law & Order.
What's to stop somebody from doing that now? If you have seen any movies in the last few years, you would know how good special effects and CGI are. We can already create fake people, and generate fake voices, or find people who sound close enough to pass. Ever since the invention of video, the potential for fakes has existed.
We already have people willing to believe a conspiracy theory based on willfully misinterpreting emails, cough cough, so it seems like it probably won't make a difference.
In a highly polarized world people don't actually care about evaluating evidence. We pay attention to the voices that reinforce our beliefs and – at best – ignore those that don't.
We know this because that's exactly what ClimateGate was. A huge pile of hacked emails leaked, and they included climate scientists saying things like:
- The world had stopped warming and they couldn't explain that
- They were mixing and matching data to ensure graphs showed temperatures going up
- They were working to prevent papers that disagreed with them from being published
- They were deleting emails and other material to avoid having to release them
etc
Guess what - they denied everything, the emails were roundly denounced by a friendly media, and nothing happened to any of the people involved.
Really, you don't need deepfakes to create climatology skepticism. Climatologists are very good at creating it all by themselves.
Have you ever heard the saying “science advances one funeral at a time?” It absolutely is consensus-driven. https://en.m.wikipedia.org/wiki/Planck%27s_principle
Especially important in academia where you need to go along to get along with thesis chairs, department heads, and grant funders.
> "Deepfakes aren't a huge concern because regular misinformation campaigns are still wildly successful."
> I might go as far as to say: "Deepfakes aren't a huge concern because we live in a post-factual society, so evidence doesn't really matter."
Deepfakes are concerning because they point to a future where disinformation is so effective that it's impossible to pierce a disinformation bubble and unreasonably difficult for a sensible person to avoid getting pulled into one.
Take the idea that the moon landings are a hoax. It relies on accepting the idea of a rather unbelievable and expensive (for the 60s) amount of fakery. Sure, people believe it anyway, but they have to try pretty hard. They wouldn't have to try so hard if someone had faked a convincing body of evidence that the landings were faked. Or they'd have to throw up their hands and give up if they understood how easily things could be faked with modern technology.
That would be true if people made individual assessments based on evidence. That doesn't, however, seem to be what people do, even if they themselves believe that they do it.
I would guess that people who believe in government conspiracies have low or even very low trust in the government. Give them a story that shows that the government can't be trusted and people will attach themselves to the narrative in order to confirm how they felt. They don't need to try hard to actually rationalize it, because the actual details are unimportant. The only thing that matters is the feeling of confirmation.
It is really hard to convince people out of a feeling by using facts and proofs, and a common statement in online discussions is that it is impossible.
> It is really hard to convince people out of a feeling by using facts and proofs
I agree with this. I think the problem with deepfakes is that they could make it impossible rather than "really hard" to "convince people out of a feeling by using facts and proofs." Not only will those people have "a feeling" to back up their false beliefs, but they'll also have convincing fake "facts and proofs."
Those fake "facts and proofs" will also make it easier to convince others of the false beliefs. With deepfakes, it could be possible to convince someone with moderate to high "trust of government" of false government conspiracies (e.g. a convincing high-resolution deepfake showing Hillary Clinton knowingly engaging in child sex trafficking or talking about adrenochrome).
Can you give me a single example of deepfakes being used to stage a real news event?
Meanwhile, most of the population isn't even trained to recognize when videos are trimmed to start and stop at conspicuously convenient moments, which is the most commonly used technique for creating fake narratives.
Actual media literacy is at near zero, especially among the people who arrogantly consider themselves "informed".
This suggests you may not have done the research to support your viewpoint with such self-assurance.
The increasing sophistication of deepfakes as a technology over time is what people are rightly afraid of today.
But even now:
(based on some search notes I have from 6 months ago)
[1] Video of Gabon’s long-unseen president Ali Bongo, who was believed in poor health or already dead, was decried as a deepfake by his political opponents and cited as the trigger, a week later, for an unsuccessful coup by the Gabonese military.
[2] Malaysia: Is the political aide viral sex video confession real or a Deepfake?
[3] ISIS 'preparing clip of al-Baghdadi ALIVE' using deepfake propaganda.
"An important goal may be to try and rally the ISIS supporters and 'show' that the US is lying"
The concern is over what will happen in the first few instances, assuming it hasn't happened yet. We don't need deepfakes when Colin Powell can just say that something untrue is true and we go to a long and expensive war. A deepfake video in a tweet at the 11th hour could tilt an election. At an earlier time, it could have been a more convincing swift-boating. A spear-phished deepfake could be really valuable in the ransom industry. Deepfake police vest-camera recordings could be very pernicious. Suppose there was plausibly a camera on in Jeffrey Epstein's cell?
I’d say it’s something to be highly concerned about, but like an earthquake you don’t know where and when to actually be concerned about it.
Just to clarify, I'm not really picking a side but just was curious on your thoughts being explained further. Only thing I have to add is just because it's not a problem right this instant doesn't necessarily mean it can't be a problem in the future, right?
This argument is going in circles, though. Plain-old-media has been used, to great success, to manufacture news.
Consider, nearly a decade ago, the BBC (and that's an outfit that many believe to have journalistic integrity) airing video footage of a protest in India, with people waving Indian flags, and claiming it to be footage of anti-Gaddafi protests in Libya, in the run-up to the Libya bombing campaigns. Millions of people saw it, a few of them noticed that they were being fed a line of horseshit, and then they shrugged their shoulders and went on with their day, because Gell-Mann amnesia is probably a real effect, and they didn't actually care too deeply about this.
Worrying about deepfakes is a lot like worrying, around the invention of scissors, that they could be used to stab people. Yes, they can be, but knives have been around for a lot longer, are easier to make, and work just as well.
Going through the trouble of deepfakes, to convince the subset of people that will not be convinced by plain-old-multimedia-lies, but will be convinced by deepfake-multimedia-lies seems like a poor ROI maneuver - especially given that we live in a world where the number of people that can be convinced by plain-old-multimedia-lies is sufficient to achieve whatever end you seek.
It's a lot easier to fool people than to generate fake video. Fake video detection is not far behind generation, if at all, and if any official person or department gets caught showing a fake video, they won't be trusted again. To my knowledge, this hasn't happened yet, though I'm sure it will eventually.
One example I could give right now is the US where mask-wearing has become a partisan issue.
You don't need a deepfake video of some scientists claiming masks are useless/evil/deadly. You just need to label masks as some kind of "liberal" hoax and half the country now hates mask-wearing.
If that's all it takes to influence the population why are deepfakes a large concern?
It doesn’t help that leading health organizations have changed their point of view over time. It also doesn’t help that mask manufacturers place a disclaimer regarding the effectiveness of the masks. That, and very few people wear them right, which decreases their effectiveness such that they might as well be a good luck charm.
Hygiene, distancing, minimizing contact time as well as wearing a mask and goggles help. But a mask by itself is of limited effectiveness though not useless.
Changing your point of view over time as new data becomes available is the hallmark of integrity.
Anyone who claims a position which doesn’t change at all as we learn more and more about a novel challenge should be viewed with less credibility, not more.
It doesn't sound like they changed their mind based on new data. In June, Fauci said that they knew masks would help, but deliberately downplayed their usefulness because they anticipated a shortage for medical workers. Telling people "Yeah, we were lying before, sorry, but you can totally trust us now" isn't integrity.
You're making this out as if this was a simplistic decision where everyone in the medical community agreed that masks were beneficial but agreed not to tell anyone because of limited supply and that's simply not the case.
In other interviews, Fauci clearly discusses how the community perspective changed as we saw greater asymptomatic spread:
> When it became clear that the infection could be spread by asymptomatic carriers who don't know they're infected, that made it very clear that we had to strongly recommend masks.
>
> And also, it soon became clear that we had enough protective equipment and that cloth masks and homemade masks were as good as masks that you would buy from surgical supply stores," Fauci added. "So in the context of when we were not strongly recommending it, it was the correct thing.
So yes, while some hesitancy was due to (extremely legitimate) worries about exacerbating shortages of PPE, the other half is that they legitimately didn't think that mask-wearing would help the average person when the expectation was that symptomatic people would stay home and prevent the spread.
As that turned out not to be reflective of how this virus spreads, the recommendations changed.
Policy choices like these are almost never made for single, simplistic reasons. Even just these two points (effectiveness, limited supply) are almost certainly less than what they were considering. But when it became clear that both effectiveness was higher than expected and supply issues were resolved by the widespread use of simple cloth masks, those factors combined strongly shifted the balance toward pushing their use.
I think that's something that you and I agree on, but something the general public wouldn't. Just think of how the public perceives politicians that change their minds constantly - they're "waffling", they're spineless, they're weak-willed, they're "just catering to the majority" (as if that was a bad thing! but I digress...) Then consider how much the public loves politicians that are steadfast. Look how much we laud Sanders for holding to the same principles he's held his entire life. (Sure, we might disagree with him for other reasons, but at least he's _consistent!_) Look how much we admire the politicians who voted against the Iraq war, as if they couldn't have changed their mind in the meantime.
(In the political sphere, the reason for this is pretty clear. If you're continuously shifting your view from day to day as mores change, then that's a clear indicator that you don't have any particular insight into what the true good views are. You're a follower, not a leader. Plus, it kinda sucks if I voted for you to uphold position A, only for you to change your mind and support B the next day.
Of course, when it comes to health, we want the exact opposite - we basically want a weak-willed spineless leader who immediately caves his opinion to whatever the most up-to-date scientific information is. :-) But who's ever going to respect someone like that..?)
I agree with you partially, however in this case it was for political (not necessarily partisan) reasons (not enough stock, wanted to reserve it for medical personnel, export restrictions, etc.)
Don’t forget that it was initially thought that the virus could not be transmitted in asymptomatic cases. As time went on, it became clearer that this was happening, and we couldn’t rely on isolation alone.
Anyone could be infected but asymptomatic, you never really know if you’re contagious, and wearing a mask helps with that.
Once this became known, the right thing to do was to change the advice on face coverings.
That was another case of information being conveyed that the speakers knew to be untrue, and there was immediate skepticism that the speakers were lying.
Pointing to the fact that some people were skeptical doesn't mean anything. There's immediate (and ongoing) skepticism that this entire damn pandemic is a hoax, FFS.
As with any expert group, the medical community defaulted to going with what they know. In the context of virology, in the overwhelming majority of real-world cases, diseases spread symptomatically. SARS-CoV-2 is genuinely a novel threat in the scale of its transmissibility while asymptomatic, and while there was certainly evidence of this early on, it takes non-zero time for this component to be studied, verified, and understood.
You can't (or at least shouldn't) base public policy on reports that haven't yet been corroborated by data and experimental evidence. And being validated after-the-fact doesn't (or at least shouldn't) reflect poorly on decisions made before we had hard data of this virus spreading in a novel fashion.
Masks also give people who suspect they are sick, but are not sure, a license to go out in public, potentially spreading infection. Instead, let's have assurance of basic care and necessities (pandemic welfare) for anyone even suspected of being sick or with elevated risk, and those folks would happily stay home.
Masks are a good backup to appropriate distancing, but not a good primary or always-on mitigation, because of their many failure modes, not the least of which is the enormous variety of human behavior.
More specifically, people who need to go out in public, because they need to:
1. Work for a living, and don't get sick days.
2. Can't get/afford <whatever it is they need delivered to them>.
3. Think COVID is a hoax.
4. Don't give a shit.
Are going to do that, regardless of whether or not they have masks.
The only people changing their behaviour because masks exist are employers, who think it is safe to sardine-can people in their workspaces, because everyone's wearing masks. (Never mind that so many people are wearing them incorrectly...)
That worked in the US, but the US is... having a bit of a moment right now. Most democratic countries just don't have a single figure who can do no wrong in the eyes of half (well, 40%, really) of the country; if nothing else, they're not executive presidencies. And even in the US this wasn't historically normal.
You could of course describe it the other way around.
The population starts out hating mask wearing; you didn't see many people volunteering for that in America before 2020. And then it was labelled as "caring" and "protecting others" and, despite vast piles of evidence indicating it achieves nothing, half the country is immediately influenced into hating non-mask-wearing.
I think you misspoke. Masks have a clear, demonstrable, and scientifically-validated benefit. Not for the wearer, but for those who come in contact with the wearer.
I'm pro-mask personally... but the experts told us not to worry about masks as early as 3 months ago. I was pro-mask back then too. My point is, "the science" is not always so clear.
And the reason for that is a something they've owned up to and apologized for.
3 months ago was ages, given the recency of this catastrophe. And what we learned is that while masks don't help the wearer, they do help everyone around the wearer. Most of the medical profession wasn't operating from the current understanding we have about symptomless spread. As that became a dramatically clearer concern here, the scientific establishment quickly pivoted to pushing mask-wearing hard.
Isn't that exactly the kind of behavior you'd like to see?
There's some question on asymptomatic transmission. That's important from an epidemiological point of view, but irrelevant to you as an individual deciding whether to wear a mask, because at any given time it is generally impossible to know whether you're asymptomatic, _presymptomatic_, or negative. And presymptomatic infection is definitely common.
Well, it's pretty easy to know if you are in the 4th category.
Recovered.
Should someone that's been infected and recovered from COVID have to wear a mask?
I would say, until we're fairly confident that having had covid confers permanent immunity, yes, it would be more responsible to wear a mask. Hopefully it does confer permanent immunity, but it would honestly be pretty optimistic to assume that at this point.
Not to mention, nobody around you knows whether or not you're recovered. Wearing a mask is about as close to zero effort as plausible, so wear one anyway out of consideration for others.
"Although mechanistic studies support the potential effect of hand hygiene or face masks, evidence from 14 randomized controlled trials of these measures did not support a substantial effect on transmission of laboratory-confirmed influenza."
The idea that anything about this topic is clear or demonstrable is ridiculous. Masks are an ideological issue by the left, which is why my entirely reasonable post has been so aggressively downvoted.
There seems to be this hopeful belief out there that if only indisputably correct information was available to the general public, at long last they would finally support the correct policies and elect the correct leaders.
15 years ago I created a "pre-deep fakes" automated actor replacement visual effects production pipeline. My ambition was to create "Personalized Advertising" where ordinary people would appear in video advertising for desirable products like film trailers, and various other products and services that typically have a celebrity spokesperson - the basic idea was the celebrity spokesperson would be "with you" in the advert explaining how great the product was while you are depicted nodding and enjoying it.
The system worked, and still would work if I un-mothballed it. I even globally patented the system, with an ungodly expense doing so: https://patents.justia.com/patent/7460731
However, nobody believed what I was pitching 15 years ago was possible. After I'd demonstrate a scaled down implementation, I'd get interest and an investment pool would start to form. Then one of the investors would realize the tech could be used for porn, and then no matter how I explained the economics of failure such an application would cause, their stupid dicks would take over and I could not get them to realize "porn with anyone inserted" is a lawsuit engine and not a sell-able product.
While pursuing this, I became extremely jaded about what people think occurs in a film production, and the time and expense necessary to add visual effects to any media. The majority of angels and VC I pitched were flabbergasted at what current media and VFX productions are like, and their current level of expense and sophistication. My proposing a fully automated incarnation struck them as complete fantasy. Yet I had a working implementation. Slowly I'd convince them of the possibility... and then they'd get fixated on porn, ignoring the fact that there is no way to make money producing deep fake porn.
I eventually went bankrupt and left media production entirely. After creating my fully automated VFX pipeline, working in VFX without my automation tools drove me nuts. I work in facial recognition now.
Forget faking news events. What about petty crime? The cops get sent a video of you defacing property. Your phone data and several eyewitnesses tie you to the area. How do you escape, without a blanket ban on video evidence? If there's a blanket ban on video evidence, what if you have video proof that you were somewhere else?
If this becomes a problem, I think cameras and smartphones would be upgraded with a way to certify they are real videos. I have no idea how, and it would likely have to keep evolving, but if the financial incentives are there I'm sure someone will figure it out. Maybe the video gets signed by the phone and can only be verified by that device?
Flash the flashlight in a sequence corresponding to the most recent hash on the bitcoin blockchain, then take a hash of the video and send 0.00000001 BTC to that address from a wallet associated with your personal identity? Would pretty much prove that a specific person recorded this video between two fairly close together times, and the blinking lights would probably do things with shadows that deepfakes aren't great at replicating (and retouching the video to fix those artifacts would make the hash not match).
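The certification idea in the comments above can be sketched in a few lines. This is a minimal toy, not a real attestation scheme: `DEVICE_KEY` stands in for a secret that, on a real phone, would live in secure hardware, and a real system would use asymmetric signatures so verifiers never hold the key. The point is just that any retouched frame changes the hash, so a signature made at capture time no longer verifies.

```python
import hashlib
import hmac

# Hypothetical per-device secret; on real hardware this would be an
# asymmetric key provisioned in a secure enclave, never exported.
DEVICE_KEY = b"example-device-secret"

def sign_video(video_bytes: bytes) -> str:
    """Hash the raw video, then sign the digest with the device key."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    """Recompute the signature; any edit to the bytes makes it fail."""
    return hmac.compare_digest(sign_video(video_bytes), signature)

clip = b"\x00\x01\x02"  # stands in for raw captured video bytes
sig = sign_video(clip)
assert verify_video(clip, sig)          # untouched clip verifies
assert not verify_video(clip + b"x", sig)  # tampered clip does not
```

Anchoring `sig` in a blockchain transaction, as suggested above, would additionally bound *when* the clip existed, since the signature can't predate the block hash it commits to.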
Fascinating. It's good, but there are lots of little emphatic "head nods" that don't make sense in the context of the speech - unless that was just an odd mannerism of Nixon's.
I wonder if a more recent subject would've been better. We all know the tics and habits of our current president, but Nixon was half a century ago and I wonder how many of us have seen his speeches in any real depth.
Hell, play me Billy West reading Nixon as "Richard Nixon's head" from Futurama and I might buy it.
The thing is that you know it's a fake and are looking for clues that it's a fake. But for many purposes, it doesn't matter if you can fool careful study or expert analysis, it matters whether you can fool a quick glance from someone already predisposed to believe the contents who will retweet or share on facebook and move on.
I think it’s already clear that a fake video isn’t necessary for people to believe something they already want to believe. You could just distribute a badly-compressed screenshot of an unrelated video with a caption explaining the fake story behind the video. I think that would be more than enough, and I doubt a full faked video would provide much marginal utility.
Well said. Imagine a similar deepfake with a Nixon "confession" about having to stage the moon landing to make America look superior. You now suddenly have the next viral facebook or whatsapp forward for conspiracy theorists who are fully convinced of their beliefs.
Accessibility and scalability, get a few trained models and you not only can produce the original deepfake but also a lot of convincing identities and accounts to spread it around.
This is the same issue as RC vs. drones: you could have strapped a pound of C4 to an RC flyer for decades, but it was never as easy to do as with drones today. The skills, availability, and reliability of consumer drones, even the cheapest ones from Aliexpress, allowed non-state actors to weaponize them with ease.
The term "simulation" comes to mind as a description of the emerging art and science of ultra-realistic bullshit generation.
From Baudrillard (https://cla.purdue.edu/academic/english/theory/postmodernism...): "Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal.... It is no longer a question of imitation, nor duplication, nor even parody. It is a question of substituting the signs of the real for the real."
> Hyperreality, in semiotics and postmodernism, is an inability of consciousness to distinguish reality from a simulation of reality, especially in technologically advanced postmodern societies.
> Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.
Yeah pretty much. The lords are worried that the peasants can afford repeating firearms now and the well to do peasants are worried that the status quo might be upset.
I agree. To add to this, I would suggest that this is good feedback to the people creating these and they will just see it as a challenge and / or enhancement request. Perhaps in time, even the best forensics won't be able to distinguish the difference. Maybe.
The thing is that five years ago this stuff did not exist full stop. Five years from now it's going to be massively better. And societal institutions just don't respond to external shocks in five years, modern governance institutions are just not wired for it (e.g. the US Senate has six-year terms, Supreme Court terms are 30-40 years). So it's basically a complete inevitability that society will not be able to handle this, and in four years we'll probably see an election where a deepfaked video massively swings public sentiment (e.g. one of the candidates saying the n-word or something), and eight years from now, an authentic video of a candidate actively conspiring to commit a crime will be dismissed as a "deepfake" because of the experiences of four years earlier.
How is a society governed when Western Enlightenment ideals of logic and reason have failed due to the inability to have any axiomatic truths? The way that societies were governed before the Enlightenment: religious fundamentalism and "might makes right."
Cute and well-made job. I suspected that the video simply took a standard Nixon televised speech and used CGI to remake just the lower half of his face (thus the odd head movements even though his words and lips synced correctly, and if you look closely you can see that the lower part of his face looks somehow "cleaner" than the upper half). What stumped me a bit, though, was the audio. His voice sounds odd but recognizably his own, especially since Nixon had a rather unique voice. So I thought maybe they cut together words he'd said in different recorded contexts and digitally modulated the audio to construct a speech with an even tone, but no, they actually faked his voice, which impresses me more than the face CGI.
I don't think they are using conventional CGI techniques to "remake" his mouth/lips and lower face, if that is what you are suggesting. Seems like the dialog replacement is using a technology called VDR: https://www.fxguide.com/quicktakes/cannyai-vdr-face-replacem.... There's a demo video on that page and some more details about the technique.
Sorry, maybe I'm conflating terminology, but I was referring to the use of AI and digital editing to reconstruct the lower part of his face so that it looks real while moving in ways that are false and say things he didn't actually say. This is what they did in some way at least, no?
At Risk? Where have these people been for the last 40 years? I don't want to say the authenticity of information doesn't matter. But what clearly matters much more is collective trust in information institutions and from that standpoint the ecosystem has already utterly failed.
A really good example to showcase the capability of the technology!
In the context of India, where news channels have abandoned due diligence on evidently doctored videos, this is something which can have huge consequences.
There is already a tool in place that handles much of it: chain of custody for evidence. The video evidence would need corroborating evidence to "pin it down," essentially. Just showing, say, Hillary Clinton making infant stew wouldn't cut it without a chain of evidence proving there was a location, a time, that a camera would actually be there, and some actual physical evidence of her murderous cannibalism.