As fake videos become more realistic, seeing shouldn't always be believing (latimes.com)
145 points by geekdidi on Oct 7, 2018 | 78 comments



> With more time, Pinscreen, the Los Angeles start-up behind the technology, believes its renderings will become so accurate they will defy reality.

Pinscreen is getting sued by their former VP of Engineering for faking their results and for assault and battery [1].

[1] http://sadeghi.com/dr-iman-sadeghi-v-pinscreen-inc-et-al/


The linked piece is a surprisingly compelling read.


There's something so refreshing about reading things in legal point form. No fluff, easy to read fast, and no "life story before the recipe" type writing.

I wish more news articles were written with such brevity. I read pretty much the whole thing with ease, whereas I would struggle to read an article a quarter the length with all the fluff and irrelevant drivel added to most articles these days.


whoah, fakery recursion.


HAHA welcome to the real world outside of the Ivory Tower of Google. Everything is smoke and mirrors.


My fear is less about people being “duped” by a fake video and more that fake videos will serve as feedback loops for misguided or false beliefs that people already “cherish” and “love”. Most will make little effort to research the legitimacy of a video that agrees with their current beliefs, but those beliefs will probably be strongly reinforced by fake videos.


I don't think it matters. Those people are already in an intellectually closed pocket universe. People overestimate the extent to which universal consensual reality exists or has ever existed.

The first line of defense is education. Fundamentally, we have to make the case for why we know what we know. This is why K-12 exists, although the availability of effective primary and secondary education remains a major issue.

The next line of defense is social interaction. Most people will have to leave their bubbles to have any sort of upward mobility and ability to steer society. There will always be cynical people who exploit constituencies of deceived people to gain power, but many others eventually defect.

We have little reason to believe that this is a long-term equilibrium, but it's the story of the past 500 years of history, ever since the printing press created decentralized mass media.


This is first-order thinking, and it neglects deeper strategies that tamper with how society currently functions, as opposed to thinking through the transition to how society will have to function in the future, which requires some second- and third-order thinking.

It's not only the flashy, obvious, attention-grabbing deception that matters; some really mediocre day-to-day stuff is going to matter too. Take your typical tendencies for human dysfunction, and now amplify the effects of poor communication with augmented miscommunication.

Think of areas like identity theft, swatting, security camera evidence, security cameras as a crime deterrent, post-divorce child custody, blackmail, and worse.

If you don't trust MD5 hashes to protect your passwords, this is the corollary for video cameras as de facto evidence. Sure, MD5 quickly masks a string in a deterministic way, reaching the hashes requires privilege escalation, and unmasking them takes only limited skill, but the level of technology we've reached raises the bar: MD5 is understood to be untrustworthy, such that even a well guarded data set should not rely on MD5 hashes.
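
To make the MD5 point concrete, here is a rough Python sketch (standard library only, assuming Python 3.6+ built with OpenSSL support for hashlib.scrypt; the password and parameters are just illustrative) of why MD5 is considered too cheap for password storage compared with a salted, deliberately slow KDF:

    import hashlib, os, timeit

    password = b"hunter2"

    # MD5 is unsalted, deterministic, and extremely fast, so leaked hashes
    # can be brute-forced or looked up in precomputed tables.
    md5_digest = hashlib.md5(password).hexdigest()

    # A salted, intentionally slow KDF (scrypt here) makes each guess expensive.
    salt = os.urandom(16)
    kdf_digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

    # Rough cost comparison; exact numbers vary by machine.
    md5_time = timeit.timeit(lambda: hashlib.md5(password).digest(), number=100_000)
    kdf_time = timeit.timeit(
        lambda: hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1), number=10)
    print("md5 guesses/sec:    %.0f" % (100_000 / md5_time))
    print("scrypt guesses/sec: %.1f" % (10 / kdf_time))

The primitive didn't change; our standard for what counts as good enough did.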

So too with video, which requires skill to tamper with, and likely privilege escalation even to attempt it. But we're moving into a world where it won't be enough to assume that the footage itself was too complex to tamper with, that too few would know how, and that best practices always kept it 100% secure in an impregnable, incorruptible repository under lock and key.


"Those people" indicates that the majority on this forum (including myself) is part of the general dupe-susceptible population. That seems dubious to me.

Edit: *is not! I do think Hacker News readers are dupe-susceptible.


Don't get me wrong, everyone is susceptible. I don't mean to talk down about people. Everyone wants to confirm their own worldview.

But I do think a lot of people come to forums like this because they like having their mind expanded and care about how we know what we know.


Formal “education” just seeks to get people to think in the same closed-minded way. At least from my experience in public schools.


You don't need quotes around education. Education isn't an imaginary thing.


> Most will make little effort to research the legitimacy of a video

This is already happening. People are regularly editing videos out of context to fit a narrative. It can't really get any worse when the media's integrity is already hitting rock bottom.


James O'Keefe is a notable offender. Incidentally, he is not part of the news media, although segments of it will often push his doctored videos as a source of truth.


Some people still don't think humans landed on the moon. This is not a battle over facts. It's a war.

The real problem is that people will torture a fact, like a PoW, until it tells any story they want it to tell, even if that story has no basis except the delusions of the torturers.


That could be mitigated by video content uploaders verifying their source content, or by a plugin running in your browser notifying you about detected fake videos. Sort of like a firewall against deepfakes.


As a general principle, I don't think we have any reason to still believe technology can patch social problems.


I think people overestimate the concern of fake videos. Consider photos for comparison. There have been fake photos of well known people for decades online, many of which are indistinguishable from reality. It doesn't lead to much confusion or issues in our everyday life. We just assume every noteworthy image is fake unless it comes from, or is cleared by, a credible source. The same will apply to video.


> We just assume every noteworthy image is fake unless

Who is we? How do you know this? Citation needed. I think it is a very small portion of the population who actively operates under this assumption for photos. I would guess 5-15% but I don't know. Surely it is not most people though, or everyone.

Just because you/your friends do something doesn't mean other people do, or even most people do. I'd guess that for a great number of things that it would often be the opposite for technical people; often the things we do are things most people don't do.

I think people underestimate the coming confusion from video.

> It doesn't lead to much confusion or issues in our everyday life.

I mean, I think it does. I think it leads to massive issues in society where people don't know what is real or not, without even knowing it. Magazine photos of people are well known to be touched up at a minimum, but how many people actively think about that when they look at the cover?

I would posit that our society has been heavily damaged by the proliferation of fake photos.


Okay. Name a specific negative incident stemming from fake photos, off the top of your head. (No googling or otherwise searching for an example.)


That one tabloid that showed those fake, gross-out "images" of Hillary during the last election cycle. Not linking the image because you requested that no links be allowed. These were displayed by the millions at grocery store checkouts across America for months leading up to the election. They portrayed a sick, ill woman who looked like she was dying. A completely fake photo.

Next: Why did you ask for a specific 'incident' when I clearly described that the problem is a general, societal problem that arises in specific instances every single time someone looks at a photo that they think is real but isn't? I could also cite a specific instance from earlier today when I saw a stack of magazines at the store and felt like a slob in my normal clothes and non-digital face and existence. Happens billions of times a day.

Edit: Since I really prefer to provide citations, I have since Googled for my example to provide context for other readers. Rest assured I wrote out my initial comment first.

https://qz.com/1369399/david-peckers-national-enquirer-ami-t...


I didn't say no links, I said you had to remember the incident off the top of your head.

Also: 1) The Quartz link you provided does not say the photos are fake. 2) "[N]egative incident stemming from fake photos" requires more than "fake photo existed."


The line between a completely made-up depiction and a real one isn't necessarily the important one. The photos of Hillary Clinton were real enough, but carefully selected and post-processed to make her look as unsympathetic as possible.

Does a photo of a candidate scowling or sneering constitute a genuine portrayal of their appearance and personality? If so, why not make that scowl or sneer just a bit more menacing or disgusting, if your publication is in the opponent's corner?

A better example might be this one: http://hoaxes.org/photo_database/image/darkened_mug_shot/

Imagine what it's going to be like when the tools are good enough to do more than just airbrush or darken a static image, but not yet good enough to create entire "fake news" segments from scratch. It's absurd to think that ML-based tools won't be used to turn a real audio or video recording into something different that you and I will incorrectly assume is still real.

That's when things are going to get scary. I'm sure this tech is coming soon to an election near you.


Alright, I did remember the incident off the top of my head; I abided by your weird rules.

1) The photos are fake. I did not say the Quartz piece was the source of the claim that the photos are fake; I was showing the photos. Are you suggesting the photos are real? They are not real.

2) I was attempting to have this conversation without being political. Fine: I consider the election of Donald Trump to be a negative thing, and I consider the evil tactics of lying and cheating that made his election more likely to also be negative consequences. (If you don't consider these to be negative consequences, then fine, but surely you understand that some people do, and that's not what we're here to discuss.)

And for the magazine photos - I felt sad. So do millions of other people every day. That is also a negative consequence. (Again, if you disagree that this is a negative consequence, then fine - but lots of us think it is a bad thing.)


You probably don't think it's likely because when done properly, subtly and constantly, you don't notice.

https://www.google.ca/amp/s/www.moillusions.com/media-manipu...


I have a few friends who shared a poorly Photoshopped picture of Hillary Clinton shaking hands with bin Laden. That helped reinforce some of their nutty beliefs during the last election cycle.

An example of how dangerous video can be is the faked Planned Parenthood video from a couple years back. That actually led a guy to shoot some people in a PP clinic.


Childhood disordered eating, including diagnosable anorexia, is on the rise for young women, and some studies have linked this to photoshopped images of women setting impossible cultural ideals.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2792687/

https://www.huffpost.com/entry/why-photoshopping-is-a-ma_b_5...


> Name a specific negative incident stemming from fake photos

You're looking at the problem the wrong way if you think this is just about specific incidents.

Something can have a large negative effect without giving rise to a specific incident. There can be more systemic effects, like people seeing fake photos that feed into their pre-existing likes and dislikes and (to them) validate and reinforce those views.


How about the faked photo of John Kerry speaking with Jane Fonda?


Like fake photos, these will be most highly leveraged among the undereducated. Websites like Snopes will probably help serve as an outside point of reference in many cases; however, some people are not really open to criticism of their current mental model and can just as easily dismiss another point of reference as opposition propaganda rather than reliable analysis. Alex Jones is an example of someone with this psychology: he has such strong trust in his original sensory-experience-analysis system that you're better off taking other approaches than making a frontal assault on the citadel of his subjective information-interpretation experience, which is so highly knotted up with his sense of self and personal creativity.

Fortunately this kind of lopsided/over-weighted psychological subsystem will never speak to everyone, and humans are as a group becoming more resilient in the face of such imbalance. The internet has in many ways been extremely helpful in serving as a sort of blowoff valve for psychological gifts that have spun out of balance.


>We just assume every noteworthy image is fake unless it comes from, or is cleared by, a credible source. The same will apply to video.

As the inauguration crowd size debate proved in January of 2017, there are problems now with even credible sources on trivial things.

That's the 1-2 punch that I think might be more problematic. OK maybe you're right and people assume it is fake unless it is from a trustworthy source, but the trustworthy sources have dramatically shifted in the last--oh I'd say 18 years.


Which credible sources disagreed about the crowd size?


The media vs the POTUS. The POTUS ought to be a credible source.


The default approach to a medium shifting from trust to distrust is a significant change.


Sure - so the effort switches to attacking the credibility of sources. That just pushes people further and further towards only believing the news sources they already believe, no matter how far away from a neutral and reasonable interpretation of the facts they have gone.


I’m going to have to ask for a citation on photoshopped images not being taken seriously. We routinely see people being duped by them on Twitter, but it’s even more insidious in the form of Facebook advertisements that aren’t publicly shared for people to debunk.


Are there any examples of fake photographs being used in this way?

Do fake videos add any more credibility to this kind of misinformation?


For the fellow Europeans that cannot read the LA Times: https://archive.fo/heTle


I am curious what the LA Times is doing with the data it collects, such that after all these months it still has not been able to offer a GDPR-compliant page.


I suspect that the LA Times, like various other online publications, has simply made the decision that they have approximately zero economic benefit from having EU readers. They're an essentially local newspaper.

Therefore, it's not worth any staff or contractor time to figure out what GDPR compliance would mean. Maybe they're GDPR compliant today. Maybe not.

But it's pretty understandable why they might choose to simply geo-block the EU to send a clear signal that they're not marketing there. They can't keep EU residents out of their site entirely, but this seems like a pretty good low-effort approach.


As an aside, I can see now that laws around the world like this will be what breaks up the internet, to the extent that isn’t already the case.


There is a risk. Google and Facebook won’t pull out of Europe. But to the degree there’s too much divergence of laws and regulations affecting smaller sites I could definitely see fragmentation happening.


I've oft wondered if we'll ever see any big discussions (i.e. Not just some random person's blog post) of post-Westphalian sovereignty applied to the Internet. Maybe GDPR is something that will kick that nest.


An assumption I have is that so much news on these sites comes down the wire from AP, Reuters, et al. that there’s little to no value in catering to that audience when plenty of compliant websites over there will be doing it better already. So I’m kind of repeating what you say there.

While at odds with the value of the internet, it is understandable that they decide that their business isn’t really in global news/reporting to international audiences and GDPR is a reason to focus on what works better for them. Before then it was purely incidental.


Conversely I actually kind of like this; it's far too easy as a Brit on the Net to end up with a US-heavy worldview and supply of news. If it's really of global importance I should be able to get it from local coverage or some other route.


"This site can’t provide a secure connection. archive.fo uses an unsupported protocol. ERR_SSL_VERSION_OR_CIPHER_MISMATCH"

Remember when the web just worked?


No, I remember an absolute load of ways it didn't just work, or worse, worked in silently incorrect or insecure ways. I think you get my point so I'll leave out the laundry list of things I remember being broken throughout the history of the world wide web. A TLS error here and there is not so bad :)


> Remember when the web just worked?

I don't think there's ever been a time where incorrect webserver configurations like that were impossible to make.


I remember when pretty much everything was http rather than https. It's better now.


How?

Why do I need an https connection to read a newspaper?


Herd immunity is one perk.

It prevents anyone between you and the newspaper from inserting some js to turn your machine into a cannon targeting someone else.

"Need" may be a bit strong, but there are positive security externalities.


When has this ever actually happened? Would be curious if there are any good case studies.



go read: verizon supercookie


> Now imagine a phony video of North Korean dictator Kim Jong Un announcing a missile strike. The White House would have mere minutes to determine whether the clip was genuine and whether it warranted a retaliatory strike.

Really? Hard-hitting journalism, everybody. "Mere minutes"? A retaliatory strike would only be warranted when they see proof of an actual launch. Do you really think the government just decides based on a video when we have much more surefire ways to determine these things!?


Imagine somebody interested in war starts realistically faking radar etc. signatures, perhaps supported by tiny planted chips in army infrastructure. Could be fun.


It could be a video disguised as an insider spy tip. It wouldn't cause a retaliatory strike but it certainly would cause trouble and loss of money and time.


I think part of the problem is that we watch too many movies with CGI and we trained ourselves to ignore it.

At this point in time face-replace videos can be relatively easily spotted if you watch them in high quality. At least the ones I've seen demonstrated.

But overall, videos are getting easier to fake. At the same time, they weren't bullet-proof in the past either. The bad part is that now you can do a lot of it in near real time, so you can change something in a live report.

I see a growing need for public services that cryptographically timestamp files.
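
To sketch what I mean (illustrative only: the filename is made up, and a real service would be something like an RFC 3161 timestamping authority or a public transparency log), you publish only a hash of the footage at recording time, and later anyone can check that the file being shown matches what was committed to back then:

    import hashlib, json, time

    def fingerprint(path, chunk_size=1 << 20):
        """SHA-256 of a (potentially large) video file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # At recording/upload time: commit to the footage by publishing its hash
    # (to a timestamping service, a public log, even a newspaper ad).
    record = {"sha256": fingerprint("report_2018-10-07.mp4"),
              "committed_at": int(time.time())}
    print(json.dumps(record))

    # Later, anyone holding the file recomputes the hash and compares it with
    # the published record; any edit or re-encode changes the digest completely.

The hard part, and what a public service would actually provide, is binding that hash to a clock people trust.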

Also, I would like to see research in using machine learning to spot fake videos.


A nice video channel that shows how some of these videos are made is Captain Disillusion's: https://m.youtube.com/user/CaptainDisillusion


Is it just me, or does this phrasing make it seem like he's full of it?

"With further deep-learning advancements, especially on mobile devices, we'll be able to produce completely photoreal avatars in real time."

What "deep learning advancements" is he referring to?

Doesn't surprise me at all that he's being sued by his former VP of Engineering for fabricating results (and for assault and battery).


Ever since I saw that 'elephant landing at an airport' video, I've had little faith in anything on YT being real:

https://www.youtube.com/watch?v=Fm8FJ8la2VU


Hasn't this been the case for nearly 100 years now? Stalin was notorious for editing "comrades" out of photos.

https://en.wikipedia.org/wiki/Censorship_of_images_in_the_So...

Even FDR's photos were edited to hide his paralysis early on.

As long as we have "histories" of videos as we do of photos, can't we reasonably compare them?


Our team at Mirage is working on solving exactly the same problem. Our current prototype allows users to detect deepfakes in YouTube videos. Currently very early stage and any feedback is greatly appreciated. Max video length 60 seconds. Link to demo: https://deepbuster.com/


I think people overestimate the bad consequences and underestimate the good consequences of such things.

1. I see amazing potential in AI based content creation. Imagine a world where you can have all the music/movies you want, personalized to your specific taste. I would love to have an AI watch Avatar The Last Airbender and invent a few more seasons for me (find out what happened to Zuko's mom :))

2. It is true that we should be more careful about which content is genuine vs. fake, but cryptography has you covered: anyone can easily digitally sign the content they create with their private key and prove the authenticity of that content (a rough sketch follows after this list).

3. One thing to note is that AI cannot reliably be used to distinguish real content from fakes, because the fakes are generated (using GANs) precisely so that they can't be distinguished from the real thing.
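
A rough sketch of what point 2 could look like in practice, using Ed25519 from the third-party "cryptography" package (the filename is made up and key distribution is hand-waved; a valid signature only proves who vouched for these exact bytes, not that the footage depicts reality):

    # pip install cryptography
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The creator generates a keypair once and publishes the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Sign a digest of the clip rather than the whole multi-gigabyte blob.
    video_bytes = open("my_clip.mp4", "rb").read()
    digest = hashlib.sha256(video_bytes).digest()
    signature = private_key.sign(digest)

    # Anyone holding the public key can check that the bytes haven't changed
    # since signing; verify() raises InvalidSignature otherwise.
    try:
        public_key.verify(signature, digest)
        print("signature valid")
    except InvalidSignature:
        print("file was modified or the signature is bogus")

The catch is key management: the signature only means as much as your trust in whoever holds the private key.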


There is going to be a period of time during which videos are easy to fake but people are still convinced they are real. This will lead to a lot of fake news, as well as wrongful criminal convictions.


This is starting to become impressive. Isn't there a huge opportunity for software able to analyze videos and verify their authenticity?


Why not just rely on signatures and watermarks?


Photography can be unreliable even if you can't tamper with whatever the sensor records. The "Russian Ghost Car" viral video is a simple example of this sort of effect. It does really look like a ghost car, and the video itself was not tampered with. The camera recording the action just happened to have a very unlucky angle, occluding the "ghost car" for much longer than you'd expect. The result is a misleading but 100% authentic video.

https://www.youtube.com/watch?v=fQALBUY7OH4


Don't believe certain politicians are guilty of certain crimes, even if you see video evidence!


The first worry that comes to my mind is that, in a case of extreme inequality, the richest could buy the truth.


I think that's already been the case, ever since there's been lawyers for hire.


So isn't that enough to prefer social democracy over capitalism?


Welcome to every era ever.


Isn't that the way inequality in this world works? Information is king.


The thing about video evidence is that it should be treated the way witness testimony should be: verified against other evidence. Manipulative editing can have similar effects, as in James O'Keefe's infamous ACORN libel. If someone makes a deepfake of Donald Trump shooting someone on Main Street with a rifle, the lack of actual blood, 911 calls in the area, or similar would give it away as a fake even if it were technically perfect.


Sadly, the LA Times is still not reachable from Europe.


Great. Now we are on the other side, with the moon landing deniers?


Very trashy article, doesn't even display in my country.



