AI photo editing raises trust issues in photography (theverge.com)
48 points by Brajeshwar 69 days ago | 51 comments



I'll point out an interesting New York Times article "Ask it no questions: the camera can lie". This article discusses how photography had been seen as a medium of truth, but now "the veracity of photographic reality is being radically challenged" as "technology makes it easy to recompose and combine photographic images, and to do so in a way that is virtually undetectable." By creating a "kind of unsettled and unsettling hybrid imagery based not so much on observable reality and actual events as on the imagination", "photographs will appear less like facts and more like factoids" and will "fundamentally alter not only conventional ideas about the nature of photography but also many cherished conceptions about reality itself."

This article, by the way, is from 1990.

https://www.nytimes.com/1990/08/12/arts/photography-view-ask...


From 1935. The original statement of this idea, I believe:

https://en.m.wikipedia.org/wiki/The_Work_of_Art_in_the_Age_o...


Writing, or rather journalism, went the same way, so I guess it was only a matter of time before photography followed. Video as well.


Didn't we do this a few days ago? (All from The Verge.)

Google's 'Reimagine' tool helped us add wrecks, disasters, and corpses to photos

https://news.ycombinator.com/item?id=41312381

The AI photo editing era is here, and it's every person for themselves

https://news.ycombinator.com/item?id=41301612


This is silly. Our "trust" in photography ended as soon as it was invented. Photo editing was a reality long before computers came around.


I am sincerely confused by how often this rationale is presented in so few words. Obviously AI has made it much easier - much, much easier - for many more people to make dramatic edits or fabricate whole images.

I understand the point that photo tampering has always been an issue, but by itself that point can't really mean anything without at all addressing the tremendous difference between traditional editing and AI editing.

We can say things like, "you should never trust X format of information anyway", but it's unrealistic for essentially any given X. A sentient human being can't function without trust heuristics. It's always been that way, for all of time - there is no such thing as 100% trust nor 0% trust in any human-to-human communication domain. That squishiness is why things like "how easy/hard is it to fabricate" matter pragmatically. And we can't rely entirely on sources either, since a ton of information is not readily sourceable, and often that's just kicking the can (is the source true?), and we often don't even notice the source. We see photos constantly, and we make assumptions, subconsciously and consciously, about how likely they are to be real based on a ton of variables. This is just basic human stuff. It's why new ways to more easily/broadly poison information have real consequences.


But the thing is that this isn't even the first jump in making photo tampering easier - it has already happened in the past, both gradually, as these processes matured, and abruptly, when manual editing was superseded by digital editing.

The introduction of digital photo editing (and especially making it accessible to everyone) also brought the skill floor down by several orders of magnitude. To my knowledge, people back then were also saying things akin to "you can't trust anything now". Before that point, you needed skills in working with physical photographs, and had to be a visual artist to some extent. It was a pretty niche skill set. Now, any random user can fire up some software and, after just a few hours of training, start doing things that used to be impossible or unthinkably hard. And that was just the beginning: photo editing became easier and more accessible over the decades that followed.

So why didn't this "data collapse" happen then? I'm not saying it's impossible, but people in this comment thread are acting like there's some hard barrier in technological accessibility, before which everything is good and trustworthy, and after which there's apocalyptic consequences, permanent decay of trust, erasure of history, etc. Is it a barrier or a sliding scale? What makes people so confident that it's this exact development that will finally tip the scales for good? And if this was always bound to happen with our technology evolving, should we even fight back?


> So why didn't this "data collapse" happen then? I'm not saying it's impossible, but people in this comment thread are acting like there's some hard barrier in technological accessibility, before which everything is good and trustworthy, and after which there's apocalyptic consequences, permanent decay of trust, erasure of history, etc. Is it a barrier or a sliding scale? What makes people so confident that it's this exact development that will finally tip the scales for good?

I think people tend to get too focused on one narrow thing (e.g. this technology) and lose sight of the context. Some things going on now:

1. AI fakes will take the skill floor down "several orders of magnitude" more.

2. Trust in all kinds of institutions is declining, including the government and media (precisely the ones that were relied on to debunk and discredit fakes in the past).

3. More and more people are getting their information from social media.

tl;dr: "data collapse" hasn't happened yet because previously we had the institutional capacity to manage airbrushed fakes/photoshops/etc. As the technology gets cheaper and the institutions get weaker, eventually they won't be able to keep up.


Why can't it be?

I haven't trusted photos at all for several years now.

I can look at them and believe them, but as soon as they contain something of real importance, something that would actually matter in a big way, I "checked out" of trusting them for that purpose years ago.

This will only make more people trust photos even less, which is a good thing, because they shouldn't be as trusting of them as they are.


Arthur Conan Doyle was duped by a couple of young girls with a camera and paper cutouts of faeries. Has there ever been a time when it wasn't trivial to trick at least some people with low-effort fakes?

People who want to believe what you're showing them will believe even the lamest fabrication, while people who don't want to believe will doubt even authentic photographs with reputable provenance.


Before the photograph we had no trouble not trusting photos. Why have we needed to trust them since?


On the one hand, yes: https://en.wikipedia.org/wiki/Cottingley_Fairies

On the other, quantity has a quality all of its own: what once took effort… well, I've now got a pile of "photographs" of an anthropomorphic dragon-raccoon hybrid sitting on a motorbike in a rainy Japanese city, for about €0.03 each.

I'm not sure exactly what I spent on film and development in the 90s, but I bet it was more than that per photograph.


Much too late to edit, but I got the decimal point wrong: the AI-generated images cost 0.03 Euro-cents each, not 0.03 Euros each.


Orders of magnitude are differences of kind.

It is now at least three orders of magnitude (give or take) easier to make a realistic alteration. History is doomed :)


For sure, but there are all kinds of bad photoshops out there where it is super easy to tell, much like bad AI images that stick out like a sore thumb (or 2 thumbs).

But getting good at Photoshop takes a lot of effort (as well as a license, or the willingness to use it without one), and that will remain the same. The same can't necessarily be said for AI images, which have been improving year over year and becoming more and more accessible.


But anyone could already have paid someone decent enough at Photoshop to make a convincing edited photo. You didn't need to be good at Photoshop yourself to poison any image you wanted.


People could previously have afforded to get that done for any specific image, but it's now possible for a propagandist to give each and every resident of the USA their own personalised faked image, for a total cost of about $25k (if their electricity costs $0.1/kWh).
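
Back-of-envelope, with my own rough assumptions for GPU draw and generation speed (not measured figures):

    # ~335M personalised images, generated locally.
    # Assuming ~300 W of GPU draw and ~10 s per image - both guesses.
    residents = 335_000_000
    gpu_watts = 300
    seconds_per_image = 10
    usd_per_kwh = 0.10  # the $0.1/kWh figure above

    kwh_per_image = gpu_watts * seconds_per_image / 3_600_000  # watt-seconds -> kWh
    total_usd = residents * kwh_per_image * usd_per_kwh
    print(f"${total_usd:,.0f}")  # ~$27,900 - same ballpark as $25k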


I made this point on Threads and Nilay's response was "yes making visual lies trivial to make is bad". It's never been photos that made "truth"; it's been the source of the photos. You trust a photo from a photojournalist. You don't trust a photo from some rando in your social feed.


>You trust a photo from a photojournalist. You don't trust a photo from some rando in your social feed.

The problem is, this isn't entirely true.

Sometimes we don't trust photos even from journalists, not necessarily because we think they are dramatically edited, but because we know even professionals have been caught mildly editing, either in-camera or with tools afterward.

Conversely - sure, we don't trust a photo from a rando slandering a politician, unless we want to believe it. At the same time, we mostly believe a rando's photo of a fireman rescuing a cat. The latter is less likely to be fake, and if it is, the consequences of believing it are less severe.

Trust heuristics are complex and highly psychological.


"You don't trust a photo from some rando in your social feed."

If only that were true for so many people.


I would add that, at least historically, a reputable photojournalist wouldn't likely build a very successful career on faked photos. It's heavily disincentivized. The time and effort required to build the necessary skills and clout won't casually be wasted by a professional. And if and when it does happen that a photojournalist is caught in a lie, the rest are quick to reject it, because it damages their own reputations and livelihoods.

But now, there's little to stop anyone from producing images depicting anything, and we've seen how systems that are blind to ethics can be manipulated into disseminating such images at a speed and scale that far outpaces fact-checking. Professional standards and traditional gatekeeping have no power against it.


Obligatory xkcd: https://xkcd.com/2650/


This reminds me of the other take that's been in vogue over the last week: that it's a moral panic to complain there's an AI app that's happy to write school shooting instructions, meth recipes, etc., because the Internet already has all that.

This "everything was already broken" take elides a lot that's obvious and intuitive, and the obtuseness gets you regulated and everyone saying you deserved it.

For example, this is scaling Photoshopping to literally everyone, in their pocket, with hours of work transformed into typing out a short message and waiting 20 seconds.


I agree; the discourse about faked photos ended decades ago, it's just been too hard to notice.


Eh. Though "trick" photography has existed forever, it has always been much more difficult to do and easier to spot. Now it is super easy. That has to change the calculus of trust and the basic assumption that most images aren't doctored.


Yeah, I haven't trusted photos in a long time already. I'm not really sure this changes much.


Errol Morris had some great thoughts to share about the topic some years back:

As I’ve said elsewhere: Nothing is so obvious that it’s obvious. When someone says that something is obvious, it seems almost certain that it is anything but obvious – even to them. The use of the word “obvious” indicates the absence of a logical argument – an attempt to convince the reader by asserting the truth of something by saying it a little louder. [0]

0. https://archive.nytimes.com/opinionator.blogs.nytimes.com/20...


I know it's not their intention, but this seemed like a great ad for Pixels.


I have an 8 Pro and wasn't aware of the new Magic Editor, so I got excited that there was some sort of new attestation or fingerprinting. Pixels have been silently doing image processing for quite a while.


"Kickass new weapon is just TOO good at killing and maiming enemies, don't buy it!"


We have said that already: https://www.icrc.org/en/document/weapons


I tried Adobe's beta of Photoshop with AI.

Pretty useless, as it would refuse to process edits because of adult content.

These pictures were of house interiors, some cars. Nothing pornographic, but the flunk rate was high enough for me to give up.

I don't need big brother looking at my work and judging its content.


Try using Krita with Stable Diffusion. Open source AI has no big brother and no censorship.


Photoshop and similar tools were doing this before AI.


Wow... I Didn't Know That, You're Telling Me Now For The First Time [1]

[1]: https://www.udio.com/songs/jGjYfsRosZjYTkSBdFgEyF (260,000 listens)


When snapping a photo, couldn't a phone produce a hash with date/time/location included? I understand it's not bullet-proof, but at least it would make it possible to verify the original to some degree.
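
Something like this, conceptually - the hash alone proves nothing, so the phone would also need to sign it with a key that never leaves the device. A minimal sketch (all names illustrative, not any real phone API, and real efforts like C2PA are far more involved):

    import hashlib, json, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # would live in a secure enclave

    def sign_photo(image_bytes, lat, lon):
        # Hash the pixels, then sign the hash together with the metadata,
        # so neither can be changed without invalidating the signature.
        payload = {
            "sha256": hashlib.sha256(image_bytes).hexdigest(),
            "unix_time": int(time.time()),
            "location": [lat, lon],
        }
        blob = json.dumps(payload, sort_keys=True).encode()
        return payload, device_key.sign(blob)

    def verify_photo(image_bytes, payload, signature, public_key):
        if hashlib.sha256(image_bytes).hexdigest() != payload["sha256"]:
            return False  # pixels were altered
        blob = json.dumps(payload, sort_keys=True).encode()
        try:
            public_key.verify(signature, blob)
            return True
        except InvalidSignature:
            return False  # metadata altered, or wrong key

As I said, not bullet-proof: it only proves the bytes came out of that device at that time, not that the scene in front of the lens was real.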


Leica signs images in the camera in their new M11-P (and presumably future cameras that cost less than $9000). Canon, Nikon and Sony are also working on adding this to their future cameras. And thanks to Lightroom integration that signature can be preserved through edits, with a trail of what you changed.

Maybe at some point this will also trickle down to smartphone cameras. Though with the tiny sensors mandated by the smartphone form factor there is always a lot of preprocessing going on, and it's hard to draw a clear line between just cleaning up the sensor data and simple beauty filters. But the examples in the article seem pretty clear cut.

https://leica-camera.com/en-int/photography/content-credenti...


Note that Canon has been shipping cameras intended to do this since 2002.

However, clever people figured out how to break it. [1]

[1] https://www.usa.canon.com/support/canon-product-advisories/a...


I imagine a new kind of instant camera: instead of printing the photo, it uploads it to a satellite blockchain, timestamping the moment of shooting. Zero privacy, but hopefully a more truthful representation of the world. Before AI filters of any kind get hold of it, that is.


I don't worry too much about photographs. Fake images have been floating around in increasing quantities for decades now. Yeah, these new AI fakes are often perfect, better than Photoshop, but plenty of people were falling for bad Photoshop edits before anyway.

Audio/video, on the other hand... I don't think anyone is ready for that. There is no precedent for fake videos full of fake dialog. And I'm not talking about trick golf shots or sound-alike voice-overs.


Nobody would question that a painting could lie, or be (literally) colored by its creator. Photographs should be no different.


My feeling on this is AI will continue to erode our trust in photographs, but it will force us to build trust in _photographers_. There are people who are honest and true, and we will need to spend time and energy seeking them out. There will be a market for more trustworthy photographers.


> build trust in _photographers_.

I think you meant there are individual photographers with a reputation for being trustworthy, but I would like to also add that photographers, plural, could also provide trust. That is, I am more likely to believe a scene that has been independently photographed by a multitude of people, compared to a photo or video from a single source.

Requiring a larger group of conspiring photographers to fake a particular event raises the cost of forgeries. Maybe it won't be quite the same cost balance as before the AI days, but I would say it's too early to lose all trust in photos.


Ideally it will force us to give up on the flawed notion that is trust. Perhaps you can excuse primitive societies for adopting it, but surely we can do better?


How can we do better than trust? Trust arises from information gaps, and there is no way to close every information gap for a single person. One human can't know everything.


Photographs have been faked and fooling people since 5 minutes after the first photograph. AI doesn't make this any worse than Photoshop or the airbrush.


How can there be a trust issue without trust? That ship has sailed.


Google is clearly doing whatever it takes to sell one more Pixel device, but why make it so easy to create entirely synthetic photos, thereby normalizing them? With the growing difficulty of distinguishing real from fake on social media, this only adds to the confusion. What societal benefit comes from democratizing the creation of fake images? Is Google crossing a moral line?


Every photograph lies. Every photographer who's any good knows this.

Does the public know that? At some level probably yes.

With AI, though, it becomes more obvious. That picture of Elon Musk and Santa Claus having sex probably isn't real. Neither is the one that shows Obama at Yalta next to FDR.


IMO, for all the god complex Silicon Valley entrepreneurs have about AI (Google, OpenAI, etc.), having fake photorealistic images at one's fingertips seems like a dangerous precedent.

We are already in the territory of AI-generated images being used for political gain. See Trump using Taylor Swift images, and Trump claiming Kamala's crowds are AI-generated.

I'm going to be calling my congressional representatives to push for a law that all AI-generated images that look photorealistic must carry a "Generated by X AI" tag, both visible and embedded in the image metadata.
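
For the metadata half, even something as simple as a PNG text chunk would do. A sketch using Pillow (the tag name and value here are my invention, not any standard):

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    img = Image.open("generated.png")
    meta = PngInfo()
    meta.add_text("ai_generated", "Generated by ExampleAI v1")  # hypothetical tag
    img.save("generated_tagged.png", pnginfo=meta)

    # Reading it back:
    print(Image.open("generated_tagged.png").text.get("ai_generated"))

Of course, metadata like this is trivially stripped, which is why the visible tag would need to be required too.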

We already have similar laws around counterfeiting currency.

https://www.justia.com/criminal/offenses/white-collar-crimes...

AOC's DEFIANCE bill is a step in the right direction. AI-generated images will only get more realistic, so we definitely need guardrails on transparency and sharing.

https://ocasio-cortez.house.gov/media/press-releases/rep-oca...

Tangentially, we have laws making it illegal to impersonate a police officer or federal agent.

I really hope we get sane regulation around images misrepresenting themselves on social media.


We love being fooled by fake imagery. Movies have done this for years: sets that build only the fronts of buildings, lighting that has no relation to reality, and, of course, visual effects. Even films that you think have no CGI are building fake worlds that we love. The same is true for images and Photoshop. We just don't like being told it's fake, because that breaks our beautiful internal vision.



