I'll point out an interesting New York Times article "Ask it no questions: the camera can lie". This article discusses how photography had been seen as a medium of truth, but now "the veracity of photographic reality is being radically challenged" as "technology makes it easy to recompose and combine photographic images, and to do so in a way that is virtually undetectable." By creating a "kind of unsettled and unsettling hybrid imagery based not so much on observable reality and actual events as on the imagination", "photographs will appear less like facts and more like factoids" and will "fundamentally alter not only conventional ideas about the nature of photography but also many cherished conceptions about reality itself."
I am sincerely confused by how often this rationale is presented in so few words. Obviously AI has made it much easier - much, much easier - for many more people to make dramatic edits or fabricate whole images.
I understand the point that photo tampering has always been an issue, but by itself that point doesn't mean much without addressing the tremendous difference between traditional editing and AI editing.
We can say things like, "you should never trust X format of information anyway", but it's unrealistic for essentially any given X. A sentient human being can't function without trust heuristics. It's always been that way, for all of time - there is no such thing as 100% trust nor 0% trust in any human-to-human communication domain. That squishiness is why things like "how easy/hard is it to fabricate" matters pragmatically. And we can't rely entirely on sources either, since a ton of information is not readily sourceable, and often that's just kicking the can (is the source true?), and we often don't even notice the source. We see photos constantly, and we make assumptions about how likely they are to be real subconsciously and consciously based on a ton of variables. This is just basic human stuff. It's why new ways to more easily/broadly poison information have real consequences.
But the thing is that this isn't even the first jump in making photo tampering easier - it has already happened in the past, both gradually as the old processes matured and sharply when manual editing was superseded by digital editing.
The introduction of digital photo editing (and especially making it accessible to everyone) also brought the skill floor down by several orders of magnitude. To my knowledge, people back then were also saying things akin to "you can't trust anything now". Before that point, you needed skills in working with physical photographs, and had to be a visual artist to some extent - a pretty niche skill set. Now any random user can fire up some software and, after just a few hours of training, start doing things that used to be impossible or unthinkably hard. And that was just the beginning - photo editing became easier and more accessible over the decades that followed.
So why didn't this "data collapse" happen then? I'm not saying it's impossible, but people in this comment thread are acting like there's some hard barrier in technological accessibility, before which everything is good and trustworthy, and after which there's apocalyptic consequences, permanent decay of trust, erasure of history, etc. Is it a barrier or a sliding scale? What makes people so confident that it's this exact development that will finally tip the scales for good? And if this was always bound to happen with our technology evolving, should we even fight back?
> So why didn't this "data collapse" happen then? I'm not saying it's impossible, but people in this comment thread are acting like there's some hard barrier in technological accessibility, before which everything is good and trustworthy, and after which there's apocalyptic consequences, permanent decay of trust, erasure of history, etc. Is it a barrier or a sliding scale? What makes people so confident that it's this exact development that will finally tip the scales for good?
I think people tend to get too focused on one narrow thing (e.g. this technology) and lose sight of the context. Some things going on now:
1. AI fakes will take the skill floor down "several orders of magnitude" more.
2. Trust in all kinds of institutions is declining, including the government and media (precisely the ones that were relied on to debunk and discredit fakes in the past).
3. More and more people are getting their information from social media.
tl;dr: "data collapse" hasn't happened yet because previously we had the institutional capacity to manage airbrushed fakes/photoshops/etc. As the technology gets cheaper and the institutions get weaker, eventually they won't be able to keep up.
I have completely distrusted photos for several years now.
Like, I can look at them and believe them, but as soon as they contain something of importance that would actually matter in a great way, I "checked out" of trusting them for that purpose years ago.
This will only make people trust photos even less, which is a good thing, because they shouldn't be as trusting of them as they are.
Arthur Conan Doyle was duped by a couple of young girls with a camera and paper cutouts of fairies. Has there ever been a time when it wasn't trivial to trick at least some people with low-effort fakes?
People who want to believe what you're showing them will believe even the lamest fabrication, while people who don't want to believe will doubt even authentic photographs with reputable provenance.
On the other hand, quantity has a quality all of its own: what once took effort… well, I've now got a pile of "photographs" of an anthropomorphic dragon-raccoon hybrid sitting on a motorbike in a rainy Japanese city, for about €0.03 each.
I'm not sure exactly what I spent on film and development in the 90s, but I bet it was more than that per photograph.
For sure, but there are all kinds of bad photoshops out there where it is super easy to tell, much like bad AI images that stick out like a sore thumb (or 2 thumbs).
But getting good at Photoshop takes a lot of effort (as well as a license, or the willingness to pirate it), and that will remain the same. The same can't necessarily be said for AI images, which have been improving year over year and becoming more and more accessible.
But anyone could already have paid someone decent enough at Photoshop to make a convincingly edited photo. It's not like you yourself needed to be good at Photoshop to poison whatever image someone wanted.
People could previously have afforded to get that done for any specific image, but it's now possible for a propagandist to give each and every resident of the USA their own personalised faked image, for a total cost of about $25k (if their electricity costs $0.10/kWh).
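That figure is plausible as a back-of-envelope calculation. A minimal sketch, where the population, GPU power draw, and per-image generation time are my assumptions rather than measurements:

```python
# Rough cost model for the "personalised fake per US resident" claim.
# Every constant below is an assumption, not a measurement.
US_POPULATION = 335_000_000    # approximate 2024 population
PRICE_PER_KWH = 0.10           # $/kWh, as in the comment above
GPU_POWER_KW = 0.4             # assumed draw of one consumer GPU
SECONDS_PER_IMAGE = 7          # assumed diffusion-model generation time

kwh_per_image = GPU_POWER_KW * SECONDS_PER_IMAGE / 3600
cost_per_image = kwh_per_image * PRICE_PER_KWH
total_cost = cost_per_image * US_POPULATION

print(f"energy per image: {kwh_per_image * 1000:.2f} Wh")  # ~0.78 Wh
print(f"cost per image:   ${cost_per_image:.7f}")          # ~$0.0000778
print(f"total:            ${total_cost:,.0f}")             # ~$26,000
```

With these guesses the total lands around $26k, the same ballpark as the $25k above; either way, the marginal cost per target is a hundredth of a cent.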
I made this point on Threads and Nilay's response was "yes making visual lies trivial to make is bad". It's never been photos that made "truth", it's been the source of the photos. You trust a photo from a photojournalist. You don't trust a photo from some rando in your social feed.
>You trust a photo from a photojournalist. You don't trust a photo from some rando in your social feed.
The problem is, this isn't entirely true.
Sometimes we don't trust photos from some journalists - not necessarily because we think they are dramatically edited, but because we know even professionals have been caught mildly editing, either in-camera or with tools afterward.
Conversely - sure, we don't trust when we see a photo from a rando slandering a politician, unless we want to believe it. At the same time, we mostly believe a rando photo of a fireman rescuing a cat. The latter is less likely to be fake, and if it is, the consequences of believing it are less severe.
Trust heuristics are complex and highly psychological.
I would add that, at least historically, a reputable photojournalist wouldn't likely build a very successful career on faked photos. It's heavily disincentivized. The time and effort required to build the necessary skills and clout won't casually be wasted by a professional. And if and when it does happen that a photojournalist is caught in a lie, the rest are quick to reject it, because it damages their own reputations and livelihoods.
But now, there's little to stop anyone from producing images depicting anything, and we've seen how systems that are blind to ethics can be manipulated into disseminating such images at a speed and scale that far outpaces fact-checking. Professional standards and traditional gatekeeping have no power against it.
This reminds me of the other take that's been in vogue this past week: that it's a moral panic to complain there's an AI app that's happy to write school shooting instructions, meth recipes, etc., because the Internet already has all that.
This "everything was already broken" take elides a lot that's obvious and intuitive, and the obtuseness gets you regulated and everyone saying you deserved it.
For example, this is scaling Photoshopping to literally everyone, in their pocket, with hours of work transformed into typing out a short message and waiting 20 seconds.
Eh. Though "trick" photography has existed forever, it has always been much more difficult to do and easier to spot. Now it is super easy. That has to change the calculus of trust and the basic assumption that most images aren't doctored.
Errol Morris had some great thoughts to share about the topic some years back:
As I’ve said elsewhere: Nothing is so obvious that it’s obvious. When someone says that something is obvious, it seems almost certain that it is anything but obvious – even to them. The use of the word “obvious” indicates the absence of a logical argument – an attempt to convince the reader by asserting the truth of something by saying it a little louder. [0]
I have an 8 Pro and wasn't aware of the new Magic Editor, so I got excited there was some sort of new attestation or fingerprinting. Pixels have been silently doing image processing for quite a while.
When snapping a photo, couldn't a phone produce a hash with date/time/location included? I understand it's not bullet-proof, but at least it would make it possible to verify the original to some degree.
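Roughly, yes; that's the direction the industry is heading. A minimal sketch of the idea, assuming the device holds a hardware-protected signing key (generated inline here purely for illustration). As noted, it isn't bullet-proof: a signature proves the bytes and metadata are unchanged since signing, not that the scene in front of the lens was real.

```python
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real phone this key would live in a secure element; generating it
# here is just for the sketch.
device_key = Ed25519PrivateKey.generate()

def attest_capture(image_bytes: bytes, lat: float, lon: float) -> dict:
    """Hash the sensor output together with capture metadata, then sign it."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": int(time.time()),
        "location": [lat, lon],
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = device_key.sign(payload).hex()
    return record

# Anyone with the device's public key can later verify that neither the
# pixels nor the date/time/location were altered after capture.
```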
Leica signs images in the camera in their new M11-P (and presumably future cameras that cost less than $9000). Canon, Nikon and Sony are also working on adding this to their future cameras. And thanks to Lightroom integration that signature can be preserved through edits, with a trail of what you changed.
Maybe at some point this will also trickle down to smartphone cameras. Though with the tiny sensors mandated by the smartphone form factor there is always a lot of preprocessing going on, and it's hard to draw a clear line between merely cleaning up the sensor data and applying simple beauty filters. But the examples in the article seem pretty clear cut.
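Leica's implementation is built on the C2PA content-credentials standard, which keeps that trail of edits as a chain of signed records. A toy sketch of just the chaining idea (not the real C2PA format; the field names here are invented):

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_edit(trail: list, edited_image: bytes, action: str) -> list:
    """Add an edit record that references the previous record's hash,
    so tampering with any step invalidates everything after it."""
    prev_hash = sha256_hex(json.dumps(trail[-1], sort_keys=True).encode())
    trail.append({
        "action": action,
        "result_sha256": sha256_hex(edited_image),
        "prev_record_sha256": prev_hash,
    })
    return trail

raw = b"...raw sensor bytes..."  # placeholder content
trail = [{"action": "capture", "result_sha256": sha256_hex(raw)}]
trail = append_edit(trail, b"...cropped bytes...", "crop")
trail = append_edit(trail, b"...toned bytes...", "tone-curve")
# In C2PA, each record is additionally signed by the tool that made the edit.
```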
I imagine a new kind of instant camera: instead of printing the photo, it uploads it to a satellite blockchain, timestamping the moment of shooting. Zero privacy, but hopefully a more truthful representation of the world. Before AI filters of any kind get hold of it, that is.
I don't worry too much about photographs. Fake images have been floating around in increasing quantities for decades now. Yeah these new AI fakes are often perfect, better than photoshop, but plenty of people were falling for bad photoshop edits before anyway.
Audio/video, on the other hand... I don't think anyone is ready for that. There is no precedent for fake videos full of fake dialog. And I'm not talking about trick golf shots or sound-alike voice-overs.
My feeling on this is AI will continue to erode our trust in photographs, but it will force us to build trust in _photographers_. There are people who are honest and true, and we will need to spend time and energy seeking them out. There will be a market for more trustworthy photographers.
I think you meant there are individual photographers with a reputation for being trustworthy, but I would like to also add that photographers, plural, could also provide trust. That is, I am more likely to believe a scene that has been independently photographed by a multitude of people, compared to a photo or video from a single source.
By making it require a larger group of conspiring photographers to fake a particular event, it raises the cost of forgeries. Maybe it won't be quite the same cost balance as before the AI days, but I would say it's too early to lose all trust in photos.
Ideally it will force us to give up on the flawed notion that is trust. Perhaps you can excuse primitive societies for adopting it, but surely we can do better?
How can we do better than trust? Trust arises from information gaps, and there is no way to close all gaps of information for a single person. One human can't know everything.
Photographs have been faked and fooling people since 5 minutes after the first photograph. AI doesn't make this any worse than Photoshop or the airbrush.
Google is clearly doing whatever it takes to sell one more Pixel device, but why make it so easy to create entirely synthetic photos, thereby normalizing them? With the growing difficulty of distinguishing real from fake on social media, this only adds to the confusion. What societal benefit comes from democratizing the creation of fake images? Is Google crossing a moral line?
Every photograph lies. Every photographer who's any good knows this.
Does the public know that? At some level probably yes.
With AI, though, it becomes more obvious. That picture of Elon Musk and Santa Claus having sex probably isn't real. Neither is the one that shows Obama at Yalta next to FDR.
IMO, for all the god complex Silicon Valley entrepreneurs (Google, OpenAI, etc.) have about AI, having photorealistic fake images at your fingertips seems like a dangerous precedent.
We are already in the territory of AI-generated images being used for political gain. See Trump using Taylor Swift images, and Trump claiming Kamala's crowds are AI-generated.
IMO I'm going to be calling my congressional representatives to push for a law that all AI-generated images that look photorealistic must carry a "Generated by X AI" tag, both visible and embedded in the image metadata (a sketch of the metadata half is below).
We already have similar laws around counterfeiting currency.
AOC's DEFIANCE bill is a step in the right direction. AI-generated images will only get more realistic. However, we definitely need guardrails on transparency and sharing.
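For what the embedded-metadata half of such a rule could look like, here's a minimal sketch using Pillow's PNG text chunks. The tag names are made up, and plain metadata like this is trivially strippable - which is exactly why standards like C2PA sign provenance data cryptographically instead.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Copy a PNG while embedding a (made-up) AI-provenance text chunk."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", generator)
    img.save(out_path, pnginfo=meta)

# Hypothetical usage: tag the output of some image generator.
tag_as_ai_generated("output.png", "output_tagged.png", "Example AI v1.0")
```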
We love being fooled by fake imagery. Movies have done this for years -- sets that build only the fronts of buildings, lights that have no relation to reality, and, of course, visual effects. Even films that you think have no CGI are building fake worlds that we love. The same is true for images and Photoshop. We just don't like being told it is fake, because it breaks our beautiful internal vision.
This article, by the way, is from 1990.
https://www.nytimes.com/1990/08/12/arts/photography-view-ask...