> People cannot distinguish between a face generated by Artificial Intelligence (using StyleGAN2) and a real face, say researchers, who are calling for safeguards to prevent "deep fakes"
...
The results revealed that synthetically generated faces are not only highly photorealistic but nearly indistinguishable from real faces, and are even judged to be more trustworthy.
I dunno maybe instead of being worried about deep fakes we should be worried that in 2022 people still believe it's possible to judge "trustworthiness" based on nothing more than a headshot. Where was this even published, the New England Journal of Phrenology?
It's important to distinguish between "trustworthiness can be determined from headshots," which is essentially phrenology, and "perceived trustworthiness can be determined from headshots," which is a very, very different scientific question.
People do perceive others as more or less trustworthy, smart, reliable, etc. based simply on how they look, even in cases where there is no correlation. While they may be mistaken, that's how people feel and act; this is a reasonably established fact.
Like, we might be sad that in 2022 the general public believes trustworthiness can be judged from nothing more than a headshot, but that belief, just like all other kinds of preconceptions and prejudices, is an actual, real attribute of society and people. It deserves to be properly scientifically studied and published, without any allusions that the topic is taboo or pseudoscience just because we don't like the factual observations.
Oh I totally agree that people can and do make snap judgements based on superficial appearance - my point was actually that this judgement is a more real and damaging threat to society than deep fakes, which IMO is mostly media driven pearl clutching / fear mongering.
Faces are extremely densely packed with information about the subject's genetic makeup, wellbeing, and average and current emotions.
Humans are very well adapted to process such visual information and can produce statistically significant (although nowhere near perfect) predictions of some 'hidden' qualities, including trustworthiness, in a real sample of humans.
People in 2022 are perfectly rational to think this. Even if this weren't possible and criminals could perfectly signal trustworthiness (sometimes they don't even bother, or intentionally try to look scary), the face -> trust instinct would still be a real-world phenomenon worth studying.
Calling into question the decision to publish something (or the reputation of the publisher) based on simplified and incorrect expectations about how the world should be needs to stop if we want to keep calling ourselves an enlightened society.
You are subconsciously primed to. Your reptilian brain will consider somebody trustworthy before your logical brain overrides. That initial trustworthiness lingers. You aren't above this.
I frankly think a lot of autistic people can be "above it all." It's not as if they're somehow better or smarter, but rather that intuition doesn't operate successfully for many of them. There is nothing to "override" as you put it.
I appreciate that this might be a controversial statement, and to be clear it's just based on my own personal anecdotes. That said, I'd be interested to hear if anyone disagrees, or else has had similar observations.
Yeah I'm not saying I'm above it, I'm saying that this type of subconscious bias is far more damaging than "deep fakes". But that doesn't generate clicks so the media doesn't care.
>I dunno maybe instead of being worried about deep fakes we should be worried that in 2022 people still believe it's possible to judge "trustworthiness" based on nothing more than a headshot. Where was this even published, the New England Journal of Phrenology?
I generally agree with your sentiment, but I don't believe this is an indication of progress for the most part. That is, we shouldn't assume that just because it's the current year, we should have progressed past the point of relying on intuition. Although I agree with you that relying on intuition is fraught with error, it's very innately human, and I wouldn't expect people to be able to "progress" past it.
Congratulations, you've missed the point entirely. No one is trying to literally judge how trustworthy the people actually are. I'm not sure how you got confused there.
The title and the article are fairly ambiguous if you don't pay attention. Several sentences just say the generated faces are more trustworthy, and only further down in the article is it pointed out that the generated pictures are rated as more authentic than the real pictures.
It's poorly formulated, and perhaps deliberately so for clickbait, which makes it worse, since there are many ways to phrase it more clearly.
> “Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness. We wondered if synthetic faces activate the same judgements of trustworthiness. If not, then a perception of trustworthiness could help distinguish real from synthetic faces.”
A third study asked 223 participants to rate the trustworthiness of 128 faces taken from the same set of 800 faces on a scale of 1 (very untrustworthy) to 7 (very trustworthy).
> The average rating for synthetic faces was 7.7% MORE trustworthy than the average rating for real faces which is statistically significant.
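Purely to illustrate what a comparison like that involves, here's a minimal sketch of the kind of test behind a "7.7% higher, statistically significant" claim. The ratings below are simulated with made-up parameters chosen to roughly match the reported gap; nothing here is the study's actual data.

```python
# Hypothetical illustration only: simulated 1-7 trustworthiness ratings
# for real vs. synthetic faces, compared with Welch's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

real = np.clip(rng.normal(4.48, 1.0, size=4000), 1, 7)       # assumed mean
synthetic = np.clip(rng.normal(4.82, 1.0, size=4000), 1, 7)  # ~7.7% higher

t, p = stats.ttest_ind(synthetic, real, equal_var=False)  # Welch's t-test
print(f"real mean={real.mean():.2f}, synthetic mean={synthetic.mean():.2f}")
print(f"gap={(synthetic.mean() / real.mean() - 1) * 100:.1f}%  t={t:.2f}  p={p:.2g}")
```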
They literally aren't. They are asking which faces look more trustworthy. People think the AI faces look more trustworthy. Whether they are right or not (how could they be???) isn't the point.
This conversation feels like there's just a slight disconnect between what "trustworthy" means. It's either an intrinsic quality that one possesses, or it's an external quality created by outside perception. It either means you are actually worthy of trust or that others find you worthy of trust.
Or, at least, I had both definitions rattling around in my head, and had to think through which one I actually believed is correct. Sample size of 1 and all that.
We know what 'trustworthy' means; it's not an extant perception.
We know exactly what we're doing when we estimate trustworthiness from a particular attribute.
The participants may or may not know some of the faces are AI; it's beside the point.
'Estimate how well a football player's career will go from a photo.' It doesn't matter if the photos are real or not; it's an understanding of which characteristics we use to estimate trustworthiness, absent knowing their actual trustworthiness.
Asserting that everyone shares your semantic understanding and then only arguing based on that seems unproductive. Especially when it's unclear which of the definitions I used that you're arguing that "we know." You say it's not a perception, but then describe perception in the last sentence.
I think the "trustworthiness" dimension can be a little questionable. The Discussion section does bring this up - "This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy". In a sense, the model produces faces with less detail (or "average" looking faces). However, if you look at all the faces, they are "high quality" headshots. Would generated faces of, say, mugshots still produce higher levels of "trustworthiness"? Not to place any type of judgement on the individuals with a mugshot, but you would be generating a vastly different type of face.
There are a lot of public mugshot databases out there. Has anyone trained a facial generation model on them exclusively yet? I would try myself but it's nowhere near my field and I have no idea where I'd begin.
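Not my field either, but if anyone wants a concrete place to begin: the unglamorous first step is usually normalizing the raw images into uniform square crops, which is the format GAN training pipelines typically expect. A hedged sketch, with placeholder folder names:

```python
# Hypothetical preprocessing step: center-crop a folder of scraped
# mugshot images to squares and resize to 256x256. Paths are placeholders.
from pathlib import Path
from PIL import Image

SRC = Path("mugshots_raw")   # assumed input folder of raw images
DST = Path("mugshots_256")   # output folder of uniform crops
DST.mkdir(exist_ok=True)

for i, f in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(f).convert("RGB")
    side = min(img.size)                   # center-crop to a square
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((256, 256), Image.LANCZOS).save(DST / f"{i:06d}.png")
```

From there, an off-the-shelf GAN training repo can usually ingest the resulting folder directly.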
GANs are pretty good at not averaging, as that is fairly easily spotted by the discriminator. I would assume the training set is biased toward more attractive people (who make better subjects for photos), so it would generate unattractive people at a lower probability than exists in real life. To do it properly you'd have to pull the real examples from the model's training set, and I didn't see where they did that.
They are better than most VAE models at not averaging, but they may still not be covering the entire data distribution. That would probably imply that the underdispersed samples are closer to the average. We don’t have great metrics for figuring out how complete that coverage is.
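For anyone curious what such a coverage metric can look like, here's a rough toy sketch in the spirit of nearest-neighbour precision/recall estimates (e.g. Kynkäänniemi et al. 2019): what fraction of real samples fall inside the region spanned by generated samples? The feature vectors below are random stand-ins; a real version would use embeddings from a pretrained network.

```python
# Toy "recall"-style coverage estimate with synthetic feature vectors.
import numpy as np

def knn_radii(x, k=3):
    """Distance from each row of x to its k-th nearest neighbour in x."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the point itself

def coverage(real, fake, k=3):
    """Fraction of real points within some fake point's k-NN radius."""
    radii = knn_radii(fake, k)
    d = np.linalg.norm(real[:, None] - fake[None, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

rng = np.random.default_rng(0)
real = rng.normal(size=(300, 64))        # stand-in "real" features
fake = 0.7 * rng.normal(size=(300, 64))  # underdispersed "generator"
print(f"coverage ≈ {coverage(real, fake):.2f}")  # well below 1: tails missed
```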
It would be fun to have this system generate alternate families. Provide my photo, then have it generate a 'family' photo with me, my parents, wife, and 5 kids ages 3-12.
Ok, so this is not exactly on topic and mostly humorous, but I was just having a conversation with someone about how, when the USSR had their program for trying to breed domesticated foxes, the foxes eventually started having fluffier ears and bigger eyes and in general started looking cuter, even though the breeding selection was for human-friendliness and not these factors. So as we "domesticate" AI, maybe it's becoming more cute to lull us into feeding it? :)
This is all interesting, but I fail to see that the risk of fraud is significantly elevated due to deep fakes. It has always been easy enough to make plausible material that "proves" something. To avoid fraud you need to be wary of the source and weigh the new information based on previous interactions with the source, by you or by people that you trust. Nothing is really new in this context, right?
What's new is the democratization of falsified images, not their existence. Nowadays, local police investigating a small crime would never doubt the veracity of photo or video evidence. If they are provided video of someone committing a burglary, no advanced analysis of the footage will happen before the person is charged.
This won't fool the FBI investigating the murder of a minister, but many authorities with less resources have to rely on video evidence. Very soon, anyone will be able to fabricate incriminating evidence with little effort. Unless tools catch up, this could become a serious problem.
My broken-record reaction to headline-worthy research findings:
The past decade of social science research publication has proven that provocative-sounding results should be considered fraudulent until the underlying data has been published, and then (assuming the data passes muster) should be regarded with skepticism until independently replicated with a fresh data set.
The more interesting question is how long until people recognise these generated faces because they are "too real", and how long until AI starts deliberately making faces imperfect?
You're decades late on that. The AI is already deliberately making the faces imperfect. That's one of the early things you'd focus on when building systems like this.
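For context on what "deliberately imperfect" means in practice: StyleGAN-family generators inject fresh random noise into the feature maps at every resolution, which is where stochastic detail like skin texture and stray hairs comes from. A simplified sketch of the idea (not the actual StyleGAN2 code):

```python
# StyleGAN-style per-layer noise injection, reduced to its core idea.
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Learned per-channel strength: the network decides how much
        # random "imperfection" each feature map receives.
        self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        noise = torch.randn(b, 1, h, w, device=x.device)  # fresh each call
        return x + self.weight * noise

feat = torch.randn(2, 512, 16, 16)      # a stand-in generator feature map
print(NoiseInjection(512)(feat).shape)  # torch.Size([2, 512, 16, 16])
```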
Newsflash: the fitness function still makes models more fit! Also, renowned scientist Neil deGrasse Tyson recently announced his latest and greatest discovery: photoshopped magazine covers sell better - finally giving purpose to what was previously thought to be a purely ritualistic removal of skin blemishes from photos of fashion models.