Hacker News
Deep neural networks more accurate than humans at detecting sexual orientation (psyarxiv.com)
97 points by fotcorn on Sept 8, 2017 | 122 comments



If you read the paper, the photos and labels were sourced from a dating website. In my opinion, there is a good chance that the model is overfitting to how people wish to present themselves in that context - e.g. the framing of the photo, facial expression, etc. - things with a heavy amount of cultural conditioning.

Some of the press around this seems a bit alarmist - I doubt you would see anywhere near this accuracy out in the real world.


They address overfitting, presentation and context in the paper. Their DNN was trained on facial features extracted by VGG-Face, a widely used model that reduces a face to a vector of scores meant to be independent of transient features such as facial expression, background, orientation, lighting, contrast, and similar.

By having their DNN train on faces that have been processed by VGG-Face, they greatly reduce the risk of overfitting or relying on things that would be present in dating site pictures but not in pictures of the same people in other contexts.
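For intuition, here is a minimal sketch of that two-stage setup, under stated assumptions: the features are precomputed face descriptors (random stand-in vectors below; the real pipeline would use VGG-Face outputs), the downstream classifier is scikit-learn's logistic regression rather than the authors' exact model, and the train/test split is done per person.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupShuffleSplit

# Stand-in data: in the real pipeline each row would be a VGG-Face descriptor
# for one photo; random vectors are used here only to show the shape of the setup.
rng = np.random.default_rng(0)
n_photos, n_people, n_dims = 600, 150, 256
embeddings = rng.normal(size=(n_photos, n_dims))        # one descriptor per photo
person_ids = rng.integers(0, n_people, size=n_photos)   # several photos per person
labels = person_ids % 2                                 # toy 0/1 label per person

# Split by person, not by photo, so the same face never lands in both
# the training and the test set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(embeddings, labels, groups=person_ids))

clf = LogisticRegression(max_iter=1000)
clf.fit(embeddings[train_idx], labels[train_idx])
print("held-out accuracy:", clf.score(embeddings[test_idx], labels[test_idx]))
```

With random features this should hover around chance; the point is the grouping, which is what prevents the "it just recognizes Tim" failure mode discussed in the replies below.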


They use multiple pictures from the same profile. Does the test set include any people that were in the training set?


The problem being: even if the photos are different, if the same people appear in both the training and test sets, the model may just be recognizing individuals.

Instead of learning "that person looks like a gay person", it learns "that person looks like Tim, who is gay".


Ah, I had missed that. I guess this mitigates the risk a lot, although I would still like to have seen results against a test set of images from a different context (social media, for example).


Good point. I was thinking something similar. The context matters a lot in this case. Certainly, there are signals people want to send on dating sites. Evidently the algorithm picked up those signals and then built a pattern from them (because it has more and finer capacity than mere humans).

Still quite an accomplishment (maybe). But they pretty much already led the horse to water, yes?


This is a very annoying part of non-explanatory models. I think the result defies common sense a bit, and the model can't explain why this is so.

So in the circumstance, why should we believe it's generalizable?


This also goes to the heart of the problem with deep learning on neural nets. We have this algorithm that apparently identifies homosexual and heterosexual people, presumably based on a variety of subtle features, but we have pretty much no clue as to which features and why.

The human judges may have been less accurate, but they could likely explain each decision they made and the visual features they based their decision on.


Humans are known to be unreliable in explaining how they come to conclusions as well. Humans just like to pretend they can verbalise all knowledge ;)


Even if they verbalized their knowledge incorrectly, they give you something which, if you chose, you could further test and replicate. In other words, even if they're BSing, they're still falsifiable - not so for "magic models" whose publishers may not want them falsified.


Some are better at it than others, and no doubt many are pretty bad at it, but I have yet to see a neural net explain to me, accurately or not, why it came to the decision it did.


Don't forget that we can listen to their attempt at verbalizing that knowledge and then, in turn, draw/verbalize our own sketchy conclusions....and so on.


Something to do with clouds and tanks : https://www.jefftk.com/p/detecting-tanks


Line 210: > "Gay and heterosexual people were represented in equal numbers."

According to Gallup polling, around 4% of the American population is homosexual. So, let's be generous and say that their classifier has an 80% accuracy given balanced inputs while humans have 60%.

Let's sample 100 people from the dataset, 50 of whom are homosexual (and the rest heterosexual). If the classifier has 80% accuracy (let's assume false_positive_rate = false_negative_rate, since I didn't find information about that), it means that 80 people were correctly classified, 10 heterosexuals were misclassified as homosexuals and 10 homosexuals as heterosexuals.

According to Bayes' theorem: P(homosexual | classifier says homosexual) = P(homosexual) [the prior] × P(classifier says homosexual | actually homosexual) / P(classifier says homosexual for a random person).

Substituting, we get: P = 0.04 * 0.8 / 0.5 = 0.064

If instead of the classifier we use human "feeling", we get: P=0.04 * 0.6 / 0.5 = 0.048

In summary, 4% of the people are homosexual. If a human thinks someone is homosexual, the probability "increases" to 4.8%. If on the contrary their algorithm believes it's homosexual, the probability of actually being homosexual increases to 6.4%.

Not very groundbreaking...


I don't follow your full argument, but I think I get the point: the computer is expecting 50/50 heterosexual and homosexual people, after looking at the training data. However, the human judges likely assumed that the percentage of homosexual people in the study matched that in society, roughly 4%, and would bias their guesses toward fewer people being homosexual.

Assuming that's true, no wonder the computer did better. I'd bet it would do far worse dealing with a more representative sample.

Even though it's not a perfect study, it's still impressive that it did as well as it did.


I don't know if I can follow. You say that if their classifier had 100% accuracy, then the probability of being homosexual would increase to 8%? That doesn't make much sense.


Look here [1]

Very good explanation (though the OP's is good too).

[1]: https://en.wikipedia.org/wiki/False_positive_paradox


But why does "Given a random person, probability of saying it's homosexual" equal 0.5? I'd say that should be 0.04.


This is the part of his explanation I find confusing too, though I do agree with his general argument: the results aren't really useful when they haven't been tested on a dataset representative of the real distribution.

From what I can gather he's basically saying that the "probability of a random face being classified as homosexual" is 0.5. This isn't REALLY true (would have to run the classifier on all possible faces to find this), but that is in fact the "environment" the classifier has been trained within.


If the test set of images really were 50/50 and the human judges weren't told that, then they were effectively given inaccurate priors, which would obviously reduce their accuracy.


I guess because the algorithm has been trained on 50% straight / 50% gay, and the 80% figure is actually for telling apart a pair of straight and gay persons, it will say gay half the time. But this is confusing, you are right. The explanation using false positives is clearer.

If there are 1000 people, 40 of them gay, and the algorithm is 80% accurate, it will say that 960 × 20% = 192 straight people are gay and 40 × 80% = 32 gay people are gay. Not very impressive, and it shows a very strong bias toward predicting gay, if I can say so.
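Spelled out as a quick sketch (keeping the parent's assumptions: a 4% base rate, an 80% true positive rate, and a 20% false positive rate):

```python
population = 1000
gay = 40
straight = population - gay            # 960

true_positives = gay * 0.80            # 32 gay people flagged as gay
false_positives = straight * 0.20      # 192 straight people flagged as gay

flagged = true_positives + false_positives   # 224 people flagged in total
precision = true_positives / flagged         # ~0.14

print(f"{flagged:.0f} flagged, {true_positives:.0f} actually gay "
      f"(precision ~{precision:.0%})")
```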


There is also https://en.wikipedia.org/wiki/Base_rate_fallacy, which describes the common mistakes interpreting these kinds of data.


I found that confusing, too. I guess that's the best you can expect if you train your model on such an unrealistically balanced dataset. Put differently: if I had 100% accuracy at detecting depression in a dataset of depressed people, how well would this "detector" perform at detecting depression in the real world? Well, with 'd' being the incidence of depression:

P(being depressed | trivial detector said so) = d * 1.0 / 1.0 = d


It doesn't make sense, but that's exactly right. If their training sample had been only 4% gay and achieved 100% accuracy, then the true positive rate would be expected to be 1, because the denominator changes to 0.04.

It does raise the question of how Bayes' rule applies if your sample is underrepresented, though... say their training set was only 2% gay. Could they achieve a true positive rate of 200%?


Your denominators are wrong.

The denominator should be the probability of the classifier saying positive: P(True Positive) + P(False Positive), i.e. P(says gay | gay)×P(gay) + P(says gay | straight)×P(straight).

Since you assumed the false positive rate equals the false negative rate, an 80%-accurate classifier has a 20% false positive rate (and a 60%-accurate human a 40% one).

In the algorithm case the denominator is 0.8×0.04 + 0.2×0.96 = 0.224.

In the human case the denominator is 0.6×0.04 + 0.4×0.96 = 0.408.

In summary, if a human thinks someone is homosexual, the probability increases to about 5.9%. If the algorithm makes that call, the probability increases to about 14.3%.
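The corrected update as a small sketch, keeping the thread's assumptions (4% base rate, symmetric error rates, 80% accuracy for the algorithm and 60% for the humans):

```python
def posterior(prior, tpr, fpr):
    """P(gay | classifier says gay) via Bayes' rule."""
    p_positive = tpr * prior + fpr * (1 - prior)   # P(classifier says gay)
    return tpr * prior / p_positive

prior = 0.04                                       # assumed base rate
print(posterior(prior, tpr=0.8, fpr=0.2))          # algorithm: ~0.14
print(posterior(prior, tpr=0.6, fpr=0.4))          # human:     ~0.06
```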


I think the point is that the classifier is not intended to be used to predict sexual orientation in a real-world scenario. It'd be completely useless because it will have way too many false positives. But I think that wasn't the authors' goal anyway. I think they rather wanted to show that there are facial features related to sexual orientation that are "hidden" to humans. And in my opinion the results are fine for that purpose.


>> let's assume false_positive_rate = false_negative_rate

This assumption is usually wrong when trying to detect rare events (such as CC fraud or straight/gay people).

In my own experience, if you train on a prepared 50/50 dataset to some reasonable accuracy, then on actual data where the classes are more like 96/4 you get a LOT of false positives, making the whole thing not very useful.


The homosexual percentage is very likely to be lower than 4%, but getting accurate data is not easy [0].

0. https://en.wikipedia.org/wiki/LGBT_demographics_of_the_Unite...


> very likely to be lower than 4%

Where in the article does it say that? I can see

> Studies from several nations, including the U.S., conducted at varying time periods, have produced a statistical range of 1.2[4] to 6.8[6] percent of the adult population identifying as LGBT.

but there's no indication of how likely any part of that range is.


LGBT != homosexual

Just read the whole article. The references cited are also worth reading if you are actually interested in learning about this topic.


That's why it's usually advised to look not only at simple accuracy but also at things like precision, recall, and F1 score.
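For reference, a tiny sketch of those metrics on a made-up imbalanced sample (96 negatives, 4 positives), using scikit-learn:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Made-up labels: the classifier catches 3 of the 4 positives
# but also flags 19 of the 96 negatives.
y_true = [0] * 96 + [1] * 4
y_pred = [0] * 77 + [1] * 19 + [1, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.80 -- looks respectable
print("precision:", precision_score(y_true, y_pred))  # ~0.14 -- most positives are false
print("recall   :", recall_score(y_true, y_pred))     # 0.75
print("F1       :", f1_score(y_true, y_pred))         # ~0.23
```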


I tried really hard to follow your explanation, but I can't understand why it has to be so complex.

Why are you using the accuracy as a conditional likelihood?


Is there a name for this kind of smug, couple-of-sentences, incorrect, dismissal of proper research?

It seems to have infected a lot of the sites I visit, and is always liked or voted to the top.

I guess people just love feeling superior.


Yes, it's called "peer review".


Don't you think these thoughts are justified?


It's not proper research.


Page 22 of the report has an image of average faces for men and women and gay / straight (bisexuality doesn't seem to be covered).

> The results show that the faces of gay men were more feminine and the faces of lesbians were more masculine than those of their respective heterosexual counterpart

It looks to me like you're gay if you are a man wearing glasses, and straight if you have a beard and a rounder face. It would be interesting to see what it made of the stereotypical "bear". If you're a brunette and have a slightly thinner face, then you're probably (58%) a lesbian ¯\_(ツ)_/¯.

> lesbians tended to wear baseball caps

I'm really not sure what to make of this.


Yes, that page caught my eye, too. My conclusion is that gay people are better-looking.


Well, don't forget self-selection effects in the population sample. It's highly possible those who are better looking are more confident about posting their profiles openly for analysis. The confounders for studies such as these are multiple and treacherous.


This is very likely the same as detecting women by "they have long hair" - it's not finding intrinsic characteristics, but social ones.


I'm guessing you missed the part where they specifically talk about using only facial structure (nose, jaw shape, etc) and still getting good results? They even talk about there being a long-standing theory predicting the differences they found...


If you look at the facial landmarks figure[1], it seems that the main difference between a "gay" and a "straight" man is how fat/thin their face is. Maybe gay people are on average less overweight? That's possible, but that's hardly an intrinsic characteristic. Eyebrow shape is not intrinsic either.

[1]: https://imgur.com/zs8RWIz


And in the context of privacy with regard to public surveillance systems with facial recognition capability your point is exactly what?


Social characteristics are voluntary. People who are hiding won't adopt them.


Homosexual men will have to start wearing ill-fitting suits and dirty gym shorts to avoid being caught by the algorithm.


Totally agree! Heterosexual women will have to wear makeup and grow their hair long to avoid being misidentified as lesbians. Personally I think it is wishful thinking on the part of people who think that you aren't born gay. This algorithm will out even the most repressed homosexual as it is not entirely based on 'social characteristics'. Science doesn't care what you think.


Gay men and women have blended in undetected for thousands of years all over the world. They've only been able to really come out in the last few decades. If this miserable dystopia of automatic homosexual detection came to pass, they would simply go back to blending in again. A miserable existence, but I'm pretty sure they'd be able to fool the robot overlords.


80% of the time, according to this research, the robot overlords will not be fooled. Of course, the success rate is little consolation; the fact that it will be applied at all is the problem. People who expect gay people to go back into hiding are living in the past.


I don't know about you, but when trying to figure out if someone is female by sight alone, long hair is a big clue, after clothing and before facial characteristics. Visual analysis can only reveal surface characteristics that are subject to manipulation (up to and including hormone therapy and plastic surgery), not "intrinsic", whatever that means.


What is your point, though? Who cares how it does it? The fact that it can tell gay from straight will have privacy implications for both. Not to mention that the study doesn't reveal how much of the feature set was based on grooming, so this is pure speculation. Clearly these are trigger points, which means more downvotes for me and a lack of any sensible discussion of the actual point of the study.


My point is that the poster I was responding to seemed to want something magical, something I don't think is possible.


But then again, it's trivial to fool it if the deciding factor is long hair.


Sure. And it's trivial to fool humans if they aren't able to look closely.


I'm a bit skeptical of their claim that "human judges achieved much lower accuracy". The deep net likely found some feature that correlated with sexuality, presumably after looking at many tagged images.

I wonder whether the "judges" had access to that same training set. If they were people off the street that brought in their own biases, I would suspect they would do far worse than if they were able to view the training set themselves and teach themselves what hetero and homosexual people look like. Many heterosexual people have very limited contact with homosexual people. But given a training set (or sufficient contact with homosexual people), they could learn too, and I suspect they could learn better than the machine learning algorithm.

That said, this research is impressive, as well as terrifying and depressing.


>That said, this research is impressive, as well as terrifying and depressing.

I know why you say this, but it's only terrifying because there are so many idiots around. All this study says is that there is a strong genetic/developmental component to sexuality - something we've known about for decades.

Telling people they can't be gay is like telling people they can't be tall, or black.


> All this study says is that there is a strong genetic/developmental component to sexuality - something we've known about for decades.

It doesn't say that. The effects could very well be explained by how people with different sexual orientations present themselves (sub)consciously to attract similar people.


Well, a lot of the differences are in the shapes and proportions of the face. A subset have to do with changeable things like hair.


I am less concerned that there may be visibly identifiable characteristics of homosexuality. I am more concerned that these features can be automatically discerned by computers on a massive scale. It's one thing for an individual homophobe to call out someone because they look gay. It's quite another for a government or organization to do that to millions of people in seconds, and then act upon it at scale.


Do you think the authors of the paper are the first to think of this idea of detecting personal traits with facial features? How much cost do you think it takes for implementing detection with this particular set of features?

If it can be done it will be done. In this particular case it doesn't even cost much. You should have started to be concerned when machine learning evolved into a paradigm where any kind of real-world performance requires an amount of data and computing resources available only to megacorps and governments.


> If it can be done it will be done.

Nonsense. Scientists and grad students have ethical standards and refrain from unethical research all the time. This paper was written by a business school grad student, who may be more interested in clicks than in moving the state of the art forward.


I think what I implied was that it will be done by entities without moral considerations, such as capital and governments. In this sense it's not very productive to censor individual researchers to keep them from publicizing these kinds of findings. Technology will evolve. The question to ask is who will control the technology. By censoring, you're just blinding yourself to what's coming. And my second point was that the imbalance of power started when individual control of intelligence capabilities was no longer possible, after the field moved to "data driven" and "deep learning" paradigms.


You can study the idea and science without doing the deed. The U.S. could have learned about atomic bombs without dropping one. Tech companies can learn about image recognition without categorizing everyone based on their photo.


I love the fact that the authors also see that implication and include a warning.

> Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.


It's also saying that being gay has an effect on facial shape, for which there is currently zero evidence, and phrenology is junk science.


Necessary link to an article by Blaise Aguera y Arcas, the lead of an ML team at Google: "Physiognomy’s New Clothes" [0]

He gives several examples, and explains how these type of statements are extremely misleading (and dangerous!), and not something new at all...

https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59...


This is creepy. First, there is no serious scientific proof that your facial expression can tell whether you are homosexual or not. The correlations here are total garbage. Sexuality is a really complex thing to discuss and, most importantly, a private one. Second, software detecting whether someone is gay seems to me quite similar to Jews being forced to wear the Star of David in Nazi Germany, so everyone could spot them and act against them, including people being wrongly labeled. Seriously, guys, stop this. We have to be really careful about the potential uses of our software. There can be serious impacts on society and on people's lives.


If the paper does what it says, it's predicting user-reported labels.

The problem isn't the presence of software producing likelihoods of someone being gay. The problem is people interpreting the results in a reductionist way and reducing people into labels. And another problem is you just don't like seeing what you don't like. There are companies out there profiting from the same stuff. They just don't talk about it in this way.


The ethical and moral implications are enormous. But machine learning and AI are not going away.

We must make our best effort to understand how this works, not bury our head in the sand as hostile actors use this for ill.


So what do you suggest, for example in this case?


I can easily imagine this being used by an anti-gay government to target gays. Doesn't matter to them if it is not 100% accurate.


In cultures where homosexuals are persecuted, a tool like this could be devastating.


They mention this in the article.

Even so, it is disheartening. It appears we're so fascinated by all the avenues tools like deep learning could open up that we fail to ask more basic questions about the implications of pursuing such research.


The first author of this also developed tools that allow you to analyse all sorts of personality traits (including sexual orientation) via FB likes. He was aware of, and publicised, the potential consequences and risks here, but I seem to remember thinking his reasons for publishing were justified at the time. As he was part of the psychology department during his PhD, I think his papers will have been extensively scrutinised by ethics boards and the like.

Mini disclaimer: I vaguely knew him from university and didn't get on with him. However, I don't think he was the type of person to put people at risk for personal glory.


If that were possible, humans could already detect homosexuals. It's likely in those countries that homosexuals do not present themselves the same.


Both for the true positives and the false positives.


Or anything else that someone might label deviant.


Isn't this a result you should immediately be skeptical of because the results are so significant? I mean, the idea that you can detect sexual orientation from facial images seems somewhat plausible, but no way would I immediately buy that a classifier can discriminate just from facial structure with 81% accuracy, as the title may imply. To be fair, in their abstract the authors mention grooming style and so on, but I feel that the title alone is slightly misleading.

Also: man, that preprint formatting is annoying to read.


81% accuracy in binary classification isn't something to write home about, either...


I'd bet you'd get a lot higher accuracy from scanning a facebook profile's metadata than from their image.

That said, the concern is the potential to do this at scale in a horrible dystopia. Cameras in airports and on sidewalks automatically scan people for "undesirable" features, and they end up in reeducation camps or just simply disappear.


Yeah, especially if you had the "Interested in..." field.

Just wait until somebody publishes a "terrorist facial classifier" with "99% accuracy". That's when I'll be scared.


Never mind reeducation camps what about people trying to sell you loafers...


I would really hope that they don't publish more information and keep algorithms as secret as possible. Algorithms like this can have huge implications in countries where homosexuality is banned. If this really works, people could end up in jail just because a neural network determines that they look gay. And even if it doesn't work, some governments or local communities could still use it to ban people because of the chance that they're gay.

While I'm not a fan of censoring research, I think this is a case where the research community should refrain from publishing results that could help replicating such algorithms.

EDIT: Just to be clear, I don't say that this study makes sense or works. But there are areas where large parts of the population really hate homosexuality. Just thinking this could work can make them use such a system. Especially in situations where you can afford false positives. So next time you want to travel to a country it could happen that they don't let you in because the algorithm says that you're likely gay.


Can't really put the deep learning genie back in the bottle at this point. Even if this research wasn't released, someone who wanted to do this would be able to figure it out eventually.

This is just the first step, things are about to get much worse.


It wouldn't take long for a government to replicate the study, even if it is not published, if it is sufficiently interested.


Getting a prototype working would take a week for a skilled machine learning engineer, assuming they had access to well labeled tagged data.

What will be difficult is pushing the accuracy up enough to get actionable information from it. Assuming you're an evil repressive regime, even if your system had 90% accuracy, you'd be falsely accusing a huge number of people, and I doubt even a repressive homophobic regime would implement it. Getting that 90% accuracy to 99.9% or higher would take a huge effort, and this study isn't anywhere close.

That said, the concerns are real. Automatically sorting heterosexual from homosexual people, Jews from Christians, or even black from white comes with a ton of moral issues.


Getting buy-in from an evil repressive regime will be hard if we assume they have a population distribution like anyone else which means a sizeable proportion of those in power are gay but not openly.


Not sure about that. In the US it seems that the most vocal opponents of gay rights often get caught with their pants down in some embarrassing homosexual encounter.


Quite. This technology will terrify them.


The title is surprisingly not too clickbaity, although it's not as clear-cut: in order to reduce false positives (increase precision), they need to limit the number of positives (reduce recall). On 1000 samples containing 70 gay people, they were able to get a 10% false positive rate on their positive results, which were 10 people (meaning 12% of the total gay sample). The sample is a bit biased too because they pulled it from explicitly gay-oriented public social network pages (but I don't fault them for that, it would be quite hard to find a better sample).

It is still an impressive result, and one that might be misused badly, despite the numerous warnings used in the paper.
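A minimal sketch of that trade-off, assuming hypothetical per-face scores: call positive only the top-k most confident faces, and precision rises while recall falls.

```python
import numpy as np

def precision_recall_at_k(scores, labels, k):
    """Precision and recall when only the k highest-scoring faces are called positive."""
    top_k = np.argsort(scores)[::-1][:k]
    true_positives = labels[top_k].sum()
    return true_positives / k, true_positives / labels.sum()

# Hypothetical scores for 1000 faces, 70 of them actually gay;
# positives score higher on average but the distributions overlap.
rng = np.random.default_rng(0)
labels = np.zeros(1000, dtype=int)
labels[:70] = 1
scores = rng.normal(loc=labels * 1.5, scale=1.0)

for k in (10, 100, 500):
    p, r = precision_recall_at_k(scores, labels, k)
    print(f"top {k:4d}: precision {p:.2f}, recall {r:.2f}")
```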


This is the classic "false positive paradox"[1], commonly present in medical testing. Even if the false positive rate is very low, if the prevalence of true positives is very low too, the likelihood of being the victim of a false positive can be high.

The examples on the Wikipedia page are very good. I had to explain this to a friend who had tested positive on an HIV test and was waiting for confirmation over the weekend. It's not easy to talk math when such things happen. I find it very troubling that doctors don't even mention this to patients and present tests as 95% effective. (In fact she was fine.)

[1]: https://en.wikipedia.org/wiki/False_positive_paradox


The potential for abuse is enormous, even if the results aren't 100% accurate (which they cannot ever be). And not just in the obvious ways. Will they be able to detect pedophilia with any accuracy? Good luck getting a job when employers covertly figure out your score is more than 50%. Good luck becoming a public figure when a large portion of the population take statistics at face value. Good luck having a life once the mob (aka. SJW) hears about it.


This is my big fear. If people will shoot up pizza places over false accusations of pedophilia, then given any sort of algorithm that claims to predict it (regardless of efficacy), people will be murdered.

America has a bunch of angry people with twitchy trigger-fingers just begging for targets.


'breaking news: neural network speeds up a scientist's personal interpretations a bit'


Which personal interpretations?


yep. This is phrenology. Bias laundering at best.


I was just imagining sourcing mugshot sites and then tracking those who were tried and considered guilty of whatever they got the mugshot for to train a good/bad person model.

It's fun, but I'd never dare to make such a dumb thing public.


This article [0] discusses a paper by Chinese researchers [1] that did just that. (and yes, it is quite dumb/dangerous)

[0] https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59... [1] https://arxiv.org/abs/1611.04135


Pretty sure there's conclusive evidence of an existing effect whereby how attractive you are plays into whether you're found guilty.


Total garbage. There will need to be some laws put in place soon to stop this type of thing by companies, otherwise we're going pure Gattaca. The nearest term risk for AI making your life suck is basically encoding people's biases into models that are used to affect your life. This "research" is the equivalent of making an inference like "He's poor so he must be black, etc.."


In "Serving the Reich"+, Philip Ball talks about how "pure" science lost its innocence when nukes were first deployed, and argues that researchers cannot absolve themselves by claiming that they didn't know how their research would be used. The same applies here: nothing good can come from this.

+ Hi trolls, yes, I'm aware of Godwin's Law. I also didn't call anybody a Nazi.


I can't seem to load the page, but I'd imagine this is "declared sexual orientation" rather than actual, which makes this a poor tool for discrimination


The images were from a dating site. They inferred sexual orientation from the gender of the partners that the subjects were looking for according to their profiles.


That is declared sexual orientation.


If you were to show a list of 100 heterosexual men to this neural network, it would probably classify a high percentage as gay, since it was trained on a 50/50 dataset.


THAT Michal Kosinski... I wonder where the seed for this idea started, such that it germinated so far. Are people still doing this in the second decade of the 21st century?


A bit disheartened to see that this paper came out of Stanford. Of all the great applications of ML and computer vision, they have to pick a project that attracts clicks but has far more negative value than positive. How about identifying people that have clinical depression? Or for a skin disease? Or if they like to surf or play basketball? Or if they are good chess players? Even just hot dog or not hot dog.

The study seems intentionally divisive. I get that a Stanford BSchool student would take it on to attract attention, but disappointed that Stanford would get behind it.


Just imagine: you could spot people's sexual orientation with your Google Glass, and God knows how many other traits, not only from people's faces but also from their speech patterns and eye movements. Seriously, this is the first time machine learning has started to worry me. My mistake: I never considered how vicious humans can be. If these types of studies continue, the future will be a nightmare.


First of all if you didn't read the whole paper, this picture is really interesting and a good summary: https://i.imgur.com/zs8RWIz.png

As for the debate over whether this research is ethical, consider this. If someone actually uses this to discriminate against homosexuals, they must accept that the thing actually works. Which means that homosexuality is determined by biological features beyond anyone's control, which would contradict their own ideology.

And that is the most interesting part of this work, not whether this tool is very accurate or not. This pretty solidly proves that physical features correlate well with sexual orientation, which is strong evidence for the biological theory of sexual orientation. Which has always been one of the biggest arguments for gay rights, that it's not a choice and can't be changed.

On the usefulness of this test to actually classify gay people:

They claim 91% accuracy on a balanced dataset, i.e. where the ratio of gays to straights is 50:50. To get a ratio of correct:incorrect of 91:9 on such a dataset, their test must increase or decrease the odds that a person is gay by a factor of about 10.

Now in the general population, the ratio of gays to straights is about 16 to 984 (1.6%). So if their test gives someone a positive reading, that increases the odds to 162 to 984, or 14%. So you can't use this test to accurately guess someone's sexual orientation. Simply because gay people are so rare that even a few percent of straight people misclassified will overwhelm the number of actual gay people.
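That arithmetic in the odds form of Bayes' rule, as a quick sketch (taking the 91:9 figure as a likelihood ratio of roughly 10, as the comment does):

```python
prior_odds = 16 / 984             # ~1.6% of the population
likelihood_ratio = 91 / 9         # roughly 10, from the claimed 91:9 performance

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"P(gay | positive reading) ~ {posterior_prob:.0%}")   # ~14%
```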

But still that's a lot more accurate than human guessing or the base rate, and it's scientifically interesting that this is even possible. It's a proof of concept that higher accuracy may be possible with better methods and more data.

Another article claims this:

>when asked to pick out the ten faces it was most confident about, nine of the chosen were in fact gay. If the goal is to pick a small number of people who are very likely to be gay out of a large group, the system appears able to do so.

The test gives varying degrees of confidence, it gives much higher confidence to some people than others. There are some individuals that it can tell are definitely gay or straight. But for most it is more uncertain.

Also note that the estimates for the percentage of gay people vary a lot, which could make the figure above as high as 42%. Also, some people believe sexuality is more of a spectrum than a binary straight/gay. If so, the straight people it misclassifies might lean more toward the gay/bisexual end of the spectrum than normal, and the errors wouldn't seem so unreasonable.

Lastly all these "phrenology" references are silly. If you have methodological problem with this research I'd love to hear it. But I see people discarding the research simply because they don't like the conclusions. For this study and other facial correlations based research.

This isn't new at all, there's tons of scientific research about digit ratios and all kinds of correlations they have with different things (https://en.wikipedia.org/wiki/Digit_ratio). Why wouldn't we expect even better correlations from all facial features?


Code is opinion and the idea that you can predict or identify things of this nature is just human prejudice masquerading as something more.

Some human programmers got together and thought they could identify sexual orientation from faces (a silly bar-room-level idea in itself, with no basis in reality) and trained a neural net to express this prejudice. This is the same thing as a seer claiming to see the future.


With Facebook profile photos you could detect sexual orientation with 81% probability for men and 74% for women... Hmmm... And with 91% / 83% probability when using 5 photos per person in the learning process...


Well, I am not sure this is the case. The metric seems to be the ability to tell apart pairs of straight and gay people.

>Among men, the classification accuracy equaled AUC = .81 when provided with one image per person. This means that in 81% of randomly selected pairs—composed of one gay and one heterosexual man—gay men were correctly ranked as more likely to be gay.

Pretty different from predicting one's sexual orientation.
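A small sketch of what that pairwise interpretation means: AUC is the fraction of (gay, straight) pairs in which the gay person gets the higher score, which is not the same thing as classification accuracy at a realistic base rate. The scores below are made up.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def pairwise_auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs where the positive is ranked higher."""
    wins = sum(p > n for p in pos_scores for n in neg_scores)
    ties = sum(p == n for p in pos_scores for n in neg_scores)
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Made-up scores on a 0-1 scale.
gay_scores = np.array([0.9, 0.7, 0.6, 0.55])
straight_scores = np.array([0.8, 0.5, 0.3, 0.2])

print(pairwise_auc(gay_scores, straight_scores))   # 0.8125
print(roc_auc_score([1] * 4 + [0] * 4,
                    np.concatenate([gay_scores, straight_scores])))  # same value
```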


Serious question by someone who's not into this kind of stats too much:

What would happen if you took a big set of Facebook profiles and train some (the same if you wanna) CNN to classify picture->f for each f in profile features. Sure, for some features, your model should be able to deliver decent precision.

Does this mean that you quickly found out which features can be predicted from pictures and how well your CNN performs on them? Or is it possible that you just train models from picture -> X, where X is basically meaningless but significantly correlated with some feature, because of the effect portrayed in xkcd's "Significant" (Scientists investigate!) [1]?

[1]: https://xkcd.com/882/


There is a tendency for machine learning (including neural networks) to over-fit data - i.e. the algorithm learns to recognise the particular data, rather than the real distinguishing predictors of the groups. As you say, these can be features that are by chance associated with what you are trying to discriminate.

This is why the model is validated on a separate testing group from the training group which created it. There are lots of ways to do this, and the more sophisticated continually iterate training and testing to improve the model.
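A minimal sketch of why the held-out evaluation matters, using pure-noise features so that any apparent signal is overfitting (the data here is entirely made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Pure-noise features and random labels: there is nothing real to learn here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
print("training accuracy     :", model.score(X, y))                          # ~1.0 (memorized)
print("cross-validated score :", cross_val_score(model, X, y, cv=5).mean())  # ~0.5 (chance)
```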


The people working on this need to stop, sit down, and have a long, hard think about what they're doing, and about how ultimately this will be used to persecute and kill gay men.


Actually, their work seems to be focused on dispelling the common disbelief in these kind of applications for technology.

From the abstract: "Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women."

Acknowledging that data analysis can create obstacles to freedom or serious social problems is a necessary step to preventing or addressing these issues, but public opinion is far from being there yet...


I doubt it will get so extreme as to kill gay men. However, they do need to sit down, and have a long hard think about why they are in school at Stanford, and to what end they want to direct their studies. If it's to create click-bait research papers with little to no value, they should question why they are in school in the first place.

Articles like this give the impression that the authors are internet hustlers, not proper grad students.


gAIdar?


Is this serious?


Research around this has existed for years; it has been shown that gay people can more easily detect the sexual orientation of others. One explanation could be that, since homosexuality was/is banned in many cultures, this ability helped with survival.

A reason why we didn't read much about it is that most researchers know that you shouldn't publish about this. Otherwise you can quickly end up with blood on your hands.


AI DL ... etc. are just tools ... if one is not careful, one would run the risk of GIGO == garbage in, garbage out ...


But sexual orientation is fluid according to the fringe psycho-babble pseudo science crowd.


Check all the people worried that 'gay' may not be a 'lifestyle choice'.



