Hacker News
[flagged] Twitter appears to prefer white people for thumbnails (unlikekinds.com)
15 points by whalabi on Sept 23, 2020 | 14 comments



> Contrast in the area in question has an effect, as might the presence of text, or any blurriness present.

This is the argument made on Twitter as well: that it's not intentionally trained to prefer white people due to the training data, but instead, the technique used results in a bias.

Hypothetically, say Twitter did multivariate testing of thumbnails to train the model based on engagement, and this resulted in preferring white faces. That's still a racially biased algorithm.

Whether or not your developers specifically set out to create a racially biased algorithm does not change that it is measurably a racially biased algorithm. Twitter's response that there is more work to be done is correct and a relief.


With or without the bias, this is a classic misuse of Machine Learning. I don't know a single person who likes that Twitter chooses how to crop and format images. Everyone just wants to choose the visible area themselves, and it's not hard to create a simple UX that would enable this.


> I don't know a single person who likes that Twitter chooses how to crop and format images.

I have never cared nor noticed. There are other things to do with life than fiddle with Twitter image upload thumbnails.

> Everyone just wants to choose the visible area themselves, and it's not hard to create a simple UX that would enable this.

mind citing?


> it's not hard to create a simple UX that would enable this.

This is a gross understatement. The number of different clients that Twitter supports (APIs, third-party clients/vendors, mobile, etc.) means that it isn't a simple problem. Not only that, but they have to support different screen sizes and likely have to crop the image differently depending on the device you're viewing the content on. I can without much mental effort understand how this feature comes to be: people spend more time looking at pictures that are interesting, and in order to support all of the different screen sizes, they decided it was easier for their users to attempt to automatically frame the image.


Twitter could let you choose a focal point, and then have an algorithm choose the cropping in order to best display the focal point.
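The focal-point idea above is straightforward to sketch: given a user-chosen point and a target crop size, center the crop on that point and clamp it to the image bounds. This is a minimal illustration of the approach, not Twitter's actual implementation; the function name and parameters are made up for the example.

```python
def crop_around_focal_point(img_w, img_h, fx, fy, crop_w, crop_h):
    """Return a (left, top, right, bottom) crop box of size crop_w x crop_h,
    centered as closely as possible on the focal point (fx, fy) while
    staying fully inside an img_w x img_h image."""
    # Center the crop on the focal point.
    left = fx - crop_w // 2
    top = fy - crop_h // 2
    # Clamp so the crop never extends past the image edges.
    left = max(0, min(left, img_w - crop_w))
    top = max(0, min(top, img_h - crop_h))
    return (left, top, left + crop_w, top + crop_h)

# Focal point near the top-right corner of a 1000x800 image,
# cropping to 400x300: the box is pushed flush against the edges.
print(crop_around_focal_point(1000, 800, 900, 100, 400, 300))
```

The server would still pick the crop dimensions per device; only the focal point comes from the user, so the UX stays a single tap on the image.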


Sure, but as I mentioned in my original comment, you're assuming that tweets are only created in a Twitter-owned property, and this just isn't the case at all. I have no insider knowledge here, but my hunch is that when they were talking about this problem, the calculus was "do we just make photos posted from the Twitter web or mobile client look good, or can we use some ML to attempt to make photos from anywhere look good?" I am not at all advocating whether they are doing a good job at that or not, but reducing the problem to "a simple UX to select a focal point" is ignoring the massive surface of the problem.


I'd argue that "maybe we can use some ML to attempt to make photos from anywhere look good" is also ignoring the massive surface of the problem, with predictably bad results.


They are both large problems, which is my point, but given the choice between two hard problems, I can understand why they chose the one that has the potential to cover the entire surface instead of just part of it.


So what you're saying is, it's easier to create a magical AI that can decide with 100% confidence the best visual area of the photo than to build a UI component?


I'm guessing the tweets referenced in that article were based on the original[1] (that I saw). The original tweet series did a bunch of testing with different variations of the same image to attempt to get to the bottom of why the algorithm is doing what it's doing. There was also an official response from Twitter[2].

[1]: https://twitter.com/bascule/status/1307440596668182528
[2]: https://twitter.com/twittercomms/status/1307739940424359936?...


Hey thanks, I didn't see Twitter's response, I'll update the article.


The cropping issue is an unsolvable one for AI. It doesn’t matter how unbiased you want the model to be: if an algorithm has to choose between 2+ faces with no “correct” answer, it can’t win.

You might just think: why doesn’t it just identify the skin color and crop randomly based on that? Now what if there are men of two races and one woman in the image? Or vice versa? You quickly get into if/else hell.

The algorithm can be tuned to no end, but in the end, it’s dealing with an infinite number of possibilities (not just faces), and it’s a problem that proves Twitter just needs to let users adjust the crop or focus themselves.


I am a (white) photographer for an organization where I am photographing a bunch of Black people, and nightly one or more of those photographs wind up on Twitter. This would explain some issues with terrible automatic thumbnail choices we've had in the past.


The Twitter tech lead is Asian; Asians are over-represented (27% vs. 5.4% in the US) and whites under-represented (42% vs. 61% in the US) at Twitter, yet they didn't even bother checking for Asian-Black or Asian-White bias, only White-Black?




