Hacker News

Maybe they can't make editorial recommendations for the long tail, but they absolutely could do so for the top few thousand videos each week.

Would that yield an improvement? I don't know, but it would have an impact.




I'm kind of wondering if a "Ned Flanders" user-detector is possible.

Search for users who stop videos at "offensive" moments, then evaluate their habits. It wouldn't be foolproof, but the "Flanders rating" of a video might be a starting metric.

Before putting something on YouTube for kids, run it by Flanders users first. If Flanders users en masse watch it the whole way through, it's probably safe. If they stop it at random points, it may be safe (this is where manual filtering might be desirable, even if it is just to evaluate Flanders Users rather than the video). But if they stop videos at about the same time, that should be treated as a red flag.
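The heuristic above could be sketched as a simple classifier over stop timestamps. Everything here is an illustrative assumption, not anything YouTube actually does: the function name, the thresholds, and the use of a standard deviation as the clustering test are all made up for the sketch.

```python
from statistics import pstdev

def flanders_rating(stop_times, video_length, finish_threshold=0.8,
                    cluster_fraction=0.05):
    """Classify a video from where trusted 'Flanders' test viewers stopped it.

    stop_times: seconds at which each test viewer stopped watching
                (a value >= video_length means they finished).
    Thresholds are arbitrary placeholders for illustration.
    """
    finished = sum(1 for t in stop_times if t >= video_length)
    if finished / len(stop_times) >= finish_threshold:
        return "safe"  # most test viewers watched it the whole way through

    early_stops = [t for t in stop_times if t < video_length]
    # Stops clustered at roughly the same moment are the red flag;
    # widely scattered stops are ambiguous and call for another metric.
    if pstdev(early_stops) <= cluster_fraction * video_length:
        return "red_flag"
    return "unclear"
```

For a 600-second video, nine finishers and one dropout would classify as "safe", while five viewers all stopping around the 300-second mark would classify as "red_flag".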

Of course, people have contextual viewing habits that aren't captured (I hope). Most relevantly, they probably watch different things depending on who is in the room. This is likely the highest vector for false positives.

The big negative is showing people content they obviously don't want for the sake of collecting imperfect data.


Should we filter all the pro-choice videos or the pro-life videos?

Should we filter all the Santa-is-fake videos or the Santa-is-real videos?

Do you agree with Flanders?


Maybe Youtube and their revenue sources agree with him.


Why would Youtube for Kids show anything on the topic of abortion?


Last year we had a very big mobilization here in Argentina because there was a vote in Congress to legalize abortion. The turnout was huge, and the discussion split all the political parties.

The big square in front of the Congress was split down the middle: the pro-choice "green" group was on one side and the pro-life "sky-blue" group was on the other. Each group had a strong opinion, but the mobilization was quite civilized; I don't remember anyone getting hurt. Anyway, there were small kids on both sides wearing handkerchiefs of the respective color.

Also, what is your definition of kid: 6? 12? 17?

Just imagine that the Church released a video on YouTube where Santa visits a lot of children to give them presents, including an unborn child during the eighth month of pregnancy, and gave Santa a "sky-blue" handkerchief in case someone didn't notice the hidden message. Do you think it should be censored for kids?


YouTube for Kids is a separate application that is specifically geared towards <10 years of age. It’s not ambiguous.


Kid: I'd say the lower bound is around 5, and the upper bound is variable depending on the individual...

In this case, I'd suggest the upper bound doesn't matter, as the criteria for filtering should be "a semi-unattended 5 year old could view it without concern."

All your examples are of topics where it's probably best for parents to initiate their child's education on the topic rather than Youtube randomly putting it in some kid's feed.


So a 4 year old kid is not a kid?


They're toddlers or babies if we're arguing semantics.

Kids < 4 really shouldn't have access to YouTube though.


The question I have is: how can they tell "Flanders" viewers from "bored" ones or "out of time" ones, short of viewers flagging the content themselves, without a lot of manual review and guesswork?

Reviewing viewers on that level sounds even more intensive than filtering every channel and video.


In the system I've proposed, if enough test Flanders are thrown at the content, the stop times should differ enough to trigger an unclear Flanders rating. That would indicate some other metric should be used.

I don't see this test working in isolation. Given its nature, its value is in obscure rejection signals rather than acceptance (or "okilly-dokillies" in this case).

To echo what others on this thread have said, there's a lot of content on Youtube. This means that even if they are cautious about which content passes through the filter for kids, there's still a lot available.


The problem is that just a few examples of the algorithm getting it wrong is enough to cause an adpocalypse. If millions of videos are uploaded every month then you can imagine how low the error rate has to be.


If Google takes the impractical route and hires a sufficient number of multilingual Ned Flanders, then they're still probably going to have a non-zero false positive rate (humans make mistakes too).

Whatever they do is going to have to be evaluated in terms of best effort / sincerity.

Semi-related: The fun of Youtube is when the recommendation algo gets it right and shows you something great you wouldn't have searched for. The value is that it can detect elements that would be near impossible for a human to specify. But that means it has to take risks.



