Stop KOSA (Kids Online Safety Act) (stopkosa.com)
103 points by surteen 7 months ago | hide | past | favorite | 50 comments



It's becoming so hard to keep up the fight against new bills like this, year after year. I suspect that's part of the point: wear people down until they eventually run out of the energy to care.

I had not previously heard about this one, but they all sort of run together. Some vague stuff about protecting children which would undoubtedly result in 1) not doing much at all to protect children and 2) actively making the internet worse for everyone.

The fact that this is a bipartisan-introduced bill is even worse news.


Reminds me of how my local school board kept holding a referendum at least once a quarter to raise school taxes. The referendum kept failing, but they kept it up every few months until people stopped showing up and it passed.


Has anyone found a good explainer about the politics of (and money behind) this and similar legislation? This morning, my city's Axios newsletter contained Instagram sponsorship on the matter, which included this language:

"More than 75% of parents agree: Teens under 16 shouldn’t be able to download apps from app stores without parental permission. Instagram wants to work with Congress to pass federal legislation that gets it done."

Given who is paying for this, my assumption is that this is protectionism dressed up as child safety, though the language is so vague I have no idea what legislation is being referenced.


Instagram wants to push the onus of protection onto parents/platform owners. Rather than safeguarding their own platform against abusive content, this will allow them to blame parents: "Hey, you installed the app, so it's your fault."


Additionally, Instagram/Facebook will be (and probably already has been) lobbying and spending heavily so that the terms of the law are as favorable to them as possible, while also making it considerably expensive for new entrants to comply.


I suppose they'll be trying to tie in the whole Taylor Swift deepfake thing (as if the whole concept hadn't been around for over two decades now).

Regardless, if both the ACLU and the EFF are against a bill, my immediate reaction is that it is probably a bad bill.


Unfortunately, in the long term, special interests are always going to win, because they can just try over and over until it passes. This is always what happens with these bills: Opponents have to fight it and win every time. Proponents only have to win once. So, chances are, the proponents are going to get their way eventually.


This doesn't sound logical to me. I mean, suppose proponents win once. Stupid protections get enacted. But... can't opponents also get bills proposed, and passed?


I think it's logical because the power to get bills proposed is asymmetric. Take my original comment and replace "proponents" -> "politicians" and "opponents" -> "regular people." Regular people do not have the ability to get bills proposed. So, while the proponents (politicians) can propose the bill over and over until it passes, the opponents (regular people) cannot do the same. They can only constantly threaten to vote out particular politicians.


It is significantly more difficult to remove laws than add them.



Thx. These discussions are so often based on assumed intents and interpretations (for better or worse) rather than the actual texts. Not saying one should be ignorant of context, but the ground text matters.


These types of requirements would make it damn near impossible for any start-up or non-mega-co to comply, due to lack of capital/resources.

I'm actually surprised more big tech firms aren't in favor of these types of bills. Sure, enforcement will cost a lot, but it would also stifle any chance at future competition from smaller orgs.


The shameful thing about this act is that it has a few good ideas, they're just riding along with a bunch of truly abhorrent ones. If they just axed the entire censorship angle and age verification requirement, and kept the requirement of opt-in (via self-declaration of the user as a minor/guardian account pair) tools for parents and extended the tools available to minors to everybody, I wouldn't even be opposed to the bill.

Summary:

- Establishes a duty of care for covered platforms: basically any internet-connected platform (social media, video games, etc.) that isn't a common carrier, an email service, a private messaging service disconnected from any broader platform, a VOIP service, a VPN, or education-related.

- Covered platforms must take reasonable measures to mitigate harms to minors, with a list of specific harms being things like mental illness, addiction, physical violence, bullying, narcotics, and predatory or deceptive marketing tactics. Platforms do not need to hide such content if the minor is specifically searching for/requesting such content, or if the content is providing resources for the prevention of said harms.

- Covered platforms must provide certain safeguards to minors on their platform, to be used at the minor's (or guardian's) discretion. These include blocking, basic privacy settings, opting out of personalized recommendation systems (while still allowing content to be displayed chronologically) or limiting the types or categories of recommendations from such systems, hiding geolocation, and deleting the account and all associated data.

- The default settings for minors must be the most protective.

- Covered platforms need to provide parental tools, including the ability to change privacy settings, restrict purchases, view usage metrics, and restrict the amount of time the minor spends on the platform.

- Covered platforms need a system to receive reports about harms to minors on their platform, and must "substantively respond" to such reports within either 10 or 21 days, depending on whether the service has more or fewer than 10,000,000 monthly active users.

-Covered platforms are liable for advertising illegal stuff to minors (including stuff that's only illegal for minors to purchase, like alcohol and tobacco).


> Nothing in this Act shall be construed to require ... a covered platform to implement an age gating or age verification functionality.

Does it even have an age verification requirement? The only part I could find was a requirement that NIST study the creation, implementation, effectiveness, and impact of age verification systems.

I think that part should be removed but it also made me feel like the stopkosa site might be fearmongering.


It does not create an age verification requirement. A platform only has obligations to provide safeguards when it has knowledge that a user is 16 and under. It also provides statutory direction that it does not require further steps to infer or collect age.

(I'm staff that cowrote the bill.)


Without age gating, a company would have to apply censorship site wide, if there is reasonable belief that the site is used by minors. That's also not great.


In the text it says "covered platform knows is used by minors", "covered platform knows is a minor", and "platform knows is a child".

> The term “know” or “knows” means to have actual knowledge or knowledge fairly implied on the basis of objective circumstances.

Is this part the reasonable belief? Or is it that the ambiguity will cause companies to age gate to try to protect themselves?

Though I don't see how an age gate would even protect companies. Say a minor lies on the age gate, illegitimately uses the site, and their parent calls in angry.

The company now knows their service is used (illegitimately) by minors, do they then have to implement protections?


It seems that the US Government introduces a new bill of this form every few years; SOPA & PIPA, COPPA, CISPA, etc. Will governments (particularly the US government) simply keep attempting this until they get one through?


Presumably


At this point, if there's any bill around children's online safety I'm immediately against it lol.

I don't even have to read it to know it'll contain some insane laws around encryption/privacy.


Online or not, my default position is that if the rationale is about protecting kids, I'm against it. I need very strong evidence to change that default. These laws almost never actually protect kids and almost always make life worse in innumerable ways.


It's sad for any group that is actually working toward the well-being and safety of children facing online dangers. If the term gets co-opted so much, it stops being usable for its real meaning. These bills actually harm children.


Entire generations have grown up with the unfiltered internet at this point. What good can possibly come from introducing them now?


Entire generations grew up with polio. There are many good arguments against KOSA; this isn’t one of them.


It's fairly easy to prove the harm from polio.

It's not so easy to prove harm from internet access.

There are studies that suggest social media can harm adolescent mental health[1] but there are also studies that suggest the magnitude is small enough that we don't really know.[2]

It needs more study (and it sounds like what we really need is more transparency from big tech so that it can be better studied).

For an actual argument against KOSA I would say it doesn't at first glance look like it will target issues that are likely more important: peer pressure, bullying, suicide promotion/romanticization, etc on social media.

Edit: Having spent more time reading the KOSA bill (thankfully linked by another commenter) it does actually appear to try to target the biggest known issues to help minors. Some of the bill seems reasonable.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10476631/

[2] https://journals.sagepub.com/doi/10.1177/21677026231207791#:...


> It's not so easy to prove harm from internet access.

In fact it's quite easy to show that illegal resources (for example Z-Library) massively improve at least people's chances to learn, and improve lives.


Except it isn't really an argument so much as a question. In your example, the good created by eliminating polio is blatantly obvious to anyone who knows what polio is. For the analogy to hold, the answer to the question posed must be equally obvious. It doesn't seem at all obvious to me, so it fails as a rebuttal.


It's absolutely an argument that filters are unnecessary. It's not a good one. It combines survivor bias with the false implication that today's internet is the same one we had in the 90s; it is not.

I'm opposed to KOSA - I think vague appeals to "preventing harm" are going to be weaponized against queer kids in red states and cause real harm... but again, I think there are far better arguments against it than the "we came out fine" one anti-vaxers and corporal punishment advocates trot out.


I don't know about good, but if I put on my "evil politician" hat, I can think of political advantages to doing so.


> Entire generations have grown up with the unfiltered internet at this point

To be fair, if you look at statistics around gen z you can see that there definitely are negative consequences of internet access (along with amazing benefits of course).

I don't support this measure or measures like it, but some sort of action should probably be taken, considering the skyrocketing rates of depression.


> To be fair, if you look at statistics around gen z you can see that there definitely are negative consequences of internet access

I haven't seen any such data. It's hard to separate generation effects and look purely at the effects of internet access in any sort of valid way.

You're probably thinking of studies around social media use. It isn't clear to me that internet filtering will combat any of what makes social media use such a mental health risk for teens.


> You're probably thinking of studies around social media use

Yes? Social media use is done through the internet? The vast majority of internet use for young teens is social media use.

I'm not sure what you mean by drawing the distinction between internet use and social media use.

> It isn't clear to me that internet filtering will combat any of what makes social media use such a mental health risk for teens.

I'm not suggesting that the government should be taking action through legal means.

I'm just saying some type of action should be taken.

I'm a really big fan of what Apple did with Screen Time & wish the implementation was better (it's super buggy on my device at least).

Apps that give people better control over their own internet use are the way, IMO.


> I'm not sure what you mean by drawing the distinction between internet use and social media use.

They're two different terms that mean different things. Just because something is true for squares doesn't mean it's true for rectangles, even though squares are rectangles. Using the correct words matters, because otherwise you are spreading misinformation and confusion.


The internet of today differs quite a bit from the internet of the past.


control


Pretty silly to do this during an election year. It’s super easy to track who votes for this and put them on my Do-not-vote document I’ve got running. I might have forgotten if they’d done it in an off-year.


I'll be honest. If I used this methodology in our elections, I'm 100% convinced I would have 0 eligible candidates left long before the vote.


Remember that this bill will block anything that goes against the government by labelling them harmful to children.


China for minors.


The correct way to protect children is to protect everyone. The correct way to protect everyone is to make advertising illegal, with penalties significant enough that it isn't profitable anymore.

If we, as a society, were to accept the reality that paying people for their endorsement is only ever used to achieve the effect of fraud without committing it, we could just eliminate paid endorsement.

Extraordinary technological progress has been made in laundering fraud and disinformation, but that technology costs money and therefore requires customers to operate. Eliminate the customers and the sickness they create will go away.


[flagged]


Can the same point be made without casual homophobic references to anal rape?


Can it? Yes.

But I'm quoting a well known article that may not have aged gracefully when it comes to epithets


Here’s an unpopular opinion among the typical Hacker News crowd: Section 230 (inadvertently?) artificially created a market wherein a platform provider has the exclusive right to earn revenue on the content you post, and yet has zero culpability over that content. This allowed companies like Facebook to pop up and literally print billions and billions of dollars, scaling exponentially, with nearly zero overhead, wreaking havoc on society. Legal culpability for what you host is a very potent way of preventing enormous scale, as content moderation is costly to scale, and provided direct incentive to maintain clean platforms. Section 230 created a system in which Facebook et al. get all the upside, including the privilege of building a platform without having to charge their users, with zero downside.

Maybe this isn’t the internet we deserve?


So many strawman attacks, jumped-to conclusions, and divisive terms that it's hard to take anything on that website seriously.

> It’s no surprise that anti-rights zealots are excited about KOSA: it would let them shut down websites that cover topics like race, gender, and sexuality.

> Second, KOSA would ramp up the online surveillance of all internet users by expanding the use of age verification and parental monitoring tools. Not only are these tools needlessly invasive, they’re a massive safety risk for young people who could be trying to escape domestic violence and abuse.


Could you identify the strawman attacks you see? I'm trying to interpret your comment charitably but I'm having trouble identifying what you object to specifically.

I don't think it's much of a stretch to assume these kind of laws will be used to censor topics like "race, gender, and sexuality". There is a ton of precedent in recent years of e.g. books on these topics being banned from public libraries (not just school libraries) under the guise of protecting children. It seems like a reasonable inference that the same topics would be targeted in online content if the laws allowed it.


I thought those websites were already shut down after net neutrality ended.


Net Neutrality made it possible to do that, but the motivation was more about advantaging certain businesses to entrench monopolies and scare away newcomers.

Abolishing it was plenty bad, and you look silly making up straw men to whitewash it.


The website was created by fightforthefuture.org, an astroturf advocacy organization. Their tax filings show their backers are largely venture capital firms and tech companies.

It's no surprise that big tech companies don't want increased liability.

See https://thetrichordist.com/2016/05/12/list-of-corporations-a....


Judging by the quality of the content of the page, this KOSA bill is at least not a bad thing.



