I might be the only one here in favor of this, and wanting to see a federal rollout.
It is not reasonable to expect parents to spontaneously agree on a strategy for keeping kids off social media, and that kind of coordination is what it would take, because the kids and the social media companies have more than enough time to coordinate workarounds. Have the law put the social media companies on the parents' side, or these kids may never get the chance to develop into healthy adults themselves.
But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet (and probably unconstitutional under the First Amendment, to boot.)
It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff, but that any of these laws will either be (a) unenforceable and useless, or (b) draconian and privacy-destroying.
The government doesn't get to know or regulate the websites I'm visiting, nor should it. And "protecting the children" isn't a valid reason to remove constitutional rights from adults.
(And if it is, let's start talking about gun ownership first...)
> But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet
That seems intuitive, but it's not actually true. I suggest looking up zero-knowledge proofs.
Using modern cryptography, it is easy to send a machine-generated proof to your social media provider that your government-provided ID says your age is ≥ 16, without revealing anything else about you to the service provider (not even your age), and without having to communicate with the government either.
The government doesn't learn which web sites you visit, and the web sites don't learn anything about you other than you are certified to be age ≥ 16. The proofs are unique to each site, so web sites can't use them to collude with each other.
That kind of "smart ID" doesn't have to be with the government, although that's often a natural starting point for ID information. There are methods which do the same based on a consensus of people and entities that know you, for example. That might be better from a human rights perspective, given how many people do not have citizenship rights.
> (and probably unconstitutional under the First Amendment, to boot.)
If it would be unconstitutional to require identity-revealing or age-revealing ID checks for social media, that's all the more reason to investigate modern technical solutions we have to those problems.
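One concrete building block with similar properties can be demoed in a few lines: a blind signature, the mechanism behind Privacy Pass-style tokens, rather than a full ZKP. This is textbook RSA with toy parameters and no padding, utterly insecure, purely to show why the issuer (the government) cannot link the token it signed to the token a site later sees:

    # Toy textbook-RSA blind signature: the issuer signs a token WITHOUT
    # seeing it, so it cannot later link the token to the user; a site can
    # still verify the issuer's signature. Tiny key, no padding: for
    # illustration only.
    p, q = 61, 53                       # toy primes
    n, e, d = p * q, 17, 2753           # classic textbook RSA key (d*e = 1 mod phi)

    token = 1234                        # user's secret per-site token, a number < n

    # User blinds the token with a random factor r before sending it off.
    r = 71                              # random, with gcd(r, n) == 1
    blinded = (token * pow(r, e, n)) % n

    # Issuer signs the blinded value after checking the user's age (elided).
    blind_sig = pow(blinded, d, n)      # issuer never sees `token`

    # User unblinds: this yields a valid signature on the original token.
    sig = (blind_sig * pow(r, -1, n)) % n

    # Site verifies the issuer's signature without learning who the user is.
    assert pow(sig, e, n) == token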
Assuming you can make active queries to the verifier, you could do something like
- Have your backend generate a temporary AES key, and create a request to the verifier saying "please encrypt a response using AES key A indicating that the user coming from ip X.Y.Z.W is over 16". Encrypt it with a known public key for the verifier. Save the temporary AES key to the user's session store.
- Hand that request to the user, who hands it to the verifier. The verifier authenticates the user and gives them the encrypted okay response.
- User gives the response back to your backend.
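A sketch of that flow in Python using the `cryptography` package; the key sizes, message format, and claim text are all illustrative, and the verifier's authentication of the user is elided:

    import json
    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Verifier's long-lived keypair; every service knows the public half.
    verifier_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # --- Service backend: make a temporary AES key, save it in the user's
    # session store, and build a request only the verifier can open.
    aes_key = AESGCM.generate_key(bit_length=128)
    request = json.dumps({"key": aes_key.hex(),
                          "claim": "user at ip X.Y.Z.W is over 16"}).encode()
    blob = verifier_key.public_key().encrypt(request, OAEP)
    # `blob` is handed to the user, who carries it to the verifier.

    # --- Verifier: decrypt, authenticate the user (elided), then answer
    # under the embedded AES key so only the requesting backend can read it.
    req = json.loads(verifier_key.decrypt(blob, OAEP))
    nonce = os.urandom(12)
    answer = nonce + AESGCM(bytes.fromhex(req["key"])).encrypt(nonce, b"over-16: yes", None)
    # `answer` goes back via the user to the service.

    # --- Service backend: decrypt with the session's temporary key.
    assert AESGCM(aes_key).decrypt(answer[:12], answer[12:], None) == b"over-16: yes"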
Potentially the user could still get someone to auth for them, but it'd at least have to be coming from the same IP address that the user tried to use to log into the service. The verifier could become suspicious if it sees lots of requests for the same user coming from different IP addresses, and the service would become suspicious if it saw lots of users verifying from the same IP address, so reselling wouldn't work. You could still find an over-16 friend and have them authenticate you without raising suspicions though, much like you can find an over-21 friend to buy you beer and cigarettes.
Since you use a different key with each user request, the verifier can't identify the requesting service. Both the service and the verifier know the user's IP, so that's not sensitive. If you used this scheme for over-16 vs. over-18 vs. over-21 services, the verifier does learn what level of service you are trying to access (i.e. are you buying alcohol, looking at porn, or signing up for social media). Harmonizing all age-restricted vices to a single age of majority can mitigate that. Or, you could choose to reveal the age bucket to the service instead of the verifier by having the verifier always send back the maximum bucket you qualify for instead of the service asking whether the user is in a specific bucket.
If you can make active queries to the verifier, so can any adversarial party. These kinds of ZK-with-oracle schemes need to be very carefully gamed to ensure they're truly ZK, and not just "you learn nothing if you only query once."
> and the service would become suspicious if it saw lots of users verifying from the same IP address
This implodes under CGNAT, cafe internet, hotel internet, etc.
You can make active queries, with the user's involvement. The verifier can potentially have a prompt with e.g. "The site you were just on would like to know that you are over 21. Would you like to share that with them?"
We do need to get people onto ipv6 so CGNAT can die. Restricted services could potentially disallow signups or require more knowledge (e.g. full ID) if coming from shared IPs as a risk mitigation strategy, depending on how liable we want to hold them to properly validate age. If you've already signed up for facebook at home, obviously you don't need to validate your age again at the cafe.
Fake IDs exist in the real world. The system doesn't have to be perfect, and we can say that there's some standard of reasonable verification that they should do for these sorts of cases.
Personally I'm more in favor of an approach where sites label their content in a way where parents can configure filters (ideally using labels that are descriptive enough that we don't get into fights over what's "adult", and instead leave that decision to individual families), but if we're going to go an ID-based route, there are at least more private ways we could do it, and I think technologists should be discussing that, and perhaps someone at one of these big companies can propose it.
That's the protocol for the computer, similar to OAuth. From the user's perspective, your 94-year-old neighbor would have an account with id.gov that they've somehow previously established (potentially the DMV or the post office does this for them), and the user flow works much like "Sign in with Google" buttons do today.
Addendum: you can actually preserve the privacy of which bucket the user is in to all parties if this is sufficiently standardized that it goes through a browser API.
- Have the service generate the request as above, but now the request is "Please encrypt a response with key A for the user coming from ip X.Y.Z.W".
- Service calls a standard browser API with the request, telling the browser it would like to know the user is in the over 16 bucket. Browser prompts the user to verify that they want to let the service know they are over 16. Browser sends the request to the verifier.
- Verifier responds with a token for each bucket the user is in. So a 22 year old gets an over-16 token, an over-18 token, and an over-21 token.
- Browser selects the appropriate response token and gives it back to the service.
So the service only ever learns you are over the age limit they care about, and the verifier only ever learns that you asked for some token, but not which one.
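A toy of that selection logic (the tokens here are plain strings; in practice they would be the encrypted responses from the protocol sketched earlier):

    BUCKETS = (16, 18, 21)

    def verifier_issue_tokens(age):
        """Verifier: hand back one token per bucket the user qualifies for,
        without learning which bucket the service actually asked about."""
        return {b: f"token-over-{b}" for b in BUCKETS if age >= b}

    def browser_select(tokens, requested):
        """Browser: forward only the token for the bucket the service asked for."""
        return tokens.get(requested)

    # A 22-year-old receives all three tokens; a service asking "over 18?"
    # only ever sees the over-18 one.
    assert browser_select(verifier_issue_tokens(22), 18) == "token-over-18"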
It would be neat if some authority like the passport office or social security office also provided a virtual ID that includes the features OP described and allowed specific individual attributes to be shared or withheld, and revoked at any time, much like when you authorize a third-party app for Gmail etc.
Putting on my conspiracy hat for a minute: they don't want to make it easy for you to authenticate anonymously. They obtain their surveillance data from the companies that tie you, individually, to your data. They'd be shooting themselves in the foot.
There's a unique identifier, but it's your secret and can't be used for tracking. Sites needing verification don't learn anything except that you "have" a token matching the condition they are checking. This includes not learning your unique identifier, so they can't use it for tracking. The issuer also doesn't learn anything about your verification queries.
You have an incentive to keep the secret token to yourself, and would probably use existing mechanisms for that: you might manage it the way you manage your phone number, private email, and other personal accounts today. Not perfect, but effective most of the time for most people.
You might decide to share it with someone you trust, like your sibling. That's up to you, but you wouldn't share it widely or with people you don't trust, even under pressure, because:
To prevent mass reuse of stolen tokens, it's possible to use more cryptography to detect when the same token is reused in too many places, either on the same site or across many sites, without revealing tokens that don't meet the mass-reuse condition, so they still can't be used for tracking. If mass-reused tokens auto-revoke, they can't be reused thousands of times by anyone, and that also provides an incentive to avoid sharing with people you don't trust.
I won't pretend this is trivial. It's fairly advanced stuff. But the components exist these days. The last paragraph above requires combining zero-knowledge proofs (ZKP) with other branches of modern cryptography, called multi-party computation (MPC) and perhaps fully homomorphic encryption (FHE).
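To make the reuse-detection part concrete, here is a toy of one classic building block, simpler than the full ZKP/MPC machinery mentioned above: each authentication reveals one fresh point on a secret degree-(K-1) polynomial whose constant term identifies the token. Fewer than K revealed points say nothing about the identity; K or more let anyone interpolate it and revoke the token. All names and parameters are illustrative:

    import random

    P = 2**127 - 1   # a prime modulus (toy size)
    K = 5            # reuse threshold before the token is exposed

    def make_token(identity_secret):
        # Coefficients of a random polynomial with constant term = identity.
        return [identity_secret] + [random.randrange(P) for _ in range(K - 1)]

    def authenticate(token):
        """Each use publishes one point (x, f(x)) on the token's polynomial
        (fresh random x; collisions are negligible at this modulus size)."""
        x = random.randrange(1, P)
        y = sum(c * pow(x, i, P) for i, c in enumerate(token)) % P
        return x, y

    def recover_identity(points):
        """Lagrange interpolation at x=0; only possible with >= K points."""
        xs, ys = zip(*points[:K])
        total = 0
        for i in range(K):
            num, den = 1, 1
            for j in range(K):
                if i != j:
                    num = num * (-xs[j]) % P
                    den = den * (xs[i] - xs[j]) % P
            total = (total + ys[i] * num * pow(den, -1, P)) % P
        return total

    token = make_token(identity_secret=123456789)
    sightings = [authenticate(token) for _ in range(K)]   # mass reuse
    assert recover_identity(sightings) == 123456789       # token now revocable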
A wide-scale mDL (mobile driver's license) rollout works by using the trusted computing element that is part of your phone, and enrollment would be the same as obtaining a driver's license in the first place.
There is no physical card; there is an attestation that only an enrolled device can hand out, with revocation support in case of security flaws.
Is it going to be absolutely secure? No. The cost just needs to be high enough that it becomes inaccessible to the vast majority of adolescents.
Theft of your parents' phone becomes a much easier attack vector, but phone biometrics/password requirements will thwart that for most parents.
Let me expand on this because Digital Identity proponents tend to try and oversimplify everything and then act surprised when their favorite cryptos implode.
There are 2 situations.
1. Everyone who wants to access restricted content needs to access a government or corporate tool to generate their wallet or keys or whatever you want to call it.
2. Everyone in the jurisdiction is required to sign up for the tool.
Option 1 is effectively putting yourself on a list for a future fascist government to purge.
Option 2 is an onerous burden on the entire population. We had enough issues with Digital TV.
The government already has all the information about you; they put it on your passport when they print it for you. Zero-knowledge proofs allow you to generate an attestation about some fact (e.g. that you are over 21) that a third party can verify, where the proof itself doesn't reveal your identity or any other information: just the fact that you pass the check is revealed.
Yes and you can consider the passport office a list of everyone who wants to travel overseas.
Ditto: if you apply for an internet porn license, you will be on a list of internet porn enthusiasts. That's non-trivial information that it's largely in your best interest not to provide to the government.
Again this all happens before your zero knowledge check.
Design is far from the only threat vector. Any implementation that is less than perfect is prone to all kinds of attacks. A few years ago, there was a report that the NSA could decrypt a double-digit percentage of encrypted web traffic, thanks to precomputation against a larger-than-expected set of widely reused Diffie-Hellman primes that they keep handy.
Great story, but are you claiming that the NSA can infer the inputs to zero-knowledge proofs, or maybe map cryptographic hashes back to their plaintext inputs, or something of that nature?
No, but I bet a dollar the NSA isn't just going to collectively fold its hands and say "These schemes and implementations are too good and too secure for us to break. We'll ignore the metadata, network analysis, side channels, and our data centers that can store two days' worth of internet traffic; we'll give up and focus on defensive security only."
Your argument applies equally to any initiative ever made by humans that mentions "internet". Yet quite a few things do exist on the internet. We do have cryptography with good guarantees available.
My argument is not that those things don't exist; it's just that, to my knowledge, I've never heard of any real-life implementation that's guaranteed to be NSA-proof[1]. You're welcome to offer a counterexample.
1. Your fancy encryption scheme is pointless if your plaintext can be acquired at either endpoint, or if a bug in the implementation leaks data. The security of the whole matters a lot more than the individual parts, as attackers go for the weakest link.
> The government doesn't learn which web sites you visit, and the web sites don't learn anything about you other than you are certified to be age ≥ 16.
If the zero-knowledge proof doesn't communicate anything other than the result of an age check, then the trivial exploit is for 1 person to upload an ID to the internet and every kid everywhere to use it.
It's not sufficient to check if someone has access to an ID where the age is over a threshold. Implementing a 1:1 linkage of real world ID to social media account closes the loophole where people borrow, steal, or duplicate IDs to bypass the check.
As I mentioned elsewhere, you’re falling for letting perfect be the enemy of good. The ZKP + phone biometrics only needs to raise the cost of bypass above what adolescents have access to. And no, you can’t just share the same ID because there’s revocation support in the mDL and it’s difficult to extract the raw data once it’s stored on the trusted element. This is very similar to how credit cards on phones work which are generally very difficult to steal.
You’re thinking like a group of technically proficient 15 year olds and their friends. That’s a small minority. The vast majority of teens are likely to be stymied.
Revocations are not for an individual ID, but for when an exploit is found compromising the IDs stored on a trusted element. Your older sibling's ID can't be used to sign up for millions of accounts, just for those of the people the older sibling lets borrow the phone that holds their ID (and that assumes there isn't some kind of uniqueness cookie to prevent multiple accounts under a single ID). That's a much different and more manageable problem; fake IDs via older siblings have been a thing forever.
>As I mentioned elsewhere, you’re falling for letting perfect be the enemy of good
No, this line of reasoning deserves nothing but absolute contempt when it comes to laws. We are not talking about getting the finicky API to work at your job. Too often laws have had unintended consequences as a result of loopholes or small peculiarities. If the damn law doesn't even work on a fundamental level then it should be opposed on principle.
There are technical methods to detect and revoke large-scale reuse of an uploaded id. I wrote more detail in another comment.
That only covers large-scale reuse. It doesn't cover lending your id to your younger sibling if you want to, or if they find a way. Maybe that should be acceptable anyway. Same as you can lend your phone or computer to someone to use "as you", or you can buy them cigarettes and alcohol. Your responsibility.
Today you can scan your passport with your phone, and get enough digitally signed material chained up to nation level passport authorities to prove anything derived from the information on your passport.
You could prove to an arbitrary verifier that you have a US passport, that your first name starts with the letter F, and that you were born in July before 1970, and literally share zero other information.
The selective disclosure is super cool. I wonder how it works, since something like a hash of DG1 is what is actually signed: how can you selectively disclose verified data from "inside" the hashed area? It does not sound very feasible to me, but I am not an expert in zk-SNARKs etc.
There are some wrinkles that prevent passport data being used more broadly - technically it is a TOS violation to verify passports / use the ICAO pkd without explicit permission from ICAO or by direct agreement with the passport holder's CSCA (country signing certificate authority). Some CSCAs allow open use but many do not.
Also, without being too pedantic about it, what you are able to prove is more like possession of a document. An rfid passport (or rfid dump & mrz) - or in fact any kind of identity document - does not prove that you are the subject - you need some kind of biometric bind for that.
ZK circuits have gotten really fancy lately, to the point where full-blown ZK virtual machines are a thing, which means you can write a program in Rust or whatever, compile it to RISC-V, and then run it on the RISC Zero zkVM (https://github.com/risc0).
This means you can literally just write a rust program that reads in the private data, verifies the signature, reads the first byte in the name string and confirms that it matches what you expect, and then after everything looks good, it returns "true", otherwise it returns "false". This all would happen on your phone when you scan a QR code or something that makes the request, then you send the validity proof you generated to the verifier, they can see that the output was true, and nothing else.
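A toy Python stand-in for that guest logic (the real thing would be Rust compiled for the zkVM, and the signature check would be a real ECDSA verification against the CSCA chain; the offsets and the stand-in signature scheme here are made up):

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Witness:              # private inputs, seen only by the prover
        dg1: bytes              # passport DG1 data group (name, DOB, ...)
        signature: bytes        # authority's signature over hash(dg1)

    def toy_verify_sig(pubkey, msg_hash, sig):
        # Stand-in for a real signature check against the CSCA chain.
        return hashlib.sha256(pubkey + msg_hash).digest() == sig

    def guest_program(w, csca_pubkey):
        """The statement the proof attests to. Because hash(dg1) is recomputed
        from the full private dg1 *inside* the proven program, the prover can't
        substitute fake bytes for the disclosed range: any change to dg1 breaks
        the hash, and hence the signature check."""
        dg1_hash = hashlib.sha256(w.dg1).digest()
        if not toy_verify_sig(csca_pubkey, dg1_hash, w.signature):
            return False
        return w.dg1[5:6] == b"F"   # hypothetical offset of the name's first letter

    # The verifier sees only a succinct proof that guest_program returned True
    # for *some* witness, never the witness itself.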
In theory, the private data would be stored on a trusted device you own, like your phone, so someone who steals your phone would have a hard time using your identity. Using fancy blockchain stuff you could even do a one-time registration of your passport such that even if someone steals your passport, they wouldn't be able to import it as a usable ZK credential. Presumably there would be some logic around it so you can re-register after a delay period or something, giving the current credential holder a chance to revoke new enrollment requests. So, yes, proving your exact identity to a website isn't perfect, but it's easy enough to make it really noisy if someone is trying to tamper with your identity, and maybe that's good enough.
If you want to go the trusted hardware route, you could make someone take a picture of their face with some sort of trusted hardware camera on their phone or laptop, and then use some zkml magic to make sure it kinda looks like the face on the passport data. Given the right resources, trusted hardware is never that hard to tamper with, so I don't like that solution very much.
What's often more important in an online context is that your credential is unique. It doesn't matter who you are, it matters that you've never used this credential to sign up for a twitter account, or get past a cloudflare captcha, or any other captcha use case. If you steal 10 passports, maybe you can set up a bot that will automatically vote for something 10 times, but at least you can't vote millions of times. This is sybil resistance, and it's massively important for a ton of things.
Thanks! I have a big rabbit hole to go down now :)
I don't get what causes the proof to fail if I provide the wrong bytes to the zkVM when it tries to read from inside the hashed area after the hash and signature are verified (this might not be directly sequential, I guess; I think it has to be part of the same proof).
Put another way: I get that we have to ZK-prove (a) that I know a message M that hashes to H (I can see this is doable from googling), but also (b) that a particular byte range M[A-B] is part of M, in a way the verifier can trust I'm not lying about, and I don't see how the second part is accomplished. It feels like there are also details in proving that the data comes from the right "field" in DG1.
This stuff is such black magic! EDIT: will try this out in ZoKrates...
> Using modern cryptography, it is easy to send a machine-generated proof to your social media provider that your government-provided ID says your age is ≥ 16, without revealing anything else about you to the service provider (not even your age), and without having to communicate with the government either.
There's just one problem. How does the machine proving your age know that you are who you say you are? Modern cryptography doesn't have any tools whatsoever that can prove anything about the real body currently operating the machine--it can never have such a tool. And the closest thing that people can think of to a solution is "biometrics," which immediately raises lots of privacy concerns.
The government need not know what sites you visit. It is damaging enough that the government know that you are visiting sites that require an age verification. You can then be flagged for parallel construction if you should, I don't know, start a rival political party.
Not if this were widespread. I wouldn’t be too bothered if the government knew that I either watched an R-rated movie or rented a car or purchased alcohol or created a Facebook account.
You missed the "either" in GP's comment, i.e. they know you did one of those things, because you requested an over-18 token, but not which one. The more covered activities there are, the more uncertainty they have about why you might have asked for a token.
This isn't really my area of expertise: is there a way to know for sure that those are all the same token? Or could the government just lie and say they're all the same when in reality it can differentiate them?
The government would have to document the API for requesting tokens for anyone to use it. I suggested a scheme here[0] where it's clear that the government doesn't get any information about the service (unless the service re-uses AES keys) and the service doesn't get any information about the user other than whether they're in the appropriate age group.
Potentially there could be coordination between .gov and the service to track users by having each side store the temporary AES key and reconcile out-of-band. But .gov has other ways they could get that information anyway if they have cooperation from businesses (e.g. asking your ISP for your IP address, and asking the service provider for a list of user IPs).
Definitely. We can use a government-issued ID, or we can create our own: social graphs, I call them. Zero-knowledge proofs have so many groundbreaking applications. I made a comment in the past about how a social graph could be built without the need for any government [1]. We could effectively create one million new governments to compete with existing ones.
I've been thinking a lot lately about decentralized moderation.
All we need to do is replace the word "moderate" with "curate". Everything else is an attestation.
We don't really need a blockchain, either. Attestations can be asserted by a web of trust. Simply choose a curator (or collection of curators) to trust, and you're done.
Yeah, blockchain is not needed at all. A computer savvy sheriff might do it, an official person of some kind. Or even private companies, see also "Fido alliance".
Additionally, the map of governments which accept the Estonian passport might be of some relevance here[1].
Let's suppose that 1 million new governments are founded, and violence can still be wielded only by the existing ones. The new governments will be in charge of IDs, signatures, property ownership and reputation. Governments of Rust programmers, or Python programmers, or football players, or pool players, or truck drivers will be created.
When a citizen of the Rust programmers' social graph uploads code, he can prove his citizenship via his ID. We may not even know his name, but he can prove his citizenship. He can sign his code with his signature, even pull requests in other projects. He can prove his ownership of an IT company, as its CEO, the stock shares and what not. And he will be tied to a reputation system, so when a supply-chain attack happens, his reputation will be tainted. Other citizens of Rust's social graph will be able to single out the ID of the developer, and future code from him will be rejected, as well as code from non-citizens.
Speaking of supply chains, how about the king of supply chains of products and physical goods? By transferring products around in a more trustworthy way, via random people tied to a reputation system, Amazon might get a little bit of competition, no?
But in reality this would not hide age. For example, if a child signs up for Facebook, the only information revealed is that they are under 16. But once they turn 16, they will want an unrestricted Facebook experience, so they will send a new token to Facebook showing that they are now older than 16. Facebook can record the day the user does this and will then know the person's approximate birthday. Sure, the user might not do this right on their birthday, probably within a span of a few weeks, but Facebook will still have a good estimate of their age. This system would still be better than having to reveal your whole ID with all its details.
So I hash some combination of ID, name and birthday and send it to Facebook to create an account. Facebook relays that hashed info to a government server which responds with a binary yes/no.
Of course you need to trust that the hash is not reversible.
That doesn’t stop kids from using Facebook, but it stops kids’ ID from being used to create an account.
Thanks for bringing this solution up. Many people are unaware that zero-knowledge proofs are actually possible, probably because they're very counterintuitive.
And as with electronic voting, the contract will go to the lowest bidder with the worst security, not the company that's got the CS chops to do it right.
We did a hackathon at work and one of the guys from one of my project teams covered this stuff as his project.
I trust that it _would_ work 100%, but what I don't trust is that a government would implement it properly and securely, because no government works like that lmao (even NZ's great one).
I mean, living in the UK now, I've got like a dozen different fucking gov numbers for all manner of things (DVLA, NHS, NIN, other tax numbers, visa, etc.). Why isn't there just one number or identity? Gov.uk sites are mostly pretty stellar besides.
Minors don't get full constitutional protections. I think they should have more rights than they do, but the First Amendment is already more limited for minors than for adults.
Tinker v. Des Moines has repeatedly been chipped away (e.g. Bethel School District v. Fraser).
Minors can't produce pornography.
Minors have their freedom of association and expression limited by employment laws.
Minors' speech is not free at home, due to parental control, or at school.
Minors can't consent to medical treatment, which limits their ability to discuss sensitive issues.
Minors can't vote or run for public office, which limits their direct participation in political expression and civic engagement.
Wrong, I'm afraid. Minors don't have full rights; that's why their bag/locker/car can be searched randomly, why they can have speech abridged, why they can't see NC-17 movies (edit: scratch that one). The Supreme Court has weighed against minors many times.
The first amendment is a restriction on Congress, and does not apply to the private Motion Picture Association that maintains the film rating system.
> why they can have speech abridged
I don't think that's true of federal law. There are cases, such as in school, where more restrictions are permitted to the school. But those restrictions are not based on age.
> There are cases, such as in school, where more restrictions are permitted to the school. But those restrictions are not based on age.
After 18 you can choose whether to be in school or not. Those restrictions are voluntary for adults but compulsory for minors (who do not get to choose whether they go to public school, private school, homeschool, or just pass the GED)
"that's why their bag/locker/car can be searched randomly"
This has always disgusted me about public school.
No better way to erode rights and democracy than ingraining absolute tyranny in children. I was never searched, but I was convicted and punished many times on zero evidence, just some authority assuming I did something.
You are not free to enter the White House to express yourself.
Enforcing a property owner's right to refuse entry (digital or physical) does not prevent you from expressing yourself, only from doing it on that specific property.
Yes, this isn't the right solution. The power needs to be given to the users.
A better solution is more robust device management, with control given to the device owner (read: the parent). The missing legislative piece is mandating that social media companies need to respond differently when the user agent tells them what to send.
I should be able to take my daughter's phone (which I own), set an option somewhere that indicates "this user is a minor," and with every HTTP request it makes it sets e.g. an OMIT_ADULT_CONTENT header. Site owners simply respond differently when they see this.
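A minimal sketch of what the server side of that could look like, assuming a hypothetical Omit-Adult-Content request header (hyphenated, since proxies often drop headers with underscores), with Flask used purely for illustration:

    from flask import Flask, request

    app = Flask(__name__)

    def all_items():
        return [{"title": "cat videos", "adult": False},
                {"title": "casino ads", "adult": True}]

    def filtered_items():
        return [i for i in all_items() if not i["adult"]]

    @app.route("/feed")
    def feed():
        # A locked device would set this header on every request it makes.
        if request.headers.get("Omit-Adult-Content") == "1":
            return {"items": filtered_items()}   # age-appropriate subset
        return {"items": all_items()}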
Sites can already include an RTA ("Restricted To Adults") label in HTTP responses that include adult content, and every parental-controls software in existence will block it by default, including the ones built into iPhones/etc. and embedded webviews. As far as I know, all mainstream adult sites include this (or the equivalent meta tag) already.
In general, I don’t think communicating to every site you visit that you are a minor and asking them to do what they will with that information is a good idea. Better to filter on the user’s end.
It's much easier to regulate and enforce that websites must expose these headers so that UAs can do their own filtering. Adult content means headers in the response, no ifs, ands, or buts.
Response headers are encrypted in the context of HTTPS, so there's no real sacrifice in privacy. Implementation effort about as close to trivial as can be. No real free speech implications (unless you really want to argue that those headers constitute compelled speech). All in all, it's a pretty decent solution.
This is not a response header. It's a meta tag that's added to a website's head element to indicate it's not kid-friendly. The individual payloads returned from an adult site don't include this as a header.
That'd be the "equivalent meta tag" I mentioned. And this site claims the header works too, though I haven't tested it myself. https://davidwalsh.name/rta-label
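A quick sketch of how a filtering user agent could check for the label, using the header name and tag value from that write-up (details illustrative, untested against real sites):

    import urllib.request

    RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

    def is_rta_labeled(url):
        """Block by default if the site declares itself adult-only."""
        with urllib.request.urlopen(url) as resp:
            if resp.headers.get("Rating") == RTA_LABEL:          # header variant
                return True
            body = resp.read(65536).decode("utf-8", "replace")
        return RTA_LABEL in body    # <meta name="rating" content="RTA-..."> variant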
I honestly wasn't aware of this, and it sounds like a great solution for "adult content." Certainly, the site specifying this is better than the user agent having to reveal any additional details about its configuration.
Emancipation of children is also a thing, where a minor may petition the court to be treated as an adult. This also falls afoul of a blanket age restriction.
I think there's an ambiguity here. Even on this page, talking about "for all purposes". This seems to mostly refer to parental decision making and rights.
i.e. even as an emancipated minor, being treated as a "legal adult" does not mean you can buy alcohol or tobacco.
Understood, but being emancipated could mean that you need to be able to use LinkedIn, perhaps as part of a job search. An argument could be made that an emancipated minor should have access to some social media.
Exactly, any design for this stuff requires parental-involvement because every approach without it is either (A) uselessly-weak or (B) creepy-Orwellian.
If we assume parents are involved enough to "buy the thingy that advertises a parental lock", then a whole bunch of less-dumb options become available... And more of the costs of the system will be borne by the people (or at least groups) that are utilizing it.
Geolocation to that degree is not that reliable, and not necessarily 1:1 with jurisdiction or parental intent.
If we're already trusting a parental-locked device to report minor-status, then it's trivial to also have it identify what jurisdiction/ruleset exists, or some finer-grained model of what shouldn't work.
In either case, we have the problem of how to model things like "in the Flub province of the nation of Elbonia children below 150.5 months may not see media containing exposed ankles". OK, maybe not quite that bad, but the line needs to be drawn somewhere.
Sounds like you want PICS (though it works in the opposite direction, with the web site sending the flag and the browser deciding whether to show the content based on it).
> "Reasonable age verification method" means any commercially reasonable method regularly used by government agencies or businesses for the purpose of age and identity verification.
In particular, "anonymous speech is required if you want to have free speech" is actually a very niche position, not a mainstream one. It just happens to be widely spammed in certain online cultures.
Those laws are frequently overturned as unconstitutional, but may still remain on the books because we don't do a good job of clearing out laws that were ruled unconstitutional.
As a general rule of thumb, almost every time someone brings an edge case about whether or not speech is First Amendment-privileged before SCOTUS, SCOTUS rules in favor of the speech. (The main exception is speech of students in school.) SCOTUS hasn't specifically ruled on anti-mask laws to my knowledge, but I strongly doubt it would uphold those laws.
Random aside, I haven't seen NY enforce the anti-mask legislation during the Halloween parade, various protests, or Covid. So I bet a new constitutional challenge could be mounted.
There are places where it wouldn't be safe to protest without masks. In that case people effectively would be losing their freedom of speech if it isn't anonymous.
Wealthy, politically connected men with the ability to read and write about political philosophy and get it distributed is a bit different situation than a thousand Russian AI trollbots posting bad-faith "opinions" on American current events.
> But the only way to do this is to require ID checks
Not necessarily, consider the counterexample of devices with parental-controls which--when locked--will always send a "this person is a minor" header. (Or "this person hits the following jurisdictional age-categories", or some blend of enough detail to be internationally useful and little-enough to be reasonably private and not-insane to obey.)
That would mostly put control into the hands of parents, at the expense of sites needing some kind of code library that can spit out a "block or not" result.
> It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff,
Depends on the argument being made, on the ideology of the audience, on the current norms, etc.
I had an exchange here on HN some time back (the topic was schools removing certain books from their libraries), and very many people in support of those books, which dealt with gender identity and sexual orientation, also supported outright porn (the example I used was Pornhub) for kids of all ages, so long as those books with pictures (not photos) of male-male sexual intercourse could stay in the library.
Right now, if you made the argument "There are some things kids below $AGE shouldn't be exposed to", you'll still get some (vocal) minority disagreeing because:
1. They feel that what $AGE kids get exposed to should be out of the parent's hands ("Should we allow parents to hide evolution from their children?", "Should we allow parents to hide transgenderism from their children?")
2. They know that, especially with young children, they will lose their chance to imprint a norm on the child if they are prevented from distributing specific material to young children.
In the case of sex and sexual education, there is currently a huge push for specific views to be normalised, and unfortunately if that means graphic sexual depictions are shown to children, so be it.
The majority is rarely so vocal about things they consider "common sense", like no access to pornhub for 10 year olds.
> effectively regulating and destroying the anonymous nature of the internet
Technically you can make that work without issues: you only need to prove your age, not your identity, something which can reasonably be achieved without leaking your identity.
There are just two practical issues:
- companies, government and state actors (at least US police and spy agencies) will try to undermine any effort to create a reasonably anonymous system
- it only works technically if a "reasonable degree" of proof is "good enough", i.e. it must be fine that a hacker can create an (illegal?) tool with which a child could pretend to be 16+, e.g. by proxying the age check to a hacked device of an adult. Heck, it should be fine if the check can be tricked by using a parent's passport or phone. I mean, it's a 16+ check; there really isn't much of a reason why a system that is only "good enough" isn't okay. But lawmakers will try nonsense.
Interestingly, this is more of a problem for the US than for some other states, AFAIK, because 1) you can't expect everyone 18+ to have an ID, or everyone 16+ to be able to easily get one (a bunch of countries have owning (not carrying) ID requirements without it being a privacy issue), and 2) terrible consumer protection makes it practically nearly impossible to create a privacy-preserving system, even if government and state agencies do not meddle.
Similarly, if it weren't for the ID issue in the US, this probably wouldn't touch the First Amendment, which in the end protects less than a lot of people believe it does.
Lately I'm repeatedly reminded of how in Ecuador citizens, when interviewed during a protest, see it as a normal thing to state their name, and even their personal ID number, on camera while speaking about their position on the protest. They stand behind what they are saying without hiding.
For about half a year now I've noticed the German Twitter section sinking into hate posts: people disrespecting each other, ranting about politicians or ways of thinking, and being really hateful. It's horrible. I've adblocked the "Trending" section away, because it's the door to this horrible place where people don't have anything good to share anymore, only disrespect and hate.
This made me think that what we really need, at least here in Germany, is a Twitter alternative where people register using their eID and can only post under their real name. Have something mean to say? Say it, but attach your name to it.
This anonymity in social media is really harming German society, at least as soon as politics are involved.
I don't know exactly how it is in the US but apparently it isn't as bad as here, at least judging from the trending topics in the US and skimming through the posts.
People have zero qualms about being absolute ghouls under their wallet names. The people with the most power in society don't need anonymity. The people with the least often can't safely express themselves without it.
>> "What matters, it seems, is not so much whether you are commenting anonymously, but whether you are invested in your persona and accountable for its behaviour in that particular forum. There seems to be value in enabling people to speak on forums without their comments being connected, via their real names, to other contexts. The online comment management company Disqus, in a similar vein, found that comments made under conditions of durable pseudonymity were rated by other users as having the highest quality. "
- Illegal content, such as insults or the like, will not get published. And if it does, it will have direct consequences.
I'm also not inclined towards a requirement to have all social networks ID'd, but I think that a Twitter alternative which enables a more serious discussion should exist. A place where politicians and journalists or just citizens can post their content and get comments on it, without all that extreme toxicity from Twitter.
The thing is, the political climate is very toxic, and the absence of anonymity can have a real impact for things that are basically wrongthink.
Say, for example, I held the opinion that the immigration threshold should be lower. No matter how many non-xenophobic justifications I attach to that opinion, my colleagues, possibly on H-1Bs, can and would look up my opinion on your version of Twitter, and it would have a real impact on my work life.
There is a reason why we hold voting in private: when boiled down to their roots, the principles that guide your opinions are usually irreconcilable with someone else's, and we preserve harmony by keeping everyone ignorant of their colleagues' political opinions. It's not a bad system, but it's one that requires anonymity.
Or: to post on a political forum you must have an ID. You can have and post from multiple accounts, but your ID and all associated accounts can be penalized for bad behavior.
The algorithm powering the trending section, which rewards angry replies and accusatory quote-tweets, is at least as good a candidate for the source of harm to political discourse as anonymity.
Take it a step further: ban engagement-based algorithmic feeds. I've said this and I'll continue to say it: this type of behavioral science was designed at FB by a small group of people and needs to be outlawed. It never should have been allowed to take over the new-age monetization economy. There's so much human potential in the internet, and it's an absolute trainwreck at the moment because of Facebook.
I think this is a partial attribution error; the goal is to make money by capturing attention and selling it to advertisers. The fact that strife is one of the most effective ways to do so, and the companies appear utterly unconcerned about the resulting damage to society makes it look like strife is the end goal, but it is merely a means to make money.
They would change their algorithms immediately if large advertisers applied financial incentives. We've seen some of this with Youtube's policies leading to videos with censored profanity where its use was previously normal and neologisms like "unalived" to mean killed.
Musk's Twitter may be a partial exception, since its decisions are now driven by a single man's preferences rather than an amoral mechanistic imperative to increase shareholder value. That doesn't seem to have improved things.
> the goal is to make money by capturing attention and selling it to advertisers
I mean, how do you know that?
How do you know that the goal isn't actually to perform social engineering on a huge scale and that the advertising is just the way that goal is being funded?
I suppose I don't. It could be that Facebook, Twitter, Youtube, and TikTok are all actively trying to create chaos, but greed adequately explains their behavior. One thing that points more strongly to greed is that the companies, aside from Twitter post-Musk, rapidly change their behavior when it impacts their revenue.
With TikTok, there's some chance geopolitics is a factor as well.
My experience with propaganda bots is that the really nasty hate stuff will usually be posted by actual, real people (perhaps as a result of being prodded by bot-provided outrage), and bots will rather have all kinds of more subtle hinting and agenda-pushing - because bots are managed by semi-professionals who care about bots not being blocked and (often) don't really care about the agenda they're pushing, while there also is a substantial minority of semi-crazy people who just don't care and will escalate from zero to Hitler in a few minutes.
How many social media users who create accounts and "sign in" are "anonymous"? How would targeted advertising work if the website did not "know" their ages and other demographic information? Are the social media companies lying to advertisers when they tell them they can target persons in a certain age bracket?
> But the only way to do this is to require ID checks
COPPA has entered the building. If you're under 13 and a platform finds out, they'll usually ban you until you prove that you're not under 13 (via ID) or can provide signed forms from your parent / legal guardian.
I've seen dozens of people if not more over the years banned from various platforms over this. We're talking Reddit, Facebook, Discord and so on.
I get what you're saying, but it kind of is a thing already, all one has to do is raise the age limit from 13 to say... 16 and voila.
"Finds out" is the operative part. COPPA is not a proactive requirement; it's a reactive one. Proactive legislation is a newer harm that can't easily be predicted based on past experiences with reactive laws.
Indeed, though nothing is stopping said companies from scanning and assessing the age of a user uploading selfies. This is allegedly something that TikTok does. My point being, the framework is there, and when people actually report minors, the companies have to take it seriously or face serious legal consequences.
How do you know the selfie is from the "primary" user? And how do you know they're underage, versus being a chubby-faced 18 year old (like yours truly was?)
Aren't the really problematic social networks the ones where you've lost your privacy and anonymity long ago and are being tracked and mined like crazy?
> But the only way to do this is to require ID checks,
No, it isn't. Check out Yivi [1]. Its fundamental premise is to not reveal your attributes. It's based on academic work into (among others) attribute-based encryption. The professor behind it then took this a step further and spun off a non-profit foundation to expand and govern the idea.
>It's not that anyone wants kids to have easy access to this stuff, but that any of these laws will either be (a) unenforceable and useless, or (b) draconian and privacy-destroying.
Surely not.
Imagine: the government sells proof-of-age cards. They contain your name and a unique identifier.
Each time you sign up to a service, you enter your name and ID. The service can verify with the government that your age is what it needs to be for the service. There are laws that state that you can't store that ID or use it for any other purpose.
- This would only work if the government does not have access to the reverse mapping (otherwise law enforcement will eventually expand to get its hands on it).
- It will likely be phished very quickly, and you'd have no way of knowing, since no one is storing it (making it short-lived means you'd have to announce to the government every time you want to watch porn).
- Eventually there will be dumps of IDs, like there are dumps of premium porn accounts now.
> effectively regulating and destroying the anonymous nature of the internet
The bulk of the internet has not been anonymous for a while. Facebook requires an ID already, Google tracks you across Google services and the OS, Reddit is tightening things up to control bots, and Amazon requires a phone number.
Think about it. What portion of your activities day to day on the Internet are anonymous? Now try to do them anonymously. It isn't practical/possible anymore and the internet of yesteryear is gone.
I propose the Leisure Suit Larry method. Just make users answer some outdated trivia questions that only olds will know when they sign up for an account.
It is pretty hard to give your kid access to YouTube with an account where their age is stated. Yes, kids can open the browser in private mode, but they rarely do, because it is friction. If every social media site were moved to the adult category, the current rules in operating systems would do a good job.
I am not sure about 16 years (I would support it as a father), but up to 13-14 feels appropriate; there is PG-13, after all.
Ah so be it. I don’t care much for the things that come from anonymous culture. I want gatekeepers. This tyranny of the stupid online is pretty tiresome.
"Unconstitutional" arguments only go so far. I am not America (I'm a proud Australian) so I can easily see the incredibly obvious and ridiculous destruction "freedom" in your country entails.
Anonymity services can still exist without fostering an environment to addict children and young adults to social media or a device... and without your precious "rights" being taken away.
People can post all kinds of illegal things online and no one is suggesting that content should be approved before it can be visible on the Internet. It doesn't have to be strictly enforced to act as a deterrent. How effective of a deterrent it would be has yet to be seen.
The definition of "social media" in this bill actually seems to exempt anonymous social networks since it requires the site "Allows an account holder to interact with or track other account holders".
The internet has not been anonymous in fact or theory for decades now, and if you think the government can't get your complete browsing history on a whim I'm guessing you haven't paid any attention to the news about NSA buying user data bundles from online brokers. That said, "muh freedoms" is hardly a quality argument in the face of the widely documented pervasive harms caused to children by exposure to social media. The logical extreme of your position would be to declare smoking in public a form of self-expression and then demand age limits be removed for the sale of tobacco products because First Amendment. :P
Ironically the "freedom crowd" are also statistically significantly more likely to get shot by their own toddlers accidentally so I'm not convinced they represent a pool of quality decision-making or grounded worldview. It's interesting how quickly any discussion of potential solutions to real-world problems gets chucked out the window the second someone says "freedom".
>But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet
Ban portable electronics for children. Demand that law enforcement intervene any time it's spotted in the wild. If you still insist that children be allowed phones, dumb flip phones for them.
It could be done if there was the will to do it, it just won't be done.
"Regulating them properly" could mean a lot of things to a lot of people. Do you mean e.g. just not allowing porn on the internet or do you mean e.g. just not requiring identification verification? If neither, what way of regulation that allows distinction without banning content or identifying users?
Putting conditions on the companies so that they do not even risk this in the first place: moderation requirements, a demand to build tools to protect children, and so on. There are a lot of options, and prohibition is the least likely to work at any level. You can look at any prohibition in history and see what it amounts to.
Government knows best when it comes to minors, right? And all of us need to have our government papers ready if we want to participate in the most important forum of communication of our time. Talk about power imbalance.
> who can’t even articulate the difference between man and woman
Can you articulate that difference? Personally, I'd expect a more complete definition from a kid today than I would from a conservative adult. For example, my dad might refer to DNA when explaining the difference between a man and a woman, and he would also confidently say he's a man (I mean, he is my dad), but since he's never had a DNA test, how can he be sure? It's obviously about more than DNA. My mom might say a woman is someone who can have children, but what about women who cannot have children, are they not women? Etc.
And what makes a site a social media site? Anywhere you can post interactive content?
You do realize that laws like this would apply to sites like HN, Reddit, the comment section of every blog, and every phpBB forum you ever used? It's not just Instagram and Tiktok.
Trying to force independently owned and operated forums to enforce laws that might not even be applicable in the country where the owners/admins live and work is going to be about as effective as trying to force foreign VPS/server/hosting providers to delete copyrighted content from their servers using laws that don't apply in their jurisdiction.
I think a perfectly clear line could be drawn that would separate out phpBB from TikTok very easily. I genuinely don't understand this comment, we shouldn't do it because it's hard or the results might be imperfect?
Kids want to communicate. Whether it's TikTok, Discord, phpBB, chatting in Roblox or Minecraft, they will if they can.
If we want to "ban social media" we'll need a consistent set of guidelines about what counts as social media and what doesn't, and what exactly the harms are so they can be avoided.
I think your comment would be much stronger if you laid out precisely what you think that line would be.
Laws do not have to be perfect to be good, but they do have to be workable. It's not clear that there's a working definition of "social media" that includes both TikTok and Reddit but doesn't include random forums.
This is not a charitable reading. Nobody is asking for a perfect solution; it is reasonable to demonstrate some prior consideration for the ways in which most solutions are dangerously imperfect.
To me the issue is it’s a waste of government and waste of time. Parental controls already exist on all devices. The answer to every problem can’t be “more government, more laws”.
You know, I'm not really sure that requiring IDs for access to porn / social media is a terrible idea. Sure it's been anonymous and free since the advent of the internet, but perhaps it's time to change that. After all, we don't allow a kid into a brothel or allow them to engage in prostitution (for good reasons), and porn is equally destructive.
But with the topic at hand being social media, I think a lot of the same issues and solutions apply. It's harmful to allow kids to interact with anyone and everyone at any given time. Boundaries are healthy.
Aaaaand finally, there's much less destruction of human livelihood by guns than by both of the aforementioned topics, if we measure "destruction" as "living a significantly impoverished life by the standard of emotional and mental wellbeing". I doubt we could even get hard numbers on the number of marriages destroyed by pornography, which yield broken households, which yield countless emotional and mental problems.
So, no, guns aren't something we should discuss first. Also, guns have utility including but not limited to defending yourself and your family. Porn has absolutely zero utility, and social media is pretty damn close, but not zero utility.
The biggest problem with this is how we would define "porn". Some states are currently redefining the existence of a transgender person in public as an inherently lewd act equivalent to indecent exposure.
I have no doubt that if your proposal were to pass that there would be significant efforts from extremist conservatives to censor LGBT+ communities online by labeling sex education or mere discussion of our lives as pornographic. How are LGBT+ people supposed to live if our very existence is considered impolite?
Nevermind the fact that the existence of a government database of all the (potentially weird) porn you look at is a gold mine for anyone who wants to blackmail or pressure you into silence.
The horrors and dangers of porn are squarely a domestic and family issue. The government does not need to come into my bedroom and look over my shoulder.
> The biggest problem with this is how we would define "porn". Some states are currently redefining the existence of a transgender person in public as an inherently lewd act equivalent to indecent exposure.
Agreed, it's just rhetoric. Same as all these claims of an ongoing 'trans genocide' in the USA. Absolute nonsense, but it gets the believers in this ideology all riled up, and so the purpose of this rhetoric is fulfilled.
> Nevermind the fact that the existence of a government database of all the (potentially weird) porn you look at is a gold mine for anyone who wants to blackmail or pressure you into silence.
If you don't want a record of you looking at it, then don't look at it. All you need to do is refrain from pornography consumption. It really is that easy and simple.
> The biggest problem with this is how we would define “porn”.
I wholeheartedly agree. And that's a problem we should lean into and solve. Its difficulty doesn't make it less worthy of solving.
> The horrors and dangers of porn are squarely a domestic and family issue.
Therein lies the problem however. Every systemic issue in our world begins in a family or domestic situation of some form. While I am well aware and also concerned about the implications of government overreach here, I don’t think we can throw up our hands and say, “Meh”. At a minimum it can begin with education. We can teach people about the destructive nature of porn (and social media).
The fact that this impacts every family, domestic situation, and therefore indirectly or directly touches every single life in our society actually kinda makes it a great candidate for government oversight.
I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.
There is a societal problem that is beyond just parenting. The peer pressure of feeling left out and ostracized because they are the only ones not on the socials is something a teen is definitely going to rebel against their parents over. It's part of being a teen. I'm guessing the other parents would even put pressure on the parents denying the social access.
To me, the only way out of this is exchanging one nightmare for another by giving the gov't the decision of allowing/denying access. Human nature is not a simple thing to regulate, since the desire for that regulation is itself part of human nature.
Is talking to other people online really so bad that we need the government to step in and tell us who we can and can't talk to? How quickly will that power expand to what we can and can't talk about?
I agree that neither solution is perfect, but exchanging an imperfect but undoubtedly free system of communication for one that is explicitly state-controlled censorship is an obvious step backwards.
"Thinking about the children" should also involve thinking about what kind of a society you want to build for them. A cage is not the answer, especially not with fascism creeping back into our politics.
I think you are willingly playing this down as "talking to people online" to make some point. However, it is beyond what one kid online says to another online. It is what predators say to those kids online. I don't just mean Chester and his panel van. I'm talking about anyone that is attempting to manipulate that kid regardless of the motive; they are all predators.
Social media has long since passed being just a means of communicating with each other, and you come across as very disingenuous for putting this out there.
> I think you are willingly playing this down as "talking to people online" to make some point. However, it is beyond what one kid online says to another online. It is what predators say to those kids online.
No, it's not about that either. It's the algorithmic nature of social media apps which rewire the brain's mechanisms in unnatural and unhealthy ways. Predators are a minor concern besides this one. See https://www.afterbabel.com/archive?sort=top
> I think you are willingly playing this down as "talking to people online" to make some point.
They won't engage on this topic in a fair and balanced manner. I've tried. They want unlimited ability to push agendas and incite strife and they just throw out keywords and thought-terminating cliches when criticised; diversity, marginalisation, fascism, they just gaslight and gaslight and gaslight.
I mean, don't stop trying, but you'll be very frustrated
I think people are being incredibly disingenuous when they imagine that the government won't abuse this power to censor and harm marginalized communities. Many states are trying to remove LGBT books from school libraries for being "pornographic" right now, for example. All it takes is some fancy interpretations of "safety" and "social media" for it to become a full internet blackout, for fear of "the liberals" trying to "trans" their kids.
I don't deny that kids can get into trouble and find shocking or dangerous things online. But kids can also get in trouble walking down the street. We should not close streets or require ID checks for walking down them. Parents should teach their kids how to be safe online, set up network blocks for particularly bad sites, and have some kind of oversight for what their kids are doing.
Maybe these bills should mandate that sites have the ability to create "kid" accounts whose history can be checked and access to certain features can be managed by an associated "parent" account. Give parents better tools for keeping their kids safe, don't just give the government control over all internet traffic.
> when they imagine that the government won't abuse this power
I've suggested no such thing. In fact, I described putting regulations in place as a "nightmare". Parenting alone will not work. Self regulation from the socials will not work. The entire situation is a nightmare of our own making. There is no simple solution. These tendencies of human nature were present long before social media. It was just fuel for the fire for the worst qualities.
It seems like you are in favor of something that requires coordination, but don't believe in coordination. Is there a different way you think this could be achieved?
I think GP means the coordination between parents, and I agree on that: if you can’t get a strong majority of parents to agree on keeping their children off of social media & smart phones, you as parents have the choice between two outcomes: enforcing isolation vs. letting social media slip through and just trying to delay as much as possible.
Almost all parents either don’t care or opt for the second option (which I also think is the better one), but the dynamics today are that between 10-12yo the pressure to get your kid a smartphone will mount and you have to give in at some point. Being able to wait till 16 would be much better IMO.
Gambling is outright banned for a majority of regions in the US, not just kids. I don't think they are equally bad, just different bad. Gambling is addictive, and it destroys people. Social media is addictive and socially toxic, on the whole it erodes the very fabric of a society.
>Social media is addictive and socially toxic, on the whole it erodes the very fabric of a society.
It's interesting how every generation seems to decry new forms of media as eroding the fabric of society. They said it about video games, television, music, movies, etc. I'm sure we're right this time, though.
… we’ve been right. Television, music, video games, movies, etc., have decimated social capital and communication in the western world. It is not uncommon to “hang out” with people who are your “friends” without ever interacting with them in any meaningful way because everyone is focused on, e.g., a television. That’s assuming everyone isn’t too busy playing video games to leave their houses.
Whether you agree or not that they’re “eroding the very fabric of a society” (I would argue they are), it should be acknowledged that almost all the downsides predicted have come to pass, and life has gone on not because these things didn’t happen, but in spite of them having happened.
People are interacting through online games quite often though. You can't throw them all in one basket. You can make friends / longer term connections that way (I did) or keep in touch with people you don't live near to.
That's not community, though. You're not inhabiting the same space, sharing the same problems, and trying to work them out together. You're not helping each other out if someone falls into trouble.
It's up to you what you build that way. You can build a real community, you can build a shallow friend group, the medium doesn't have to limit it. My boss met his wife in a MUD.
You can do all of those things sans inhabit the same space with people you meet online. I met my 2 girlfriends online and we support each other through everything even though we aren't in the same place most of the time. I'm part of a niche group on reddit that supports each other emotionally in a way I've struggled to find in physical spaces. I still meet up with people in physical space, but I absolutely get meaningful social connection from online spaces.
Television clearly did something pretty significant to society. Cable news makes the country more polarized; that we didn’t do anything about it doesn’t mean it wasn’t a problem. We just missed the window, and now the problem is baked into the status quo.
Video games are typically fiction, so the ability to pass propaganda through them is usually a little more limited. It isn’t impossible, just different.
Social media is a pretty bad news source for contentious issues. We should be pretty alarmed that people are getting their news there.
Not anymore. A 2018 Supreme Court decision opened the floodgates to legalized sports gambling, much of it online. The only states with existing bans are Hawaii and Utah, which combined have only 5 million residents.
This is just blatantly not true, anyone who has tried to gamble on sports can tell you that companies go to quite incredible lengths to make sure that nobody outside of the few jurisdictions where they're legal can gamble online. I live in Nebraska, and I have to go travel across to Iowa before I can do anything.
True, it's state by state, but a lot of states allow it. And even if you're not in the right state you can always go to sites from other countries (e.g. betonline.ag)
The ban we have on gambling seems weak. From trading card games to loot boxes to those arcade games that look to be skill based but are entirely up to chance, children are allowed to do them all. The rules feel very inconsistent, to the point that they appear arbitrary in nature.
Is there an alternative? Self-control - as we have now - brought us here. If the government shouldn't step in, then the only other option left (the only one I can see) is magic. And we have a bad record with magic.
Several governments have already effectively banned sites like Pornhub by creating regimes where people have to mail their ID to a central clearinghouse (which creates a huge chilling effect.) The article talks about “reasonable age verification measures” and so saying it’s unenforceable seems a little bit premature. Also, you can bet those measures won’t be in any way reasonable once the Florida legislature gets through with them.
In my opinion, these governments haven't implemented 'effective' bans (though maybe chilling, as you say) but primarily created awkward new grey markets for the personal data that these policies rely on for theatrics. Remember when China 'banned' youth IDs from playing online games past 10PM? I think a bunch of grandparents became gamers around the same time...
Another example is some Korean games requiring essentially a Korean ID to play. A few years ago there was a game my guild was hyped about and we played the Korean version a bit. You more or less bought the identity of some Korean guy via a service that was explicitly setup for this. Worked surprisingly well and was pretty streamlined.
Which is exactly what happens for markets that are desirable enough. We compare bans of things not enough people care about, to bans of things that people are willing to do crazy things for. They don't yield the same results.
At least from personal experience, when there was a period where my ISP in the UK started requiring ID verification for porn, I literally ceased to watch it.
Making something difficult to do actually works to _curb_ behavior.
You're proving the argument that the parent set forth. Anyone who wants to visit Pornhub can just visit one of the many sites that isn't abiding by the new law. However, that's not due to a lack of legislation, but rather a lack of enforcement, or, perhaps, enforceability. If laws always worked I'd be for more of them. My argument is not that we should never make laws because it's futile, but rather that some laws are more futile than others, and having laws go unenforced weakens government, and enforcing them inequitably is unjust.
Also social policy enforcement is a generational thing. The UK is only just getting toward outright banning cigarettes by making it illegal for anyone born after X date from ever buying them. Eventually you have a whole generation that isn't exposed to smoking and on the whole thinks the habit is disgusting, which it is.
Except that some people born after that date will still acquire them, get addicted, and then what? Prosecute them like drug possession?
It's infantilizing and dumb. Grown adults should be allowed to smoke tobacco if they so wish, and smoking rates are already way down due to marketing and alternatives. No one needs to be prosecuted.
You don't need to prosecute any buyers at all though. All you need to do is make it illegal to sell in shops, and illegal to import. There will be a black market, sure, but how many people are going to go through the trouble and expense to source black market tobacco? Not that many. And everyone benefits because universal healthcare means everyone shares the cost of the health effects that are avoided.
I think it's hyperbolic to look at tobacco like other drugs. Tobacco is a lifestyle thing, it doesn't get you high, it's a cultural habit. There are only upsides to getting rid of the social demand for it.
If you think taking tobacco away from consumers is infantilizing, why yes, yes it is. We are dealing with children's futures. Adults get to continue smoking; children are less likely to even want to smoke as social acceptance goes down, and with that there is less and less desire to smoke. Nicotine doesn't do much other than get you addicted; no one is chasing a pronounced high with it. People start smoking because it's perceived as cool.
I can't imagine an adult wanting to start smoking, most adults get addicted in their teens.
I think you can have an import ban, and a black market, and still see significant gains in eroding the demand. I do not think people should be prosecuted for possession; the UK will probably make some bad decisions there, but that doesn't mean the overall policy is bad.
My main issue is that the only effective way to ban access to a website is to also ban VPNs and any sort of network tunneling. A great firewall would have to be constructed which I am very much against. Even China’s firewall is surpassable and it is questionable how much it is worth operating given the massive costs which would be incurred.
I think the government should invest in giving parents the tools to control their child’s online access. Tools such as DNS blocklists, open source traffic filtering software which parents could set up in their home, etc.
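For example, a minimal sketch of the DNS-blocklist idea, assuming a home router or Raspberry Pi running dnsmasq (the blocked domains are just placeholders):

    # /etc/dnsmasq.d/kid-blocklist.conf
    # Null-route these platforms (and their subdomains) for every device
    # on the home network that uses this box as its DNS server.
    address=/tiktok.com/0.0.0.0
    address=/instagram.com/0.0.0.0

Of course this is trivially bypassed by switching DNS servers or using a VPN, which loops back to the enforcement problem above.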
I used to want no govt intrusion for this. Then I understood how there are teams of PhDs tweaking feeds to maximize addiction at each social network. I think there could even be limits, or some sort of tax on the general population.
It would be less imposing on the populace if you just demanded a government hit squad kill those PhDs instead of demanding everyone submit to some onerous government crapware to access the internet.
There are plenty of middle-ground approaches - one possible one: the app default would be restricted time use. To get unrestricted use, you use a driver's license to authenticate your birthdate only with the OS. The mobile OS only confirms with the app that the user is above the age requirement (it does not share the birthdate). The app can only query whether the user is above 18 years and nothing else. Easy peasy.
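A minimal sketch of what the app-side check could look like, assuming a hypothetical OS attestation API; every name here is invented for illustration, since no current mobile OS exposes exactly this:

    # Hypothetical: the OS holds the license-verified birthdate; the app
    # can only ask a yes/no question and never sees the date itself.
    class FakeOsIdentity:
        """Stand-in for the OS attestation service (illustration only)."""
        def __init__(self, age: int):
            self._age = age

        def is_over(self, years: int) -> bool:
            return self._age >= years  # the app never reads _age

    def request_unrestricted_mode(os_identity) -> bool:
        # True lifts the default time restrictions; False keeps them on.
        return os_identity.is_over(years=18)

    print(request_unrestricted_mode(FakeOsIdentity(age=17)))  # False

The privacy property is that a single boolean is the only thing crossing the OS/app boundary.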
I'm in favor of kids not using porn, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.
We don't even have to speculate; isn't it already the case for <13yo? Or is that just Europe? Anyway - yeah, of course they're still on it. Expect less compliance and harder enforcement the older they are, not more/easier.
The only protection in the US is technically against “collection of personal information” via COPPA[0], which you can argue would kneecap social media. Any parent can provide consent for their child, however. Children themselves can also just click the button that says they are over 13 if it gets them what they want.
When I was first getting online, the expectation was that you at least had to be bright enough to lie about your age. Now I have to occasionally prune my timeline after it fills up with "literally a minor." Even an annoyance tax might have some positive effect. Scare the pastel death-threats back into their hole...
Social media's existence is predicated on their algorithms being good at profiling you. Facebook's already got some level of ID verification for names, where they'll occasionally require people to submit IDs. No reason that similar couldn't be applied to age if society agreed it was worthwhile.
I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required.
People said the same thing about age restrictions for smoking, alcohol, movies, and on and on and on.
It's not some unsolvable new problem just because it's the ad-tech industry.
Your comment begs the question of whether or not those age restrictions are a "solved problem." In the US, government age restrictions on movies DO NOT exist for the exact reason the government cannot impose age restrictions on social media: the First Amendment flatly forbids it.
Growing up in the 80s, no one I knew avoided cigarettes because of regulations. Cigarettes were easy to get your hands on as a kid, even though it was illegal to sell to children. We avoided cigarettes because we had a general understanding of the health risks, because we knew our parents would beat our asses if they caught us smoking, and because smoking makes your clothes smell like shit.
A massive, decades-long decline in the habit? Stemming from ad restrictions, warning labels, media campaigns, taxation, legal action by states/Feds, etc.
I agree it is different, but the jury is out on whether it is better. Banning "social media" is likely to push users to a "lite" version of it. I'm not convinced that will be better.
They are consuming nicotine, and tobacco companies are invested in / are the companies producing products in that space. It seems functionally to be the same.
Only if you ignore... a lot. Cancer rates, smoking sections in restaurants, the smell, the yellow grime and used butts sprinkled everywhere, the impact on asthmatics... Smoking a cigarette gets you a lot more than just the nicotine.
A smoker moving to vaping is an enormous benefit to health and society.
That sounds like you are being disingenuous. Smoking sections haven't been a thing in the US for a long time (I went to the last one I could find around 2009). Waste from single use vapes is also a huge problem. Similarly there are health effects specific to vaping, time will tell if cancer is among them.
> It seems like vaping is close enough to smoking to say that it is functionally close enough to be equated.
Bullshit. Both are nicotine delivery methods. One is far better for both individual and societal reasons. Water and whiskey are both wet, but that doesn't make them the same.
> But you have raised my curiosity about your relationship with vaping. Do you work in the industry?
No, nor do I vape/smoke. I'm just old enough to remember how shitty it was to have smokers everywhere, in a way that isn't the case for vapers... and I've seen the multi-decade decline in lung cancer incidence stats.
Nicotine by itself is harmless besides the addictiveness. A nicotine addiction is not going to drastically affect your mental state or cause socially disruptive behavior like domestic violence or armed robbery so it’s really nothing to be concerned about.
No, they are not remotely the same. Nicotine isn’t really that harmful, but combustion byproducts very much are. Also the effects on bystanders are orders of magnitude better.
Most of the harm from smoking comes from the smoke, not the nicotine or associated addiction.
Don't get me wrong, I'd love to see vaping turn into a prescription-only smoking cessation aid, but it's not smoking. I'm 100% happy with even a one-for-one replacement of smoking for vaping, even in kids, given the dramatically lower risk of resulting health problems.
Drinking, smoking and drugs don't depend on a central point of control. Social media companies of any significance can be counted on two hands, and are accountable to corporate boards.
Absolutely. Almost no kids smoke cigarettes (and vaping non-flavored varieties has almost no risk associated with it), and drunk driving is a shadow of what it used to be. Getting alcohol for someone under 21 is not child’s play either.
Eliminating all laws banning smoking and drinking up until a certain age would not make one single person safer. Are you in favor of parents moderating access to drugs and alcohol for their kids at any age?
Are you a parent? It's not as easy as saying "no social media" to your kids. In this day and age, it's basically equivalent to saying "you can't have friends". Online is where kids meet, hang out, converse etc. I'd LOVE to go back to the days before phones and social media, where kids played with neighbours and ride their bikes to their friends house, but that's slowly slipping away.
We try pretty hard to get our kids to play with their friends in person (we invite them or give rides to playdates) but what do they do when they meet up? Sit on the couch with their tablets and play virtually in Roblox :-)
When my friends would meet up in the 80's-90's, quite a bit of Nintendo happened. Is it really that different? The proportion of video games should eventually drop (not to zero), in favor of (if you're lucky) talking and whatever music bothers you the most.
> It's not as easy as saying "no social media" to your kids.
I didn't. I taught them about risks so they'll have tools. But social media isn't where life-changing harm came from.
The harm came from growing up in a world with 24/7 adult presence and zero places to roam. It's a development-ruining hellscape that 10k years of children didn't have before.
I'm almost invariably anti-regulation, but in this case - absolutely!
There's extensive evidence that social media is exceptionally harmful to people, especially children. And in this case there's also a major network effect. People want to be on social media because their friends are. It's like peer pressure, but bumped up by orders of magnitude, and in a completely socially acceptable way. When it's illegal that pressure will still be there because of course plenty of kids will still use it, but it'll be greatly mitigated and more like normal vices such as alcohol/smoking/drugs/etc. It'll also shove usage out of sight which will again help to reduce the peer pressure effects.
This will also motivate the creation/spread/usage of means of organizing/chatting/etc outside of social media. This just seems like a massive win-win scenario where basically nothing of value is being lost.
Do you also think that children should be allowed to buy cigarettes? I'll be honest, I am not certain that social media is any less deleterious than tobacco.
I'm a pretty pro-market guy, but there are times when the interests of the market are orthogonal to the interests of mankind.
I can pretty confidently say that the half a million deaths a year attributable to smoking is a little more deleterious than getting bullied online and the suicides which follow. Many orders of magnitude more.
> "Do you also think that children should be allowed to buy cigarettes? I'll be honest, I am not certain that social media is any less deleterious than tobacco."
To which I pointed out that cigarettes kill far more people than social media. And your response was somehow that I'm implying that a less-bad thing is good? Are you sure you're following the conversation? It's really not clear that you're addressing anything I said, and it's unclear what your point is.
I was following the conversation. You seemed to be implying that social media restriction was unnecessary.
If you weren't, then I think you were being a bit pedantic, because I don't think the person you were replying to was literally suggesting that millions of smoking deaths were equivalently bad to social media. Context matters, and the context of the conversation is legal restriction of social media.
Either way, feel free to ignore my comment if it bothers you.
Yes, parents should not have unlimited rights to determine what is good/bad for their children.
Social media is powerful, addictive, and dangerous. Pretty much anywhere on this earth, parents will end up in jail or lose custody of their kids if they give them harmful substances like drugs. Social media should be regulated like how drugs, alcohol, and cigarettes are regulated.
Missouri law allows minors to consume alcohol if purchased by a parent or legal guardian and consumed on their private property.
edit: Apparently Missouri is not the only state. I had trouble finding a definitive list though. There are also other exceptions such as wine during religious service.
While there are exceptions, and in general exceptions seem pretty common, the laws still require businesses to officially get approval, and they give parents power to enforce rules that would otherwise be hard to enforce. Even with the exceptions that allow children to legally drink, I would be surprised if those led to more kids drinking than alcohol obtained illegally, which puts the question back on how well the law works (obviously not perfectly, but there is a large gap between perfect and so poor that it is useless purely from an efficiency perspective).
Your position is that decisions about youth access to social media should be fully taken from the parents and made by govs instead. Penalties can be assumed from your examples.
This is the reality that you want imposed on parents and children - yes?
There's actually a really simple and elegant penalty - forfeiture of the device used to access the social media. With all seized devices to then be wiped and donated to low income school districts/families. This gets more complex when using something like a school computer, but I think it's a pretty nice solution outside of that. That's going to be a tremendous deterrent, yet also not particularly draconian.
This is already the reality for alcohol and plenty of other things. Maybe not everywhere. Reality check: parents giving unrestricted access to these things are usually perceived as irresponsible.
I can't agree. This teaches kids that the government is the answer to everything. This should 100% be the responsibility and decision of parents. Kids are different, and these one-size-fits-all authoritarian tactics that have become a signature of the current GOP Floridian government are just the beginning of the totalitarian christofascist laws that they want to implement. Before you ask, I am a parent, and my kid's devices all have this crap blocked and likely will remain that way until he's at least 15, give or take a year depending on what I determine when he gets to that age. He knows that there are severe ramifications if he tries to work around my decision, and will lose many, many privileges if such a thing happens.
The libertarian equivalent is the model we are currently running, and it hasn't worked. It's psychologically addictive and harmful, and it has parallels with smoking in a way, even if the evidence for its harm is less robust.
Do you think simply labelling it as bad is sufficient? Parents have no idea.
I also believe that this is a Big Deal™ that we need to take seriously as a nation. I have yet to see any HN commentator offer a robust pro-social media argument that carries any weight in my opinion. The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of effort on the parents' part.
As an added bonus, this latest legislation removes the issue of "everyone is doing it". I mean, sure, a lot still will be—but then it's illegal and you get to have an entirely separate conversation with your kid. :)
> The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of effort on the parents' part.
This is so incorrect it makes the flat earth theory look good.
I have two teens and have yet to see the negative effects of social media for them or any of their peers. Not to say it doesn't exist, but I sincerely doubt it's as awful as the doomsayers think. My personal observation from being raised in the 80s is that kids were far more awful to each other then than now.
Every generation has this freakout about something or another. I expect modern kids will end up better at handling social media than their parents are.
I am an adult and I wish someone would take social media away from me. Honestly, I think social media has done more harm than good and I wish it would just cease to exist.
However, especially in Florida, social media may be the only way for some teens to escape political and religious lunacy and I fear for them. I think it's not wise to applaud them taking away means of communication to the "outside", in the context of legislation trends and events there.
Addiction mechanics are a real thing and sooner or later social media will pop into your life again. For me it's Reddit, not Instagram or Twitter. Luckily, HN makes me always ragequit at some point, cause y'all are a bit special.
IMHO the general idea isn't terrible, but the implementation is subpar. But hey, that's why it's good that it's not yet US-wide, as it means there is time to make improvements.
- I'm not the biggest fan of a hard cutoff.
- addictive dark patterns which cause compulsive use should be generally banned or age-restricted no matter where they are used. Honestly, just ban most dark patterns; they are intentional, always-malicious consumer deception, not that far away from outright fraud. (And age-restrict some less dark but still problematic patterns.)
- I think this likely will make all MMORPGs (and Roblox, lol) and similar 16+, and I'm quite split about that. I have seen people between 14-18 get addicted to them and mess up their education path. But I have also seen cases of people who might not be alive today if they hadn't found a refuge and companions in some MMORPG.
- I guess if it can make platforms like YT, Facebook, Instagram, Snapchat etc. implement a "teen" mode with fewer dark patterns and less tracking, it would be good.
- The balance between proving your age and making things available while keeping privacy is VERY tricky (especially in the US), and companies, the government, and spy agencies will try to abuse the new requirements for age verification to spy more reliably on everyone 16+.
- It's interesting how this affects messengers. Many have fewer dark patterns, and some do not track users, or can easily decide not to track children. They aren't social networks per se, but most have some social-network-like features. Even those which do not try to create compulsive use might still end up with it as long as there is "live" chatting.
>addictive dark patterns which cause compulsive use should be generally banned or age-restricted no matter where they are used. Honestly, just ban most dark patterns; they are intentional, always-malicious consumer deception, not that far away from outright fraud.
How would you write a law that accomplishes your goal?
The usage of user interface design patterns which take advantage of [..] to cause compulsive use is not allowed. In the context of this law, user interface refers to the mechanism by which a user interacts with software; this includes any kind of user interface, no matter which medium it uses for presenting information and allowing interactions, and no matter which form it is presented in within the software.
Where [..] is a rough definition of dark patterns, of which multiple exist in CS, and which you can base on factors like 1) deceptiveness, 2) intent to manipulate the user's behavior, 3) how it interacts with various human feedback systems, etc.
The thing about laws that people on HN often tend to forget or complain about is that they intentionally do not need to be perfectly precise scientific definitions, predicate logic, or anything like that. This means you can reasonably describe them by their likely effect and whether it seems reasonable that the effect could have been intended (without bothering to actually determine intent), without needing to go into _any_ technical details beyond making clear that indeed any form of user interface is covered.
I am against government regulation of websites and classification of websites. If we allow government to do this, it will be politicized at some point.
We need to find a better way, for example (just a quick idea), social websites run and protected on a school-by-school basis. This way, they can be regulated and controlled.
In other words, government should regulate what they already have control over, not impose new control measures over things they don't.
> I might be the only one here in favor of this, and wanting to see a federal rollout.
I'm not American. I think it's perfectly reasonable to ban kids from the internet just by applying the logic used for film classification. Even just the thumbnails for (some) YouTube content can go beyond what I'd expect for a "suitable for all audiences" advert.
This isn't an agreement with the specific logic used for film classification: I also find it really weird how 20th century media classification treated murder as a perfectly acceptable subject in kids' shows while the mere existence of functioning nipples was treated as a sign of the end times (non-functional nipples, i.e. those on men, are apparently fine).
Also, I find it hilariously ironic that Florida also passed the "Stop Social Media Censorship Act". No self-awareness at all.
Film classification is also dumb. In Australia, Margaret Pomeranz used to run banned-film viewing sessions that ended up with cops wrestling them for DVDs.
The first is saying that no matter what your rules are, the internet may at any time show you a clip from the highest rated category and should be treated as such.
The internet may also at any time show you outright banned content, but that's often treated as a separate battle no matter if that's Tiananmen Square (or "content promoting terrorism", although now I'm wondering if the Chinese Communist Party think the former is a subset of the latter?) or sexual acts you're not allowed to show in film (varies by jurisdiction).
You and I both know they're doing this for two reasons. To put a heavy burden that will be almost impossible to enforce on tech companies so they can easily punish them for political reasons and to stop teens from learning early how shitty the GOP is.
- Make this an ISP level thing? Somehow? They already know the makeup of a household. If they know a house has kids, something something ToS "You as the parents are liable..." Then maybe repeat those scary RIAA letters but "for good" when someone in that household hits a known adult IP?
- Maybe browsers send an "I'm an adult" flag similar to "Do Not Track," and to turn it on, the user has to enter a not-to-be-shared-with-kids PIN? If the browser and OS can coordinate, OSes would be able to tell the browser if the user is an adult and skip the PIN entry. (A rough sketch of the server side follows this list.)
- Force kids to use a list of Congress-approved devices that gate access to the wider Internet? YouTube Kids but for everything. Yes, hacker kids will be able to get by, but this being Hacker News, they'd deserve the fruits of that particular labor.
Just spitballing. Anything obvious I'm missing?
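For the browser-flag bullet, here's the promised rough sketch of the server side, assuming a hypothetical "X-Adult: 1" request header that the browser sets only after the PIN is entered (the header name is invented for illustration):

    # Minimal Flask sketch of a site honoring a hypothetical adult flag.
    from flask import Flask, abort, request

    app = Flask(__name__)

    @app.route("/adult-content")
    def adult_content():
        if request.headers.get("X-Adult") != "1":
            abort(403)  # browser did not assert the adult flag
        return "age-gated content"

    if __name__ == "__main__":
        app.run()

Like Do Not Track, it only works if sites honor it and kids can't flip the flag themselves, which is where the OS coordination would have to come in.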
PS- I am neither for nor against the Florida-type legislation as of this comment.
I find myself agreeing with this.
Me 15 years ago would have raged at this. I have kids now and they are pressured to join all sorts of social media platforms. I still don't allow them to have it but I know they take a slight social hit for it.
There is zero positive to giving kids the ability to access social media sites designed to be addictive when they don't have the mental faculties to determine real from not real. Many adults seem to suffer from this as well. Plus, kids don't understand that the internet is forever; there's really no need for an adult looking for a job or running for office to be crippled by a questionable post they made as an edgy teen.
I'm against a lot of government regulation but in this case I am even more against feeding developing kids to an algorithm
Just remove the temptation and pressure altogether.
Same here. With the way the internet is nowadays, it's probably best to keep kids off the internet until they are older. One just has to look at what's on places like Youtube 'Kids' to see all the stuff that is not kid friendly and probably detrimental to their mental health.
One thing I've found interesting with social media and children is that almost every parent I know recognizes the impact social media has on their children, but they willfully ignore it or feel powerless to avoid it. I hear stuff like, "It's impossible for kids to not be on social media. It's required these days.", and "The social consequences will be worse for them if they're not a social media."
The answer to our problems is not less freedom. It should never be less freedom. I'm opposed to any law that restricts freedom of information, no matter the age. I think we need to do better at educating kids on online behaviors, and we should hold social media companies accountable for addictive features, but what we absolutely shouldn't do is blame little Jenny and take away her access to groups and social interaction online.
Yes. Little Jenny should also be free to purchase and drink a bottle of whiskey, but only if her parents are sufficiently negligent.
I would've backed your argument up until a few years ago, but the science is coming down pretty hard now showing that social media use is absolutely detrimental to still-developing minds.
I’m not refuting that. Only I don’t think we should be reacting with laws to restrict access. Who’s to say “what” a social media platform is? Does this mean discord as well? Steam? Roblox? Where do we draw the line on what is social media and what is social?
I'm in favor of this if the only enforcement actions are against social media companies for being predatory, and not against families for breaking the law and allowing their kids on social media. And it's useful for indicating that social media on the balance is not good for kids.
That enforcement will be imperfect is a foregone conclusion, and when it comes to things like this it's somewhat expected. The same can be said for pornography, drugs and alcohol and tobacco (remember Joe Camel?), and anything else that would fall under blue laws.
The goal of this is to bring attention to the fact that it's a problem and should be seen as undesirable, like pornography or Joe Camel. The cancellation of Joe didn't prevent kids from getting cigarettes, but it did draw attention to the situation, and there has been a marked decline in youth smoking since the late 90s when the mascot was removed. It's correlative, for sure, but the outcomes are undeniable. The same happened with the DARE program and Schedule I drugs (except for marijuana, iirc).
This discussion can be seen every time when the EU decides on some regulation against tech industry. A lot of people will jump that it won't be enforced, then when we see the first fines those people will jump that it won't move a needle, then when the tech giants do change a bit their course then... well the tech bros will always find a reason to jump against doing anything to curb tech.
It’s dumb policy, because Florida GOP. The smart move is to target advertising for kids. If you attack the ability to advertise to the underage audience using mom’s iPad, social media will self police.
Yes. Rather than mandating verification, can we just mandate that there be a registry, or that websites be legally required to include a particular HTTP header, combined with opt-in infrastructure for parents to use?
e.g. You could set up a restricted account on a device with a user birthdate specified. Any requests for websites that return an AGE_REQUIRED header that don’t validate would be rejected.
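A device-side filter for that scheme could be quite small. A sketch, where the AGE_REQUIRED header and its minimum-age format come from the comment above rather than any existing standard:

    # Sketch: reject responses whose AGE_REQUIRED header exceeds the
    # birthdate configured on the restricted account.
    from datetime import date

    import requests

    ACCOUNT_BIRTHDATE = date(2012, 5, 1)  # set by the parent

    def age_today(birthdate: date) -> int:
        t = date.today()
        return t.year - birthdate.year - (
            (t.month, t.day) < (birthdate.month, birthdate.day))

    def fetch_if_allowed(url: str) -> requests.Response:
        resp = requests.get(url)
        required = resp.headers.get("AGE_REQUIRED")
        if required is not None and age_today(ACCOUNT_BIRTHDATE) < int(required):
            raise PermissionError(f"{url} requires age {required}+")
        return resp

The nice property is that the site never learns the user's age; all enforcement happens on the device the parent controls.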
we have laws against drinking and smoking / vaping… for kids under age of 16 and that has been working so GREAT that we should get more of these laws in place (federal preferably) so that our kids can be even healthier adults. moar laws please - we need as much federal government in our families and parenting as we can get; I’d suggest 1 federal agent be assigned to each child born to ensure healthy adulthood
You can be in favor but in the US it is unconstitutional for a gov to broadly restrict speech. It's why each of these age verification + social media laws eventually get tossed. Legislators know this (or are too dumb to) but it's not their own dollars that are getting burned during this vote-baiting performance.
> is an american child granted the right of free speech? or do you get that right at a certain age?
This isn't the right way to frame the question. The Constitution prohibits government actors from infringing upon the right to free speech[0], including that of children; meaning e.g. an attempt to censor a child's treatise on some subject would be as illegal as doing the same to an adult. It's not that you get a "free speech license" when you turn 18. The restrictions on government power apply whether the person being infringed upon is a minor or an adult.
[0]: This isn't an absolute, unqualified right. A child breaking into a profanity-laden tirade in the middle of class at a public (meaning taxpayer-funded and an extension of the government) school may believe they are exercising their First Amendment rights, but the school may still legally send that child home, for example, on the grounds that it disrupts the schooling of other children.
Only on broadcast TV, and the decision is fundamentally reliant on the nature that RF spectrum is a finite resource to be able to justify the restriction.
SCOTUS has routinely struck down prohibitions against the same things in other media, including explicitly the internet.
No, I definitely agree. I'm a little skeptical of how they'll enforce this, but ultimately I think fewer kids on the internet and social media will be a positive, and I agree that it doesn't seem like parents have managed to figure out how to address this.
I don’t think so. There’s nothing about social media that makes me feel like kids need it. Hell, if we could ban it for adults that would be an unmitigated good.
16 seems too young. Why not tie it to drinking age. There are way too many people who have gotten better at online manipulation in the past few decades.
I guess for me it depends on what the law considers "social media".
Is something like the bulletin boards we used to have around the late 90s/early 2000s social media? What about chat rooms? Local social web sites for the school or your city? I think a lot of these things can even be beneficial, if I think about my own experiences as a somewhat introverted teenager.
And what about things like Netflix, Youtube, Podcasts? They can be just as harmful as TikTok and Instagram. Especially on Youtube you have a lot of similar content.
I've found accounts that claim to be official accounts of children's shows - maybe they even are - and which are full of nonsensical videos, just randomly cut together action scenes of multiple episodes. It's like crack for children. Of course YouTube doesn't do anything, they want you to pay for YouTube kids. And the rights holders want you to buy the content, so they leave the poor quality stuff up.
The thing is, exploitative content is always going to be created as long as there are incentives to do so. You can ban stuff, but it's whack-a-mole, and you are going to kill a lot of interesting stuff as collateral damage. The alternative is much harder, change the incentives so we can keep our cool technology and people are not awarded for making harmful stuff with it. But that would require economic and political changes, and people don't like to think about it.
> I guess for me it depends on what the law considers "social media".
It's a bill written by the Florida House of Representatives, so there's a definition there. Mind you, it's the Florida House, which has put out some extremely bad laws in its current session -- from "Parental Rights in Education" to the Disney speech retaliation. But given that this is a less ostensibly partisan issue, there are reasons for hope.
The definition seems narrowly tailored. I think that part (d)1d is a questionable choice, since most social media platforms will probably argue that they are not really "designed" to be addictive (for various definitions of "designed" and "addictive"). It appears that specific exemptions were made for YouTube, Craigslist and LinkedIn (without mentioning those companies by name), and algorithmic content selection is part of the definition. This is one of the better versions of this law I could imagine being written by a state legislature, though it isn't without its faults. It's nice to see my home state in the news for something good for once.
I agree that YouTube is a particularly difficult case. But part of the problem comes from using it as a digital pacifier, rather than peer pressure. There's no particular reason why the technology market should produce a free stream of child-appropriate videos. Ad-supported media has its ups and downs, but when the targets of those ads are young children, it's much harder to defend. And parents have more control over the behavior of their 4-year-olds than their 14-year-olds.
Here's the definition:
>(d) "Social media platform:"
>1. Means an online forum, website, or application offered39
by an entity that does all of the following:
>a. Allows the social media platform to track the activity
of the account holder.
>b. Allows an account holder to upload content or view the
content or activity of other account holders.
>c. Allows an account holder to interact with or track
other account holders.
>d. Utilizes addictive, harmful, or deceptive design
features, or any other feature that is designed to cause an
account holder to have an excessive or compulsive need to use or
engage with the social media platform.
>e. Allows the utilization of information derived from the
social media platform's tracking of the activity of an account
holder to control or target at least part of the content offered
to the account holder.
>2. Does not include an online service, website, or
application where the predominant or exclusive function is:
>a. Electronic mail.
>b. Direct messaging consisting of text, photos, or videos
that are sent between devices by electronic means whe re messages
are shared between the sender and the recipient only, visible to
the sender and the recipient, and are not posted publicly.
>c. A streaming service that provides only licensed media
in a continuous flow from the service, website, or application
to the end user and does not obtain a license to the media from
a user or account holder by agreement to its terms of service.
>d. News, sports, entertainment, or other content that is
preselected by the provider and not user generated, and any
chat, comment, or interactive functionality that is provided
incidental to, directly related to, or dependent upon provision
of the content.
>e. Online shopping or e-commerce, if the interaction with
other users or account holders is generally limited to the
ability to upload a post and comment on reviews or display lists
or collections of goods for sale or wish lists, or other
functions that are focused on online shopping or e-commerce rather than interaction between users or account holders.
> f. Interactive gaming, virtual gaming, or an online
service, that allows the creation and uploading of content for
the purpose of interactive gaming, edutainment, or associated
entertainment, and the communication related to that content.
> g. Photo editing that has an associated photo hosting
service, if the interaction with other users or account holders
is generally limited to liking or commenting.
> h. A professional creative network for showcasing and
discovering artistic content, if the content is required to be
non-pornographic.
> i. Single-purpose community groups for public safety if
the interaction with other users or account holders is generally
limited to that single purpose and the community group has
guidelines or policies against illegal content.
> j. To provide career development opportunities, including
professional networking, job skills, learning certifications,
and job posting and application services.
> k. Business to business software.
> l. A teleconferencing or videoconferencing service that
allows reception and transmission of audio and video signals for
real time communication.
> m. Shared document collaboration.
> n. Cloud computing services, which may include cloud o. To provide access to or interacting with data
visualization platforms, libraries, or hubs.
> p. To permit comments on a digital news website, if the
news content is posted only by the provider of the digital news
website.
> q. To provide or obtain technical support for a platform,
product, or service.
> r. Academic, scholarly, or genealogical research where the
majority of the content that is posted or created is posted or
created by the provider of the online service, website, or
application and the ability to chat, comment, or interact with
other users is directly related to the provider's content.
> s. A classified ad service that only permits the sale of
goods and prohibits the solicitation of personal services or
that is used by and under the direction of an educational
entity, including:
The fact that there are well over a dozen exceptions carved out strongly suggests that the definition is anything but narrowly tailored, and that the authors of the bill preferred to add in exceptions for everyone who objected rather than rethinking their broad definitions.
1a-c will be trivially satisfied by anything that "has user accounts" and "allows users to comment". 1e is clearly meant to cover "algorithmic" recommendations, but it's worded so broadly that a feature that includes "threads you've commented on" would satisfy this prong. 1d is problematic; it can be interpreted so narrowly that nothing applies, or so broadly that everything applies. IANAL, but I think you'd have a decent shot at challenging this prong as unconstitutionally vague.
Discounting 1d, this means that virtually every website in existence qualifies as a social media site, at least before you start applying exceptions. Not just Facebook or Twitter, but things like Twitch, Discord, Paradox web forums, Usenet, an MMO game, even news sites and Wikipedia are going to qualify as social media platforms.
Actually, given that it's not covered by any of the exceptions, Wikipedia is a social media platform according to Florida, and I guess would therefore be illegal for kids to use. Even more hilariously, Blackboard (the software I had to use in school for all the online stuff at school) qualifies as a social media platform that would be illegal for kids to use.
>Discounting 1d, this means that virtually every website in existence qualifies as a social media site
Most websites would not satisfy 1e. Hacker News, for example. Traditional forums do not satisfy 1e.
>1e is clearly meant to cover "algorithmic" recommendations, but it's worded so broadly that a feature that includes "threads you've commented on" would satisfy this prong.
There could be some haggling over this, but I don't think that reading it in the least reasonable possible way is likely to fly in court. In particular, 1e stipulates "content offered". If "threads you've commented on" is content that the user has to request, e.g. by viewing a profile page or an inbox, that might not be considered "offering". It also says "control or target", but content with a simple bright-line definition like that is probably not controlled and certainly not targeted.
>The fact that there are well over a dozen exceptions carved out strongly suggests that the definition is anything but narrowly tailored
>at least before you start applying exceptions.
Yes, the definition is excessively broad if you ignore the majority of the text in the definition. This is a circular argument.
>Actually, given that it's not covered by any of the exceptions, Wikipedia is a social media platform
Exception 2m, shared document collaboration. But I don't think Wikipedia satisfies 1e either.
>Blackboard (the software I had to use in school for all the online stuff at school) qualifies as a social media platform
Probably qualifies under 2s or 2m. I'm not familiar enough with the platform to know if it satisfies 1e.
> Most websites would not satisfy 1e. Hacker News, for example. Traditional forums do not satisfy 1e.
Hacker News has a page that lets me see all of the replies from comments I've posted. Posting is clearly "activity of an account holder", and that means there is "at least part of the content" being "control[led]" by that activity.
> If "threads you've commented on" is content that the user has to request, e.g. by viewing a profile page or an inbox, that might not be considered "offering".
You're the one who's criticizing me for "least reasonable possible way", and you're trying to split hairs like this? (FWIW, an example that would qualify under your more restrictive definition is that the downvote button is not shown until you receive enough karma.)
But at the end of the day, it doesn't matter what you think, nor what I think, nor even what a court will think. What matters is how much it will constrain the government using this law to harass a website. And 1e isn't going to provide a barrier for that.
> Yes, the definition is excessively broad if you ignore the majority of the text in the definition. This is a circular argument.
No, it's not. The definition boils down to "everything on the internet is social media, except for these categories we've thought of" (or more likely, categories whose lobbyists pointed out that the definition included them and so were thrown into the list of exceptions). That the majority of the text of the definition is a list of exceptions doesn't make it a narrow definition; indeed, it just highlights that the original definition sans exceptions is overly broad.
> Probably qualifies under 2s or 2m
Definitely not 2s; it's not "a classified ad service". I'm not sure any classified ad services are "used by and under the direction of an educational entity", but that's what you have to be to qualify under 2s. (This makes me think someone did a bad job of splicing in the exception, and the second half of 2s is supposed to be 2t. Just goes to show you the level of quality being displayed in the drafting of this bill, I suppose.)
>Hacker News has a page that lets me see all of the replies from comments I've posted. Posting is clearly "activity of an account holder", and that means there is "at least part of the content" being "control[led]" by that activity.
It doesn't say "control" by the activity, it says control by the platform. Control is a little bit difficult to define, but one reasonable necessary condition is that if someone is in control of something, then someone else might do it differently. "Show me threads I've commented on" should produce the same result regardless of platform. "Show me a random page" should at least have the same probability distribution. But "my recommendations" is fully under the platform's control.
I grant that trying to interpret "offer" was a bit of a reach on my part. On the other hand, "control" can be interpreted in a pretty reasonable way to imply some form of meaningful choice.
>You're the one who's criticizing me
I'm not criticizing you, I'm criticizing your argument. It's important to keep a safe emotional distance in this kind of discussion.
>it doesn't matter [...] even what a court will think. What matters is how much it will constrain the government using this law to harass a website. And 1e isn't going to provide a barrier for that.
It certainly does matter what a court will think. Once a couple of precedents are set, it should be possible to identify what legal actions are meritless, and governments bringing frivolous suits may find themselves voted out of office. Unfairly targeted websites can bring a claim of malicious prosecution.
Now, if you don't trust the voters and the courts, that's a different issue, but it's going to affect every law, good or bad. That's just how government works.
>That the majority of the text of the definition is a list of exceptions doesn't make it a narrow definition; indeed, it just highlights that the original definition sans exceptions is overly broad.
If we assume that ad targeting and algorithmic content recommendation are profitable, the original definition clearly constrains the ability of sites to make money while offering user accounts for minors. Lobbyists probably don't want profits constrained for their employers, even if the targeting aspects aren't necessary.
But just because it targets features that can be added to every website, it isn't reasonable to say that it targets every website. Most of the Internet functions just fine without needing to create accounts, and when accounts are necessary, they're for buying stuff.
> f. Interactive gaming, virtual gaming, or an online service
This bill is already out of date. The new generation's social media are games like Roblox. And these are as addictive as the old social media.
Good luck with this whack-a-mole. A comprehensive bill would stop this at the source: kids owning smartphones. But addressing smartphones would upset too many parents and too much business, so it won't get done.
> Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.
Many platforms can argue that they're not engaging in this behavior. Do Mastodon and Lemmy count as addicting? They look like Twitter and Reddit on the surface, but they don't have a sorting algorithm that maximizes for engagement. So would they be included in the definition or not?
And if they don't, what's stopping big companies from claiming the same, since you can't actually see their source code for news feed sorting?
Thank you for sharing all the context; very interesting.
1d does stand out. I can guess what they were going for. I wonder if it could somehow be scoped to gamified or feed-based algorithmic sites. As a random example, Reddit's site definitely underwent such feed-based boosting in the last few years. I'm constantly getting suggested content that is some form of region-based outrage event, Person X doing horrible thing to person Y, etc., and it's nauseating. You click one such thing and it knows, and it just hits you again and again, and eventually you just have to get out. Which sucks, because every time I pick up a new interest, it's an easy place to find more people who are into it; but you can't get that without the BS.
This makes me think of how, as a child, every site asked “are you over 13”, and I diligently clicked “yes”. Some more clever sites asked for my birth year… forcing me to do the arduous work of taking the current year and subtracting 14.
Though I suppose the real plan here is to pass the law and then have the government selectively prosecute social media companies for having users under 16.
I remember my daughter at an astonishingly young age encountering an age-login screen, turning to me and asking "How would they be able to tell?", then merrily telling the system she was 18.
A small transaction would cover 99% of cases (e.g., pay a dollar that's immediately refunded). It would stop kids from casually creating accounts. The kids who can do this are already precocious enough to bypass any other verification steps you could come up with.
Maybe if they use a profile pic that you algorithmically determine is someone underage, you could do some additional checks. The smart ones would learn not to utilize a profile pic of themselves, which would ultimately be better.
I wonder if it'd really cover anything remotely close to 99% of cases. Even if 100% of parents knew about it and watched their credit cards closely enough to notice a $1 refunded transaction, it just takes something like one friend in high school with a credit card to sign up all their little brother's friends. It might even cause more credit cards to be shared around than kids it stops from getting to the site they want.
Then there'd be even more unintended consequences. Instead of sites you don't want kids creating accounts on, you'd have sites selling 5 minutes of ads to create an account for them, or increasingly shady stuff. Preventing that kind of site is the same problem as the original issue.
I understand the point you're trying to make with this (social media will definitely abuse the additional knowledge/opportunities they get by having compulsory credit card info), but chargebacks are actually a pretty effective incentive against this. Given that chargeback fees are ~$20-$100 per incident, you'd only need 5% or less of users calling out the social media site's false charge for that company to be netting a loss.
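For what it's worth, a quick back-of-envelope check on that ~5% figure (assuming the site improperly keeps $1 per user, and a chargeback claws the dollar back and adds the fee on top):

    # Break-even chargeback rate f: per user the site keeps $1 with
    # probability (1 - f) and pays the fee with probability f, so profit
    # per user is (1 - f) * 1 - f * fee, which hits zero at f = 1 / (1 + fee).
    for fee in (20, 100):
        f = 1 / (1 + fee)
        print(f"${fee} fee: losses start once {f:.1%} of users charge back")

That prints 4.8% for a $20 fee and 1.0% for a $100 fee, so "5% or less" checks out.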
I would relish the opportunity to cost Facebook $20 because they gave back a couple cents less than they should have.
> have the government selectively prosecute social media companies for having users under 16.
The US government is already legally mandated to prosecute companies known to harbor information, collected online, concerning minors less than 13 years old without consent from their parents or legal guardians.[1]
It's why YouTube blocks comments and doesn't personalize ads on videos published for kids, to pick a prominent example.
Laws are getting stricter. Around the world, there is increasing regulatory requirement for businesses to actively investigate user behavior (tracking!) to identify and exclude underage users who are concealing their age.
Yeah, similarly I had just gotten used to entering an elder sibling's birthday whenever asked. Adding these arbitrary age restrictions does nothing but make it increasingly obvious to kids how little our leaders and the other supporters of these restrictions actually care.
I don't think enforcement actually needs to be very tightly controlled. The barriers that are put in place like the one you describe are already enough to create a social milieu where parents and kids will think twice about these things and understand that there is a recognised harm potential.
There's nothing stopping you pouring your youngsters a glass of wine with dinner, but as a society we've made the dangers of alcohol and similar things so well understood that no parent wants to.
> as a society we've made the dangers of alcohol and similar things so well understood that no parent wants to.
Unfortunately, as a society, we have a much harder time grasping social media threat data. I suppose some of that is due to news orgs consistently (and bizarrely, and hugely) overstating the actual harms in the data.
I realized the error after submitting (added ten years to my age/subtracted ten years from my birth year), but I didn't think anyone would be confused by it, so I didn't bother correcting it.
> Some more clever sites asked for my birth year… forcing me to do the arduous work of taking the current year and subtracting 14.
But why? You could have just picked a year that worked, and stuck with it. Obviously, there's no way of telling which year works, but you could have brute-forced that just once.
I remember when I was 10 years old at a computer camp during the summer at a local college. They had me set up my first email account with Hotmail. They asked us all to lie about our age. I think even then there was a restriction that you had to be 13 years old.
But - that was over 25 years ago. The internet was a much different place.
Fast forward to today. Ours came home with a google account in the 5th grade I think. Something I explicitly did not want. They didn't send a permission slip home like they do for everything else either.
Another teacher around that time had the kids set up on GoodReads. They were under 13, and the TOS at the time was restricted to 13+. Mostly adults on that site.
> Fast forward to today. Ours came home with a google account in the 5th grade I think. Something I explicitly did not want. They didn't send a permission slip home like they do for everything else either.
Google Workspace accounts, especially those for education[0], have Web & App Activity, as well as Location History, automatically turned off. It's just a tool for schools to get free/cheap email, storage, and classroom tools. For your child under 13 to be able to use it compliant with COPPA[1], your school must have either used some level of blanket consent, or the school didn't bother to actually get the parental consent Google requires.
I remember joining ebay (well, auctionweb - aw.com/ebay, IIRC) and it not even being an issue that I was around 14, we mostly trusted each other, and just mailed money orders around. A different time.
I feel like I can almost guarantee that this bill has nothing to do with protecting children and has more to do with brainwashing children and restricting their access to opposing viewpoints, especially given this is Florida.
That being said, I am not strictly opposed to a bill like this. But 16 is way too old. Somewhere in the 10 to 13 range would likely be fine, since most platforms don't allow under-13s anyway. But then, if they all block under-13s, what is the point of the bill?
Restricting social media use is tantamount to brainwashing? I don't see the connection.
As for restricting access to "opposing" (opposition to what?) viewpoints, what children can be exposed to has long been restricted.
But since there isn't a syllabus for what children will be presented on social media, I don't see the viewpoint restriction angle either.
In fact, that position is illogical to the point that it raises the question of whether or not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of. Under the radar of supervision.
Going only on my experience with social media, a valid and more plausible reason for this restriction would be that social media seeks to optimize users' feeds for engagement, in a manner that "hacks" psychology and makes it difficult for even adults to disengage. Given that minors do not have fully developed brains, their ability to disengage may be even more hindered.
Television programming has long sought this goal as well, with some success. While that use isn't restricted, there is theoretically a red line. Florida may see it in social media use.
> Restricting social media use is tantamount to brainwashing? I don't see the connection.
The idea is that social media exposes kids to viewpoints that they wouldn't otherwise be exposed to, so parents who want their kids to be a certain way would not want this, as they cannot easily control what viewpoints their kids are exposed to online.
Of course, every parent wants their kid to be a certain way, whether or not this is negative is dependent on how narrow that certain way is. The same applies to restricting what kids are exposed to: it is good to restrict exposure to some things, but too much restriction becomes bad.
The Florida legislature has recently been restricting the education system's ability to talk about gender and race, and pushing for more Christianity in schools. This makes some people feel there is an implied extension to the apparent "This is to protect kids" message: this is to protect kids (by making them conservative and Christian).
Well, true, TikTok probably has more negatives than positives, but I have a feeling the American Talibans[1] in power don't like teens organizing, and where do they organize? On social media...
[1] Yes this is an apt comparison. Suppression of opposing viewpoints, growing voter suppression and not even accepting results of democratic elections, and then the whole anti-Abortion movement.
This is coming from the state that is trying to ban books based on some backwards concern of a white kid feeling bad. (for the record I am white).
I don't care what side of politics you are on... you know what opposition I am talking about. Whether you are for or against it.
> not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of.
Yes! Because otherwise the parents are brainwashing their kids into their own viewpoint, not allowing them to see the real world.
This isn't a hard concept to understand here.
I mean, I am liberal and atheist. But even I have wondered, if I ever have kids, whether not exposing them to the choice of religion is brainwashing of its own. I may try to justify it with the harm that religion has caused, but I am still denying my kid another perspective that is different from my own.
Edit:
To clarify: if this were a state that was not actively removing opposing viewpoints from its libraries and teachings, then I might buy that they actually care about their kids. But it's not, it's Florida. The beacon of being scared of their kids knowing anything about the real world and daring to have compassion for someone different from them.
I don't mean to turn this into a religious discussion, but adding onto what you are saying here with a personal anecdote: having access to the internet allowed me to see viewpoints besides the religious right-wing perspective I had been conditioned with (for lack of a better term), but the internet did not start the process of leaving religion. I made the decision to seek other perspectives and did so through many avenues. Blocking teens from generally interacting with other people online certainly looks like a hamfisted attempt at ideological preservation. It won't work.
If you don't mind me asking, what was the thing that prompted it? Tbh I was in a similar boat growing up, and for me it was the internet that really first exposed me to things, because I grew up in a very conservative area. Part of it for me was also being gay and, at that time, being pushed out in a way, so I was also seeking that.
I consider myself very fortunate that I had gone to talk to the school guidance counselor and she was able to get me a book to help as well.
But Florida in particular is going multiple routes which is why I said what I said. Some kids will still seek it out, but some may just not be exposed to other viewpoints if they are more sheltered or live in a certain type of community.
And I know that this really isn't going to work, especially in today's age. But there is also something unique about seeing someone else's life that is different from yours (admittedly through the filtered lens of social media) vs. just seeing it on TV/movies or even the news.
There's nothing hamfisted about monitoring what other adults say to teens (and younger) when alone. It falls under basic supervision of minors, in order to prevent abuse.
Including abuse that abusers try to shoehorn in under the guise of noble cause. For example, like sexual grooming, cultic induction, political radicalization, etc.
Anything that anyone can say to a child, can be said to an adult. I mean that in two fashions.
The first is that it can be said in the presence of a guardian. If it can't be, that's a red flag for abusive and grooming behavior. Children have what are known as "guardians" for a reason. Guardians are charged with making decisions for the child until that child is an adult, which includes making decisions about steering them around abusive adults. Trying to circumvent that firewall is red-flag behavior. Everyone who aims to do this should know how they are framing themselves. If an adult has something to say to children that is outside the known school curriculum, then there is zero reason it can't be said to their guardian at the same time.
The second fashion in which I mean the assertion that "anything that anyone can say to a child, can be said to an adult" is the following:
If someone believes that what a parent teaches a child is misguided, then there is no reason whatsoever that their alternate presentation of facts cannot wait until the child is an adult, assuming that their guardian wants to shield them from that alternative view.
Needing to present alternative views to a child, instead of to an adult who is more capable of weighing what they say, falls in the same category of suspicious red-flag behavior.
You may have appreciated the alternative views that you sought out as a child, but the truth is that you circumvented your parent's guardianship. And if your current views are valid or more valid, then they would have remained so once presented to you as an adult.
Teens shouldn't be coddled as though they are children. They should be given responsibilities and some amount of autonomy. Prepare them for when they eventually screw up and they will be okay. Also, they are probably being exposed to way worse shit by their peers in school than from using the internet.
I feel like you might be looking at this incorrectly. This is political of course, given it's FL, but I'm attempting to look past that. The issue, which has been exhaustively studied by people like Jonathan Haidt and posted about on HN, is that there is strong evidence that the underlying psychology and algorithms of social media platforms are having a negative effect on teenagers, especially females. Some of these studies have been shared right here on HN, but here is a link - from a mostly left-leaning source just for you:
This isn't the same as the Yahoo chatroom or social media that some of us older people grew up with. We might as well compare apples to oranges. Does that mean we should ban it? I don't think it matters.
My personal opinion is that stuff like this can't be stopped by a stroke of a pen or by whether you identify as left or right wing. 13 year old me didn't care about some checkbox asking if I was a certain age. The only thing that will stop this is for kids to not want to use social media - they have to think of it as not cool. Given the ever increasing sophistication of this type of software on a psychological level, I don't see that ever happening.
And judging by the quite fascinating up-and-down voting trend I'm noticing on my comment, most people understand exactly what I am referring to, whether they agree with those actions or not.
This is a valid question for pretty much all legislation. It serves by allowing the congress critters to toot their horns as doing something, for those who only pay attention to news bites, while doing no harm by doing nothing.
I feel like the Australian TV show Utopia should be mandatory viewing for anyone who wants to understand government, even though it is ostensibly a comedy.
I really hate how right you are. And that meaningless thing will be all over ads; or, if your opponent voted against it (since it was meaningless and shouldn't be on the books), that vote turns into an attack on them.
It should be categorized in the same way as gambling. It's addictive and useless in any form. The whole world would be better off without any social media. Including the most antisocial people.
I disagree; what social media has turned into, thanks to algorithms and engagement optimization, is a problem.
But in its purest form, social media isn't a bad thing, and it's a good way to actually keep in contact with friends. It's also a good way to keep up on events happening around the world without relying on the news for everything.
I'm in agreement with you. I'm amazed at how many people who do what I do can look at this myopic, sanctimonious rage bait and immediately start figuring out how to implement it with an encrypted token exchange.
This is still awful no matter how much crypto you throw at it. The end result of solving this little puzzle of a problem is that everything is worse shortly afterwards. Congratulations to them, I guess.
The reddit /r/politics trolls are entering HN. If you want to criticize something, better to understand it first.
Small Federal Government, more local government control. Let local communities decide for themselves what is important. Now I know your first thought is to ignore what I'm saying to find examples that prove this wrong. But this is the conservative approach in general.
NY and CA are never going to pass a bill like this, if having your kids on TikTok is important, move somewhere where you are with like minded people.
Pretty dismissive to label me as a /r/politics troll.
I think there is something that you need to understand about our society. "Move somewhere else" is not a solution. Our country should not be operated in a way that something so radical can be decided by smaller and smaller groups of people, and if you don't like it, you must uproot your life and find somewhere else to live.
What do you define as more local government control? How much control should be provided at more localized levels? At what point does a higher authority act to disallow local government overreach? Is that seen as Federal government overreach?
At any level, we should be having a discussion to consider: whether we agree on the problem, that the problem is a problem, what solutions we have and what are the unintended consequences of those solutions. If we instead have knee-jerk overreactions such as this, and then tell everyone who disagrees to get out... is that the country you want? A country of more local government control?
As a Floridian and someone in IT, I'm curious how this will be implemented.
I can't remember the last time I signed up for a new social network; do they ask age? Is it an ask to Apple / Google to add stronger parental approval? Verify drivers license #?
We heard about this days ago on local news, and I've been struggling to figure out how, short of an "are you 16 years or older?" prompt, this is going to get done, and how you fine someone if it's breached.
If I remember correctly, at one time Google even tried to enforce it, and there were usability problems with typos and wrong dates - there was no verification and no easy way to fix an error. I.e., if a mid-40s adult accidentally entered 1/1/2024, they'd be locked out. And if a kid entered 1/1/1977, they'd have an account (but no way to correct that date when they eventually turned 18).
(Putting aside if the law is good or bad and the constitutionality of it.)
Put criminal penalties on the directors if no reasonable attempt is made to keep kids out.
Plus corporate death penalty if they purposely target kids.
Then how they enforce it doesn't really matter as long as there are periodic investigations. The personal risks are too great and the companies will figure it out.
The FTC already implements a "corporate death penalty" in the form of massive fines if an organization collects data on kids and uses it to target advertising (see COPPA)
The only way to determine age is to compile a database of gov-issued IDs and related data. Which is an unconstitutional barrier to speech. Which is why this will get struck down like each similar law.
The part about ID data eventually being shared with 3rd parties, agencies - and/or leaked - is a bonus.
It sounds like you are envisioning age verification that involves just two parties: the user and the site that they need to prove their age to. The user shows the site their government issued ID and the site uses the information on the ID to verify the age.
That would indeed allow the site to compile a database of government-issued IDs and give that information (willfully or via leaks) to third parties.
Those issues can be fixed by using a three party system. The parties are the user, the site that they need to prove their age to, and a site that already has the information from the user's government ID.
Briefly: the user gets a token from the social media site and presents that token and their government ID to the site that already has their ID information; that site signs the token if the user meets the age requirements. The user then presents the signed token back to the social network, which sees that it was signed by the third site, which tells it the user meets the age requirement.
By using modern cryptographic techniques (blind signatures or zero knowledge proofs) the communication between the user and the third site can be done in a way that keeps the third site from getting any information about which site they are doing the age check for.
With some additional safeguards in the protocol, and in which sites are allowed to act as ID checkers, it can even be made so that someone who obtains records from both the social media site and the third site can't use timing information to match up social media accounts with verifications, and so it could work with sites that allow anonymous accounts.
> It sounds like you are envisioning age verification that involves just two parties: the user and the site that they need to prove their age to. ... Those issues can be fixed by using a three party system.
Okay. That sounds promising.
However the method of collecting childrens' private data isn't what makes these laws unconstitutional. It's a government erecting broad, restrictive barriers to speech.
Utah caught a glimpse of reality and stayed their own unconstitutional law. They seem to be looking for a way to retool it so it won't be quite so trivial to strike down.
> With some additional safeguards in the protocol and in what sites are allowed to be the ID checking sites it can even be made so that someone who gets records of both the social media site and the third site can't use timing information to match up social media accounts with verifications and so could work with sites that allow anonymous accounts.
I'm assuming that there will be some kind of way to prevent matching of logged IP addresses between the social media site and the verification site. Is there really a method for preventing matches of timing without requiring the user to bear the burden of requesting tokens from the sites at different times?
As I hinted at in a different comment [1], though, there remains a tradeoff: letting the verification party know how frequently I visit a single type of website, vs. avoiding that problem but needing my ID for multiple types of websites, i.e., more of the internet.
I'm still on the fence about government doing a parent's job here, especially for kids under 13, but I can't stand that no one pushing these bills has come up with an actually reasonable age verification method.
The problem here is that it's pretty much out of the hands of the parents. If your kids' friends have social media, your kids will absolutely need it too in order to not be left out. I've witnessed the pressure, and it's not pretty. Add to that the expectation from society that children shall have access to social media.
Regulation is pretty much the only way to send the right signals to parents, schools, media companies (e.g. Swedish public service TV has a kids app that until recently was called "Bolibompa Baby", but it's now renamed to "Bolibompa Mini"), app designers, and so on.
We're barreling towards an internet that requires an id before you can use it.
It's a bit upsetting but I don't harbor the early 2000s naiveté about the free internet where regulation doesn't exist, the data exchange happens over open formats and connecting people from across the world is viewed as an absolute positive.
Govt meddling on social media platforms, the filter bubble, platforms locking data in, teenage depression stats post-Instagram, and doom scrolling on TikTok have flipped me the other way.
Internet Anonymity is going to die - let's see if that makes this place any better.
And the government having unfettered knowledge of every site you visit - in particular the more salacious ones - is how we solve that? Surely that won't be used as a political cudgel to secure power at any point, nor will it ever be used to target specific demographics or accidentally get leaked.
That sounds nightmarish. I don't want the verifier to know what porn sites I visit.
Someone else proposed the following system to me: a third party authority issues a certificate which I can then use to prove I'm 18. The CA cannot see where I use the certificate, though.
Er... the CA needs to be consulted to verify the certificate, ergo it will know the websites.
It's virtually impossible to make a verification system that's anonymous. Somewhere the third party and authenticator will need to share a secret that you cannot touch.
Furthermore, you would need the government to agree to this system and mandate this system universally and pay for the authentication services to exist. That's not what Florida is doing.
I can show my government-issued ID to any third party without the government knowing about whom I've shown this ID to. The third party needs to trust the government and the authenticity of the ID.
The problem is that my ID contains too much information; I would prefer a document (i.e. digital certificate) that only certifies my age, not my name, address etc.
Any such ID would need to be validated by the service. Therefore the service and the authenticator would need to speak. And in doing so, the authenticator will be able to see that an ID issued to you is being used for that service.
You cannot get around this. The service must confirm with the authenticator. The authenticator must know you are authenticated, and by extension, who you are.
Anonymous credentials. A central authority with verified age information of each person grants credentials that verify the age to third parties, but the authentication tokens used with the third party can't be used by the third party nor the central authority to identify anything else about the credential holder.
This is technically possible but politically impossible. Any system you make like this will get special government peeking exceptions added, making it non-anonymous, and rank corruption from industry lobbying will probably add some sort of user tracking for sale, with data that is poorly anonymized. Once the sham system is in place, they'll probably expand the requirement to other things.
The central authority should be someplace that already has your non-anonymous ID data, so using your ID for age verification doesn't give them any new ID information. The only new thing that them doing age verification adds is that they might keep a list of verification tokens they have issued.
Someone who obtained copies of the verification tokens you requested might go to various social media sites and ask them who used those tokens, allowing matching up your social media identities with your real identity.
That's fixed by making the token given to the social media site different from the token that came from the site that checked your ID. You give the social media site a transformed token, where the transformation is such that the social media site can recognize it was made from a legitimate token from the ID checker, but it does not match anything on the list of tokens the ID checker has for you.
> The central authority should be someplace that already has your non-anonymous ID data, so using your ID for age verification doesn't give them any new ID information. The only new thing that them doing age verification adds is that they might keep a list of verification tokens they have issued.
But the central authority, a third party, will get a heads-up every time someone - whether child or adult - logs into the social media site. That's a privacy violation. Even if the verification system were set up in such a way that the third party wouldn't be able to know which exact website I'm trying to visit, the third party would be able to track how frequently I visit websites that require age verification. With just this law, it would be "you visited social media during X, Y, and Z times." With extensions of this law to other kinds of websites, it would be "you visited social media or porn or violent video games or alcohol sites during X, Y, and Z times", which obfuscates the kind of website I visit but also makes the internet into something I have to whip out an ID for just to use.
> That's fixed by making it so the token that is given to the social media site is not the token that came from site that checked your ID. You give the social media site a transformed token that you transform in such a way that the social media site can recognize that it was made from a legitimate token from the ID checker but does not match anything on the list of tokens that the ID checker has for you.
Is it possible to transform the token such that the social media site would be able to link it to your identity but an attacker who gains access to the social media site's data wouldn't? If so, I'd appreciate an example of a transformation for such a purpose. But it doesn't wipe out my privacy concern, that I - or anyone else - wouldn't be able to log in to a social media site without letting a third party know against my will.
> But the central authority, a third party, will get a heads-up every time someone - whether child or adult - logs into the social media site. That's a privacy violation. Even if the verification system were set up in such a way that the third party wouldn't be able to know which exact website I'm trying to visit, the third party would be able to track how frequently I visit websites that require age verification.
It doesn't have to work like this.
It's technically possible to do verification such that the authority (probably the government which already has a database with your age), doesn't get any communication when verification takes place. They'd have no idea which sites you visit or join, or how often.
And the site which receives the verification token doesn't learn anything about you other than your age is enough. They don't even learn your age or birthday. They couldn't tell the government about you even if subpoenaed.
(But if you tell them on your birthday that you are now old enough, having been unable to the day before, they'll be able to guess of course so it's not perfect in that way.)
Using modern cryptography, you don't send the authority-issued ID to anyone, as that would reveal too much. Instead, on your own device you generate unique, encrypted proofs that say you possess an ID meeting the age requirement. You generate these as often as you like for different sites, and they cannot be correlated among sites. These are called zero-knowledge proofs.
They work for other things than age too. For example, to show you are an approved investor, or have had specific healthcare or chemical safety training, or possess a certain amount of credit without revealing how much, or are citizen with voting rights, or are a shareholder with voting rights, without revealing anything else about who you are.
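To give a flavor of what such a proof looks like: a real age credential proves a statement like "the signed attribute on my ID is ≥ 16" inside a zero-knowledge circuit, which is more than fits in a comment, but the core primitive (prove you hold a secret without revealing it, with each proof bound to one site so proofs can't be correlated) is small. Here's a toy Python sketch of that primitive, a Schnorr proof made non-interactive with Fiat-Shamir; the parameters are deliberately tiny and insecure, and the names are my own, not from any real deployment:

    import hashlib
    import secrets

    # Toy group: safe prime p = 2q + 1, with g generating the subgroup of
    # prime order q. Real deployments use standardized elliptic-curve groups.
    p, q, g = 2039, 1019, 4

    # The credential secret x never leaves your device; y = g^x is public.
    x = secrets.randbelow(q - 1) + 1
    y = pow(g, x, p)

    def challenge(R, site):
        # Fiat-Shamir: hash the transcript plus the site name, so a proof
        # generated for one site is useless at any other site.
        data = f"{g}|{y}|{R}|{site}".encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove(site):
        # Prove knowledge of x without revealing anything about it.
        k = secrets.randbelow(q - 1) + 1
        R = pow(g, k, p)
        s = (k + challenge(R, site) * x) % q
        return R, s

    def verify(site, R, s):
        # Accept iff g^s == R * y^c (mod p), which only holds if the prover
        # knew x when building the response s.
        return pow(g, s, p) == (R * pow(y, challenge(R, site), p)) % p

    R, s = prove("example-social-site")
    assert verify("example-social-site", R, s)
    # Replaying the same (R, s) at another site yields a different challenge
    # and fails verification (up to negligible probability at real sizes).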
Do you mean that I can get a permanent age-verification key from the third-party authority, then never have to contact the authority again (unless I want a new key)? If so (and assuming that zero knowledge proofs, which I'm not very familiar with, work), then my privacy concerns are resolved. (Well, I don't want the authority to keep a copy of my verification key, but FOSS code and client-side key generation should be feasible.)
An example of the kind of token transformation I'm thinking of follows.
Assume RSA signatures from the site that looks at your ID, with public key (e, m), where e is the exponent and m is the modulus, and private key d. The signature s of a blob of data b that you give them is b^d mod m.
To verify the signature one computes s^e mod m and checks if that matches b.
Here's the transformation. You generate a random r from 1 to m-1 such that r is relatively prime to m. Compute r' such that r r' = 1 mod m.
Instead of sending b to be signed, send b r^e mod m.
The signature s of b r^e is (b r^e)^d mod m = b^d r mod m.
You take that signature and multiply by r'. That gives you b^d mod m. Note that this is the signature you would have gotten had you sent them b to sign instead of b r^e.
Net result: you've obtained the signature of b, but the signing site never saw b. They just saw b r^e mod m.
That gives them no useful information about b, due to r being a random number that you picked (assuming you used a good random number generator!).
For any possible b, as long as it is relatively prime to m, there is some r that would result in b r^e having the same signature as your b, so the signing site has no way to tell which is really yours.
It is very unlikely for b to not be relatively prime to m. If m is the product of two large primes, as is common, b is relatively prime to it unless one of those primes divides b. We can ensure that b is relatively prime to m by simply limiting b to be smaller than either of the prime factors of m. Since those factors are likely to each be over a thousand bits, this is not hard. In practice b would probably be something like a random 128 bits.
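For the curious, here is that exact arithmetic as a runnable toy in Python (tiny primes so the numbers stay readable; raw "textbook" RSA like this is illustration only, and real systems use vetted blind-signature implementations with proper padding):

    import secrets
    from math import gcd

    # Signer's toy RSA key. Real keys are 2048+ bits.
    p, q = 1000003, 1000033
    m = p * q                            # public modulus
    e = 65537                            # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

    # The user's secret blob b, kept smaller than p and q per the above.
    b = secrets.randbelow(65536) + 2

    # Blind: pick a random r coprime to m, and send b * r^e instead of b.
    while True:
        r = secrets.randbelow(m - 1) + 1
        if gcd(r, m) == 1:
            break
    blinded = (b * pow(r, e, m)) % m

    # The signer signs the blinded value: (b * r^e)^d = b^d * r  (mod m).
    blinded_sig = pow(blinded, d, m)

    # Unblind: multiply by r' = r^-1 to recover b^d, the signature of b.
    sig = (blinded_sig * pow(r, -1, m)) % m

    # Anyone can verify with the public key, yet the signer never saw b.
    assert pow(sig, e, m) == b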
> But the central authority, a third-party, will get a heads-up every time someone - whether child or adult - logs into the social media site. That's a privacy violation.
Why would you do age verification on login? It only needs to happen once on account creation.
> Why would you do age verification on login? It only needs to happen once on account creation.
Oops. That slipped my mind. For sites that require log-in, my previous comment is wrong.
I had unconsciously assumed that at least one site would implement the age verification system without requiring users to make accounts to browse the site. In this comment, I will explicitly make that assumption. For sites without log-in walls but with government-mandated age verification, the concerns in my previous comment would apply. But sites with log-in walls have their own privacy problems independent of age verification, chief among them that having to log in lets the first-party site track how often I use it. A different problem with log-in walls (not necessarily privacy-related, but it can be) is that I would be forced to create accounts. If I don't wish to deal with the burden of making accounts, then I won't browse the website. And if the website put up a log-in wall in response to a government age verification mandate, then my First Amendment right to access the speech the website wished to provide will have been chilled.
I think you’d want to also reverify now and then. People only rarely create accounts, which I think would make de-anonymizing someone from simultaneous breaches of site and verifier logs easier.
If you have to verify often enough, and age verification is required on enough sites that are widely used by the general public so that the mere fact that you are using sites that require age verification is not something you might need to hide, I think it would make it much harder to get useful information from log comparisons.
Say you have a user U who wishes to demonstrate to site S that they are at least 16, and we have a site G that already has a copy of U's government ID information.
Here's one way to do it, with an important security measure omitted for now for simplicity.
• S gives U a token.
• U gives G that token and shows G their ID.
• G verifies that U is at least 16, and then signs the token with a key that they only use for "over 16" age verifications. The signed token is given back to U.
• U gives the signed token back to S.
If G saves a list of tokens it signs and who it signed them for, and S saves a list of tokens it issues and what accounts it issued them for, then someone who gets both of those lists could look for tokens that appear on both in order to match up S accounts with real IDs.
To prevent that we have to make an adjustment: G has to sign the token using a blind signature. A blind signature is similar to a normal digital signature, except that the signer does not see the thing they are signing. All they see is an encrypted copy of the thing.
With that change a breach of G just reveals that you had your age verified and gives the encrypted token associated with that verification. These no longer match what is in the records of the sites you proved your age to since they only have the non-encrypted tokens.
Someone with both breaches might be able to match up timestamps, so even though they can't match the tokens from S directly with the encrypted tokens from G they might note that you had your age verified at time T, and so infer that you might be the owner of one of the S accounts that had a token created before T and returned after T.
This would be something people trying to stay anonymous would have to be careful with. Don't go through the full signup as fast as possible--wait a while before getting the token signed, and wait a while before returning the signed token. Then someone who is looking at a particular anonymous S account will have a much larger list of items in the G breach that have a consistent timestamp.
Also note that, to G, this is just a request to sign opaque blobs. Occasionally have G sign random blobs. If your G data shows you getting your age verified a few times a month, then it is even more likely that, if one of those verifications is at about the same time as a particular social media signup, it is just a coincidence.
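To tie the whole U/S/G exchange together, here is the flow as one toy Python sketch, with the same caveats as the blind-signature example earlier in the thread (toy RSA, variable names of my own invention, not a real protocol spec):

    import secrets
    from math import gcd

    # G's toy "over 16" signing key (see the earlier caveats about raw RSA).
    p, q = 1000003, 1000033
    m, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    # Step 1: S issues U a fresh random token and remembers having issued it.
    token = secrets.randbelow(1_000_000) + 2
    issued_by_S = {token}

    # Step 2: U blinds the token before showing anything to G.
    while True:
        r = secrets.randbelow(m - 1) + 1
        if gcd(r, m) == 1:
            break
    blinded = (token * pow(r, e, m)) % m

    # Step 3: U shows G their ID plus the blinded blob. G checks the age and
    # signs. G's logs hold only `blinded`, which matches nothing in S's records.
    blinded_sig = pow(blinded, d, m)

    # Step 4: U unblinds the signature and hands the result to S.
    sig = (blinded_sig * pow(r, -1, m)) % m

    # Step 5: S checks the signature against G's public key and checks that
    # the token is one it issued. S learns "over 16" and nothing else.
    assert pow(sig, e, m) == token and token in issued_by_S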
Private State Tokens enable trust in a user's authenticity to be conveyed from one context to another, to help sites combat fraud and distinguish bots from real humans—without passive tracking.
An issuer website can issue tokens to the web browser of a user who shows that they're trustworthy, for example through continued account usage, by completing a transaction, or by getting an acceptable reCAPTCHA score.
A redeemer website can confirm that a user is not fake by checking if they have tokens from an issuer the redeemer trusts, and then redeeming tokens as necessary.
Private State Tokens are encrypted, so it isn't possible to identify an individual or connect trusted and untrusted instances to discover user identity.
This system clearly and trivially deanonymizes the internet. Even worse than a centralized system, it uses a simple "just trust me bro" mentality that issuers would never injure users for personal gain and would never keep logs or have data leaks, which would expose the Internet traffic of a real person.
> I'm still on the fence about government doing a parent's job here
The issue is, for a parent who is not very technical: how do they _safely_ audit their child's social media use?
I am reasonably confident that I could control my kid's social media habit, but only up to a point. There isn't anything really stopping them from getting their own cheap phone or signing in on another person's machine.
The problem is that safely stopping kids from getting access requires strong authentication to the ISP, i.e., to get an IP address you need 2FA to sign in. But that's also how censorship/de-anonymisation happens.
Let's assume opposition to the law is a "progressive" position:
If there is a constitutional right to absolutely 100% friction free access to information then what happens to all the barriers the government has erected to access covid, Trump, Russia and other "disinformation" progressive pushed for?
(You can invert this example for a right wing if you want)
Not everyone has a credit card. Some people cannot obtain a credit card. People under the age of 18 can also have a credit card. I do not trust random sites with my credit card.
How about this: I don't want every little thing I do on the internet tracked and tied to my real identity?
Is the CCP conducting a psyops on HN right now or something? Since when were we all for every tiny interaction you have on the internet requiring you to look in the scanner and say "My name is X and I love my government and McDonalds"?
> If there is a constitutional right to absolutely 100% friction free access to information then what happens to all the barriers the government has erected to access covid, Trump, Russia and other "disinformation" progressive pushed for?
...those barriers go away. They never really existed in the first place in any real way. Like the Great Firewall, they were a polite fiction defining what people are allowed to know, but were trivially circumvented from minute zero.
This is one of the most reliable and desirable features of the internet in the first place.
While people are on the fence about it, our children are having their youth, innocence and brains destroyed by tiktok et al. Those platforms are cancer to adults even, let alone impressionable kids... yet here we are still debating it and faffing around about "1st amendment yaddi yadda".
>children are having their youth, innocence and brains destroyed by tiktok
For one, ease up on the hyperbole if you want to be taken seriously. I'll give you the benefit of the doubt, because the news is nothing but hyperbole these days, so it's easy to pick up the habit. Second, most kids aren't having "their youth, innocence and brains destroyed." The news takes the edge cases, amplifies them, and presents them as the norm to peddle fear, because fear sells. Nothing is ever as bad as the news makes it out to be, but they gotta make a dollar; you see how bad the news business has been since the internet?
FWIW, my kid uses social media and just connects with her friends. Nothing overly malicious goes on, they just goof off. I've checked.
You really wanna protect the kids from anxiety and whatnot, block the news and all the talking heads trying to manipulate the next generation to their political opinions.
Your comment would be fine without that sentence and the one after it.
(I'm not saying the GP comment is particularly good—it was pretty fulminatey—but it wasn't quite over the line, whereas yours was, primarily because yours got personal.)
There's a massive rise in depression in young children. The teen suicide rate has almost doubled.
The idea that you know what's going on, on their social media is pretty funny. Certainly what every adult always assumed about me. And now that I have kids, I can see how easily other kids fool their parents all the time.
And what's overly malicious? It may be social media itself without anything bad driving this. Merely seeing a sanitized version of people's lives over and over again, without anyone bullying you, that leads to depression because your life isn't as good.
I think if you wanted to reduce teen suicide a significant amount, banning social media isn't going to do it. It certainly isn't responsible for half. Of course banning it doesn't cost the government any money, so it's top of the list as opposed to any real solutions.
You also cherry picked your stats. If you open a larger window, the current teen suicide rate is not as abnormal as you are making it out to be.
Umm, that chart is 10 years out of date - it ends in 2015; the beginning of the social media era.
The current teen suicide rate is ~62 / 100K, which is just about double (or triple!) the last value in that chart. And is also an anomaly over the last 40 years.
I stand corrected on the current stats. I went with what the CDC had on a google search. It's aggravating that most sources don't show the entire picture.
Here's a chart showing that this trend is mostly in the mountain states and Native Americans are the largest demographic affected by this trend by nearly triple. Both these stats disprove the theory that social media has much of an impact on teen suicide across the entire nation, otherwise why wouldn't states like California and Florida have a higher rate? Residents of those states obviously use social media too.
The social media era had been in full swing for 10 years by 2015. Facebook was established in 2004 and blew up by 2006. Twitter blew up a few years later.
The narrative that social media is the cause in the rise of teen suicide across the country is simply false. Native Americans in mountain states are bearing the brunt of it and causing the national average to spike. Instead of "tilting" to social media, we should try to understand why Native American teens are having such a difficult time and solve that problem. That doesn't draw headlines though, does it?
Teachers have ALWAYS made those screams. My mother, a French teacher, always complained that before she could teach kids French, she had to teach them how to read a clock, how to do math, and how the days of the week work (these were fifth graders, mind you). She blamed education policy, but this is nothing more than what happens when 30% of your students are in poverty.
The reality is that some percentage of students will always fall through the cracks, and the human brain loves to blame whatever is "new" for problems that are "new" to you. This has been a problem for teachers since at least the No Child Left Behind policy, and even goes as far back as Socrates bemoaning his students being terrible because books meant they didn't have to have perfect memories.
Students suffered because covid was both a huge disruption to their education, and parents freaked out instead of trying to handle it (and plenty of people literally could not handle it anyway). It doesn't help that half the country openly cries that education is nothing more than liberal indoctrination, and openly downplay the value of even basic education, like the three Rs, and claims that anything higher than a high school education is also liberal indoctrination, is "woke", and is valueless.
I 100% hate TikTok, but I don't think it is (currently) being used to mentally attack the US. Maybe someday, if we are ever at war with China, but right now they are content believing that inclusivity is toxic on its own. I don't think TikTok changes people's brains significantly. I do think it is an extremely low-value way to spend time, and that it is addictive, two serious issues when taken together, but then again I spent my life watching several hours of TV a day. I especially don't like how TikTok seems to purposely direct new male users to what is basically softcore porn.
What is "social media", and how much are they actually banning communications, speech, association, organizing, political activity, etc. by young people?
Plus it's a backdoor way to add surveillance tracking to everyone's communications, via the mechanisms likely to be used in practice for excluding those under 16.
Interesting. To meet their definition of "social media platform" requires that it "Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform."
Also, they have a large list of exclusions. Some of the phrasing sounds like under-16 people are mostly limited to person-to-person messaging, consuming content produced by others, and being able to access certain services that got exclusions.
Being able to speak publicly is sometimes expressly not allowed in these exclusions, but it is at least implied in other exclusions (which I guess might have originated to allowlist specific companies).
Teens who care to observe the law at all (many will not) will quickly realize that they can figuratively drive a truck through some of those loopholes. (Example: a "photo editing & hosting" service with accounts and likes and comments is pretty much Twitter or Facebook, once an exodus of kids from "social media" starts congregating there. Bonus rebellious vibe, and the freedom/coolness of a place that your parents, grandparents, and mean school principal haven't yet discovered.)
If I had an online service that had under-16 among its users who were doing public expression/communication/content, and my business wasn't already clearly excluded, I would be scrambling to lobby for an exclusion. (I would also be discouraging this particular formulation of law, in the public interest, but remiss if I didn't have a backup plan for my immediate business in the case that it passed.)
Does YouTube count? Yes I think TikTok is largely a hellscape for my daughters around this age. But one of them learns all sorts of crafting projects via YouTube and the other has taught herself an incredible amount on how to draw. Would be a shame to throw away access to resources like this with the bath water.
The law has a list of applications that are specifically excepted. User tzs posted it in this thread.
It seems like YouTube would be covered by the ban because it doesn't fall under any of the exceptions. The closest one is this:
"A streaming service that provides only licensed media in a continuous flow from the service, website, or application to the end user and does not obtain a license to the media from a user or account holder by agreement to its terms of service."
But of course YouTube does "obtain a license to the media from a user or account holder", so it's not covered by this exception.
That doesn't sound like a bad distinction. If you're shuffling the stories around to maximize some metric, you're no longer an impartial carrier, you're actively biasing what someone sees.
I am here to report as a parent of children that I have been able to keep them off of social media by not getting them phones, controlling the Internet in the house, and paying close attention.
My experience with my kids (middle and high school age) is that online is "where" most kids socialize today and if I don't let my kids go there, I am socially isolating them.
We allow my son to be online via his computers. He has a Macbook and a desktop PC. He's on Discord and a Slack. He can iMessage friends via his Macbook.
The thing about a phone is that there's a data plan and it's out of your sight a lot. But to be fair, my son is mostly fine with sticking to these channels for talking to friends. So, it is particular to the kid.
It doesn't hurt that we live in a tight-knit community where he can walk to friends' houses, school, etc.
My daughter is younger and probably will want to get a phone more than he does, when she is old enough.
My (non-technically inclined) nephew, age 14, managed to get caught with an old Wii U from a friend, using it to browse the net at 3am via the friend's (a neighbor's) Wi-Fi. I don't think paying close attention, as much as it should be done, is really the be-all and end-all answer for everyone to know their kid is never on social media or sites they shouldn't be on. Beyond this example, unless you're home-schooling them in remote isolation with regular prison-level searches, there are going to be other ways your kids do things you wouldn't like them to. If you are talking about that kind of setup, I'm more worried about that in society than Facebook.
That doesn't mean parents shouldn't pay attention, or should just give them easy access, but I'm not sure the proud proclamation really solves or responds to the conversation at hand. Most likely, IMO, it's a problem without a single golden solution.
My point really is that people get their kids phones because they think they have to, but that's the root of the problem. We don't prevent our kids from being online - what we prevent them from doing is being online on their own, away from the house, with the internet in their pocket all the time. They each have computers.
I have a hunch that you're right, but there are a few other things that coincided with the rise of social media, so it's hard to tell what has driven the change in kids' psyches. Among them is the explosion of pharmaceuticals that are routinely prescribed to kids.
The pharmaceuticals thing is unique to the US, but every country has seen a simultaneous rise in kids' mental health issues, so that rules out pharmaceuticals.
I think these things are linked. Either way, kids are having serious issues, so let's roll back the clock and see what innovation causes it. Kids were fine before social media, arguably better.
Make it ad-free (I know this is a pipe dream, but at the very least it shouldn't have ads from companies that sell gambling apps), add scrolling time limits (that actually work, which should be a bare minimum), and a chronological feed.
No. I'm not talking about the content itself, I'm talking about the firehose of useless garbage, appropriate or not, that's frying their dopamine receptors.
There's no downside to passing unconstitutional bills. They just need to become law and remain in effect long enough to be useful in electoral campaigns. When the courts strike the law down, they'll be painted as activist justices and yadda yadda we've seen all this before.
Here's the current status of state cases, if anyone's curious:
- "FIRE’s lawsuit asks Utah to halt enforcement of the law and declare it invalid. Other states — including New Jersey and Louisiana — are proposing and enacting similar laws that threaten Americans’ fundamental rights. In Arkansas and Ohio, courts have blocked enforcement of these laws."
I.e., at least four state legislatures have passed laws like this; injunctions have paused two (Arkansas and Ohio), and the other two (Utah and Louisiana) aren't in effect yet.
How would it be different than age restricting voting, driving, or alcohol and tobacco sales? It seems there's precedent for treating minors differently than adults in many ways.
Minors do not have full constitutional rights when it comes to free speech. We've had Supreme Court precedent for this for 50+ years, thanks to Ginsberg v. New York.
>"In Ginsberg v. New York, 390 U.S. 629 (1968), the Supreme Court upheld a harmful to minors, or “obscene as to minors,” law, affirming the illegality of giving persons under 17 years of age access to expressions or depictions of nudity and sexual content for “monetary consideration.”
>Judge Fuld of the Nassau County District Court had convicted Sam Ginsberg, who owned a small convenience store in Long Island, New York, where he sold “adult magazines” and was accused of selling them to a 16-year-old boy on two occasions."
It's not good for kids to have unrestricted access to pornography in the same way that it's not good for them to have access to unrestricted social media.
> age restricting voting, driving, or alcohol and tobacco sales
I think those things are a bit more black and white.
How do you define social media? Is HN social media? You engage with others through comments, posts are ranked and it has voting elements as well as your profile has a gamified score for upvotes. Is a Disqus comment on any website social media if that's how we broadly define social media? Where do we draw the line?
You could make a case that leaving reviews on a site are a form of social media too. You can post something there and feel like you need to check back in hopes someone leaves a like or replies. If it were a wearable item you might take a picture of yourself and now hope people engage with it.
You could come up with corner cases and odd delineations for those former examples too. Yet we all know the gist of the laws and manage to somehow, on the aggregate, prevent kids from engaging in said activity.
E.g. alcohol. How do we stop kids from drinking alcohol? How do you define alcohol? What about children's medicine that includes alcohol, or medical procedures that use substances with alcohol content? What about shops that get conned by kids with fake IDs? What if the label gets scratched off and the kid doesn't know it had alcohol? What if a parent gives their child a sip of beer? What about old home remedies that include whiskey? What about colic medicine that has alcohol? Okay, what about carving exceptions for babies, etc. etc. etc.
Granted some of those are contrived, but it's not as black and white as you think.
The social media platforms the bill would target include any site that tracks user activity, allows children to upload content or uses addictive features designed to cause compulsive use.
At the time, Fortnite and Minecraft (and more) were my son’s way to socialize with friends. That was how they hung out (pre Covid). I can honestly say it would have been detrimental to him from a mental health perspective if those outlets didn’t exist or if kids were blocked from them.
Drawing the lines between what types of media are and aren’t allowed is a major issue with this type of law, regardless of if you think it’s a good idea or not.
Pretty good. I would go so far as to argue that online games do more harm than good to children and teenagers. In many ways, banning them from playing might steer them toward more productive uses of time.
Must children's use of time be "productive"? They have their whole lives to be productive - outright banning video games is not the solution in my eyes
"I hope someone likes my review" is not that much different than "I hope someone likes my tweet" or "I hope someone replies to my IG post" or "I hope someone replies to my HN comment".
All 4 scenarios trigger the same thing which is setting up a future expectation that's hopefully met while you wait with anticipation of the event. Is that the process they are trying to get a handle on?
Personally I don't know how any of that could get enforced. Even making the internet read-only wouldn't work because that wouldn't stop people from internally comparing themselves to someone else who is allowed to post. Although that type of thing has been going on since advertising existed.
I think the difference would be in enforcement and what that would imply. How do you verify someone's age online without exposing that person to unwarranted tracking? Do we want to just say social media anonymity is dead?
The legal restriction needs to be very specific and demonstrate a good trade-off between the societal gain and the harm to individual rights; it is not the clean-cut matter many folks assume.
Alcohol and tobacco are pretty straightforward in the public health context, and you probably don't want your toddlers driving a car, given the risk. The voting age is an exception, as it's defined in the 26th Amendment; I consider it a political compromise though.
Minors don't have the same First Amendment rights as adults. For example, government-run schools can have speech codes that students must obey. Although I have not read this bill, the general notion blurs the line between speech and action a bit, which would make it easier to pass muster.
My understanding is that minors _do_ have first amendment rights, and school restrictions on speech can only apply to instances which disrupt the learning environment.
Yes, they do have First Amendment rights, just not to the same degree as adults. This is the same as other rights — they can't buy guns, vote, etc. either. The point is that it's definitely not a slam dunk case as claimed. I'd love to hear a constitutional lawyer weigh in, if there are any here. I used to be a lawyer, but this was never my specialty.
If someone could extralegally ground me with no devices if I say something they don't like (provided that it's speech protected under the First Amendment -- since not all speech is), I essentially have no First Amendment right.
Freedom of speech has ~2 definitions in the US... first, the legal definition (with the Constitution and related case law). A second, the general notion of being literally free to make speech however/whenever one wants without intrusion for anybody at all (government or otherwise).
In this case, we're sort of at the edge of the legal definition. Social media can be viewed as the modern version of the town square (can be, isn't 100% proven out in law yet). If one takes that statement as valid, then the government cannot regulate speech through social media without a very good reason.
But, minors don't have the exact same rights as adults for various reasons (guns, alcohol, privacy at school, etc).
My general impression (IANAL)... the government can likely limit minors' speech on media platforms; however, that limit would have to be very specific and tightly defined so as not to deny speech to adults. The devil is in the details (implementation)... the legality probably hinges on what method is required to verify age.
I can think of a few reasons. This doesn't prevent children from criticizing the government. The 1st amendment doesn't guarantee a platform. Children aren't legally the same as adults.
The 2nd amendment is very much alive and kicking. Those of us that would like to defend ourselves will fight for 2A.
To your point, children cannot own weapons. Most states set a minimum age of 18, if not 21.
Fully automatic weapons have been banned from new civilian production for decades now. Semi-automatic weapons are not weapons of war; they are not the death machines non-gun-owners think they are.
I was very anti-gun until we (the wife and I) had a need to defend ourselves. We were literally put in a self-defense position. We're both all in now, and we actively train and educate ourselves. What's more, friends who were anti-gun are no longer anti-gun once they go to the range with me and learn what a semi-automatic weapon actually is.
TL;DR - anyone anti-gun is simply uneducated in the matter.
Children don't have the same standing with the constitution as adults. I don't remember the exact term but children have their rights restricted all the time. They do not have free speech in school for instance. They certainly have limits to their second amendment freedoms. So I don't think this bill will have constitutional issues. Personally, I'm all for a federal restriction on addictive social media for kids.
Spirituality (and religion) are personal choice.
There is no "religious education" for children, only indoctrination.
It may be legal, but it is in no way moral.
We've got to stop crafting a special world for children that is disconnected from the adult world. We should be moving in the opposite direction, lowering the voting age for example, allowing toplessness in appropriate places and allowing children in non-sexual adult spaces when with a guardian (like bars). How have we not yet learned that repressing kids is counterproductive?
Religion… it makes certain people think the world is full of bad people. While there are bad people, the vast majority of people are good people. Our policy here is to protect the children from X. Whatever X is at the time. Right now it’s wokeism.
To create well-rounded human beings who don't grow up with weird issues around nudity and drinking, for example. And who feel like they have some skin in the game and are actually part of society, not victims of it.
16 seems ridiculously old for such a ban. Especially if sites like YouTube count as social media (and I can't imagine how it wouldn't, given most of the content is identical between the short video platforms).
Because 15 year-olds are the prime audience for a lot of social media. And while much of it is a waste of time at best or actively harmful at worst, a lot of it is also engaging or even educational content that high school students should be allowed to access. And will access regardless; it'll just be a hassle for them or their parents to have to jump through hoops to get to it.
Private State Tokens enable trust in a user's authenticity to be conveyed from one context to another, to help sites combat fraud and distinguish bots from real humans—without passive tracking.
An issuer website can issue tokens to the web browser of a user who shows that they're trustworthy, for example through continued account usage, by completing a transaction, or by getting an acceptable reCAPTCHA score.
A redeemer website can confirm that a user is not fake by checking if they have tokens from an issuer the redeemer trusts, and then redeeming tokens as necessary.
Private State Tokens are encrypted, so it isn't possible to identify an individual or connect trusted and untrusted instances to discover user identity.
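For the curious, here's roughly what driving that API looks like from a page, going by Chrome's Private State Token explainer. The privateToken fetch option and the operation names come from that explainer; the endpoint paths are illustrative, and the API may change, so treat this as a sketch:

    // Sketch based on Chrome's Private State Token explainer. The
    // privateToken fetch option is non-standard, so widen RequestInit
    // to keep TypeScript happy.
    type PstInit = RequestInit & { privateToken?: Record<string, unknown> };

    // 1. Issuance: on the issuer's site, ask the browser to fetch tokens.
    await fetch("https://issuer.example/pst/issuance", {
      method: "POST",
      privateToken: { version: 1, operation: "token-request" },
    } as PstInit);

    // 2. Redemption: on a relying site, spend a token for a redemption
    //    record; the issuer can't link this back to the issuance.
    await fetch("https://issuer.example/pst/redemption", {
      method: "POST",
      privateToken: { version: 1, operation: "token-redemption", refreshPolicy: "none" },
    } as PstInit);

    // 3. Forward the redemption record to the relying site's backend.
    await fetch("https://relying.example/verify", {
      method: "POST",
      privateToken: { version: 1, operation: "send-redemption-record", issuers: ["https://issuer.example"] },
    } as PstInit);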
Yep - one of those "it's possible but do we want this" situations. Something feels a bit slimy about government-approved browser tokens. Like,
"We're sorry, your revocation appeal is taking longer than expected due to ongoing unrest in your area - please refrain from using internet enabled services like ordering food, texting friends, uploading livestream videos of police, giving legal advice, finding directions to your employer - a nonprofit legal representation service, or contacting high-volume providers like the ACLU. Have a nice day!"
But it could just be "Please execute three pledges of allegiance to unlock pornhub"
you've posted this in a few threads, but i don't think i understand what scenario it would be used in?
every user of social media in florida now has to visit a third party (who?) that sets a cookie (private state token?) on their browser that verifies their age?
Correct - ISP requires you to visit Florida.gov (or realistically a company the government trusted to set up verification) to set your token if you’re an adult. Then each social media site checks whether a visitor is from Florida, and then if they have a valid token. If valid, load like normal. If not valid, don’t load the site.
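If it worked that way, the gating on the site's side might look like this hypothetical Express-style middleware. Every name here is invented, and GeoIP lookups are coarse, which is itself a weakness of the scheme:

    import express from "express";

    const app = express();

    // Hypothetical helpers: a GeoIP lookup and a check of the state-issued
    // token. Both are stubs; real versions would query a GeoIP database
    // and the state's verification service.
    function requestIsFromFlorida(ip: string | undefined): boolean {
      return false; // stub: assume out of state
    }
    async function floridaTokenIsValid(token: string | undefined): Promise<boolean> {
      return token !== undefined; // stub: accept any presented token
    }

    app.use(async (req, res, next) => {
      if (!requestIsFromFlorida(req.ip)) return next(); // law doesn't apply
      if (await floridaTokenIsValid(req.get("x-fl-age-token"))) return next();
      // HTTP 451 "Unavailable For Legal Reasons" seems apt here.
      res.status(451).send("Florida age verification required.");
    });

    app.get("/", (_req, res) => res.send("feed"));
    app.listen(3000);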
And now the state of Florida has a receipt of every website you ever visit. That will surely never be an issue when the Governor's private law enforcement arm looks through it or the inevitable data leak happens.
The intention of the API is for that to not be possible.
> The privacy of this API relies on that fact: the issuer is unable to correlate its issuances on one site with redemptions on another site. If the issuer gives out N tokens each to M users, and later receives up to N*M requests for redemption on various sites, the issuer can't correlate those redemption requests to any user identity (unless M = 1). It learns only aggregate information about which sites users visit.
Someone has to hand the browser the token. And that token has to be validated by someone's backend. You now have an issuer with knowledge of who a token belongs to, and a visited site with a record of where they were. They go over this on that very page:
> (unless M = 1)...
> If the server uses different values for their private keys for different clients, they can de-anonymize clients at redemption time and break the unlinkability property...
> If the issuer is able to use network-level fingerprinting or other side-channels to associate a browser at redemption time with the same browser at token issuance time, privacy is lost.
This is why Mozilla rejects the proposal. We just have to trust issuers to be good, and then trust that neither issuers nor websites will "accidentally" log these tokens somewhere a data leak creates a paper trail to real-world identities.
It would be pretty simple to determine whether tokens are unique per person. I agree that for many of the use cases listed in their documentation it's not amazing, but with specific government oversight and watchdogs for the specific Florida use case, I think it technically makes sense. Morally, still not a fan.
I wonder what this would affect culturally. There is a LOT more to this that will happen than just keeping children off of social media.
The USA exports its culture/pop/etc all over the world. I don't follow teenager arts/music/etc sources but a lot of musicians start in middle school and have so many mix tapes online and get known around their cities from using social media. Artists find other artists, learn other styles, etc.
I got into programming through IRC as a kid; maybe that's like TikTok nowadays, I don't know a good comparison. I learned so much through sites/apps that could "upload content" and "track user activity."
So, what happens when every kid and teenager in the nation that's the world's biggest culture exporter isn't getting their culture out?
I can't believe this got 106 to 13 with the "Regardless of parental approval."
I'm willing to buy an argument that certain kinds of "social media" have negative impact on kids under 16; but I'm absolutely not willing to buy an argument for a world in which the government is able to ban your communications with other people because you're under 16.
One possible result of this that sounds dystopian cool - script kiddy kids < 16 spinning up mastodon server instances and creating their own very leaky insecure rolling social networks. Unless it suddenly becomes illegal to run a server without a license.
Kids could use a social network hosted in another country. Imagine how quickly the US will erect a national firewall, especially since they can plausibly say it's to protect the children.
I would like to see regulation on notifications to address reaction driven addictions as opposed to an outright ban. Classical conditioning is clearly the issue at hand but opponents are not referencing the proven science enough.
If we don’t teach children how to use these platforms in moderation now they will certainly not be educated on how to use them responsibly in adulthood. I’m not against an outright ban totally but we are missing educational opportunities with what is likely an unenforceable attack on the problem.
My impression of these bills was that none of them had survived contact with SCOTUS. What is going to make this bill any different?
And why aren't we just passing a proper damned privacy law already? All of the addictive features of social media are specifically enabled by their ability to collect unlimited amounts of data. Yes, I know, the NSA really likes that data thank-you-very-much, but SCOTUS isn't going to be OK with "censor children" to protect that data flow.
This is a very complex problem. Its not just social media, but porn and other content that is not intended for young eyes.
One issue I have is with the age verification system. This will either be totally ineffective or overly effective and invasive. I feel legislation is drifting towards the latter with the requirement of ID.
One idea I had is a managed DNS blacklist of inappropriate content. The government could require that a website register itself on this list to operate, or else be subject to litigation for its content. At the same time, have ISPs and network gear support this list in a one-click fashion (a rough sketch of a resolver-side check follows the limitations list below). I have multiple DNS blacklists I use at home. I know this may be a little too technical for some parents and guardians, but that is the world we are living in.
Limitations being:
Section 230 - user posts explicit content and the site isn't in the blacklist.
Network scope - this blacklist would have to be added to all networks accessed by children. What about public Wi-Fi? Coffee shops?
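As promised above, a rough sketch of the resolver-side check. It assumes a filtering resolver that answers 0.0.0.0 for blocked domains; Cloudflare's 1.1.1.3 family resolver behaves roughly this way, while a government-managed list is hypothetical:

    import { Resolver } from "node:dns/promises";

    // Point at a filtering resolver. 1.1.1.3 is Cloudflare's family
    // resolver; a state-managed equivalent is the hypothetical part.
    const resolver = new Resolver();
    resolver.setServers(["1.1.1.3"]);

    async function isBlocked(hostname: string): Promise<boolean> {
      try {
        const addrs = await resolver.resolve4(hostname);
        // Filtering resolvers typically answer 0.0.0.0 for listed domains.
        return addrs.every((a) => a === "0.0.0.0");
      } catch {
        // NXDOMAIN from a filtering resolver can also signal a block;
        // a real implementation would distinguish that from lookup errors.
        return true;
      }
    }

    isBlocked("example.com").then((b) => console.log("blocked:", b));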
IDK, I love being able to be anonymous online, but I do see the negative effects of social media, porn, and explicit content on our youth. I don't really trust the government to solve this effectively.
In a few states, you'll imminently see any information about LGBTQ+ people, including mental health resources, on that blacklist. (This has already been the case for decades in various school districts.)
And I'm not even trying to exaggerate. Ohio is working to eliminate all transgender medical treatment from the state. They've already succeeded in making it nearly impossible for minors, and now they are working on preventing adults from receiving hormone treatments.
What qualifies for the blacklist? It's a moral question. What happens when the blacklist maintainer's morals differ from your own? Sure, in the U.S. it seems fairly uniform that most people do not want children having access to porn. But what about women having access to information about abortion? Or information about suicide? The use of drugs for psychological conditions? Vaccine efficacy?
Really sucks when someone that controls the blacklist decides you're on the moral fringes of society.
Agreed, some things are easy to classify, like nude content, but what about a website about war history? That content is simultaneously factual and explicit. This is why I say it's up to the website owner to register for the blacklist; that in itself is an incentive to reduce the website's surface of liability.
Social media is a hard problem. What is exactly the issue? Is it creating a larger social hierarchy than children can cope with? Is it meeting and interacting with strangers? Is it reinforcing dopaminergic pathways from superficial digital content and approvals?
I think that online anonymity is overrated (and yes, I'm aware of my username). Social media platforms ought to require traceability and age verification.
It's crazy how in 2017, YC was proposing a social network for kids as a startup idea:
> Social Network for Children: This is a super tough nut to crack. You have to be cool and offer a safe and private environment for kids to communicate with each other while enabling trusted adults to peer in/interact, etc… The company that can build something that is used and useful for all parties can build something of great value throughout a person’s entire life. - Holly Liu, Kabam/Y Combinator
There was very little notion that a social network, no matter how safe, was inherently detrimental to childhood development. Like cigarettes initially, it just seemed to be mostly positive with some niggling annoyances. I wonder what other current YC ideas will be considered horrible 7 years from now.
Is the feed algorithm the only thing harming children? Not the concept of a social network in general, whose entire point is to publicize lives and keep its users stuck to their screens for as much time as possible?
The core fault of social networks is that they hamper children's development by keeping them online. Trying to improve those services by making the algorithms better will worsen the situation. You want children to lose interest in social media, not to make it more appealing!
I agree with where you're coming from, but publicly documenting their feed algorithms (which is what a call for "open source" effectively is) wouldn't change much. What is actually needed are open API access to the data models, so that competitive non-user-hostile clients can flourish.
I believe this would be legally straightforward by regulating based on the longstanding antitrust concept of bundling: just because someone chooses to use Facebook's data hosting product, or their messaging products, does not mean they should also be forced to use any of Facebook's proprietary user interface products.
This would not solve the collective action problem where it's hard for an individual parent(/kid) to individually reject social media, but I also don't see this bill doing much besides making it so that kids have to make sure their fake birthday is three years earlier than the one they already had to make up due to COPPA. Of course the politicians pushing this bill are likely envisioning strict identity verification to stop that, but such blanket totalitarianism should be soundly rejected by all.
Unfortunately, the larger problem here is that the digital surveillance industry has been allowed to grow and fester for decades with very few constraints. Now it's gotten so big, and its effects so pervasive, that none of the general solutions to reining it in (like, similarly, a US GDPR) are apparent to politicians. It's all just lashing out at symptoms.
I’m no fan of some of the things that have been happening in FL, and I’m not sure that a simple outright ban on “social media” for <16 is the way to go, but at the same time, I think it’s a good thing they’re pushing this because the conversation needs to happen and with more urgency.
> The social media platforms the bill would target include any site that tracks user activity, allows children to upload content or uses addictive features designed to cause compulsive use.
This is such a silly issue on a governmental level. Shouldn't the parents, who spend more time around their children than the Florida House of Representatives does, worry about and monitor their own children?
Off the top of my head, I can think of two reasons why it might be preferable to have the government intervene.
1) We are happy to have the government intervene in other cases for the sake of children; I would be pretty upset at any politician who espoused removing age restrictions on cigarettes. I don’t know that social media is as bad but it certainly has some of the addictive properties
2) An argument from collective action: if all kids' social lives currently revolve around social media, unilaterally disallowing one child to use it could result in alienation from their peer group, which might be worse for the kid than social media. A government mandate would remove this issue.
The bill is pretty broad, with oddly specific carve-outs. I did not think that I would need to pay for a VPN service to avoid weird legislation in my state, but here we are.
The sad thing is that, with the exception of the occasional comment on Slashdot / HN and my LinkedIn profile, I'm barely active on social media anymore. Still, I'm not giving PII to sites that this bill considers "social media" sites, just to prove that I'm over 16. That's absurd.
A warning signal for what? To need a VPN? I was hoping that our current legislators would be unable to spell "Internet", to be honest.
Unfortunately, Florida has gone crazy over the past twenty years. I'd vote with my feet, but this is where my family lives. So, I have little choice but to hunker down and contribute what I can to destabilize the dominant QAnon aligned caucus in the legislature, and hope that in 2026, we get a governor who doesn't play to this nonsense.
I am all for it but the question is how will this be enforced? If they use this to require government ID on every other website now and crack down on semi anonymous accounts and go full surveillance I do not like it.
You might say "they do know who you are when you make any account," and that might be true; though if I used a VPN all the time and really never let any info slip, maybe not. I just do not like the total deletion of privacy.
I'd like to see a federal law mandating that any future law or regulation that restricts what children are allowed access to makes the parents solely liable if the children gain access. There should be a standard way to declare that pages are age-restricted, and the browser on the child's phone or computer should check it (a sketch of what such a check could look like follows below). But if the child bypasses that somehow, that's on the parents.
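For what it's worth, such a marker already half-exists: the RTA label (rtalabel.org) is a fixed string that adult sites can serve in a meta tag or header. Here's a sketch of the kind of client-side check I mean; the RTA string is real, while the parental-control wrapper around it is my own invention:

    // The RTA label string is an existing convention; wiring it into a
    // parental-control layer like this is just an illustration.
    const RTA = "RTA-5042-1996-1400-1577-RTA";

    async function pageIsAgeRestricted(url: string): Promise<boolean> {
      const res = await fetch(url);
      // Some sites send the label as an HTTP header...
      if ((res.headers.get("rating") ?? "").includes(RTA)) return true;
      // ...others embed it in a <meta name="rating"> tag in the markup.
      const html = await res.text();
      return html.includes(RTA);
    }

    pageIsAgeRestricted("https://example.com").then((r) =>
      console.log("restricted:", r));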
So some loon bans Call of Duty for anyone under 18, but 16-year-old Jonny figures out how to log in anyway at his friend's house. You're championing for a way to indict Jonny's parents on federal charges?
If I'm off base, can you clarify exactly what you would like to see instead?
I don’t want to verify myself and I don’t trust the government to not work with Facebook to say they are doing some “zero knowledge proof” and still map us all. Implement a fine for parents in Florida. Let it stay Florida’s problem. If the Amish were in charge we would all be in trouble. Raise your own kids.
I'm calling it now, this is like the War on Drugs. Banning it will only create an underground environment with even less oversight, not to mention the burning desire kids have for doing things they should not be doing, which is guaranteed because this is not even remotely enforceable.
For this bill, no. It does not meet conditions (d) and (e) of the 5 conditions a site must meet in order to be a social media platform. I've quoted the relevant part of the latest bill text in this prior comment [1].
this is a strategy that works under any governance system on the planet. so to actually make an enforceable law, don't try to impose restrictions on the action, its provider, or its users; instead, think about the things they rely on and restrict those.
this law doesn't do that. but it would be fun to think of things that would. can we take something away from social media users? can we incentivize non-social-media users? maybe we can leverage our disdain for data brokers into a partnership where data brokers can't serve companies that hold children's data.
just spitballing, I don't actually care about this law or any of those proposals, just noticing the current state of the discussion lacks... inspiration.
I don't agree with any type of "outright ban". However, having EXTREME restrictions on these social media sites for children seems so obvious. I'd prefer a complete restriction on any content outside of friend groups, restrictions on the algorithms, etc.
I feel like there is no good way to market the fact that some school districts are banning the dictionary to comply with all the "word" and "thought" ban laws.
Like how do you spin that so it's a good thing that some teachers can't even have books in their classrooms to avoid running afoul of the law?
How do you spin book bans as a good thing in any circumstance?
How do you spin the laws that remove parents rights to make medical decisions for their children when the medical decisions conform with the current state of the art and evidence based treatments?
Other than that, I think most people just treat Florida as a meme thanks to the Florida man stereotype.
Not sure how I feel about this, but I don't hate it, theoretically. I'm sure it's hopeless in practice. It might be a worthwhile experiment if nothing else. But a piece of legislation will never be an adequate substitute for good parenting.
I'd like to see more science proving that social media is bad for kids before I see lawmakers enacting laws around the theory. That said, I think 16 is too high an age. Kids go into high school at ~13, so I think that would have been a more reasonable age to consider. The hard truth is that most kids don't seem to have access to "good parenting" by any reasonable standard, and parenting is hard with good parenting being harder (not an excuse).
> I'd like to see more science proving that social media is bad for kids
Smoking tobacco was allowed for children for hundreds of years before it was regulated. Today we can hardly fathom how stupid it would be to allow children to smoke. Social media is much more accessible than cigarettes, far more addictive, and rather than messing with your lungs it messes with your brain and personality.
There are many problems with "waiting for the science". One is that it takes many, many decades to get reliable longitudinal studies on things like addiction and how the brain is affected.
There are so many indications that social media is bad for children (including scientific studies), in many different ways. It really should not be controversial to limit use for children. It's not like it is something that they need.
This book, "Smartphone brain: how a brain out of sync with the times can make us stressed, depressed and anxious" by Swedish MD psychiatrist Anders Hansen, brings up one of the aspects (unfortunately I don't know of an English translation): https://www.amazon.se/Sk%C3%A4rmhj%C3%A4rnan-hj%C3%A4rna-str...
> I'd like to see more science proving that social media is bad for kids before I see lawmakers enacting laws around the theory
Is there science proving that porn is harmful for children? If not, do you see that as an argument for legalizing it?[0]
The first restrictions on tobacco purchases for children were in the 1880s, long before the science was settled on the harms tobacco use causes.
Science is slow, contentious, and produces limited results[1]. If we can only ban things that are scientifically proven to be harmful, what is stopping TikTok from slightly modifying their app and rereleasing it?
I can easily buy products that are BPA-free, but that just means clever chemists have come up with even worse plasticizers, like BPS, to use as substitutes.
And software can be adapted way faster than chemistry.
[0] I don't.
[1] It's definitely still worth studying.
Definitely not as a substitute, but it might be something that helps push more parents to consider preventing their children from using social media. It’s much easier for a parent to explain to the child that the reason they’re not allowed to use it is because it’s illegal, instead of trying to explain how social media negatively affects their brains.
I think that instead of banning it altogether, they should ban practices within those platforms. Wtf Florida, this sounds like something CA would do... It's almost as if they banned all speech because some speech is hurtful.
Are you trying to suggest that the warning label that CA added is more similar to the banning of social media for students than banning books in schools, and banning the discussion of certain topics in schools?
The current generation is already heavily manipulated by it, which means the next generation - their children - will also be heavily manipulated by it regardless of consumption or lack thereof.
As the parent of an 11 year old, I wholeheartedly agree. The science is pretty clear that social media has had a very detrimental effect on teens' mental health. We should treat it like we do other substances that are harmful for teens; once they're older they are better able to make wiser decisions as to if, when and how they want to consume social media.
It may be impossible to enforce outside school, but so is the 13-year old limit on opening accounts (my kid's classmates all have accounts; they just lie about their age). But that's not a reason not to have it on the books, as it sets a social standard, and more importantly puts pressure on social media companies.
The evidence is not all that solid. The most demonstrable link is between use of portable devices at bedtime and poor sleep quality. Everything else has mixed evidence.
Social media exclusively uses a predatory pricing model, and the companies should be forced to stop subsidizing their products with targeted ads. The algorithms drive maximum engagement because that drives maximum impressions and CPCs. The algorithm doesn't care that it's making people angry and divisive; it's optimizing revenue!
All of the other evils stem from this core issue: Meta et al. make money off of their customers' misery. It should hardly surprise anyone that children are affected much more strongly by industrialized psychology.
> The social media platforms the bill would target include any site that tracks user activity, allows children to upload content or uses addictive features designed to cause compulsive use.
That does not appear to be correct: it implies that meeting any of those 3 conditions is enough, but the bill text says all of the conditions must hold (and there are 5, not 3). Here is what the current text says "social media platform" means:
< Means an online forum, website, or application offered by an entity that does all of the following:
< a. Allows the social media platform to track the activity of the account holder.
< b. Allows an account holder to upload content or view the content or activity of other account holders.
< c. Allows an account holder to interact with or track other account holders.
< d. Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.
< e. Allows the utilization of information derived from the social media platform's tracking of the activity of an account holder to control or target at least part of the content offered to the account holder.
There's also a huge list of exceptions. It says that it:
< Does not include an online service, website, or application where the predominant or exclusive function is:
< a. Electronic mail.
< b. Direct messaging consisting of text, photos, or videos that are sent between devices by electronic means where messages are shared between the sender and the recipient only, visible to the sender and the recipient, and are not posted publicly.
< c. A streaming service that provides only licensed media in a continuous flow from the service, website, or application to the end user and does not obtain a license to the media from a user or account holder by agreement to its terms of service.
< d. News, sports, entertainment, or other content that is preselected by the provider and not user generated, and any chat, comment, or interactive functionality that is provided incidental to, directly related to, or dependent upon provision of the content.
< e. Online shopping or e-commerce, if the interaction with other users or account holders is generally limited to the ability to upload a post and comment on reviews or display lists or collections of goods for sale or wish lists, or other functions that are focused on online shopping or e-commerce rather than interaction between users or account holders.
< f. Interactive gaming, virtual gaming, or an online service, that allows the creation and uploading of content for the purpose of interactive gaming, edutainment, or associated entertainment, and the communication related to that content.
< g. Photo editing that has an associated photo hosting service, if the interaction with other users or account holders is generally limited to liking or commenting.
< h. A professional creative network for showcasing and discovering artistic content, if the content is required to be non-pornographic.
< i. Single-purpose community groups for public safety if the interaction with other users or account holders is generally limited to that single purpose and the community group has guidelines or policies against illegal content.
< j. To provide career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.
< k. Business to business software.
< l. A teleconferencing or videoconferencing service that allows reception and transmission of audio and video signals for real time communication.
< m. Shared document collaboration.
< n. Cloud computing services, which may include cloud storage and shared document collaboration.
< o. To provide access to or interacting with data visualization platforms, libraries, or hubs.
< p. To permit comments on a digital news website, if the news content is posted only by the provider of the digital news website.
< q. To provide or obtain technical support for a platform, product, or service.
< r. Academic, scholarly, or genealogical research where the majority of the content that is posted or created is posted or created by the provider of the online service, website, or application and the ability to chat, comment, or interact with other users is directly related to the provider's content.
< s. A classified ad service that only permits the sale of goods and prohibits the solicitation of personal services or that is used by and under the direction of an educational entity, including:
< (I) A learning management system;
< (II) A student engagement program; and
< (III) A subject or skill-specific program.
I hope they add 8 more exceptions. I want to see what they do when they run out of letters for labeling the exceptions.
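Read as code, the definition is a conjunction: all five conditions must hold and none of the exceptions may apply. A toy model of the test, where every field name is my own shorthand rather than statutory language:

    // Toy model of the bill's definition; my shorthand, not legal advice.
    interface Service {
      tracksActivity: boolean;            // condition (a)
      uploadsOrViewsContent: boolean;     // condition (b)
      interactsOrTracksOthers: boolean;   // condition (c)
      addictiveDesign: boolean;           // condition (d)
      targetsContentViaTracking: boolean; // condition (e)
      exemptCategory: boolean;            // any of exceptions a. through s.
    }

    const isSocialMediaPlatform = (s: Service): boolean =>
      s.tracksActivity &&
      s.uploadsOrViewsContent &&
      s.interactsOrTracksOthers &&
      s.addictiveDesign &&
      s.targetsContentViaTracking &&
      !s.exemptCategory;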
This might be the first bill approved by the Florida legislature in the last 8 years that I agree with. I like it in spite of all the reasons these people voted for it. And I like it in spite of the absolute horror show of an enforcement dilemma it's going to impose on the residents of Florida. Will it force all Florida residents including children to use VPNs to use the internet? Yes it will.
I have no idea how this would be enforced but I agree with the spirit of the law.
We either live in a world where children are hopelessly pressured into joining social media early in life and suffering its effects, or we ban them from it all together and allow them to have something that still looks like a childhood.
Perhaps a better solution is to allow children on social media, but drastically limit how companies are allowed to interact with such accounts. Random ideas: no ads, no messages from people they don't follow, limit ability to follow other accounts in some way.
Children are not so much in danger of falling for online advertisements as they are of the overall detrimental effects of social media in general. Social networks are inherently bad for kids; they are addictive and directly harm children by constantly dosing them with dumb entertainment and shortening their attention spans.
I fail to see how anyone under 18 can legally agree to any kind of contract without a parental co-signature. This should already be enforceable without new laws, but I'm glad to see it made explicit: progress over perfection.
An extremely transparent attempt to ensure Florida children do not have access to news and information that goes against a very particular narrative. It's all about the control of information, which sure is ironic coming from "the party of small government" and self-proclaimed "free speech absolutists." Absolutely disgusting.
It would be fantastic if social media companies instead simply blocked all access to IPs located in Florida regardless of age.
That's true, however I'm not sure I trust "Big Tech" to self-manage this system. After all, "Big Tech" are the ones that forced us into needing this legislation in the first place. And they don't have a great track record at protecting PII.
The federal government already has all the info it needs to run an ID program.
First, we need privacy regulation (eg a US port of the GDPR) that stops the existing widespread abuses of identification and personal data, especially the abuses being facilitated by the current identification systems. Only after this is fixed does it make sense to talk about increasing the technical strength of identification.
This is idiotic, they will find a way to watch and be on social media regardless. Also, I don’t see how social media is bad but MSM that brainwashed generations is any good, are they going to enforce the same rules on other forms of media? Or is it because “we can’t censor XY social media” so we are gonna ban them all?
It's impressive how people here, who should know better, are cheering this on just because they are parents themselves. Even disregarding the legality, it's not going to work technically. The perceived safety of one's kids really crosses some wires in parents.
It’s good to have this conversation. I don’t think anyone here wants the bill enacted as-is (did anyone actually read it? I didn’t, and I don’t live in FL).
Of course this is technically unenforceable. There will always be workarounds. You could smoke as an 11-year-old if you were determined enough.
But we need more pushback and dialogue on social media’s role in the common discourse. For a while, nobody talked about smoking being bad and it became normalized while killing a lot of people. Seatbelts. Dumping waste in the river… most people go with the flow of common consensus until that consensus is scrutinized.
It would definitely work if a parent could report the accounts of all the kids involved and their tiktok/whatever accounts got deleted + their phone numbers and emails got block-listed for the service.
Even more if the system worked in a centralized way: this email/phone is used exclusively by a kid, so now all on-boarded companies must delete their accounts and not allow them to register again.
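A minimal sketch of that centralized check, assuming a shared registry keyed by hashes of contact identifiers; the endpoint and scheme here are entirely invented:

    import { createHash } from "node:crypto";

    // Invented registry: services submit hashes of identifiers flagged as
    // belonging to minors, and check new signups against the same store.
    const REGISTRY = "https://registry.example/v1/minor-identifiers";

    function hashIdentifier(id: string): string {
      return createHash("sha256").update(id.trim().toLowerCase()).digest("hex");
    }

    async function isFlaggedAsMinor(email: string): Promise<boolean> {
      const res = await fetch(`${REGISTRY}/${hashIdentifier(email)}`);
      return res.status === 200; // assume 404 means "not flagged"
    }

    // At signup: refuse (or require parental verification) if flagged.
    isFlaggedAsMinor("kid@example.com").then((flagged) =>
      console.log("flagged:", flagged));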
Herein lies the rub: every time I've seen this happen in the past, companies just applied blanket bans to accounts, sometimes retroactively for accounts that were illegal at the time of their creation, regardless of the user's current age. Both Google and Twitter did this to users who created accounts before they were 13 but were adults by then.
If you're going to introduce legislation like this, then it needs to include provisions so that it will not permanently bar those users' access to the services once they are of age. I manage my children's social media interaction (near zero, with the exception of YouTube); if their accounts got permanently disabled, that would be unfortunate in the future when they are old enough.
I’m a parent and think this is bullshit. It won’t work and it isn’t designed to work, it’s just political posturing destined to be struck down by courts.
I agree it's the parents' responsibility, but I don't know that the government can't prevent things it deems harmful just because parents are okay with it. It also often makes things easier for parents by being able to require service providers to work with parents and limit their sales to children.
I'm fine with the government enforcing curfews, smoking, and drinking laws on children even if their parents disagree.
I don't think any of the things you listed, except probably Facebook, would qualify as social media under this law. I'm honestly very thankful more manipulative social media didn't exist when I was a teen. My life probably would have been better if I hadn't had access to even the ones I did.
All those things were horrible for us. I can't imagine the nightmare of combining that with the use of my real name. Like yeah, I want a free for all, but how bad does something have to be before we say no way?
Except that they weren't. My life would actually be substantially worse now without most of those apps / websites. I was just another loser growing up in a rural trailer park with no real prospects before I got interested in programming and taught myself employable skills in those online forums and chat apps. It's insane to me that anyone could call themselves a part of a "hacker" community and complain about kids having access to information and wanting legislation to restrict it.
I support this. Normally, I think government intervention is bad and parents should be in control. But, as a parent myself, it was hard not to allow my child to have a phone when she kept saying, "Everyone has one. I'll be the only one without a phone". No one wants their child left out or left behind. This will remove that rationale.
Just to be clear, because parenting is hard you want to legislate how other parents are able to raise their children so that it's easier for you to get the behavior you want? I think all parents should take this approach.
* Having trouble getting your children to go to church because their friends don't? Let's just legislate mandatory church attendance so that will remove the rationale for kids whose friends don't attend!
* Having trouble getting your kids to eat healthy because all their friends get to eat and drink whatever they want? Let's outlaw sodas so kids won't have to feel peer pressure!
* You think your kid is playing too many video games? Why not just pass legislation that restricts all video game usage so your kid doesn't feel left out!
Telling your kids no is part of being a parent. Explaining to your children why they aren't allowed to do some things that other kids do is part of being a parent. It seems we have an abundance of parents who don't want to actually be a parent and would rather legislation was passed so they don't have to say no to their kids.
It's not my deficiencies as a parent that make me support this law. It's the need for a reverse network effect. That's how social media works. If everyone else has it, your kid wants it. If no one else has it, your kid doesn't want it. Social media has been found harmful to children, like smoking or alcohol. For many reasons, it should be limited for children.
This wouldn't stop your kid from wanting a phone. They'll all still want phones for games, chatting, cameras, videos, comment sections, forums. Of course many parents won't care if their kids sign up to social media sites through some workaround, so the pressure will still be there to join social media sites.
I often draw the parallel with cigarettes and alcohol. Kids need to produce an ID to purchase them. Sure they can fake it, but then they are breaking the law, and that still raises the barrier.
But that's likely not enough. In addition, there should be public health campaigns to warn against the risks.
Cigarette use has plummeted since the 90's, so something must be working.
I like to compare it to gambling, another heavily gated industry, due to its use of professional psychology to engineer superstimuli for the purposes of addiction.
Social media should be regulated for the same reasons gambling is regulated.
The tech industry needs an association similar to the American Medical Association for doctors, where members collectively agree on ethics and guidelines that must be followed. Any person in tech is behaving unethically if they assist in implementing software that restricts children in Florida from accessing information on the internet that their peers in other states can access.
Florida shows little concern for the potential harm to children resulting from information restrictions. Kids in abusive environments benefit greatly from the social connections online communities provide, as well as from the diverse information and perspectives of other people. Florida has created this bill as a means to censor content it deems immoral, whether that's abortion information for girls, understanding sexuality, the existence of trans kids, or any other topic arbitrarily designated as immoral by the ruling political party.
It is disconcerting to target the rights of children, who have the lowest chance of having the resources needed to challenge something like this bill in court, which should happen under the First Amendment.
The AMA has many problems, but I'll just mention one: they artificially restrict the number of graduates each year WHILE reporting nationwide shortages. It's not a silver bullet for ethical behavior or efficient economics.