
This author is seriously suggesting that governments ban children's use of social media, and that can't really be done without completely destroying internet anonymity.

Any policy that actually achieved this, without being trivial to circumvent, would basically need to replicate the Great Firewall of China.

Doing this in a half-assed way is even worse than not doing anything at all. If you just require ID checks for all users to do age verification, you create a privacy nightmare for the adults. Meanwhile, children will circumvent the restrictions with VPNs, so you need to ban VPNs too. Foreign companies, who have no incentive to play by the rules, will surely capitalize on this, so you also need a comprehensive website blocking system. As they say, there's nothing more dangerous than a teenager with very little money and a lot of time on their hands, so a simple DNS-based block definitely won't suffice, you probably need Chinese-style deep packet inspection and such.

The only middle ground I see here is enforcing this through the App Stores, perhaps with an extra ban on sideloading without a developer certificate, guarded by ID checks. Losing the ability to sideload would be a shame, but this is the "worst solution, except for all the others" kind of situation. Sure, this is trivial to circumvent by using the web, but the extra friction of web apps being worse than native might be a good enough deterrent.




Of course people think of solving this problem, once again, by constraining the consumer instead of the producer.

What we should do is make rules for social media platforms that disallow them from developing algorithms that make people addicted. You might ask how you define that, but the companies have already made a whole science out of it, so it's not that abstract anymore. It would surely be an elaborate task and result in a cat-and-mouse game, but at least the issue would be taken seriously, and people would understand better that engaging with these platforms that try to push the edges is playing with fire.


> You might ask how you define that, but the companies have already made a whole science out of it, so it's not that abstract anymore

Hardly. Addictiveness is not binary. There are many people who obsessively check their email or refresh news websites. There is no doubt that social media companies choose the algorithms that maximize engagement, and so most probably they also maximize addiction, but _any_ algorithm will cause addiction to some extent. What's the limit? How do we even measure this?

Something that is maybe a little more interesting is banning the practice of recommending "negative content" because it produces more engagement than "positive content". How this is defined is also somewhat squishy, but we can at least try to define it -- content that is likely to provoke negative emotions, like anger, fear, aggression, etc.

I think there's a much clearer through-line to argue that recommending negative content on social media produces a substantial negative externality, and that moves this into the category of things like environmental regulations.


I agree with you that a pos/neg divide is clearer and more straightforward to construct.

Reading your comment also evoked in me images of political repression. Anger and fear are really important emotions signifying "this situation is not meeting my needs". Social media can abuse these for profit probably precisely because they play this important role. In this light, a ban on recommending negative content seems really dangerous. Any content that expresses dissatisfaction with the political status quo is likely to contain some 'negative' (I suggest the term 'challenging') emotions.

So while 'addictiveness' is, like you say, really difficult to measure, I'd prefer we try.


I’m fine with the deciding body being an independent, literate group (easier said than done) who observe allegedly addictive platforms and make judgements based on the spirit of the law. We don’t need to reduce this to some kind of automated decidability machine.


Absolutely this. It’s not about banning content - to a certain extent parents are responsible for what their kid has access to. In many cases proper education allows kids to self-regulate and consume “adult” content appropriately with no harm. Unfortunately that’s not possible everywhere, as many countries lack the resources and/or mentality to achieve this.

Regardless, dangerous content and services (just like dangerous substances) should be hard to make and very visibly marked, leaving no doubt about what it is and how it works. I love the EU’s notion of “algorithmic transparency” [0]. I would go a step further and require systems that attempt to increase engagement by exploiting behavioural sensitivities to be marked and even opt-in (think cigarette packaging).

[0] https://algorithmic-transparency.ec.europa.eu/index_en


The internet makes this really hard to enforce.

To sell drugs, cigarettes, alcohol etc, you need somebody to do the selling, and that person needs to be located wherever your customers are. If you break that place's laws, well, they probably have police who can put you in jail.

Social media is different: you can run a social media website targeted at Americans without ever setting foot in the US, having a server in the US, having a business entity in the US, etc. It's just some random Russian website, following Russian but not American laws, that some Americans like to visit. Sure, the US can try playing the cat-and-mouse game with you and force ISPs to block your site and all VPNs, proxies and Tor nodes that might presumably give customers access to it, but that's still a game one needs to play, and playing it isn't without consequences for the privacy and freedom of others, consequences most democratic governments aren't willing to bear.

Even if the government wins and somehow manages to block you completely, or put enough obstacles in your path that doing further business doesn't make sense, you still don't lose. You don't go to jail or face legal action, you just quit and focus on other countries instead.


There are two avenues to deal with a hazard. You can try to manipulate the environment to eliminate the hazard, or you can try to strengthen people to make them immune to the hazard. I think we should prefer the latter over the former whenever possible.

For one thing, it's more robust. The environment is messy and control is often illusory at best. Control limits freedoms and introduces centralized points of failure that can be manipulated by bad actors. Making people strong and free creates more opportunity and innovation, even though it scares the people who long to be in charge of the centralized control.

What does it mean to strengthen people to make them immune to the harms caused by social media? I don't know exactly, but I bet we could find out.


The brain has some flaws that are very hard to overcome. Addictions are some of them.

There's a reason almost every country in the world regulates and restricts gambling.


>The brain has some flaws that are very hard to overcome. Addictions are some of them.

The majority of people don't fall victim to addiction; it's a minority of people who are prone to it. Everyone shouldn't have their freedoms restricted just to cater to the more weak-minded.


A popular position perhaps, which is why you can still go gambling while 'weak-minded' children cannot.

Personally, I hope we come to protect the vulnerable regardless of their age.


You don't really seem to understand how the legal system works, do you?

Laws are there to protect powerful people (in many cases), to protect the majority (hopefully most cases) and to protect the vulnerable, sometimes from themselves (in a decent number of cases).


Aside: The people who like prediction markets get quite annoyed at that.


"prediction markets" -- what a loaded term. Do you consider financial futures markets to also be "prediction markets"? As I understand from financial research, it has been shown time and time again that financial futures do not predict future performance. In my mind, "prediction markets" are nothing more than legalised gambling on an (election) outcome. To be fair, most retail (non-institutional) traders of highly leveraged financial products (futures & options) are the same: They are gambling on an outcome, not investing or hedging risk. Finally, I am not saying it should be outlawed, but there should be some very strong warnings before trading the product -- as there are for futures & options.


I think we should start with removing the immunity that large platforms possess against relevant criminal prosecutions. Take suicide, for example: in some jurisdictions, driving another individual toward suicide is a criminal matter. If evidence can be put forward in a court that the victim used a lot of social media and that the algorithm contributed to that suicide, well, maybe the publisher should be getting prosecuted.

Do I expect that some social media companies are really going to struggle to continue to operate at their current scale because of these changes? Yes, I 100% expect it, and I think it's great. It may lead to a smaller and more personal web. Your business model has no inherent right to exist if it harms people. Maybe, for example, you will need to hire more humans to handle moderation so that you stop killing people, and if humans don't scale, well, too bad, you're going to get smaller. We regulate gambling, tobacco, etc. to limit the harm they do; I don't see any difference with social media.

To have the biggest impact without stifling innovation we can start by applying this rule to platforms which are above a certain revenue level. There is likely a combination of legislative and judicial action here in that there may already be crimes on the books which these platforms are committing, but the judiciary has not traditionally thought of a corporation being the person who committed that crime, certainly not at scale against thousands of victims. In other cases we may need to amend laws to make it clear that just because you used an algorithm to harm people at scale, doesn't make you immune to consequences from the harm you caused.


No, there's not. There's any number of ways to deal with a hazard. Your two avenues are not even distinct. Both require exerting control. Any scheme can be manipulated by bad actors. Scheming to not scheme is still scheming.

You cannot make people anything without limiting their freedom. How do you make people stronger? If you have an idea, there is a centralised point of control/failure. Bad actors will be stronger and freer as well.

There's plenty of examples of successful measures to reduce harm by controlling the "environment"; see cigarettes, alcohol, gambling, being old enough to drive on public roads, being old enough to take on debt, child labour, etc.

It's weird to use the words "environment" and "hazard" on one hand and "people" on the other. The discussion is about hazards designed, created and maintained by people. The environment to manipulate is people, organisations, and law.


As a friend of mine said "If you can't kid-proof the farm, you have to farm-proof the kid". Watched said kid drink from farm puddles, and lick feed bowls. Seems to have worked: She's headed to tech school now.


I agree that we can't effectively manipulate the environment to eliminate the hazard, but I also worry that we can't effectively strengthen people to make them immune to the hazard either.

A common thread through all human history is people being misled en masse. Before social media we were slaves to the tabloid headlines. Before widespread papers we were slaves to the pulpit. Etc.

For the last 10 years social media has been the tabloid, but personalised. Outrage = engagement, so the algorithms have pushed outrage, personalised in the sense that they have searched for the thing that outrages each of us individually.

I fear that the next 10 years of social media is very basic generative stuff (LLMs don't need to get better; social media companies just have to apply the current state of the art), turning them into the tabloid with intimacy. By turning into your friend in how they communicate with you, they get 10x the engagement.

The way to change someone's mind is through intimacy.

And humans are suckers for it. We can't strengthen the masses against outrage, and we can't strengthen the individual against intimacy.

Sorry for being so pessimistic.


If the hazard is just me skulking about and punching you in the back of your head every time you let your guard down, you could strengthen yourself and make yourself immune by never going outside and always keeping your door locked and barring up your windows.

How is that inherently preferable to addressing the harm itself? By what principle do you conclude that we should prefer one over the other whenever possible?


Or we could have the government require that everyone wear a loudspeaker that constantly announces their presence so that nobody can sneak up on anyone. Citizens are required to purchase their own loudspeaker and anyone caught not wearing one will be fined or jailed.

Is that what you prefer?

On the other hand, if everyone around you was a black belt in jiu jitsu and you knew that there was a good chance that they’d break your arm if you tried to sneak up and punch them, you probably wouldn’t want to do that, would you?


> Or we could have the government require that everyone wear a loudspeaker that constantly announces their presence so that nobody can sneak up on anyone. Citizens are required to purchase their own loudspeaker and anyone caught not wearing one will be fined or jailed.

A much worse solution than e.g. jailing me. It's not everyone who is the problem, neither in my scenario nor in the case of a handful of gigantic social networks exploiting insecurities and addictive tendencies in children.

> On the other hand, if everyone around you was a black belt in jiu jitsu and you knew that there was a good chance that they’d break your arm if you tried to sneak up and punch them, you probably wouldn’t want to do that, would you?

Is that a better solution than jailing me?

Seems we've ended up with three different potential solutions and your principle already isn't holding any water IMO. To bring us back to the problem at hand I can think of an analogous set of three solutions:

1. Make everyone announce their age when they use a social media website, so the site knows not to exploit children. This is kind of like making everyone wear a loudspeaker.

2. "Harden" the children, which is like teaching everyone jiu jitsu.

3. Remove the profit incentive to exploit children (or anyone else) by banning ad-funded social networks. This is like jailing the culprit.


We spend enormous amounts of time and energy trying to pick up the pieces of the destruction social media giants leave in their wake as they get absurdly rich. How about we instead simply make the giants liable for what gets posted on their platforms? Let the “move fast and break things” crowd that is so certain of their own genius spend their billions on figuring it out instead of on how to get you to click on another ad.

They will figure out a solution very quickly or they will simply cease to exist, and either way the problem will be solved.


> How about we instead simply make the giants liable for what gets posted on their platforms?

Because social media is a tool for global social influence and global intelligence that the powers that be do not want to give up. Because those same powers are often invested in social media companies or don’t want to get on their list of enemies. Because it might look bad politically if it proved unpopular. Because they are all social media addicts. Take your pick.


The largest social media platforms should be required to federate and open their data through powerful APIs.

Once they control a significant part of society's communications, they owe society something in return. Let society access our communications how we choose.


The fact that they are exceedingly rich seems to indicate they are providing things people like.


Fentanyl, human trafficking, and identity theft can all be quite lucrative as well. Externalities matter. A high revenue stream does not equate to high benefit to society.


It’s ridiculous to suggest that you destroy all of internet anonymity by requiring ID for mass-scale ad-funded social media.

It’s the commercial side of the equation that’s the problem. It’s what gives these social media companies perverse incentives when it comes to engagement. So any social media site that can’t effectively tap into the US ad market at a significant scale will not have as much of a problem. I don’t think the pre-Facebook forums were as much of a problem, even though they might have had some ads here and there.

So VPNs and all that just isn’t a concern. You don’t need a great firewall. You just need to regulate commercial business, which is not at all a crazy proposition.

The other side of this is that the US really, really should implement an effective federal ID system with two factor authentication. This is becoming commonplace almost everywhere else in the world, and not having it creates very serious security and privacy risks.


Facebook already knows how old its users are with a great degree of accuracy. The block does not have to be perfect to be effective at a societal level. Some curious kids may circumvent it. But it would prevent the massive network effect whereby all teen social life is online on platforms that monetize them. Some very large fines for social media companies found providing services to minors would make it impossible to advertise to teens there, and would make a huge difference.


Even if Facebook implements these controls, what guarantee do we have that another social app won't come along without these controls? Do we regulate TikTok, Youtube, Snapchat, GroupMe or whatever the latest flavor of the month is as well? There are probably thousands of startups that would jump at the chance to monetize teenagers even if FB were to step aside.


There really aren't that many platforms that 'everyone' uses. So yes, you regulate them, just like we regulate the thousands of bars in the country to keep minors out.


Social media != the Internet. It would make it harder to sign up for an account with the large social media services covered by the law. They would need to check IDs, or outsource to someone who does.

It would be like creating a bank account.

You could do plenty of other things. Anything you don't need an account for isn't covered. Depending on how the law is implemented, perhaps many forums wouldn't be covered?

I think it would result in kids going to websites that their parents haven't heard of yet and that don't check IDs.


> Social media != the Internet.

Technically true, but social interaction is the great draw of the internet. Even Zawinski's Law noticed that every program expands to include social features or is replaced by those that do.


A similar form of middle ground, without touching the app/web level, may be to enable device parental controls by default on new phone and tablet purchases and to ration their removal with some kind of privacy-preserving ID hashing protocol that's also rate-limited per ID. 90% of the problem is related to the ease of mobile devices; it doesn't need to extend to PCs. Still a ton of burdensome consequences.
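
A rough sketch of what that "privacy-preserving ID hashing protocol, rate-limited per ID" could look like, in Python. Everything here is hypothetical: the idea is that the vendor stores only a salted hash of the verified ID, never the ID itself, and throttles how often controls can be lifted per hash.

    import hashlib
    import time

    VENDOR_SALT = b"per-vendor-secret"      # illustrative
    RATE_LIMIT_SECONDS = 30 * 24 * 3600     # at most one removal per 30 days

    last_removal: dict[str, float] = {}     # ID hash -> last removal time

    def hash_id(verified_id: str) -> str:
        # Only this salted hash is ever stored, not the ID itself.
        return hashlib.sha256(VENDOR_SALT + verified_id.encode()).hexdigest()

    def try_remove_parental_controls(verified_id: str) -> bool:
        # Allow removal only if this (hashed) ID hasn't done so recently,
        # so one adult ID can't unlock an unlimited number of devices.
        h = hash_id(verified_id)
        now = time.time()
        if now - last_removal.get(h, 0.0) < RATE_LIMIT_SECONDS:
            return False
        last_removal[h] = now
        return True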

But really, the best policy would be constant social and educational emphasis on device parental control feature awareness - similar to drunk driving campaigns. Get parents and guardians in the habit of taking 15 min to set up basic parental controls BEFORE handing devices to kids, rather than the all-too-common mess of reacting to a problem by taking the kid's phone or making them manually show everything after the damage is already done. Maybe also compel device manufacturers to incorporate a first-time-setup flow that has a specific soft ask of "Will this device be given to or borrowed by a child?" and then handholds the owner through setting up controls.


>that can't really be done without completely destroying internet anonymity

I am skeptical of this push to elevate internet anonymity to a new fundamental principle for organizing society.


>elevate internet anonymity to a new fundamental principle for organizing society

You mean privacy? Internet anonymity is a downstream byproduct of our right to privacy, not some new concept devised in the internet age. We've had the fourth amendment for quite some time.


OP hit the nail on the head. Privacy and anonymity are not the same thing. You can absolutely provide privacy without requiring anonymity.

I can have privacy in a conversation in my house without anonymity. I can have privacy in the woods. I don't have anonymity while walking down the street, but have pseudo-privacy. If I begin preaching on a street corner should I expect anonymity? If I join a members only fraternal hall that meets monthly, do I have anonymity? Do I have privacy? To what degree? Those are the scenarios we should be focusing on achieving.

As OP said, [we should not] "elevate internet anonymity to a new fundamental principle for organizing society".

This is probably the most important comment on this topic. You can't build a society on top of anonymity. The problem people should be out here solving is how to provide privacy without requiring anonymity. And when you have the itch to respond with a quick thought of "it's impossible," pause and think about how we accomplish it in the "real world" and then revisit.


I'm not saying they're the same thing, I said explicitly anonymity is a downstream byproduct of privacy.

I have yet to see a system or proposition that can maintain privacy AND kill anonymity.

Every corporation could implement zero knowledge proof schemes for everything I guess. That's what some schemes like crypto identification programs aim for.

>The problem people should be out here solving is how to provide privacy without requiring anonymity

>How we accomplish this in the real world and revisit

Maybe I'm simply ignorant. How do we achieve digitally the idea of privacy without the ability to mask your identity?


> I said explicitly anonymity is a downstream byproduct of privacy.

Yes, I agree that this is your assertion, and what I'm saying is that I fundamentally believe it is wrong. Who knows, I may be wrong - but I listed a number of instances IRL where people have privacy without anonymity.

The idea that privacy is only a downstream consequence of anonymity is the false conclusion. And to fix the internet we need to all realize that it's false. It's what we're used to, and it's the only way privacy has been provided in the digital world (which is why so many, including me, have thought that way), but it is wrong. We first need to realize and acknowledge that it is wrong and then pursue a path that matches - just like we do in the real world.


There's no right to privacy in the 4th Amendment.

But more importantly, there are huge differences between internet anonymity and the older conceptualization of a right to privacy, which mostly has to do with shielding conversations between people who know each other's identities (real names and addresses) from prying government eyes. Shielding such conversations is not incompatible with preventing teenagers from using social media.


It's interesting that, historically, privacy didn't need protecting because it was trivially available, and people wouldn't have thought much about the possibility that in the future it would no longer be technically available.

Privacy didn't need protecting in the era before bugging. If a founding father wanted to talk to someone privately, they just went and stood apart from everybody else.

50 years ago, standing apart no longer provided privacy because of long range microphones etc, but those were targeted attacks on diplomats etc.

Nowadays, you can probably do a nice undergrad project to recreate conversations from lip-reading Street View video clips.

Privacy was de facto available hundreds of years ago, but is technically impossible now. (Imagine the kinds of body checks that will be introduced before you enter a US SCIF after the chess cheating scandal!)


The recent emphasis on internet anonymity is mostly not a response to better tech for eavesdropping.


> There's no right to privacy in the 4th Amendment.

The Constitution is not an enumeration of rights that the people have. It even says so itself, in the ninth amendment. So there doesn't need to be text which says you have a right to privacy: you have that right even without it being listed.


I would not feel safe painting a target on my back if I was required to attach my legal name to my comments, especially the ones advocating for queer rights.


Have a government agency do the age verification, then it tells FB your age but no other information. Maybe the agency gives you a unique token that it also gives to FB, and you can use it to make a unique account.
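
A minimal sketch of that flow, with every name hypothetical: the agency verifies age out of band, then hands the same single-use token to you and to the platform, which learns nothing except that the holder passed the check.

    import secrets

    class AgeVerificationAgency:
        def __init__(self) -> None:
            self.valid_tokens: set[str] = set()

        def verify_and_issue(self, is_over_cutoff: bool) -> str | None:
            # Called after an in-person or eID age check.
            if not is_over_cutoff:
                return None
            token = secrets.token_urlsafe(32)
            self.valid_tokens.add(token)
            return token

        def redeem(self, token: str) -> bool:
            # The platform calls this once at signup; tokens are single
            # use, so one token backs exactly one account.
            if token in self.valid_tokens:
                self.valid_tokens.discard(token)
                return True
            return False

    agency = AgeVerificationAgency()
    token = agency.verify_and_issue(is_over_cutoff=True)
    assert token is not None and agency.redeem(token)   # signup succeeds
    assert not agency.redeem(token)                     # replay is rejected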


That would be great if voters and politicians already cared for privacy, and if it was auditable.

Anyway, good thing I'm not on Facebook. If I can hang out in a community of about 100 without anyone calling it "media", and we just keep out kids by not bringing any in, I'll live, I guess.


> This author is seriously suggesting that governments ban children's use of social media

Where did it do that?


Just shut down Meta and something like 75% of the problem goes away. One neat trick, etc.


That works until the next ten competitors take up the space. It's a wider cultural problem.


Achieving 90% of this is simpler than you think: ban smartphones from schools during school hours. If the parents want the kid to have a phone, then the parents can get a flip phone.


I think we should go even further: ban all phones during school hours. There is absolutely no reason that kids need to have a phone in school. If the parent needs to reach them, then call the school office who can get your kid on the line. There is no emergency so dire that a few minutes' delay in talking to your child will make a meaningful difference.


> There is absolutely no reason that kids need to have a phone in school.

I commuted one hour to and from middle school and high school; I definitely needed a phone to communicate with my mother when stuff happened (and it occasionally did), such as missing the bus or being late for lunch.


Lots of kids did that before there were mobile phones.


So, pay phone booths for school kids?


When I was in school, you'd just go to the main office and ask the secretary. They had a phone that students could use (for free) when they missed the bus or had to reach their parents for something important.


> There is no emergency so dire that a few minutes' delay in talking to your child will make a meaningful difference.

You're neglecting emergencies that are happening in the school itself. School shooters, for instance.

I frankly don't see the problem with kids having the phone with them as long as they're not actually using it outside of an actual emergency.


Would this be a federal ban, a state-level ban managed by education boards, or something else?

And how is it enforced exactly? Are parents held responsible, or are state education funds impacted somehow based on smartphone use?

Would we need a federal mandate to require flip phone / feature phone support? The last time I tried to find a feature phone, it wasn't easy: many depend on 2G/3G networks which are losing support, and carriers have absolutely no incentive to carry feature phones when smartphones are all that sell.


State bans are already occurring. Indiana just banned phones from schools a week or two ago.


Well I can't really complain about that at least. States have a lot more leeway and are explicitly given the power to manage their public schools.

A quick look at Indiana's law and the news articles is interesting. The law requires schools to implement rules that ban phone use during class, but the actual rules and implementations are left for schools to decide. The articles I found make it sound like the law is toothless and passes the hard work off to school systems, but in my opinion that's a great law, as it lets every school do what works best for them without prescribing a single solution for everyone.


> that can't really be done without completely destroying internet anonymity.

This is why I can’t take any calls for banning social media for kids on HN seriously: the moment anyone introduces legislation to limit social media access by age, it creates a de facto requirement to verify ID. The people in this thread would be up in arms as soon as the government tried to force companies to collect their ID to use social media.


Oooor, hear me out, that part could be a government run ID validation service (as in SaaS). Crazy, right?


> government tried to force companies to collect their ID to use social media.

I'm strongly opposed to ID collection of any kind. But I think age bans are a good idea.

Why not sell age verification codes at physical stores? One code per account per website. It's good for 2 years and costs no more than $5. You can pay cash at the store and the sales clerk may only check your ID, as though they were selling you tobacco or alcohol. They cannot record anything.

There will still be straw purchases, just like booze and cigs. But it makes it easier to police the ban on minors joining social media without seriously compromising anonymity for adults. Few kids in my high school smoked or drank. Most couldn't access those things. And social media has network effects. If most kids can't join it, the rest probably won't bother.
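
A sketch of the redemption side, under the stated assumptions (single use, roughly two-year validity); the issuer database and names are illustrative. The point is that the issuer learns only that some code was redeemed, not who bought it.

    import secrets
    import time

    TWO_YEARS = 2 * 365 * 24 * 3600

    codes: dict[str, float] = {}    # code -> time of sale

    def sell_code() -> str:
        # Printed under the scratch panel at the point of sale.
        code = secrets.token_hex(8)
        codes[code] = time.time()
        return code

    def redeem(code: str) -> bool:
        # Called once when an account is created; popping the code makes
        # it single use, so one code backs one account on one website.
        sold_at = codes.pop(code, None)
        if sold_at is None or time.time() - sold_at > TWO_YEARS:
            return False
        return True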


With the rise of AI, identity verification is going to be inevitable on the internet anyway, to make sure people are actually human. But it doesn't need to break anonymity: cryptography can be used to allow for a zero-knowledge proof of humanity or age.

Fighting against regulations in the name of anonymity is the best way to actually harm anonymity. We can have both (while we currently have neither in practice …)
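
For flavor, a toy stand-in in Python: not a real zero-knowledge proof, just a government-signed attestation bound to a per-site pseudonym, so a site learns "over 18" and nothing else, and two sites can't link the same person. Names are illustrative; assumes the Python 'cryptography' package.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    gov_key = Ed25519PrivateKey.generate()   # held by the government
    gov_pub = gov_key.public_key()           # published for all verifiers

    def issue(citizen_secret: bytes, site: str) -> tuple[bytes, bytes]:
        # Sign "over18" tied to a pseudonym derived per site, so the same
        # person shows up under unlinkable names on different services.
        pseudonym = hashlib.sha256(citizen_secret + site.encode()).digest()
        return pseudonym, gov_key.sign(pseudonym + b"|over18")

    def verify(pseudonym: bytes, sig: bytes) -> bool:
        # The site checks the claim without ever seeing the identity.
        try:
            gov_pub.verify(sig, pseudonym + b"|over18")
            return True
        except InvalidSignature:
            return False

    p, s = issue(b"secret-from-my-eid", "example.social")
    assert verify(p, s)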


It doesn’t have to be a technical ban. Just make it the law and let companies, schools, parents, and kids take their punishment when found to happen.

This should be enough for the risk averse to take open devices away from children and bolster parents.

It’s not important to stop every possible access, simply that adults have the authority to say no and be supported by society instead of undermined.


> It doesn’t have to be a technical ban. Just make it the law and let companies, schools, parents, and kids take their punishment when found to happen

You’re describing a ban.


It doesn’t have to be a technological ban. It can be a social ban


…without the ID requirements.


Now, define 'social media' clearly and we're all good.


Social media has the following elements (which would be prohibited):

Recommendation based on previous views. Recommendation based on what is currently being viewed is permitted. Recommendation based on the current view that is customized to the user is not permitted.

Ratings up/down.

Sorting based on rating or based on users' interest. Chronological sorting is permitted.

Suggesting content is forbidden. Specifically subscribed content is permitted to be suggested - but only chronologically, or in response to a search.

No public comments. Private comments permitted. Comments in room/forum/group permitted. User must specifically subscribe/join to see the comments.

Comments sorting/notification rules same as above.

"Reactions" to messages show up as additional/new replies, and are not attached to the original message.

Discussion:

The idea is to reduce addictive methods, and to modify discussion/views to reflect ordinary human behavior: No one rates your sentence spoken in a group, no one goes back and promotes certain things you said, words are said in a group chronologically.
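
As a toy illustration of those rules in Python (all names invented here): the only read path is chronological, visibility requires joining, and the prohibited features simply have no entry points.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        text: str
        timestamp: float    # seconds since epoch

    @dataclass
    class Forum:
        posts: list[Post] = field(default_factory=list)
        members: set[str] = field(default_factory=set)

        def feed(self, user: str) -> list[Post]:
            # Rule: user must join to see comments; chronological only.
            if user not in self.members:
                raise PermissionError("join the forum to read it")
            return sorted(self.posts, key=lambda p: p.timestamp)

        # Deliberately absent: vote(), recommend(), rank_for_user(),
        # i.e. the engagement machinery the proposal would prohibit.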


So you want to ban all kinds of software forges, like for example GitHub? Or other regular coordination platforms for business?

I guess the definition of "social media" is not as simple as outlined above.


Explain please how my suggestion bans GitHub.

GitHub does not show me (on my home page) "recommended" repositories, rather only the ones I specifically searched for, and the information on it is organized in a chronological fashion, which is permitted by my suggestion.

Is your issue that I can see public comments regarding the repository? That's permitted - comments on random repositories do not show up on my home page, only if I specifically go to a particular repository, which is effectively joining it for the purposes of what I wrote. (Although I guess that could be clarified.)


How about Discord? Or reddit? Would those be limited to kids as well? You can be sure that even if FB/Instagram were to ban kids, there would be hundreds of companies jumping in to scoop up all that teenage DAU.


Yes, reddit would be limited, unless they made a mode that ordered submissions and conversations chronologically and removed all voting.

Forums have existed before reddit that didn't have those things, and those forums worked just fine - and continue to work just fine. The way reddit does things is not necessary, they do it to try to make it more addictive, and that's exactly what we are trying to stop.

Discord I'm not sure - does it have votes/rating that kind of thing? From what I saw in my limited usage it's pure chronological chat, but I haven't used it much.


Sorry to be so direct, but your rules just don't make any sense to me. Let's go through them one by one:

> Recommendation based on previous views. Recommendation based on what is currently being viewed is permitted. Recommendation based on the current view that is customized to the user is not permitted.

This would ban all kinds of news aggregators, or even just simple help-desk / support ticket systems.

> Ratings up/down.

This is one of the most important features of an internet forum! Just imagine HN without votes. It would be flooded with nonsense!

> Sorting based on rating or based on users' interest. Chronological sorting is permitted.

I don't want a chronological information thread most of the time. I want to see the things that are relevant to me. A news ticker full of stuff you don't care about is just a big waste of life or work time.

If I'm working on project X with technology Y, I want to get relevant information without the need to search for it explicitly. The computer knows what I'm working on anyway, so it should show me the relevant information. That's the whole point of a computer: it processes information for you so you don't have to go through it manually.

> Suggesting content is forbidden. Specifically subscribed content is permitted to be suggested - but only chronologically, or in response to a search.

How do you discover interesting things you don't know about already? Should we ban the "see also" section on Wikipedia, too?

What about work organization tools, again? Should they be kept dumb instead of helpful?

> No public comments. Private comments permitted. Comments in room/forum/group permitted.

I don't even know what this is supposed to mean.

Is a blog post a public comment?

Spam email is a private comment, right?

Reddit or YouTube comments happen in a room/forum/group I guess?

> User must specifically subscribe/join to see the comments.

Yeah, sure, you need to be logged in to read Stackoverflow comments. But you can only see them once you've joined the discussion of some question; otherwise, no comments for you. Do I get this right?

> Comments sorting/notification rules same as above.

Sure. Your inbox full of trash notifications about "chronological events"—that don't matter to you.

> "Reactions" to messages show up as additional/new replies, and are not attached to the original message.

OMG. Back to "+1" comment threads on GitHub & Co…

---

The whole point is: something that works like "social media" is just a communication tool. A tool by itself is not good or bad. The difference is in whose interest the tool is applied. When I install something that works like Reddit on my own servers and use it to discuss internal topics related to my workplace, this tool is likely great. The same Reddit-like software operated by a company that seeks to sell ads to people is likely dangerous to the public.

The problem now is, the whole internet is run on ads. So more or less any communication platform on the internet is potentially malicious.

Of course there are exceptions, like for example HN. But these are rare cases. Also note that something like HN ticks a lot of the "not permitted" boxes outlined above. This just makes no sense, as I don't think HN is harmful. Quite the contrary. So the stated "rules" just don't work.


I don’t think kids should be on the internet (public WAN) alone at all, so easy for me. They could get larger whitelists over time as they approach 18—no sites where they interact with adults.


So 14-year-olds can't learn from old hackers and grow up programming/coding and such? That's really sad.


Sure they can, in person, school, or with a book. I became a hacker just fine w/o internet as a kid.

People did things just fine, just took a bit longer. Thankfully kids have a lot of extra time.


Without a working internet connection, you can simulate network setups, but not the real deal. A teen at SDF could learn much faster with people with wisdom than by themselves. They can be guided in a much easier way. Hint: I didn't get internet at home until very late. And, back in the day, I knew a lot in some areas, such as drivers under GNU/Linux, adapting basic BTTV drivers and so on, but was severely lacking in others, because there was no proper information to start with.


I used LANs half a decade before connecting to the wider net. SBCs and VMs are much greater resources than I learned on. Routers are cheap, I just set up a dynalink with openwrt for $75.

No one is asking teens to set up a production kube cluster. There’s so much to learn—they’ll be fine.


That will look real good: a parent decides certain reading and social activities are okay for their child, and now it's time for the government to punish everyone involved.


> there's nothing more dangerous than a teenager with very little money and a lot of time on their hands, so a simple DNS-based block definitely won't suffice, you probably need Chinese-style deep packet inspection and such

Or just give them some money and something to do? Why fight fire with fire? It just makes a bigger fire.


You're going way too far in your reasoning, acting as if the state had to directly enforce it itself. But it doesn't have to: social networks are run by companies that make a profit doing so and that can be strong-armed into doing the control themselves, or be fined if they fail to comply (and we're talking about companies whose entire business is about profiling their users to maximize their ad revenues, so they have zero difficulty recognizing teenagers, and more importantly content that is targeted at teenagers).

And even more importantly, you're missing the point of why people go to social networks in the first place: because everybody they know is there! If it becomes cumbersome to access, most people won't go, and then there's no more appeal. It's not as if it were porn or the like, which has a purpose of its own that makes people willing to circumvent the restrictions no matter what. Social networks are “networks”, and if you break the network effect, you've broken the system. Americans don't go on VK not because it's worse than Facebook, but because there's no point in doing so.


Is anything with a share button social? What are the lines?


I have to say, I still just don't believe him. I think Nature's criticism is accurate. He's cherry-picking data that shows the rise in depression started in 1999 and only somewhat accelerated around 2010-11. And I think there is another obvious culprit and it's the rise of authoritarianism and particularly right-wing media. Something that has been a major trend since about 1999.


> and that can't really be done without completely destroying internet anonymity.

doesn't have to in the general case

maybe just have it at the OS level, have parents set up the phone, prompt on setup whether it's intended for use by a minor or not, use that flag to enable/disable access to social media

that's not really the same as banning social media for all minors and imo makes too much sense, more likely that congress will push something that fucks over anonymity more
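
A sketch of that flag in Python pseudocode, with hypothetical names: set once during first-time setup, then consulted by the OS before apps reach a curated list of social media domains.

    MINOR_DEVICE = True    # chosen by the parent during device setup

    SOCIAL_MEDIA_DOMAINS = {"example-social.com", "example-video.com"}

    def may_connect(domain: str) -> bool:
        # Hypothetical hook in the OS network layer, checked per request.
        return not (MINOR_DEVICE and domain in SOCIAL_MEDIA_DOMAINS)

    assert may_connect("en.wikipedia.org")
    assert not may_connect("example-social.com")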


No, no, no, that's nonsense.

Kids are given a phone and access to social media by their parents (who are most likely also users). I don't see parents saying "oh, please block this out of my kid's hands with deep packet inspection or ID checks." These people can just install Family Link or whatever, set limits, and be parents. But they just won't do it. Some don't even know it exists. Kids are clocking 5h+ of mobile use per day, with poor sleep patterns and digital hygiene. No limits. That's the real issue.


> completely destroying internet anonymity.

Stores that already sell age-restricted products can also sell "adult passes".

Each adult pass costs less than $5 and contains a single-use scratch-off code that you can use to prove you are of age. When you want to sign up for social media or porn, you need a code. Multiple companies can implement and sell them.


There would immediately be a huge black market selling to the underage, defeating the purpose.


That would limit the total addressable market from 100% to, say, 1-5%. Statistically, legally, socially, that's not "defeating the purpose", that's "job done", instead.

Most people here should be engineers or at least analytical. No solution is 100%.


Most minors don't smoke or drink, even though straw purchases of booze and cigs are a thing. Social media is even more amenable to this type of gatekeeping because if most of your social circle isn't on it, you have no reason to be there either.


Checking ID to prevent the sale of tobacco to children may not prevent them from ever smoking but is considered better than nothing. This product would be sold in exactly the same manner at the same locations.

It prevents kids from stumbling onto it by accident, at the very least.


I don't think you have to destroy internet anonymity. Just fine parents $25 or $50 every time anyone can link their kid to social media use. Give people rewards for reporting it or something. Even as little as driving kids towards platforms that incentivize anonymous interaction over real-name-and-face stuff is probably enough to move the needle. That said, I think social media use has a very overlapping relationship with allowing kids unsupervised and/or frequent and lengthy use of tablets and phones from early ages, which I suspect is also destructive.


But then they will contest the claim, it will have to be investigated, and that will cause huge amounts of pointless busywork that will amount to no clear evidence. Not worth it for such a small fine.


That would be immediately framed as an assault on poor families.

You can't force people to not hurt themselves.



