
HN is a bit schizophrenic on this topic. On one hand, it seems everybody wants more freedom and less surveillance, but they give FB a hard time for not monitoring content better. Hopefully these aren't the same people expressing these contradictory points of view.

I'd be curious to know what both sides think the ideal social network should look like.




> On one hand, it seems everybody wants more freedom and less surveillance, but they give FB a hard time for not monitoring content better.

I don't think these are conflicting views: (1) Less monitoring on private messages (2) More monitoring on public posts


How do you view spamming private messages to many recipients, which has much the same effect as posting publicly?

I know of bullying on FB where the harasser sends the same message to dozens of friends of the harassed. FB makes this easy to do, since a list of someone's contacts is often easy to find online, and there is no way to find or report these messages (as there is with a public post).

To me this presents a particularly tricky double-edged sword. E2E encryption is good in many cases, but combined with an easy way to send many messages and easily accessible lists of people to target, it can produce a more hidden version of public posting.

My guess is that this channel is being used today to disseminate the same kinds of content that are being restricted in public posts.

As far as I can tell, restrictions limiting the number and speed of private messages have not been effective against this kind of approach, and new accounts can always be created. In some cases these messages go to a different "inbox" for non-contacts, but not always, and that just delays receipt, since, again, they cannot be found or reported.

I don't know a good solution to this problem, but it's not one I've seen talked about.


There is no solution. Either you give people e2e encryption and accept that some will use that privilege to do horrible things, or you don't give it to them at all.

Maybe a middle ground is that every e2e message is hashed before sending, and if duplicate hashes are detected at scale, propagation is slowed to one recipient per day.
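A minimal sketch of that duplicate-hash throttle might look like the following. All names and thresholds here are invented for illustration; note also that with real e2e encryption each recipient's ciphertext typically differs, so in practice the hash would have to be computed client-side over the plaintext, and a real deployment would need TTLs, salting decisions, and resistance to trivially mutated payloads.

```python
import hashlib
import time
from collections import defaultdict

# Hypothetical thresholds -- tuning these well is the hard part.
DUPLICATE_THRESHOLD = 100     # copies observed before a message counts as "at scale"
SECONDS_PER_DELIVERY = 86400  # once throttled: at most one delivery per day

seen_counts = defaultdict(int)  # message hash -> copies observed so far
last_delivery = {}              # message hash -> timestamp of last throttled delivery

def should_deliver(payload, now=None):
    """Decide whether to forward a message without ever reading its contents.

    The server sees only a hash of the payload, so the content itself stays
    opaque; only mass duplication becomes visible.
    """
    now = time.time() if now is None else now
    digest = hashlib.sha256(payload).hexdigest()
    seen_counts[digest] += 1

    if seen_counts[digest] <= DUPLICATE_THRESHOLD:
        return True  # below the viral threshold: deliver normally

    # Throttled: allow at most one delivery per SECONDS_PER_DELIVERY window.
    last = last_delivery.get(digest, 0.0)
    if now - last >= SECONDS_PER_DELIVERY:
        last_delivery[digest] = now
        return True
    return False
```

The point of the sketch is that slowing propagation requires no knowledge of what the message says, only how often the same bytes are being sent.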


Perhaps our expectation of privacy should depend on what platform we use: no e2e on public platforms (i.e. Facebook), but e2e on other platforms where a username/ID/phone number is required that cannot be found easily.

I think the main problem is users using a single platform for all their communication instead of choosing one on a case-by-case basis.


The limits aren't always visible. In particular, it's a good idea if new accounts are heavily limited in invisible ways, and it's a moderate challenge to mass-create accounts that don't start off shadow-banned.


My question is what constitutes a public post? I feel like that definition is evolving. Is a WhatsApp group with 1k members spreading disinformation still a private message?


> Is a WhatsApp group with 1k members spreading disinformation still a private message?

Yes it is.

A “public post” is when the message was directed towards anyone who cares to listen.

A “private post” is when the message was directed to a specific group of recipients. The length of the list of recipients doesn’t grant a non-recipient access rights to the message.

This is easy stuff.


The distinction between "public" and "private" is gray and messy when it comes to corporate social media platforms. Is anything really private? How do you send a message to someone in Messenger? You're not sending the HTTP request to person X. You're sending it to Facebook with metadata that says "please make this visible to person X and consider it private". Then Facebook keeps the message and decides whether or not to publish it onward to person X, and whether or not to display it to anyone else (internally, externally, in logs, etc.). It's not like a package that gets sent over to person X's house. It stays on Facebook's property at all times.

When you post something to "your" feed, you're sending it to Facebook with metadata that says "please post this on my wall" or whatever.

To strain an analog analogy, this is not like the telephone or even the post office, where you hand them something or send out voice packets and they just look at the recipient and forward it on to the actual person. Everything you send is sent to Facebook and kept at Facebook.

Replace "Facebook" above with any social media platform or cloud service. They're fundamentally the same. I don't consider iCloud photos private. You're sending your content to Apple, not putting it in some safe that you alone control.


> Then Facebook keeps the message and decides whether or not to publish it onward to person X

An interesting weird angle: if X deletes their WhatsApp account, messages to X fail silently. FB stores the message under the false pretense that it will be able to forward it to X. I think iMessage does the same?


Furthermore, aligning on a definition is only half the battle. If you create some arbitrary line (100+ recipients = public), forums will break up into many groups of 99, or activity will shift to viral messages forwarded in 1:1 threads. There is a fundamental asymmetry between bad actors (who are highly sophisticated) and regular users (who are generally unsophisticated) that makes it really hard to roll out tools and rules with both high precision and high recall.


It really does not seem like a terrible idea to set an arbitrary cutoff at, say, 50 or 100 recipients. Not everyone is going to be happy, sure, but it's clear.

More important, I think, is the kind of "control" that's exerted: it should of course be built around developing factually accurate sources and promoting those instead (maybe a warning: "This post contains keywords detected anonymously that suggest misinformation. Here is an alternative position.").

The fundamental problem is that we are building our entire lives around a few systems that are completely opaque. Facebook, Google, and many other algorithms are still closed source, and to their own detriment they cannot reveal details of their inner workings. That's why we need a way to move to a radically open society that still allows for innovation and advanced technology.


As far as I know, legally speaking it would be a private message if the members were all members of a legal entity like a business. I.e., the subject of this HN article is a private message to the Facebook organisation, which clearly has over 1,000 members.

I also think that legally (though this may vary from jurisdiction to jurisdiction) a message to one thousand people whom you do not know personally and are not in an organisation with, or whose organisation has an open access policy (even if there's an entrance fee), would be a public message.

Of course, IANAL and just learned this from reading the news.

More interesting, however, is what Facebook or other media should consider public. In my opinion a WhatsApp group with 1k members should not be considered public, even if there's absolutely no bar to entry. Private companies have no business policing private communities. If they've got concerns, they should invite law enforcement to decide whether any laws are being broken.

Communication becomes public the moment it goes outside the boundaries of the group. If my antivax aunt posts an edutainment video about how vaccinations cause autism, and it is clearly misinformation, and that video is not just in her group of weirdos but actually shared on her public timeline, then in my opinion Facebook should definitely come down with the ban hammer. There should be a little "report" flag that I'll use to snitch on my aunt. Even if it's on her "friends & family" timeline, I wouldn't consider it private for the purpose of culling misinformation. She would have to specifically select a group of people who have self-selected into that group for it to be considered a private message.

Also, if a private group on a platform like Facebook/Whatsapp has members that are underage, and not all members are in a complete graph of direct friends/family, Facebook should require that group to have active moderation that conforms to some sort of platform wide code of conduct.


Yes, that is definitely a public post.


There are plenty of private email lists with 1000+ members. Should email providers be censoring misinformation in those messages?


And what about 100? 25? 10?


Depends on the type of invitation.


The invitation type can change at any time.


Yes.


How about a WhatsApp group with 20 family members?


That is also a private message.


> I don't think these are conflicting views: (1) Less monitoring on private messages (2) More monitoring on public posts

The same people argue that FB needs to censor more public content while wringing their hands in public over just how awful it is that, in places like India, WhatsApp message forwards can spread memes that respectable people like Western journalists and tech activists don't like.

It's a "who, whom" thing. There's no principled stance differentiating private from public with respect to control and censorship. It's become respectable in western political discourse to demand totalitarian control over the spread of ideas. Today's activists will do or say whatever it takes in the moment to bring the boots closer to human faces forever. Anyone who values human dignity needs to oppose this movement.


While I agree the elites are upset about losing power, there absolutely are issues of principle here.

There is such a thing as legitimate authority.

For example - there are people who do actual science, and other people who actually can read scientific papers and make assessments, there are people who have legitimate basic understanding of science, relationships with scientists, and consistently communicate reasonable information about that, as it relates to our world.

And there are people who make stuff up.

And very influential actors who will use a system without information integrity to their advantage.

These people wield enormous power and fundamentally shape outcomes for everyone.

Both the Truth and the Public Good matter. While the latter is more ambiguous, it's also material.

The question then becomes: how do we allow yahoos in their basements to say anything they want publicly (i.e. 'aliens invented COVID', 'the vaccine will kill you'), how do we allow legitimate detractors to question classical authority (i.e. 'ivermectin might work'), how do we try to unbias information when it's politically contentious (i.e. mask policies), and how do we allow politicians to speak their minds without destroying their communities with irrational or completely irresponsible information (i.e. 'the vaccine isn't safe, you don't need masks, just eat healthy')?

I'm afraid we don't have the answers, but maybe some degree of 'proportional oversight' on the most public bits of information might be reasonable i.e. statements about the vaccine, when they reach a certain threshold of popularity, must have references to actual studies, or something along those lines.


> The question then becomes: how do we allow yahoos in their basements to say anything they want publicly (i.e. 'aliens invented COVID', 'the vaccine will kill you'), how do we allow legitimate detractors to question classical authority (i.e. 'ivermectin might work'), how do we try to unbias information when it's politically contentious (i.e. mask policies), and how do we allow politicians to speak their minds without destroying their communities with irrational or completely irresponsible information (i.e. 'the vaccine isn't safe, you don't need masks, just eat healthy')?

How do "we" this? How do "we" that? Who determines what constitutes the group of "we"? The power to censor inevitably, invariably, and irreversibly gets used simply to deceive and propagandize people in favor of what the censor wants. Safety is no excuse either --- every tyranny begins with the excuse that unusual powers are necessitated by some emergency.

There is no such thing as "irresponsible information". I reject the entire concept. There are only competing truth claims, and some nebulous "we" made up of journalists and tech activists has no more legitimate basis for policing speech than anyone else. There is no "legitimate authority" over people's minds.

You're right that there is an issue of principle underlying the double standard: the principle is that some people think they ought to control what other people think, ostensibly for their own good. I wish they would at least be open about this principle.


'We' work together as a community and groups of communities.

Traditionally through Classical Liberal institutions like 'Executive, Legislative, and Judiciary' with elected representatives for oversight, but more realistically also through the '4th Branch of Government' i.e. the Central Bank, the security apparatus (Military, FBI, NSA, CIA), a Free Media with integrity (believe it or not, they don't just publish whatever, there are norms and standards), the '5th Estate' i.e. people with voices outside the norm, the Academic community, Industry, NGOs, Faith Groups, Cultural Institutions. Other nations and international bodies.

You're somewhat confusing the legitimate motivation for regulation with the means by which bad actors take power (i.e. "we can't use security as a rationale because then a Stalin will come along").

'Security' is 100% a material issue; it's not even an argument - there are bad actors trying to do bad things all day long, from petty violence to terrorism to invasion etc.

What that means is we have to take special care in those scenarios, usually by means of oversight and proportionality.

For example, the police can't just go into your home; they need a warrant signed by a judge, etc. The laws the security apparatus uses have oversight by elected officials.

There are no rules for this FB issue; it's the Wild West, and because it touches on issues of censorship, security, politics, and now public health, it's a tough one.


> 1) Less monitoring on private messages

The Facebook leaker is explicitly arguing against this though. She cites Facebook’s push for end-to-end encryption of private messages as a problem.


I'm merely clarifying HN's common opinion.

As for the whistleblower, I'm very skeptical of her — a tech PM arguing against encryption and somehow linking e2e encryption to making the platform less safe is dubious at best. Removing misinformation and calls to violence on the Facebook platform doesn't require monitoring private messages.

The idea that she's been a PM at large tech companies for 15 years and doesn't understand that Facebook monitoring messages will mean China can monitor those messages is almost too suspicious to believe.


How do these two align? Why would FB storing messages that are encrypted in transit (but not e2e) on US servers allow China to read them? If you allege hacking, then why wouldn't they be able to hack the devices instead?

Re misinformation: why would misinformation not simply happen in e2e group chats like it is already happening in e.g. Brazil or India? What’s the difference between posting to a group of friends on Facebook vs sharing a group message to those friends?

I do think messages should be encrypted but the trade off isn’t as straightforward as you make it sound.


The idea is once a company has some power over its users, that power will be used by some government somewhere as well. The latest example: https://www.nytimes.com/2021/09/17/world/europe/russia-naval...


> Why would FB sending messages that are sent encrypted (not e2e) and stored on US servers allow China to read messages?

Not parent, but I think the idea is that if BigCo does business in CountryA, then CountryA's government invariably forces BigCo to spy on their users who are residents.

Obviously compromising the user's device is a workaround open to governments but hard to achieve in bulk.


FB does not do business in China. This is rather a risk for Apple, given they store keys in the cloud and do business there, fwiw. I agree it's a risk for almost all other countries.


I think you're missing the context of what this is about:

> Strangely, the example she gave suggested that Facebook needs to have looser encryption in order to somehow protect Uyghurs in China from government attempts to implant spyware onto their phones. [1]

And Facebook responded, sticking up for e2e encryption:

> A Facebook spokesperson responded to The Telegraph with what we all should realize at this point is the responsible approach to encryption: "The reason we believe in end-to-end encryption is precisely so that we can keep people safe, including from foreign interference and surveillance as well as hackers and criminals." There is no such thing as encryption back doors that only the "right" people can access. If they exist, they can eventually be found or accessed by others. [1]

[1] https://reason.com/2021/10/25/whistleblower-absurdly-attacks...


> Facebook leaker is explicitly arguing against this though. She cites Facebook’s push for end-to-end encryption of private messages as a problem.

One doesn’t have to agree 100% with an ally.


If someone is going to Congress and lobbying against end-to-end encryption of private communications, how are they an ally?


> If someone is going to Congress and lobbying against end-to-end encryption of private communications, how are they an ally?

Because they're also lobbying for other things you care about. And those things are more likely to be passed into law than the E2E encryption pieces.

Taking a puritanical view on an issue is a high-risk high-reward gambit. Nine out of ten times, it ejects you from the room. One out of ten times, you will organize sufficiently to make it a wedge issue (e.g. the NRA on guns, NIMBYs, et cetera).


There's a gambler's fallacy at work here, though. Our Fourth Amendment right to encrypted private communications is so important that if we lose it (or give it up), any future wins in areas like corporate transparency, monopoly regulation, and net neutrality won't ultimately matter. We won't have the freedom needed to benefit from them.

To the extent Haugen disagrees, she's not on "our side."


Sorry, what clause in the Fourth Amendment talks about encryption? Thanks in advance.


The plain text reads, "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated."

It's not reasonable to outlaw the tools needed to exercise a fundamental right. Not OK for the 1st, not OK for the 2nd, and not OK for the 4th. Encryption was already an old technology at the time of the Constitution's authorship. If they had wanted to regulate or outlaw it, they were free to say so. They didn't.


> Encryption was already an old technology at the time of the Constitution's authorship...

...so if they had wanted to guarantee a right to it, they were free to say so.

But they didn't.

Despite clearly contemplating privacy.


> One doesn’t have to agree 100% with an ally.

Depending on how terrible the bad ideas they're pushing are, they may in fact not be an ally at all; "multi-purpose Trojan horse" may be more accurate.

In this case, promoting the abolition of end-to-end encryption is quite heinous. She's providing the authoritarians a potent argument that isn't yet well established in the public mind ("we have to be able to see all of your data so we can keep you safe from the Chinese trying to see all of your data").


Nor does one have to disagree 100% with an enemy.


Nor do you have to support an ally you agree with a majority of the time if they get some big things wrong.


How about (3) Less consolidation of power

Personally as someone who doesn't use FB and never will, I couldn't care less if Facebook wants to track and monitor every one of their users, monetize their every movement, and ban any message they want. In a free market you'd have thousands of social networks to provide competition with all sorts of different policies and ToS. The real issue is that one company is in a skewed position of power due to a broken marketplace. Fix that problem and all the other problems are irrelevant.


> The real issue is that one company is in a skewed position of power due to a broken marketplace

Nobody seems able to identify what the unfair advantage is.

The truth is that this is the nature of social networks: the successful ones tend towards monopoly. Why? Because more people attract more people. Access begets access. The value of a network grows with the square of its size [0], and higher-value networks attract more users.

You can't break up a social network without starting to make rules about who can associate with whom, which is a fundamentally anti-free position.

The problem is not Facebook. In its absence another would take its place. The "problem" is human nature, and that we were not designed cognitively for the types of networks that technology now enables for us.

We should focus on education, friendship, and real-world experiences. Legal fights against social networks in general or Facebook in specific are futile.

[0] https://en.m.wikipedia.org/wiki/Metcalfe%27s_law
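The n-squared claim can be made concrete with a toy calculation (Metcalfe's heuristic is a rough model, not an empirical law, and the function name here is invented):

```python
def metcalfe_value(n_users):
    """Potential pairwise connections in a network of n users.

    Metcalfe's heuristic: value grows with n * (n - 1) / 2, i.e. ~n^2.
    """
    return n_users * (n_users - 1) // 2

# Doubling a network's size roughly quadruples its connection count,
# which is one way to see why users pool on the largest network.
small = metcalfe_value(1_000)
large = metcalfe_value(2_000)
print(large / small)  # roughly 4
```

Under this model, a network twice the size is about four times as valuable per the same marginal cost of joining, which is the winner-take-all dynamic the comment describes.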


How would thousands of competing networks deal with a government-enforced “no E2EE; wiretapping API required” law?


Badly, and that's the point.

I'd like social media to be run by everyone having a social media server linked to their home network (imagine something like a Raspberry Pi), with it being totally decentralised and every user controlling their own server.

Then if the government wants to shut it down, they have to raid everyone's home.


Or, you know, just work with the ISPs to make it illegal, like with BitTorrent.

Also, what you want already exists - you're free to go use Mastodon and run your own node. You can't possibly think that's a reasonable product your grandparents would be able to use.


> Or, you know, just work with the ISPs to make it illegal, like with BitTorrent.

BitTorrent certainly isn't illegal in the UK; it might be in other jurisdictions.

> you're free to go use Mastodon and run your own node

Would it work behind a NAT'ed router on a dynamic IP? I suspect it might not.

> You can't possibly think that's a reasonable product your grandparents would be able to use.

What I envisage is an SD card containing the OS + apps: you put it in the Pi, plug it into your router by ethernet, configure it via its web app, and you're ready to go. I think it ought to be possible to make it easy enough for the average person to install (certainly anyone who could set up a router and an internet connection would manage it).


> In a free market you'd have thousands of social networks to provide competition with all sorts of different policies and ToS.

You know, I want to believe that, but I don't think it's true.

Because of the "network effect", the value of belonging to a social network is mostly dependent on how many people are on it. This dynamic very strongly favors a few big social networks.


But the network effect of a social network is a status quo we kind of just let happen, and maybe that needs to be rethought. You could equally imagine a case where email was a walled garden — where Gmail users could only send messages to other Gmail users, Hotmail to Hotmail etc — and someone said "well, that's the nature of email platforms".

It doesn't have to be this way — if we had a common protocol for 'social' (or used and improved the ones that already exist), you could have interop of posts/comments/media between platforms, and the platforms' 'secret sauce' would then be curation of messages, discoverability of people you might want to connect with, etc.

Facebook could focus on connecting with friends and be free but ad-supported. Instagram could be only media posts, supported by sponsored influencers in your timeline. Twitter could focus on current events rather than your friends and be subscription-based, etc.

Similarly with IM: we had lots of competing IRC clients when that was a thing. As products, you could easily create competing messengers that speak to each other. You could even have additional things on top, e.g. an FB<->FB message could support 'stickers' or whatever outside the spec, but, similar to browser development, those features _do_ get added to a shared spec that isn't controlled by one company.
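As a sketch of what such interop could look like, here is a hypothetical minimal post format that any compliant platform could emit and consume. Every field name here is invented for illustration; real federation efforts like ActivityPub define far richer vocabularies.

```python
import json

# Hypothetical minimal interchange format: any platform that can produce and
# parse this JSON can exchange posts, and compete on curation instead.
def make_post(author, body, in_reply_to=None):
    return json.dumps({
        "version": 1,
        "author": author,           # globally addressable, like email: user@platform
        "body": body,
        "in_reply_to": in_reply_to  # enables cross-platform reply threading
    })

def parse_post(raw):
    post = json.loads(raw)
    assert post["version"] == 1, "unknown interchange version"
    return post

raw = make_post("alice@examplebook.com", "Hello from another platform!")
print(parse_post(raw)["author"])  # alice@examplebook.com
```

Platform-specific extras (stickers, reactions) could ride alongside these core fields, exactly as the comment suggests happens with browser specs.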


> Less monitoring on private messages

As pointed out on Last Week Tonight, private messaging apps are a cesspool of misinformation [0], especially in the developing world.

But: Facebook can and does monitor private messages whenever any user flags / reports them [1]. The problem is, how effective is that mechanism given that not many people know it is even there? Of course, e2ee mustn't be compromised, but it should also not be used as an excuse to let misinformation run amok. Maybe homomorphic encryption gets us there, maybe UI changes do. I hope Facebook acts swiftly and decisively either way, since e2ee (backdoored or not [2][3]) seems like the scapegoat here.

[0] https://www.youtube-nocookie.com/embed/l5jtFqWq5iU

[1] https://news.ycombinator.com/item?id=25211185

[2] https://news.ycombinator.com/item?id=13389935

[3] https://news.ycombinator.com/item?id=25685446


If someone has their Twitter set on private (only followers can see content), but they accept most/all follow requests, would you consider their content in category 1 (private) or category 2 (public)?


The obvious solution to #2 is encrypted private social networks that aggressively lock out the monitors. Continuing the monitoring will then require abolishing encryption; that's where the views in question (more freedom + less surveillance, and more content monitoring) inevitably end up conflicting, and must always end up conflicting.


How do you ban monitoring of public posts?


> How do you ban monitoring of public posts?

I'm not suggesting you should or can in any practical way (how heavily - or not at all - that public posts should be monitored by the government is a different debate from what I was saying).

I'm saying that the parent comment's claim that the views are not necessarily conflicting is incorrect.

This must always conflict in the end:

> I don't think these are conflicting views: (1) Less monitoring on private messages (2) More monitoring on public posts

More aggressive public monitoring (along with follow-on laws to regulate and punish a lot more of what is said in public) will inevitably drive a move toward encrypted private social networks that can't be easily monitored. Those private networks will rely heavily on encryption. The aggressive public monitors will then have to abolish encryption to regain the degree of mass content monitoring they used to have. Call it networks going underground, or dark; the authorities will come up with a negative naming scheme as they seek to castigate the shift.

You can bet on the rise of mass popular encrypted private social networks (likely built around subjects/topics/ideology/x thing in common; more like groups or subreddits than mass social media today, in other words). It's coming this decade. And the response from the government toward that is quite predictable. They'll use it as another argument against encryption.


If there is a drive towards more end to end private messaging, I'm OK with that. But I wouldn't call that "social networks"; in my mind there is a huge difference between an encrypted end-to-end message between two users, an encrypted message which is sent out to a large group of users, and a public post.

You can make an end-to-end message between two users perfectly secure, to the limits of engineering and the security hygiene of the two users. No problem there. If you have an encrypted message sent out to a group of users, then as the group gets larger and larger, it's more likely that one of those users is an informant to law enforcement, or will be sloppy with their message hygiene, so that after they get arrested invading the Capitol on January 6th (for example), law enforcement gets access to all of the information on their phone with the PIN code 1234. Still no problem as far as I'm concerned. Criminals tend to be stupid, and that's good from a societal point of view.
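The intuition that large encrypted groups leak can be made concrete with a toy probability model (the independence assumption and per-member leak probability are illustrative only):

```python
def leak_probability(group_size, p_member_leak):
    """Probability that at least one member exposes the conversation,
    assuming each member independently leaks with probability p_member_leak
    (via an informant, a seized phone, a screenshot, etc.)."""
    return 1 - (1 - p_member_leak) ** group_size

# Even a 1% per-member risk makes exposure near-certain at scale:
print(round(leak_probability(2, 0.01), 3))     # small chat: ~0.02
print(round(leak_probability(1000, 0.01), 3))  # 1k-member group: ~1.0
```

In other words, the secrecy of an encrypted group degrades roughly exponentially with its size, which is why group size matters even when the cryptography itself is sound.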

Public posts are a different story altogether, because social networks have an economic incentive to promote "engagement". And if that engagement happens to emphasize messages that incite hate, or anger, or cause people to think that vaccines contain 5G modems, or whatever, hey, that's just good for shareholders, and in the capitalist system, shareholder value (especially VC value, for startups) is the highest good, right? Well, I have some real concerns about that. I think that corporations which prioritize engagement über alles, even when it causes social harm, should potentially be regulated.

And that's why it is quite possible for someone (like me) to believe that end-to-end encryption should be allowed, and promoted, even if it gives the FBI hives --- and at the same time argue that public posts should be highly monitored, from the perspective of trying to gain insight into whether the amplification algorithms might be doing something unhealthy to our society's information ecosystem.


This is probably what would have naturally happened if the CDA were never passed. Instead, we've turned internet companies into geese that lay golden eggs for the government.


I would argue "HN" is not schizophrenic, "HN" is a bunch of users who have different passions and will comment/vote on different things.


We should coin this the "hive mind fallacy". Though there probably already exists a name for this.


"Distributed hypocrisy."


> Hopefully these aren't the same people expressing these contradictory points of view. I'd be curious to know what both side think the ideal social network should look like.

I see social networks (and in many ways the internet as a whole) like a new country we’ve founded. It’s a bit different from countries made out of atoms. For a start, everyone there is a dual citizen with somewhere in meatspace. And instead of community centres we build websites. But it’s a place.

How are those spaces governed? Is it a democracy? No. Each social network is its own mostly benevolent corporate dictatorship. If you don’t like the rules, your only option is banishment.

Healthy communities (in healthy society) need rules to keep bad actors in check. And freedom to explore and be ourselves. Healthy communities in real life use participatory processes to figure out what those rules should be. You need people to feel like they have a voice. It’ll never be perfect. And different rules will make sense for different groups.

Facebook’s problem is they’re trying to be the government, the judiciary and police for billions of people from every country on the planet. There is no single set of rules and policies which will work everywhere. And even if there was, how do you police billions of people? AIs make mistakes.

I don’t know how, but I think FB needs to eat humble pie and find ways for communities to decide on (and enforce) their own social norms somehow. It’d be messy and disjointed, but so are people. Reddit and discord do this - although they’re obviously very different products.

Tyrannies don’t have a strong history of making choices that primarily benefit their citizens. So far, Facebook’s track record hasn’t been much better. To improve, they need to acknowledge the position they’re in, learn from history, and give some power back to the people who populate their site.


> I'd be curious to know what both side think the ideal social network should look like.

I'd rather they didn't exist, honestly.


How do you define social networks? Should email mailing lists not exist either? Those are absolutely social networks in a way, as were BBS systems.


Not OP, but IMHO, the danger comes from the algorithms which lead to "engaging" content being pushed aggressively to everyone. "Engaging" can mean cat videos or "incredible looking food", but it means divisive, partisan, insincere and outright dangerous more often than not.

Inasmuch as a "social network" is just people signing up to talk about certain topics and that's it, I don't have a problem with it. Internet forums of the 2000s weren't a problem necessarily. And while HN does have some "virality" mechanisms built in due to upvoting and while it is sometimes a "problem" on very divisive topics, it's not nearly on the scale of Facebook, Twitter and Co.

So if it were up to me, Twitter, Facebook, Tiktok etc. should either disappear or they should at least have to revise their algorithms and open them up to public scrutiny. Or, you know, if they went back to their original purpose of just being about connecting with friends and family. But I guess you can't make money out of that.


I take your point. But I would need more convincing that just getting rid of algorithms/recommendations/etc. would suddenly make all the problems go away.


All? No. Conspiracy theories and disinformation existed before the internet, too.

But we can at least try to get rid of the things that make the problem worse.

Moreover, minimising hyper-addictive patterns on such platforms would have a host of other benefits too.


Another part of the problem is size. Older communities were minuscule compared to the billions of users each platform has now.


While I agree that it's a sticky problem, I think we can find a middle ground between "email mailing list" and "site dedicated to maximizing engagement by both encouraging the spread of rage baiting misinformation and deliberately maximizing the number of people who see it".


Yep, this is my point summed up pretty well. SMS can be seen as a "social network" if you really squint your eyes - clearly I'm not talking about BBS/forum software/etc.


Like any social network, HN falls into polarized, sometimes unproductive, discussion on certain topics. For HN, those are frequently big tech and its antics, and censorship/moderation/surveillance.

Links about the intersection of these topics, such as apple/CSAM or facebook/moderation, are most likely to have comments that devolve into polemic without much productive discussion taking place.


HN is not a single person; it's a community with a variety of members holding a variety of opinions. The fact that communities tend to converge on a consensus of opinion, as subreddits especially do, is itself a major issue.


Please keep in mind that some of us just want bad things to happen to Facebook and don’t have a pro or anti-surveillance agenda behind that.


> On one hand, it seems everybody wants more freedom and less surveillance, but they give FB a hard time for not monitoring content better.

That seems perfectly consistent. FB is already going all-in on surveillance and ignoring any notion of freedom; if they must destroy privacy, the least we can ask is that they actually do something useful with it.


Author of original blogpost here; I am seeing a lot of discussion here about "what constitutes a public group?" and so I wrote this to help with the discussion. https://alecmuffett.com/article/15095


In the case of FB, I actually believe having Instagram and Facebook not be E2E could even work, if people are educated and made aware of it, while WhatsApp remains E2E.

Instagram and FB are mostly "public" facing, so they offer a big surface for malicious activity (scammers, groomers/people seeking CSAM - which are always used as a reason for more surveillance - trolls, etc.).

WhatsApp is more private and requires knowing someone's phone number, which ideally should be harder to get ahold of.

Messaging on Instagram/FB could be compared to whispering in a crowded place: private... but not fully private.

In an ideal world this would not be necessary, but there will always be a fight between surveillance and freedom. And perhaps giving up freedoms in some areas could allow us to regain more freedom in others, as long as people are aware of it, which might be the biggest hurdle to tackle.


> HN is a bit schizophrenic on this topic.

What if there was a way to send private messages, in a decentralized manner, where it's not even possible to tell whether the recipient has actually read the message?

There are blockchains using ZK-proofs that allow you to do that today. Not only is it decentralized, so "fuck corporation surveillance/profiling", but it's also highly unlikely Uncle Sam and its offspring can break the elliptic curves and ZK-cryptography in use by these blockchains, so "fuck state surveillance" too.

But then it's using the word "blockchain", so HN is pretty much against it.

And instead HN as a whole roots for "secure" messengers that leak metadata like there's no tomorrow, while explaining that they're the best thing since sliced bread "because they're easy to use".

Go figure.


A solution is requiring social media companies to verify the identity of users. They don't have to require that other users know the identity, but the company has to. This protects against sock puppet armies and makes bans easier to enforce.


Maybe decentralize completely: don't talk to my kids about your BS paranoia, and my kids should just see whatever kids want to see.

We don't exactly need a centralized, humongous social website. I'm on both sides honestly: I want Facebook to subside in favor of isolated, more freedom-centric micro-networks where we can say what we want without massively impacting crowd thinking.

That way, if monitoring must happen it happens in isolation, and if freedom must exist it's not in the same place as the other fucktards.


The future will not be censorship but categories. People will not get banned; they will get labeled, and you will filter tags out. Those tags will be connected to a person; if you don't like a tag, you will see less of that person's content. This will be useful in keeping the peace. It'll mess up dialogue, but most people have a hard time with empathy these days. So maybe certain sections will have open discussions for people who are okay with seeing content from other tags and engaging with it.

Right now, dating apps will forever ban your account if you engage a certain way with the opposite sex, so in many cases a match will get banned because the person they matched with did not like their content. But that does not mean someone else won't like their content. So the people who have the most in common, including their communication styles, should still be allowed to engage with each other without getting banned by someone else's arbitrary rules, which are subjective depending on a person's upbringing.


> I'd be curious to know what both side think the ideal social network should look like.

I don't care what they look like as long as there are hundreds of them, all on relatively equal footing.

I firmly believe that most of the major problems facing society today are not caused by any features of particular companies, but by the consolidation of power in a very small number of them.


Maybe the ideal social network is no one big "social network", i.e. get rid of your Facebooks and Twitters and go back to small, decentralized, localized forums and interest groups running on independent, secure, open-source platforms that are easy for laypeople to set up, maintain and moderate.


I think you need both. Small, decentralized, topic-specific forums are important. But without a larger community, people tend not to realize that the outside world may not share their opinions. That leads to a tribalist attitude of hatred toward "others".


I'm skeptical that seeing opposite viewpoints expressed by random people on the internet leads to shared understanding or less hatred.

If anything, research seems to indicate that tolerance to opposing viewpoints and revising of stereotypes comes from extended personal contact and having a shared sense of purpose, and that's difficult to achieve on the internet.


You are conflating schizophrenia with multiple personality disorder. This is a common, but harmful, mistake.


Perhaps you could provide some insight into why? I don't see how multiple personality disorder is a very useful characterization when you're describing a community made up of multiple personalities.


yodsanklai said HN is "schizophrenic" when accusing HN of being one body with two conflicting personalities: one that wants FB to emphasize freedom and the other that wants FB to police content. He said "Hopefully these aren't the same people expressing these contradictory points of view." (In my opinion, what's really happening is that there's some diversity of opinion among HNers, which is how it should be. As you point out, and yodsanklai implicitly admits, HN is not a single person, so there is no self-contradiction or multiple personality disorder. Beyond that, it's a mark of intelligence for people to be able to change their minds, but I digress.)

Schizophrenia would have been appropriate if he was accusing HN of being paranoid, hearing voices, etc.

Schizophrenia is not multiple personality disorder (actually called Dissociative Identity Disorder, I just learned). I am really curious to know why people started misusing the word schizophrenia in common parlance like this.


I don't think either side has a solution. But, that doesn't mean they can't detect an obvious subterfuge from outside the community.


I think what they want is more freedom for their own opinions, and more surveillance on people they disagree with.


HN isn't one person with a single set of ideals and opinions. It's a website with many people whose worldviews range quite drastically. Tech, like anything else in the world, is going to have a cross-section of humanity in it. You have libertarians, anarcho-capitalists, traditional GOP and Dems, European socialists, and full-on communists here. You can't expect such a diverse group to have a homogeneous view.


Exactly - every community has a diverse set of opinions (whether it's discernible to the average person is a different question). Rarely do you find a homogeneous group of individuals without a very strictly enforced control mechanism that suppresses differences from being seen, thereby manufacturing a homogeneous community.



