Social Cooling – How big data is increasing pressure to conform (socialcooling.com)
389 points by milly1993 on June 19, 2017 | 185 comments



I was going to write at length about my concerns on this topic, and then I decided not to. Because it's entirely possible that one day many years from now, a prospective employer/insurer/whatever finds such a comment and flags me for it.

That's the cooling effect in a nutshell.


I've been saying this for a while: We as a society have to stop taking everything that is said on the internet so fucking seriously.

You shouldn't have to think through everything you write perfectly. It's OK to say something dumb, or even offensive. It's not great, but it's no big deal either. I don't want to live in a world where we have to polish and double-check every thought that leaves our minds.

It used to be that a written statement was something important, with gravitas, with thought and meaning put into it. You rarely sat down and wrote a letter or a book. But the vast majority of utterances on the net are not like that, so we shouldn't treat them as if they were. We shouldn't apply yesterday's standards to them.

----

Actually, I believe this will all be moot in a few years. With the rise of AI, and the continuing increase in storage and bandwidth, we might reach a "one million monkeys with typewriters" scenario. There will be every possible utterance and every possible embarrassing photo of everyone on the net. It will be trivially possible to fake your voice and image. (Unless we enter the cryptpocalypse, in which everything is signed...)

This is currently an odd period of time, in which we can create data but can hardly fake it. There is authenticity, proof of authorship. We can hold people responsible for how they behave and what they think. Before this period it was all just hearsay, and after it, it will be again. It sounds super scary, but to be honest I find the thought quite liberating.


I completely agree, and it's the very reason social media holds no interest for me anymore. I think people have forgotten what it used to be like on Myspace and in the early days of Facebook. Social media was about self-expression and finding out where the party is; now it's a finely tuned marketing platform for grandmothers.


> Social media was about self-expression and finding out where the party is

Exactly! My next project is going to be something like that. Based on the fediverse/Mastodon, but first and foremost about self-expression, connecting with real friends, meeting people. You curate your own home page, share what you want to share, you are invited to interact with strangers. No social media bs. Trying to capture the feeling of local / university social networks pre-Facebook, or the feeling of Myspace.


The kids seem to be more into picture and messaging hangouts, but those things are lost to the ether (you hope, anyway). Myspace jumped the shark customization-wise, but there was a unique mix of the site helping you find people and hang out in real life, not being the actual hangout itself. Meetup was a good idea, but it doesn't seem to drive engagement. I don't know what the answer is, but I'd love to help out on a project that could be fun.


I like it.


You sure that isn't just you getting old?

That's how it is for me.


No, it's not about getting old. In those days your social media site - or your blog - was heavily themed, altered, tailored to one's own ideas, sometimes to crazily annoying levels, sometimes to surprisingly clean minimalism. Today you have at most 2 images to make something 'unique', which doesn't even remotely scratch the surface of what it used to be during the early Myspace era.


You can still host your own blog and style it however you wish. On GitHub, for free.


Sure, where it will never be seen.


> You shouldn't have to think through everything you write perfectly. It's OK to say something dumb, or even offensive. It's not great, but it's no big deal either. I don't want to live in a world where we have to polish and double-check every thought that leaves our minds.

That's nice. And we shouldn't have to maintain physical appearances or be judged for them. We should just accept all of our imperfections, celebrating all forms of expression. But lookism and online lookism are here to stay. Conventions of "good" content are strengthened by karma/gamification, by every vote you give, including here on HN. It's noble to long for a time when we can all just be ourselves, but with attention spans vanishing I don't think anyone will care.


Typing this comment on hn website is annoying as fuck.

The input box gets hidden behind the keyboard so I have to type blindly.

Probably going to get downvoted for this, and next time I'll have to censor myself and become a robot so I don't lose the points by which my future employer will judge my worthiness.

But seriously, please make HN a GitHub project so I can send a PR to fix this really annoying issue every time I type a fucking comment.


I feel like it's more about the way you respond to people trying to guilt-trip you or make you ashamed of what you said. Most people tend to back down or apologize and end up playing their attacker's game.

Some people seem to be doing just fine with saying whatever is on their mind and getting away with it. Notable example is the current president...


> Notable example is the current president

I'm no friend of Trump, but it is disingenuous and maybe harmful if most of the criticism is directed against the stupid things he says, his unstatesmanlike behavior, his faux pas. Or in the spirit of this thread - how unadapted and uncensored his behavior is. Because that can be good and bad, and it's what so many people elected him for.

He should be (and is) criticized for spewing so much hate. I wish he would receive more critique for his (equally bad, in my personal opinion) policies. I'm observing from Europe, so my view might be wrong, but it seems most of the critique is about form.


This is not about Trump. I was just illustrating my point.


I know, the comment was not directed against you :-) just in general.


People these days act like the internet didn't go just fine for decades without a bunch of wannabe hallway monitors and other officious bullshit.


It was also a widely held belief for decades that the internet was nothing more or less than history's most efficient porn-and-bullshit-machine. Anyone who took seriously what they read online (without verification) in 2005 was rightly regarded as an idiot. I think these two things are connected: people now take the internet seriously as a media delivery platform, and no one has yet figured out how to solve the problem of legitimacy.


Scripta manent, as the Romans said. Written (digitized) content is so much more important because it's objective evidence, not hearsay, so we have to take it more seriously than rumors. This hasn't changed across millennia; I don't see it changing now.


In a world so completely connected, we are all still completely disconnected.


The uniqueness and diversity of individuals will soon be lost as everyone is forced to adopt the socially acceptable opinion or risk being exiled. Think about how many revolutionary ideas started on the fringes of society. Astronomy, America, Civil Rights.

The techno-utopia is always portrayed as some society where open mindedness and diversity are embraced. This is the paradox of modern political correctness. If the hivemind of society rejects you and your ideas, it is in fact not open at all.


Definitely. I pretty much keep silent on political/social issues online these days. It's just not worth the risk of being targeted for personal/professional destruction by the internet social outrage machine (Fortunately, we still have secret ballots here in the US, so at least some form of political speech can be engaged in safely).


Sort of. Political parties have profiles for most voters based on their public profiles etc.; they compile all the info and predict your stance on various issues. And when those databases leak - and they do: https://www.upguard.com/breaches/the-rnc-files - you are no longer protected. Your best bet is to not have anything public-facing at this point in time.


> It's entirely possible that one day many years from now, a prospective employer/insurer/whatever finds such a comment and flags me for it

This is why you should use pseudonyms and strive for anonymity. It's trivial to sign up to Hacker News under an assumed name, or handle, and start venting on contentious issues. Hacker News might shadow-ban your throwaway account, so you might have to lurk moar and share some interesting links before you can comment without being censored. I know from experience. Last time I checked, HN has no strict policy on multiple accounts, and you can do this very easily.

In terms of OPSEC, you obviously shouldn't contaminate your real iden with your anon iden, or contaminate your anon idens with other anon idens. You should also deliberately alter the stylometry of your writing so nobody can link two pieces of text to each other. Anonymouth[0] is my favorite tool for doing just that.

[0]: https://github.com/psal/anonymouth
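
To make the stylometry point concrete, here is a minimal sketch of the kind of signal such tools key on (the function-word list and the sample texts are invented for illustration; this is not how Anonymouth itself works). Even "content-free" words like "the" and "of" form a fingerprint that survives a change of topic:

  from collections import Counter
  from math import sqrt

  FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "you"]

  def fingerprint(text):
      # relative frequency of each function word, ignoring topic words
      words = text.lower().split()
      counts = Counter(words)
      total = max(len(words), 1)
      return [counts[w] / total for w in FUNCTION_WORDS]

  def cosine(a, b):
      dot = sum(x * y for x, y in zip(a, b))
      na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
      return dot / (na * nb) if na and nb else 0.0

  alice_1 = "the point of the argument is that you have to look at the data"
  alice_2 = "the heart of the issue is that you have to start with the facts"
  bob = "big data will change everything and everyone should worry about it"

  print(cosine(fingerprint(alice_1), fingerprint(alice_2)))  # 1.0: same habits
  print(cosine(fingerprint(alice_1), fingerprint(bob)))      # 0.0: different habits

Real stylometric systems use hundreds of such features (character n-grams, punctuation habits, sentence lengths), which is why manually altering your style is so hard to do consistently.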


What you say is all true, but unfortunately misses the point of the article and discussion.

Also, just a note, but you have some interesting tells in your text. 'anon iden' is a phrase I don't recall seeing, at least not very often. You used "it's" correctly, another signal, along with the 'moar' spelling, and a few other shorthand phrases.

You might want to start thinking about methods for scrubbing your text if you're actually interested in drawing a line between your personae and you.

https://github.com/evllabs/JGAAP


He might have already, and the personality you see may not be tied to reality.


Or maybe he's counting on that and that's his real persona. Anonception.


True, but since everything is saved, you only need to fuck up once and then everything can be pointed at you. The asymmetry in effort between staying anonymous and someone de-anonymizing you is getting bigger every day.


I was recently doxxed on Reddit. Never posted my name, used a unique username, but after 6+ years of posting, someone triangulated the data and successfully found my full name and home address.

Even though the user was banned (several times), even though all the posts are now deleted, along with my entire comment and submission history, my username there is now permanently tied to my real name.

While it was easy for a human to doxx me using my comment history, it's even easier for computers, which save everything, no matter how briefly it was posted, to comb through your data and determine who you are. The only way to stay anonymous is to completely avoid ever talking about anything remotely identifying.
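
A toy sketch of how that kind of triangulation works (the population and the leaked attributes below are entirely made up): each post leaks one innocuous fact, and the intersection of candidate sets collapses fast.

  # each comment leaks one harmless-looking attribute
  population = [
      {"name": "A", "city": "Denver", "job": "nurse",   "car": "Civic"},
      {"name": "B", "city": "Denver", "job": "nurse",   "car": "Outback"},
      {"name": "C", "city": "Denver", "job": "teacher", "car": "Outback"},
      {"name": "D", "city": "Boston", "job": "nurse",   "car": "Outback"},
  ]

  leaks = {"city": "Denver", "job": "nurse", "car": "Outback"}  # from 3 separate posts

  candidates = population
  for attr, value in leaks.items():
      candidates = [p for p in candidates if p[attr] == value]
      print(f"after matching {attr}={value}: {len(candidates)} candidate(s)")
  # after 3 leaks: exactly 1 candidate left

With realistic population sizes the same effect holds: a handful of loosely identifying facts, each harmless on its own, is often enough to single out one person from millions.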


Hell, if you're using some program that implements type-ahead, they could identify you just from your typing patterns.
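
A hedged sketch of what that kind of keystroke-dynamics fingerprinting might look like (timestamps invented; real systems aggregate per-digraph statistics over many sessions and compare distributions, not single averages):

  def digraph_timings(events):
      # events: list of (key, press_time_ms) in order typed
      gaps = {}
      for (k1, t1), (k2, t2) in zip(events, events[1:]):
          gaps.setdefault(k1 + k2, []).append(t2 - t1)
      # average gap per key pair forms the user's profile
      return {pair: sum(v) / len(v) for pair, v in gaps.items()}

  session = [("t", 0), ("h", 95), ("e", 180), ("t", 420), ("h", 510), ("e", 601)]
  print(digraph_timings(session))  # {'th': 92.5, 'he': 88.0, 'et': 240.0}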


I sometimes worry that old HN comments will bother me in the future. HN doesn't allow you to scrub your history. But ultimately I say fuck this. Maybe it'll be for my own good, ensuring that future employers and partners will not be the kinds of people who would care about such things. Might be a great moron filter.

Maybe an antidote to this phenomenon is for us to collectively go punk on it. If everyone trolls nobody trolls.


I've cycled through 3 nicknames over 8 years for this reason. Not that I'm ashamed of what I wrote, or that it's untraceable, but I am a bit paranoid overall and prefer privacy anyway.


Future employer can blow me.


"Thank you for applying to ProfitCorp. While you have excellent credentials and passed all levels of interviews, your web sentiment analysis score does not meet the threshold for hire. Best of luck in future endeavors"


> your web sentiment analysis score does not meet the threshold for hire

except you would be lucky if one well-meaning employee told you the above sentence off the record, in violation of every contract he/she signed with ProfitCorp.

more likely, your life would be a mysterious series of surprise rejections with no reasons or spurious ones, the latter being especially insidious, since it would lead you to believe there are aspects of your resume that need improvement, when in reality the missing piece is a "positive comment generator" to flood every relevant online community with comments like "awesome", "let me know when it is finished" and "we should definitely have lunch sometime".

EDIT: okay i'm being sarcastic again, so in the spirit of improving my web sentiment analysis score, the points i wanted to make are: (1) these systems are invisible; (2) once you know about them, they can be gamed.


Over the winter, I used a Trader Joe's padded/insulated bag for my laptop. Partially open, because of a broken zipper.

It took me a while to realize that the uptick in friendly conversations with drugstore workers, and the onset of being stalked in my local supermarket, was likely because I now matched some shoplifting profile. It's been a useful reminder of privilege. Though it seemed unfortunate to be wasting people's time.

But here's summer, and sometimes not carrying a laptop at all. And it appears my supermarket, of more than a decade, has retained state. And given they certainly have my card information, I have to wonder how far that state has propagated.

So when choosing a laptop bag, or breaking a zipper, or paying cash, or spotting a possible misunderstanding, you have to wonder, can you really afford to appear different than the norm?

You might be significantly impacted before (or without ever) realizing what happened. And thus you get to share in that joy of racial discrimination: pervasive uncertainty. Did the cab really not see me, or choose not to see me? Why did X happen to me? What's going on here?

And yet, the concept of a "nudge" has public policy value: doing noisy profiling, and helping people do the right things.

There's an old line that the internet is creating a global village. But villages are extremely diverse, from warm and fuzzy to amazingly toxic. There are tremendous social benefits to "everyone knows you". I just wish I saw more thoughtful discussion of the roles of anonymity, and of how to aim us away from the toxic.


Couldn't the retained state of friendly conversations just be based on the fact that you have interacted socially with the people there - possibly induced by your broken laptop bag - and thus there is a more open process of communication and friendliness, vs. some kind of nefarious surveillance policy?

Also, there are all kinds of unconscious social biases that can induce people to talk to us. Perhaps you now expect to be interacted with, and thus this orients you towards social interactions.


> these systems are invisible; once you know about them, they can be gamed

Like listing the full stack of every product you ever worked on in white 2pt font in your resume to pass naive keyword filters, or pasting irrelevant blocks of tags into Craigslist posts.
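
A minimal sketch of why the white-font trick works against a naive screener (the keyword set and resume text are hypothetical; real applicant-tracking products vary): a filter that only sees extracted text cannot tell visible words from invisible ones.

  REQUIRED = {"python", "kubernetes", "terraform"}

  def naive_screen(resume_text):
      words = set(resume_text.lower().split())
      return REQUIRED <= words  # pass iff every keyword appears somewhere

  visible = "Seasoned developer, mostly Java."
  hidden = " python kubernetes terraform"  # rendered in white 2pt font

  print(naive_screen(visible))           # False
  print(naive_screen(visible + hidden))  # True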


More like "Thank you for applying to ProfitCorp. You didn't match our criteria for employment. Have a nice day.".


Sure, but for ProfitCorp to effectively screen out anyone with a rebellious streak, anyone with a spine, anyone with strong genuine convictions, would this actually be profitable? Maybe another company will do better actively seeking out someone with bottled_poe's profile, someone who from an early age demonstrated (for the sake of argument, anyway) daring and individuality?


I understand the sentiment, but future you might be pissed at your current cavalier attitude.


If the situation reaches that toxic point, it's time to revolt.


Just wait, eventually that will be part of the job interview too: https://youtu.be/bcaVSTsYyOI?t=49s


Best of luck with your future endeavours James.


James can use my future business where we create fake social media profiles targeted to specific employment. It will work so well it will make James seem like the perfect candidate.


And this is why we need online anonymity, to be perfectly honest.

It's too dangerous to be honest under your real name, and it has been for years.

It's a lot like Roko's basilisk that way. Once you know the capability exists, you have to destroy it or help it. There isn't really any middle ground.


> It's a lot like Roko's basilisk that way. Once you know the capability exists, you have to destroy it or help it.

I sort of wish Roko's basilisk hadn't played out as such a joke, because the general sentiment is actually a really under-appreciated one.

There are all kinds of settings where the best outcomes are gained by either preventing a thing or enabling it - and succeeding. Revolutions seem like the obvious case, where the highest payoffs accrue to the vanguard revolutionaries (if they win) or the establishment (if they win). Various doomsday cults in fiction also count, where people produce a bad outcome on the logic that if someone else does it first, that would be even worse.

It's actually really nice to have the idea of something which is sensible to restrain, right up until it gets out of control and turns on the people who restrained it.


> Roko's basilisk

Curiosity killed the cat. Now I have to decide whether to destroy it or help it...


Seems like there's an interesting analogy between keeping a startup in stealth mode and keeping your ideas in stealth mode

I see it pretty often here: the startup idea in stealth mode gets mocked (reasonably, I think) - the advantage of getting feedback on your idea far outweighs the potential disadvantage of getting your idea stolen

seems like something similar might be going on here: lots of people worry about something they say being used against them later, but there's a cost to that

when you put your ideas in a public place, and expose them to smart people, those critiques sharpen your ideas and give you useful feedback

if you're not actively trying to troll people, and legitimately trying to make your points in good faith

it's probably highly unlikely that the potential downsides will outweigh the positives from improving your ideas by getting feedback on them


The difference is you can dissolve your stealth startup if the feedback is negative enough and form another startup around a different idea.

You can't easily "dissolve" your personal identity if things go south. You'll forever be "that stupid person who said X online" to search engines. Unless someone's working on a stealth startup to fix this…


I think the antidote to this is to be willing to go down with the ship of Truth. If you are saying something that you believe is true, that you believe is as kind as possible, then when you are pilloried, you will be forced to retreat back into the arms of other people who recognize truth and kindness when they see it.

If you neglect the truth, and instead traffic in half truths and innuendos... in the off chance you are pilloried anyway, where will you retreat to?

The world is full of second chances. You may lose your chance at becoming a senator, or a university professor, or some such. But there's almost always another opportunity somewhere. In the internet age, you can eke out a living off of a motley crew of diffuse patrons more easily than ever. You don't need to please everyone the way Walter Cronkite did. You just need to please your core following.

If you accept that you have no entitlement to any particular space or industry or position then it becomes much easier to accept that things might go sideways. It's not the end of the world. Just the end of your story in one slice of it.

... says someone who just wrote a post this morning on HN about the notion that whiteness and masculinity could be associated with brain damage. I may come to regret it. But I think it's worth it, to put up my sail and allow it to be pushed closer to truth.


  You better watch out
  You better not cry
  Better not pout
  I'm telling you why
  Big Data is coming to town

  It knows when you are sleeping [0]
  It knows when you're awake
  It knows if you've been bad or good
  So be good for goodness sake.
0: https://medium.com/@sqrendk/how-you-can-use-facebook-to-trac...


I think the creator may be better off conducting a thorough philosophical investigation in an effort to pin down the concept. You would be much better off giving full citations to Foucault and Deleuze, and a full analysis of the Snowden incident and its fallout, rather than a sort of half nod to what I think are the foundation beams of this concept.

This page kind of assumes the audience is already willing to admit social cooling is a legitimate phenomenon, and if not, will be convinced to do so after a few short bullets and very little in the way of actual analysis (ironically, this sort of approach leverages one of the modern patterns the piece could tackle - short bursts of information, instant delivery, decreased skepticism and less reflective thought).

Also, I'd highly recommend avoiding the global warming comparison. It does a disservice to your cause. It basically comes off, at least to me, as saying "our problem isn't a substantial thing in its own right, so let's compare it to this other big problem people already care about and hope the very loose and forced analogy strings them along".

All this being stated, y'all should check out Horkheimer's essay "The Concept of Man." He wrote it in ~1952 (might've been '53 or '57, I'm forgetting the exact date) -- and it's crazy how prophetic that essay turned out to be. It shows how all our innovation really just led to an amplification of social structures and patterns that were already emerging during the dawn of automation and mechanization. I think it's relevant to your project.


Author here:

Being trained as a media theorist I understand your criticism (and am going to check out Horkheimer's essay, thanks for the tip!).

But this website purposefully tries to keep things accessible in order to reach a wider audience.

I often see how academics have a deep understanding of what's going on, but just aren't as good at spreading that insight to a wider audience, like the startup community.


Ah, that is a fair concern, and this approach makes sense if that is your goal.

Still, I think it's useful to point to some of the academic backing--like you already do with Foucault, just perhaps in greater depth. Maybe add some of that academic/conceptual source material to the further reading section--then again, might just distract from the main point. You know your target audience better than I do, I only have my particular reaction (which is probably a bit idiosyncratic and outside of the scope of your intended audience).

In any case this is a cool project and a noble effort. Hope you stick with it.


> All this being stated, y'all should check out Horkheimer's essay "The Concept of Man." He wrote it in ~1952 (might've been '53 or '57, I'm forgetting the exact date) -- and it's crazy how prophetic that essay turned out to be. It shows how all our innovation really just led to an amplification of social structures and patterns that were already emerging during the dawn of automation and mechanization. I think it's relevant to your project.

Do you mind sharing a link to it? A quick search didn't return anything close to that title written by Horkheimer.


I had the same problem.


Sorry--might not be out there as an individual piece--should have mentioned that.

I read it in the Verso edition of Critique of Instrumental Reason.

https://www.versobooks.com/books/1138-critique-of-instrument...


I read a lot of both Horkheimer and Adorno. It has shaped my view on capitalism and society in a profound way.

Be careful, reading them is like choosing the red pill.


That essay sounds really interesting.

How much background knowledge would I need to take advantage of it? I'm utterly ignorant of such matters, but I'm trying to figure out how to go about learning this.


Unfortunately you will need some philosophical and historical background to fully comprehend the essay. Horkheimer writes, for instance, about the "Kantian Hope", which is a centerpiece of the short text; he also draws quite a bit on Nietzsche and on some of his own previous work with Adorno. I'd say Kant and Nietzsche are the most important fellas to understand as far as comprehension of this essay is concerned. Chances are you'll still be able to get the gist of what Horkheimer is trying to say even if you don't have any background in the history of philosophy, but you'll definitely understand the essay better if you first read a summary of the developments and history of philosophy from about Plato to Nietzsche (essentially the major figures of philosophy up until 1900).

Alternatively, you can just dive in and look up what you don't understand as you read.

The Stanford encyclopedia of philosophy is a great resource for that sort of thing: https://plato.stanford.edu/


Thank you for the thorough response!

I'm still starting my learning of philosophy, going through Plato's works.

I'll keep Kant and Nietzsche in mind :)

PS: That Stanford page is awesome.


No problem. I'm also always looking to keep my philosophical muscles fresh, so if you ever want to chat about philosophy in general feel free to shoot me an email. My email is on my HN profile.


> People are starting to realize that this 'digital reputation' could limit their opportunities. (And that these algorithms are often biased, and built on poor data.)

That's interesting in itself, but the bigger underlying issue is that opportunities are becoming more concentrated. When only a few companies dominate hiring in many fields, their mistakes get seriously amplified. Back in the day you were fine if Google's hiring process misjudged you - you could work for Excite or AltaVista instead. Nowadays if some ML algo decides that people wearing blue sneakers are worse job performers, you can get screwed (without even knowing why). And even worse, the major companies (where the jobs are) often share algorithms.
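
A minimal sketch of how such a spurious feature gets baked in (all counts are invented for illustration): in a small, unrepresentative training sample, "blue sneakers" looks predictive, and a naive score will penalize every sneaker-wearer from then on.

  # (feature, outcome) counts from a small, skewed sample
  counts = {
      ("sneakers", "good"): 2,     ("sneakers", "bad"): 8,
      ("no_sneakers", "good"): 20, ("no_sneakers", "bad"): 10,
  }

  def p_good_given(feature):
      good, bad = counts[(feature, "good")], counts[(feature, "bad")]
      return good / (good + bad)

  print(p_good_given("sneakers"))     # 0.2
  print(p_good_given("no_sneakers"))  # ~0.67

  # Deployed at scale - and shared between employers - this "insight"
  # silently rejects sneaker-wearers everywhere, with no feedback loop
  # that would ever correct it.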


Worse still -- past employment at the majors is seen as a strong "social proof" indicator by many, including other employers and investors.

I saw an angel.co drinking game once. I think you had to chug two drinks for "worked at X" (where X is any major) being put forward as the sole qualification of a founder or key early employee. This is starting to edge out "went to X" where X is a top-tier school.

China is supposedly deploying its own horrific state-sponsored "social credit score" system, but we're doing it too. We're just doing it in a less centralized way - in a way that's worse. In China everyone will know of the system's existence, and I'm sure people will figure out so many ways to game it that it'll become irrelevant. In the West people will remain blissfully ignorant, as ours has no name or formal identity.

Ultimately I am still more creeped out by what our private sector is doing than what our NSA and CIA are doing. Neither is good, but the latter has some oversight and regulation. The former has absolutely no regulation or oversight whatsoever, and in any case the private sector is very often better at such things than the public sector is. I wouldn't be at all surprised if Facebook's data analytics are far superior to the NSA's.


One reason why I think management in the US is so bad is that the entire corpus of managers went to the same few groupthink institutions - no real risk takers, no real anything. It's like that in politics, too. We aren't promoting the right ideals, but merely conformity, risk-aversion, and foregone conclusions accepted as fact.


Separation of power


> People are starting to realize that this 'digital reputation' could limit their opportunities.

Thanks to the moral police and keyboard warriors out there normalizing contacting employers over an internet argument.


I think you meant to say "hiring managers Googling potential employees."


Surely this is a case of "why not both?"

Campaigns to get people fired when their online posts are revealed are well-attested from all parts of the political spectrum, and generally come out of doxxing efforts that hiring managers don't undertake. There are even campaigns that boil down to "this person said X, harass/troll their employer so that even if the employer doesn't object to X it becomes too costly to keep them employed".

But at the same time, hiring managers have Google and all kinds of tools random harassers don't, like the ability to check criminal records and credit scores. (And there's a great example of an opaque and inaccurate tool governing people's lives - just read about the people sharing a name and birthdate with someone who has bad credit or legal issues!)

So yeah, hiring managers with Google. But I wouldn't discount the other issue either, since it can cause people problems even for comments that don't violate any general social standard.


There are definitely people out there who email screenshots of social media conversations to employers.


Hmmm...

There's one side of this which is straightforward. Companies and governments are compiling data for their own purposes, which range from modeling user behaviour to profiling you so that they can sell you stuff or arrest you for dissidence.

The lines we previously defended for privacy and freedom of conscience, affiliation and speech have been disturbed, to say the least. This has generally been done under the surface, without involving users. It is increasingly felt on the surface, via the ads you see on FB or the recommendations YouTube feeds you.

The other side of this is what I think of as a "post-history" problem. We're now transitioning into a period where reality is simply recorded. Your comment on Chelsea Manning's release is now a matter of public record. Your next Tinder date might see it and so might the HR manager reviewing your application for senior talent accumulator in 2032.

There are all sorts of implications to that, but mostly people just feel weird about it for now. Anxious and uncertain.

So... FB (HN, whatever) is a space for casual discussion. Casual generally meant private in the past. Now, some of the most casual discussions mean an extreme opposite of private. This inevitably comes with stress.

Calling it a chilling (or cooling) effect evokes a political dimension, one that speaks to the first part of the issue. The second issue is more of a social one. It's political too, but I don't think that's where the centre of mass is.


We should start compiling data on the government and turn the tables.


I really feel that engineers need to wake up to this kind of thing.

I'm not saying we should stop (although that's what might happen), just that we pause and consider what this is doing to the world. It is the undercurrent for so many profound changes going on right now.

Are we really comfortable, as individuals, building systems which predict someone's mental (ill) health, personality traits or ethnicity, just so we can sell them things - or worse, not sell them things?


It's like anything else in capitalism. You'll have a contingent of folk that won't do the work, for ethical reasons, and folk that will, for financial reasons.

Anecdotally, the few folks I know that work for data collection companies are all "tinfoil hat" types. They have flip-phones, they have no online presence, they smile like a Cheshire Cat when you ask them about it and you generally get the impression they've just decided to categorize it as "us" and "them". :-\


> It's like anything else in capitalism. You'll have a contingent of folk that won't do the work, for ethical reasons, and folk that will, for financial reasons.

That is true, but going on my peers (especially the ones fresh from university), I think a dangerous proportion of people simply aren't aware on any level of the ethical implications of what they do. It's that which worries me.


I don't think I've ever seen as much apathy in a classroom as my fellow CS majors displayed in our society and ethics course.


> society and ethics course

How was the course?

I didn't have one, but many other "engineering ethics" courses I've seen inspire apathy just because they're terrible. It's like school anti-bullying campaigns - even if you're vehemently anti-bullying, most of the campaigns are too ridiculous to feel anything good about.

On the other hand, something like Canada's Iron Ring seems to get taken very seriously. It seems like a nontrivial part of the challenge is teaching ethics in a way that reaches even the people who want to behave ethically.


It's true, it wasn't a mind-blowingly exciting class or delivery. There were a handful of people who cared to ask questions beyond the prompts for group assignments or during lectures. A lot of people accepted as obvious fact that every household should have a humanoid robot, or that e-government would make perfect decisions, or that complete quantification of the individual couldn't possibly be abused. (These are just the ones that stand out in my memory.) Then you also have the garden-variety folks playing Minecraft, doing other coursework, etc.

I'll also grant that I'm not very visionary, or even great at working with/leading large groups of people. How would you teach a class exciting enough that virtually all students would attend it enthusiastically, even if it were elective? (It was required for us.)

At the end of the day, it's only going to be as exciting as the students make it by involving themselves and thinking. They are the ones creating tomorrow's startups, not the professors. As it stands, it seemed like quite the accurate litmus test for how many people care to think about issues in this way in our field.


Because "society and ethics" sound a lot like "bowing to the man", which coincidentally is exactly what this fine article is against. Because the man does stupid decisions based on flawed assumptions.


It might sound that way to you. To me "society and ethics" means asking: what are both the positive and negative implications of something, and how do you weigh or mitigate them? What should you do, what shouldn't you do, and why?


I can tell just from the postings on HN that SV people need better ethics training.


We had an innovation team member spawn a project that would track how often call center reps were seated and monitor their activity. I had to be the party-pooper who brought up how draconian his amazing idea was. I told our lead I refused to work on projects that spied on people to punish them, but would love to work on projects that reward people for doing a good job, as long as they respect their privacy.


I don't think there's any risk of that. As I pull out my favorite quote for the last few months:

"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

Very few people in the upper echelons of society (like highly-paid Silicon Valley engineers) truly believe they're doing something wrong, or could even be convinced they are doing something wrong. Above a certain level on Maslow's hierarchy, people have selected where they work in part because of the mission. They have bought the company line because, in part, the company line is what they're there for.

People at Google don't work there because they really love advertising and really love putting banner ads in front of people. They work at Google because they believe they're 'making the world a better place'.

And good luck convincing anyone that their purpose in life is a lie and that they're part of the problem. I've convinced exactly zero people so far.

I mean, I'm sure Uber employees feel they're empowering people to work for themselves and helping people make a decent living who might otherwise be stuck unemployed. Maybe it took that many blatant scandals and fiascos for Travis Kalanick to come to terms with the fact that he wasn't making the world a better place?


> "It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

One of my favourite quotes too, but I refuse to accept it as law.

If people really do work at places like Google, Uber and Palantir to make the world a better place, then it suggests they would care if they were making the world a worse place inadvertently, right? In which case it just takes education.

If, on the other hand, they don't want to look too deeply into the social consequences of what they work on, then that is more difficult to deal with.

All of these companies (especially Google) make some fantastic contributions. I just wish sometimes engineers looked up from the keyboard to see the bigger social picture.


You're missing the possibility that your definition of making the world a worse place is their definition of making the world a better place.


I'm not saying that it's impossible to overcome, just very difficult. This is cognitive dissonance territory, and a lot of people would rather just deny what's presented to them than reevaluate what they've been doing for a number of years.


> I really feel that engineers need to wake up to this kind of thing.

Wake up to this thing? Who do you think is enabling it and laughing all the way to the bank and/or the VC money?


True. But from my own experience I get the impression most engineers are ignorant in this regard rather than malicious. Though I'm happy (or not..) to see evidence to the contrary.

We all enable it; I'm just not sure the awareness of the full effect is there.


For me, it was being a software developer that made me aware of it.


I once turned down an excellent offer in order to avoid working in AdTech. It was a lot of money and probably fun work, too. Not everybody can do this, and most people would consider me crazy for having turned down that kind of money. After all: it isn't illegal, right?

The other day I met with a guy from a company that branded itself a social startup. Do you know what social means? Social, as in society, as in living and working together? It means: I employ people to think about the best way to monetise your relationships with others.

I admire technological progress and I believe it's the only chance we have. But the corporate, SV or VC doublespeak, and the weak and unresisting minds of maybe 90% of the population, make western societies an awkward place to live.


Our species is hardwired to mix among multiple social groups, each having different norms and hierarchies. We have positively proven that we evolve over time by allowing repugnant minorities the ability to publicly speak. Out of 100 reprehensible social opinions, one turns out to be the next Martin Luther King, Jr. Our perception of something as being "good" or "bad" for public discourse is notoriously fallible and broken.

The things I say in a social group of former college buddies and the things I say in a group of the local clergy are two different things. That doesn't make me two-faced: it makes me human. In fact, the ability to converse and trade with drastically different social groups is probably the essence of humanity.

Yet our current overlords that program the internet are convinced that the entire world should run as if it were just a huge version of their favorite social group. Joe tells racist jokes? Maybe we let Joe continue, but we definitely ought to score that. After all, Joe could offend somebody -- and then they would be mad at our platform, not Joe.

We are instrumenting a terrible evil on our species, even more evil than the security and surveillance state, if such a thing could be possible. SkyNet has finally attacked, and because there are no T-1000s leading the way the vast majority of the population doesn't even know it's at war.


This sounds like a self-fulfilling prophecy. Tell people that having data about them out there will change their behavior, and then they'll become aware of that idea, and will change their behavior. If they don't know about the concept, they might not.

You could even say that this page, and people trying to raise awareness for this issue, are harmful!

Imagine a few important people stepping up and saying, no, we will not disadvantage applicants because of their "unprofessional" facebook profiles. In fact, we value authentic, unintimidated people. The act of saying so will make it a little bit so!

We need to shift the blame from people expressing themselves, to those people punishing them for it, or even to people giving well-meaning advice like this.

(Just a crazy thought I just had. I didn't want to be too harsh with the creator, who raises an important discussion.)


Ignorance is not a way out.

In 2015 the data broker market was already worth 150 billion dollars in the US alone. (Source: the FTC report on data brokers.)


>>> people trying to raise awareness for this issue, are harmful

Harmful for who? If, let's say, they make people actually aware of this, that might lead to change towards privacy. But if people are unaware, and they remain unaware, the probability of change is smaller.


That's what I'm wondering - maybe "privacy" is the chilled state. Maybe what we should be aiming for is not "privacy", but rather being at ease with modern communication.

The following is a kind of evil comparison, and I'm sorry, but I can't come up with a better one right now - it's a bit like saying: "If 'chubby' people were aware that they should wear flattering clothes, then this might lead to other people liking them better." - Maybe well-intended advice, but WTF, no! You shouldn't tell somebody to hide their body, and likewise you shouldn't tell somebody to hide their emotions, political ideas, drinking pictures, social moments, and so on.


People will figure it out one way or another. In fact the site explicitly mentions people are already changing their habits. It doesn't take a coordinated campaign for people to realize something's up, just rumor and the occasional anecdote.


The first thing I thought of was the "Nosedive" episode of Black Mirror this season. Seems more and more plausible every day.


What is missing is a government actively pushing this scheme - which China is doing with their "social credit score".

An intermediate step would be selling of derived data to anyone, not just companies interested in hiring you but really anybody.


It is never mentioned how things came to be in that episode. I'm sure there was a lot of government in it, as there is in all the episodes of the series.


I am not saying there is nothing to this, but this is just a catchy concept with a convincing narrative. The sources are news articles and YouTube videos, not scientific papers specifically addressing the issues mentioned, for example showing a link between self-censorship and online monitoring or quantifying that effect.


Author of the website here:

The website does link to a lot of scientific studies actually. Both throughout the page and at the bottom.

But I purposefully didn't want to link directly to the PDFs of those studies too much. By pointing to accessible news articles about those studies in the "further reading" section, I was hoping to keep things accessible to a wider audience.

This article that appeared in The Guardian today about Social Cooling has some more sources you may like. https://www.theguardian.com/commentisfree/2017/jun/18/google...

And you may also like "Postscript on the Societies of Control" by the philosopher Deleuze, which greatly inspired this view. https://www.qwant.com/?q=Postscript+op+societies+of+Control

We need catchy concepts to reach a wider audience.


Catchy concepts and convincing narratives sometimes precede more rigorous studies. As you yourself cautiously indicated, there might be something to this.

I can't satisfy anyone's need for a thorough scientific paper offhand, but I can certainly add anecdotal evidence: I regularly self-censor online precisely because I know I'm being tracked in some fashion (and probably in ways I haven't even thought of: retroactive big data analysis 20 years from now is likely to be more sophisticated than it is today). I doubt I'm the only one.


I agree with the general premise, but I think privacy is not the perfect weapon here: if someone is making a public statement (e.g., a personal view expressed on a weblog) they cannot claim that it is a private statement. The goal (I think) is to prevent them being hounded for it outside the channel where it was expressed.

One option is to bring back anonymity so people can make public, anonymous comments. Anonymity has been sharply curtailed (because terrorism) and this is, IMO, bad for society.

Another is to mandate short-term limitations on use. For example, if an employer wants to look at your online presence, they can only look at the last week of your posts, and only for initial employment consideration. IMO employers should not look there at all, but maybe this could be a palatable compromise.

The chap in HR is not itching to dig dirt on employees -- he just has a distorted notion of due diligence forced on him. If he has a clear, legal definition of what he can and he cannot look into I suspect he will gladly comply. My 2c.


What stops hiring managers and potential colleagues?


We are in the age of "virtue signalling": people pretend to hold what they believe are popular virtues. Is it possible to mention "Donald Trump" and not cause an uproar of virtue signalling?


Virtue signalling is only a subset of this problem. It's an important one because of the way it interacts with human psychology, but it's not the whole story. This goes into things like "not being able to post those photos of me getting blasted last weekend", being unable to partition one's identity such that you might be able to keep your sexual orientation details away from people you may not want to know them, and all sorts of other things well beyond the political issues of the day. (Sexual orientation may be a "big political issue" too, but in this case I'm referring to the personal dimensions of those issues.)


I absolutely agree with you about it being a subset; however, the fear of being different is huge, and I believe that plays into what people say online. People have lost jobs over inappropriate online rants - sometimes racial or religious discrimination, sometimes negativity toward an employer, or even looking for another job. Wrong signals can cost you a lot. I am really glad this is being debated/discussed.


"Virtue Signaling" is the most asinine, mean-spirited concept invented since about the middle ages. Its only ever used as a blanket dismissal of others' opinions, even going so far as to blatantly use the moral strength of an argument as a weapon against that argument.


Sure, my wife would mostly agree with you, and I don't totally disagree either, but try to see the context of why it has become so widely used. I'll give you a quote from Antifragile by Nassim Nicholas Taleb: “Never listen to a leftist who does not give away his fortune or does not live the exact lifestyle he wants others to follow. What the French call “the caviar left,” la gauche caviar, or what Anglo-Saxons call champagne socialists, are people who advocate socialism, sometimes even communism, or some political system with sumptuary limitations, while overtly leading a lavish lifestyle, often financed by inheritance—not realizing the contradiction that they want others to avoid just such a lifestyle. It is not too different from the womanizing popes, such as John XII, or the Borgias. The contradiction can exceed the ludicrous as with French president François Mitterrand of France who, coming in on a socialist platform, emulated the pomp of French monarchs. Even more ironic, his traditional archenemy, the conservative General de Gaulle, led a life of old-style austerity and had his wife sew his socks.”


That's pretty asinine too. You're not allowed to advocate a different life than the one you're leading? It's like the people saying you can't promote a greener environment if you still own a car. It's just another way of shutting them up by creating ridiculous moral standards and acting like you've 'caught them' at an inconsistency. Life is full of inconsistencies.

It's also too simple to equate advocating wealth redistribution with needing to give away your money. Giving away your fortune is probably not the best angle for wealth redistribution. Certain rules of fairness for the wealth redistribution may be needed. For example, it may be useful to fight for rules that get all relatively rich people to join in (aka taxes), which will create a much more powerful push for equality.


Yes, there are people who use their wealth and influence responsibly for the betterment of others, this is the opposite of "trading in virtue"


It's a perfectly coherent political position to advocate for higher taxes, including on oneself, without being willing to unilaterally give away money. In fact that's the basis of all taxation. And there are many on the left, including lots with money, who are advocating for higher taxes for themselves, Warren Buffett being the obvious example.


I agree but I must also point out that in some parts of the country not being a Trump supporter could have serious social consequences. The entire country is not San Francisco.


As much as I love this site and what it provides, the last sentence of your comment should be an all-caps sticky message on the front page at all times... :)


Hah.

It's not just HN. When I talk to other companies I often get asked "are you in the Valley or the city?" Answer: we are in SoCal, so keep going South. Sometimes they are surprised, though they seem to find it reassuring that we're still in California. If we were in Texas or North Carolina I could see that being disorienting to some people.


We're in an era of accusing other people of "virtue signalling" instead of actually discussing what they say.


In one model of human behavior we all just become better people because we self-censor. I think that may be true for the first generation of people experiencing these effects in a society. Later generations will have more of a problem.

The problem comes when they repress negative emotions and other status detractors because the cost of even being aware of them is too high. Then you have people who, in psychological terms, are prisoners of their shadow selves. They become anxious and depressed because they fear confronting it.


Author here:

I also worry about Learned Helplessness, where we believe that there is nothing we can do about it. https://en.wikipedia.org/wiki/Learned_helplessness

In Silicon Valley the technological determinist view - that technology has its own will, that it is some unstoppable force - is the dominant but dangerous viewpoint.

That idea is only creating a self-fulfilling prophecy.

The reality is that we as a society have always taken the rough edges off new technologies through the creation of laws and new norms. For example, we pretty much put a halt to nuclear energy.

We can and we must regulate the Big Data world much more. And the first step is that we must help people understand the problem.


I agree. We need to be more conscious of the Precautionary Principle, but having said that it is easy to see how technological determinism is an accurate view regardless of whether we like it or not. As an example, no one quite knew how much cars, television, or the ready availability of cameras on cellphones would impact society. But consumers were so excited by the upsides that there couldn't ever be debate about any downsides or call for regulatory frameworks until the downsides manifested themselves.

The trick we use is to hook the consumer faster than the law can react. Uber did this successfully. Segway flubbed it: with the Segway, Kamen wanted a big rollout. It attracted attention, and municipalities started passing legislation restricting Segways before they were off the assembly line.


You just described the Victorian condition. I wonder if we're on the verge of a digital neo-Victorianism.


I think we are. I think societies exist on a spectrum. At one end you have societies that are loosely knit, because people/families can take care of themselves with minimal social dependency on others. At the other end are societies where, because of population density or economics, people need to convince each other of their social worthiness in order to participate. The latter become very competitive, and the way that competition manifests is in rigid social mores, along with the "outing" of people who don't conform.

What we're seeing now is technology connecting us more and more and amplifying the potential disqualifiers. There is more social freedom when people have more independence.


How does everyone training themselves to think more alike create better people? As the Vulcans say "infinite diversity in infinite combinations." I find inspiration in new view points, especially when I disagree with them.


I had a high school teacher once tell me, "Never write anything down you would not want published in the local paper". This was before the internet and smart phones. It seemed like good and not very restrictive advice that I used as a heuristic for many years. Maybe at that time there was a better balance between personal privacy and society's right to know things about you. Equivalent advice today might be, "Never say or do anything within 100ft of a smart phone or put into a computer anything that you would not want everyone in the world to be able to see now and at all times in the future." That is quite a bit of change and I would feel unjustly controlled following this heuristic.


Harvard Rescinds Acceptances for At Least Ten Students for Obscene Memes[1]: "Harvard College rescinded admissions offers to at least ten prospective members of the Class of 2021 after the students traded sexually explicit memes and messages that sometimes targeted minority groups in a private Facebook group chat." (June 5)

A related NYTimes opinion piece [2] encourages "help young social media users realize that their online and real-life experiences are more intertwined than they may think. Parents might, for example, cite current events, like the Harvard episode, to remind them that nothing online is ever completely private". Which is true, good advice, and social cooling.

And the nytimes/reuters version [3] is currently "Page No Longer Available". How does that affect your confidence that "if it was going on, you would know about it"? :)

[1] http://www.thecrimson.com/article/2017/6/5/2021-offers-resci... [2] https://www.nytimes.com/2017/06/07/well/family/the-secret-so... [3] https://www.nytimes.com/reuters/2017/06/05/business/05reuter...


I might be misunderstanding the point being made here, but one part doesn't make sense to me. The digital reputation argument seems to be saying "big data is bad because your reputation (i.e., people's valuations of your past actions) is now more accessible to people." Such an argument can hold two ways:

1. Giving anyone access to your reputation is inherently bad.

2. Giving some number of people access to your reputation is OK, but the number of people big data exposes it to is now magically worse.

(1) is definitely untrue, at least to most people. We all use our knowledge of others' reputations to make judgements, and apply social pressure to make them conform. For instance, if someone you know is a rich snob, or a vehement racist, you won't hang out with them.

(2) seems ad hoc. Why would letting more people know about your reputation magically be worse? Whether someone knowing about your reputation is bad should not depend on how many other people are aware of it.


The problem here, like with all mass surveillance issues, is twofold. The first, and generally less serious: you will be discriminated against due to your views/behaviour, or your past views/behaviour.

The second is much more insidious and difficult to resolve: since systems are never perfect, it is likely that many will be discriminated against due to over-generalization and mistakes in the system. The more complete the surveillance appears to be, the more confidence authorities have in the system, the more likely that people get into serious trouble due to no fault of their own.


Sure, but what I'm saying is that the above holds for non-mass surveillance as well. So if you're OK with offline systems of social reputation (which most people are, and it seems essential to the function of society), then you owe an explanation as to why your two points [discrimination on views, imperfect generalization] don't apply to offline systems of reputation.


The difference is:

- scale

- transparency (the ability to complain about or discuss decisions)

- culture: people can recognize normal discrimination, but think algorithmic judgements are 'neutral'

Check out https://www.mathwashing.com


I wholeheartedly agree with the linked page (we should encourage this kind of scrutiny of algorithmic judgements). Where we seem to disagree, however, is whether an algorithmic judgement is a step forward compared to offline, human judgement. An algorithm has fixed code, and fixed code is traceable and auditable. Yes, it might be hard. Yes, legislation may not be there, but it's possible. Compare this to human judgement: 10 years ago, if HR threw out your resume, you had no recourse. Today, if an algorithm automatically rejects your resume, we at least have a path to potential recourse in the future, since you can analyze an algorithm's decision making. A human's judgment is only more opaque. Essentially, when you say:

> people can recognize normal discrimination

I don't see how you can reliably recognize discrimination in any way that can't also be applied to the decisions of a computer program.
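To make the traceability point concrete, here is a toy sketch (the rules are entirely hypothetical, not any real hiring system): a rule-based screener can record exactly which rule rejected an applicant, which is a kind of audit trail a gut-feeling HR decision never leaves behind.

    # Toy sketch of an auditable screening rule set. The rules are
    # invented for illustration; the point is that every rejection
    # carries the exact rule that produced it, so any historical
    # decision can be replayed and inspected.
    from typing import Callable, NamedTuple

    class Applicant(NamedTuple):
        years_experience: float
        has_degree: bool

    class Decision(NamedTuple):
        accepted: bool
        fired_rule: str  # audit trail: which rule decided

    RULES: list[tuple[str, Callable[[Applicant], bool]]] = [
        ("min_experience_2y", lambda a: a.years_experience >= 2),
        ("degree_required", lambda a: a.has_degree),
    ]

    def screen(applicant: Applicant) -> Decision:
        for name, passes in RULES:
            if not passes(applicant):
                return Decision(accepted=False, fired_rule=name)
        return Decision(accepted=True, fired_rule="all_rules_passed")

    # An auditor (or the applicant) can see exactly why:
    print(screen(Applicant(years_experience=1.5, has_degree=True)))
    # Decision(accepted=False, fired_rule='min_experience_2y')

Real systems are messier (especially learned models), but even there the inputs and code are fixed artifacts that can be examined, in a way that a human's mental state never is.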


Broadly, I agree with you. For discussion's sake, I think the counterargument would be to focus on the scale and culture points. Specifically,

1. The scale means we're applying many more judgments in many more places that would've slid by the human judgment radar. This is arguably dangerous in itself because if we hold judgments to generally be unreliable, we're just adding many more points of unreliability.

2. The culture could conceivably develop in the direction of blindly trusting automated judgments, such that the type of scrutiny you encourage will dwindle as a practice. This would put us in a much more vulnerable state with respect to bad judgments.

That said, I still side with you. I think the ultra-dystopian scenario outlined as a possibility by OP is unlikely precisely because power/control are decentralized and very difficult to wield intentionally. And there's massive inherent conflict between various actors that have greater means to try to affect that control.

However, I also don't think it's inconceivable to end in the ultra-dystopian scenario. Technological progress is generally good and generally can't be stopped, but it also continually introduces undesirable possibilities as well.


To be honest, I don't see how an increase in algorithmic judgments will necessarily lead to a culture where we are more blindly trusting of them. I think it's up in the air, and there are forces working both ways: the more present it is, the more that engineers and policymakers have to think about it, though perhaps end-users will start noticing it less (?). In any case, it seems like those latent forces are to blame, not "big data" in and of itself.

The scale point makes some sense, I'll admit. We're applying potentially-discriminating judgements in many more places, which raises the overall volatility. Perhaps it is indeed a tradeoff between the expected boons of algorithmic decision making and an increased risk of discrimination. That said, the scale argument would not hold for cases where we merely replace existing opaque human judges with machines.


I got the impression it was more to do with gaining access to the various pieces of information which make up your online (and offline) reputation, out of their original context. That lack of context then means the otherwise benign info suddenly makes you look awful.

That's the big thing here: the internet strips context and nuance from everything you post unless you take a lot of time and thought over exactly what you say. It's probably fair to say most people on the internet don't do that.


Still, why is that distinct from offline reputation (hence my original question still holds)? All second-hand information about another person's reputation is just as out-of-context when repeated by another person. Consider (1) your friends, who might have their own political agendas, bad-mouthing a politician who disagrees with them, or (2) someone recommending their friend for a job you're offering. It seems to me that it's even worse offline: people intentionally select evidence to shape a person's reputation when relaying it about someone you don't know directly. At least when a machine does it there's code you can look at to verify whether it has an agenda.


It seems to me we need to write 'privacy in public' into our laws. We've reached a time where government possesses the power to track most citizens simultaneously. Businesses can do the same with everyone they come in contact with, and technology is cheap enough I believe an individual could do a decent job tracking people in his community. We all have to exist in public to some extent, I believe it's reasonable to demand more privacy be extended into public space.


Ok, but isn't this the system working as intended? Yes, to me this is absolutely awful, a new form of Gestapo with similar awful consequences, but isn't this exactly what Big Data is supposed to do? Make everyone conform and punish those who step out of line.

Just raising awareness won't change anything - the system is working as intended for the people who were sold on it and the people who implemented it (bar a few unfortunate engineers who had to do it for the money). History is rife with examples of people trying to enforce a more rigid social order with varying degrees of success. Letting people different from you have freedom is not something that many people want. Think hard about the last time you thought "the world would be a better place if everyone thought like me". Then realise how many people don't follow that with "but enforcing a mind-police on society is awful".


I would add that, in an age where old values fade away, people seem to be caught in a strange economy of visible virtue.

By reducing moral relativism to the self and ignoring its role in relationships at large, individuality overcomes any collective moral system (be it religious, political or philosophical), and so self-righteousness assumes a form that values spontaneity and originality - the tools of personal promotion - above ethical soundness. This seems to be, in my opinion, the humus of the most visible social outcry. Social media outrage took the place of discussion, just as opinion articles are taking the place of news reports.

Uncritical adherence to this logic harms us all. And the chilling effect strengthens it.

In the past, people fought against a static, conservative religious or political morality, in order to make room for individuality, liberty and democracy. Now we have an agglomerate of individual perspectives fighting for visibility in social media, where popularity (by any shallow measure) has taken the place of reasoning. The chilling effect makes public virtue even more black and white, and conformity (or social cooling) is just settling in on either side. Living on the fringe of refusing conformity (social heating?) has become more difficult and exhausting than ever...

I don't know. Maybe I'm wrong and things were like this for ages. Maybe there is an answer in all the valuable teachings of the past that we simply choose to ignore for the sake of the here and now.


All of this with 4 different share buttons. Just to flag everyone who visits the site.


Those buttons are privacy-friendly (as stated right above them). They don't load any tracking scripts.


That's the saddest part. Social beacons are so entrenched in the internet at this point, I don't think we'll ever be able to get rid of the majority of them.


Oh. My. God. Fearmongering is everywhere.

We also might conclude that "meh, teens getting drunk occasionally" or "meh, people actually having a sex life" is pretty goddamn normal and get over a bunch of nonsense.

No matter what goes on around us, we still have a choice in how we interpret things and what kind of world we choose to build. There is zero inevitability here.

When Demi Moore posed naked on the cover of a magazine while pregnant, this was some sort of shocking dramatic thing. Now, it seems like every pregnant celebrity does the exact same pose and posts it somewhere. It has become prosaic.

Seriously, we can choose to be more humane to people. Things going to hell is not some inevitability.

Edit: Maybe a better example is that when 24-hour news channels became a thing, it changed the news. Before that, people were very strait-laced and serious for the 30 minutes that they reported the news. This was not sustainable when reporters had to talk live all day, every day. They became less stiff and formal, more able to crack a joke and be human. They still had to treat some subjects with appropriate respect, but 24/7 news channels caused news to lighten up some. Geez.


A related question: can privacy be quantified? And if so, how? On what bases?

Thoughts I've had (a rough scoring sketch follows the list):

Total quantity of data available?

Ability to define boundaries?

Ability to enforce those boundaries?

Knowledge of what boundaries to even define?

Who knows what about a person?

How many agents know what?

How aware is the subject of actual knowlesdgee?

How rapidly can that knowledge be further transferred?

Does the surveillor know more of the subject than the subject?

Can the subject access that knowledge?

Can others?

What level of benefit (or harm) can be transacted on the basis of surveillance? Does this accrue to the subject or others?
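For illustration only, here's one naive way a few of those dimensions could be combined into a single number. The dimensions chosen, the weights, and the scale are all invented, not an established metric; picking them well is the actual hard research problem.

    # Hypothetical sketch: privacy as a weighted score over a few of
    # the dimensions above. Names and weights are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class PrivacyAssessment:
        data_quantity: float         # 0 = nothing known about you, 1 = everything
        boundary_definition: float   # ability to define boundaries (0-1)
        boundary_enforcement: float  # ability to enforce them (0-1)
        subject_awareness: float     # awareness of what is actually known (0-1)
        transfer_speed: float        # 0 = knowledge spreads slowly, 1 = instantly
        subject_access: float        # can you see the data held on you? (0-1)

    WEIGHTS = {
        "data_quantity": -0.3,        # more data held = less privacy
        "boundary_definition": 0.2,
        "boundary_enforcement": 0.25,
        "subject_awareness": 0.1,
        "transfer_speed": -0.1,       # faster spread = less privacy
        "subject_access": 0.05,
    }

    def privacy_score(a: PrivacyAssessment) -> float:
        """Rough privacy score normalized to [0, 1]; higher = more privacy."""
        raw = sum(w * getattr(a, k) for k, w in WEIGHTS.items())
        lo = sum(w for w in WEIGHTS.values() if w < 0)   # worst possible raw score
        hi = sum(w for w in WEIGHTS.values() if w > 0)   # best possible raw score
        return (raw - lo) / (hi - lo)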


"Knowlesdgee"?

Anyway, I think you're forgetting one important dimension: whether the person in question would like that particular piece of information to be known.


Sigh. Soft keyboard keeps duping and misregistering keystrokes.

That dimension is the setting of boundaries. E.g., "I don't want you to know, or share, or seek, or ask about some X." Or, if it's acquired, not to share it except as specified -- only with notice, on request, within a given group, for (or not for) a specific time, etc., etc.


What is the goal in asking these questions?


Answers.

Or possibly just better questions.


What would you do with those answers?


What suggestions might you have?

Or perhaps, what possibilities occur to you?


Nefarious ones.


It is obviously bogus that Foucault first raised that issue in 1975. Erving Goffman, for example, introduced the term "Total Institution", which implies total surveillance, as early as 1961. Subsequently, there have been many public reports concerning the information society and the problem of databases. For a legal assessment see Westin, Columbia Law Review 1966, pp. 1003ff.


It's not 'obvious'. The goal of that chart is not to be a perfect representation of history. It's to show 'greatest hits'. Foucault was a superstar-philosopher who was a regular TV guest. His panopticon analysis is far more widely known than Goffman's analysis (as much as I love Goffman).


Sure. But why this urge to rewrite history then? Just because Foucault is the grandfather of surveillance studies? Why forget all these older and more fruitful discourses?


I see this chilling effect manifested sometimes on HN as well. People will create throwaway accounts to comment on certain subjects.

An example - there was a discussion a couple of days ago about FB and I questioned why a commenter felt the need to create a fake account simply to comment on FB. It turned out they weren't even a current employee but an ex-employee.


on HN, the mods are directly responsible for the chilling effect. they censor anyone who doesn't toe the company line, or has any kind of strong dissenting opinion. this is achieved by shadow bans, single-IP bans, and then finally by flagging every single IP you've ever logged in under as blacklisted for new account creation.

i cycled through 3 or 4 accounts, each with thousands of karma points, but in the end i just stopped giving a shit. this sort of amateur-level banning may work against your typical troll, but on HN the end result is that you're wiping out diversity of opinion: although the people on here are smart and resourceful enough to get around any ban that happens over the internet, at some point it just becomes not worth it to express your opinion -- that's how censorship actually works in the real world.

at the end of the day, YC is a VC and has interests to protect.


>"on HN, the mods are directly responsible for the chilling effect. they censor anyone who doesn't tow the company line, or has any kind of strong dissenting opinion ..."

What is the company line exactly? Is it just maintaining agreement with YC-backed companies?

I hadn't heard about the shadow bans; are they only on submissions, or on commenting too?


How is this not the well-documented Hawthorne Effect? Being watched digitally and more ubiquitously seems to be a distinction without a (real) difference. Calling something new does not make it so.


Social Cooling is related to a number of concepts, including Foucault's panopticon. But Social Cooling is about much more than "individuals change their behavior when observed" (the Hawthorne effect). That's only the starting point. Social Cooling aims to cover the large-scale societal consequences of that effect, and draws a comparison to Global Warming to convey both the scale of the problem and a possible path out of it.


I sincerely doubt that anything that places the 'panopticon' at the center of the analysis will be able to point us toward a path out of the problem. The discourse of ubiquitous surveillance (or, less politically: observation) is a discourse of weakness, a discourse of lacking alternatives. The only normative proposal it makes is: watch us less!


of course, "social credit systems" to rate citizens are already being tested in China: http://www.economist.com/news/briefing/21711902-worrying-imp...


Everyone is generally playing catch-up with China in these matters, so I expect this will show up all over the world in five to ten years as well.

And with "show up" I mean as blatantly as in China. I'm actually fully expecting these things to happen below the water already.


>It is planning what it calls a “social-credit system”. This aims to score not only the financial creditworthiness of citizens, as happens everywhere, but also their social and possibly political behaviour.

This already exists across the world in the form of credit rating agencies and social media. It's merely an issue of data integration.


Except it's:

1. Centrally regulated and collected

2. Mandatory

3. Explicitly includes matters of ideology and opinion

So, it's hardly comparable to what is extant under the surface in other countries.


1. So are Facebook and the credit rating agencies in the US

2. No, it's not; it was unsuccessfully tested in a single county

3. So do the judgments of employers on whether or not to hire or fire someone based on publicly expressed political ideology

It may make us feel better to point the finger over there to distract from the parallels of what is going on locally, but doing so is hardly practical.


HN can help us investigate this phenomenon by tracking how often users click on 'reply' but then shy away from submitting it.
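Mechanically it would be trivial; here's a sketch of the aggregate metric (the event names and data shape are invented for illustration, not anything HN actually logs):

    # Sketch of a hypothetical "self-censorship index": the fraction
    # of opened reply boxes that never become a submitted comment.
    from collections import Counter

    def self_censorship_index(events: list[tuple[str, str]]) -> float:
        """events: (user_id, event_type) pairs, where event_type is
        'reply_opened' or 'comment_submitted'."""
        counts = Counter(kind for _, kind in events)
        opened = counts["reply_opened"]
        submitted = counts["comment_submitted"]
        if opened == 0:
            return 0.0
        return max(0.0, (opened - submitted) / opened)

    events = [("u1", "reply_opened"), ("u1", "comment_submitted"),
              ("u2", "reply_opened"),   # opened, never submitted
              ("u3", "reply_opened")]   # opened, never submitted
    print(self_censorship_index(events))  # ~0.67

Of course, collecting that data is itself exactly the kind of surveillance this thread is about.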


I've done this multiple times over the last 2ish months, and every time it has been because I don't want to create the wrong perception of myself. In fact, I almost did that with this comment, but decided the irony was too much.


Then HN can sell the "self-censorship index" data to data brokers... oh, wait. :P


I think one of the reasons awareness is still relatively low is that it is often hard to imagine the implications of this for the casual internet user; HN is quite a bubble in this regard.

And by implications I mean something more than not seeing job ads, or not getting a loan.

Fortunately, there are no such cases that I am aware of. Unfortunately, it might be just a matter of time.


This is a very delicate topic indeed. On one hand, users are not willing to pay to use the services of platforms such as Youtube or Facebook, so those platforms need to find ways to monetize. On the other hand, as a user, you feel that your trust in platforms has been betrayed by knowing that they are selling the data that you willfully post to the public.


Yes, it's becoming more difficult to argue that users are giving 'informed consent' when they sign up to 'free' services.

In the EU the new GDPR law is already making the 'informed consent' requirement more strict. http://www.eudataprotectionlaw.com/consent-under-the-general...


Domainistication - How inventing a quite meaningless term, made of N words, then registering the domain made of these words glued together, and uploading a single page thingy, became a thing.

More on that in https://domainisticationofngrams.com


wow, this is a huge question and a huge distinction: "If they say they don't sell your data, ask if they are selling theirs." I've never even contemplated the ramifications of derived data. Can anyone enlighten the gang about what risks reside along this derived-data surface?


In the future, anonymity will become a right worth fighting for.


That future is now.


If public expression of a thought MAY result in more sophisticated communal understanding revealed by discourse, then it follows that self-censoring said thought MAY result in eliminating more advanced thoughts.

That is a real concern to me.


Serious question - how viable would it be to implement some sort of data decay?

I don't know how it would be implemented, on what schedule, and to what extent.
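One naive shape it could take (a sketch only; the half-life and sweep schedule are invented policy, not a proposal anyone has implemented): a periodic sweep in which each record survives with a probability tuned so that aggregate retention follows an exponential half-life.

    # Minimal sketch of probabilistic data decay. All parameters are
    # invented for illustration.
    import random

    HALF_LIFE_DAYS = 365.0      # hypothetical policy: half of records gone per year
    SWEEP_INTERVAL_DAYS = 1.0   # the sweep runs daily

    # Per-sweep survival probability chosen so that surviving n daily
    # sweeps compounds to 0.5 ** (n / 365), i.e. the desired half-life.
    P_SURVIVE = 0.5 ** (SWEEP_INTERVAL_DAYS / HALF_LIFE_DAYS)

    def sweep(records: list[dict]) -> list[dict]:
        """Run one decay sweep: each record independently survives
        with probability P_SURVIVE."""
        return [r for r in records if random.random() < P_SURVIVE]

The hard part isn't the code, though; as the reply below notes, a single non-decaying copy defeats the whole scheme.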


If even one person who has access to it makes a non-decaying copy, the decay has failed...


That depends on the person and their capabilities. It's more a probability distribution.


And this is why Facebook has become my go-to place for contacting old friends I haven't talked to in years and relatives I speak to even less.

If it's people I interact with all the time, there are other, less data-mined ways of contact, like a good old text message or phone call.

Oh yeah, and my last Facebook post is well over a year old. There's no way in hell I will post random pictures to it that might show me in a bad light. It's basically a slightly less official business profile.


Forget about posting. Even visiting that stupid site will mess with your mind and others' in unseen ways. All parties are at all times interacting with a set of algorithms optimized for profit / privacy intrusions / behavioral changes / god knows what. You might think you're smart enough, but then why are they spending so much money on something that doesn't work? Just say no.

https://github.com/andreas-gone-wild/snackis


Regardless of current issues, I figure that in 30 years everybody will have access to AIs capable of trawling the internet and correlating all my online identities. At least barring things like nuclear war, global draconian censorship, etc. I've taken to just using my real name in most places online to remind me of this. If I'd created this HN account a few years later I'd be AndrewClough rather than Symmetry.


I guess it's time for Samizdat all over again.


It's like we need Samizdat all over again.


I know that it's crazy

I know that it's nowhere

But there is no denying that

It's hip to be square

?


the formatting of this site is truly awful


It's a great presentation of facts in an infographic form that is easily digested by lots of people.

It could use some serious fine-tuning for grammar, though, likely because English is the author's second language.

If the person who owns the website is on here, I'd be happy to help with the syntax and grammar. PM me - it's really well laid out.



