(See, we can all make a "pithy" one-word comment in order to undermine attacks on our stance, but it's not really useful. I think this comment is actually cowardly: you seem to either be baiting someone else into counter-attacking what they perceive to be your stance, giving yourself a chance to refute their assumptions before making your own claim, OR you're just afraid of wading into an actual debate and are throwing out a nothing virtue-signal. Either way, HN generally expects better.)
The chilling effect may actually be a good thing, given that discourse these days is overheated.
There's a weird magic trick that social media companies have played on people to convince them that the text and images consumed on their websites are socially/culturally/politically relevant. Once it becomes clear how easy it is to fake that text, people will come to understand how cheap and irrelevant "opinion" has become, and the magic trick will lose its power.
We want an appropriate degree of emotional engagement with discourse, which tracks 1:1 with how much discourse is happening. People being too angry is caused by people discoursing too much, and vice-versa. There are opposite problems associated w/ too little discourse, but we don't suffer from those.
Things are hyper-polarized right now and there is no magic political synthesis that is right over the horizon if only we could just keep discoursing a little bit more. This is like a heroin addict thinking they'll cease being addicted after that last fix. The solution is to cool things down.
Personally, that's not been my experience at all. I often find that when I have two friends with highly disparate and deeply held beliefs, the intense emotions they attach to these ideas are due to them not actually engaging each other but, instead, taking their emotional cues from their respective ideological silos (where no real discourse is occurring), and then proceeding to talk past each other.
Learning how to actually talk to one another in good faith with humility and charity is a skill that comes with practice. Deciding to engage each other less can worsen the situation by allowing one camp's preconceived notions about another camp to go unchallenged by reality. This allows each camp to tell an increasingly vilifying story about the other, which increases, rather than decreases, the emotional charge between the two.
Engaging w/ someone is different from discoursing with them. "Engagement" is what social media companies say they provide - but really they just offer "discourse".
I would predict that the chilling effect will be weaker for "unreasonable" voices such as trolls and extremists, and stronger for the moderate voices.
This is not a good thing, as the past 10-15 years of social media have shown.
The entire thesis of TFA is that human psychological traits, at societal scales, are not prepared to handle an arena of discourse where that's true: people will either be duped or completely check out of discourse per se, not just online social media.
No, it's a feature of the human mind that truth is considered objective independent of the lens used to acquire it. "Post-truth" sources in some arenas will be conflated with "real-truth" sources in others, leading to a blanket demotion of the perceived value and quality of truth. The whole argument from TFA is about human psychology, not about specific offerings of "marketplaces" where truth may be more or less maligned.
dunno what "TFA" refers to here, but it seems like we're heading into an argument regarding epistemology, which is not a discussion that HN handles well.
IMO "truth" is distraction here because what is at stake here are people's values, not their understanding of math and physics. When people worry about "post-truth", they're worried about liberal values no longer being the unquestioned default. It is absolutely a marketplace, and if people switching marketplaces en-masse makes it harder to launch rockets and develop vaccines, then it probably means those activities are making people net unhappy. People are a lot smarter than we give them credit for, even the dumb ones.
TFA = the f'ing article, something commonly understood on HN since time immemorial. Its snark is born of the community's distaste for the kind of people who dive into comment sections without engaging with the very subject of the thread and the reason it exists in the first place. The fact you don't recognize this acronym calls into question your authority on what HN can or can't handle. But anyway, that's beside the point.
We are (well, I am, and TFA is) not talking about epistemology so much as the public's inability to engage with epistemological problems on systemic scales. Instead the limits of human psychology control how we as a society respond to these issues. Your argument is a distraction that remains uncontextualized within the conversation it finds itself in.
People are not dumb animals but you won't be able to engage anyone toward a solution on the basis of an argument about how they just need to understand more about epistemology. That's the kind of thing that people can only internalize via empirical means.
At this point it feels like you're being deliberately obtuse. I've been quite clear about the primacy of human psychological limits as the main aspect of the argument and you simply refuse to engage with this point. You haven't been very good about adding to the conversation, only diverting it.
> the public's inability to engage with epistemological problems on systemic scales
The public's inability? What about everyone's inability? No one deals well with epistemological problems on a systemic scale, not even the technologists who delude themselves into thinking that they're driving anything.
I am exactly talking about psychological limits. The difference is that I don't think the psychological limits of the creators are any different from those of the users. If anything, I think the creators are more psychologically limited than the users. This is because the creators need to explain to themselves why they are creating the thing - everyone else just puts up with it. When you say ppl will either be duped or completely check out of discourse, don't forget about yourself.
Right - like how the stock market behaves nowadays w/ quant traders - which then motivated people to construct dark pools of liquidity where the real trading happens.
Tech can start out as an extension, but what people fail to understand is that it can slowly become autonomous over time, kinda like how a party gets out of control and people start putting holes in walls.
I think technologists rarely, if ever, talk about this because a lot of what enables our blind pursuit of technology is to say that technology must be an extension of humans and so it can never become autonomous. When ppl talk about AGI and paperclip maximizers, I think it's a way of pushing the problem far off into the horizon and ignoring that the boundary is fluid and already being pushed.
Imagine if all the people at Google and Facebook stopped thinking that they're making the world a better place?
I think you really nailed the ultimate question: Can technology have agency?
If not, then it's just another tool for humans. We are excellent tool users, and leverage everything we can to expand our senses and abilities. We already successfully wield tools of unimaginable power.
If technology itself can have agency, then it truly is a paradigm shift for the millennium. There has never been an entity that is better at tool use than we humans. All bets are off.
I think this is all a red herring. At least until we crack AGI, at which point paperclip maximizers and other lethal agents of pure technology come knocking.
Point being: technology, so far, has never been autonomous. But technology also doesn't grow on trees, nor does it stick out from the ground like a valuable rock. Technology is actively invented, and requires costly reproduction and maintenance. It only sticks around if enough people deem it worthy of the resources needed to birth and propagate it.
In other words: there is always someone commissioning the technology. Someone with a use for it. When considering the gains and ills of progress, it is IMO wrong to focus on technology itself. Especially when talking ills, it's a good way for the actual cause of suffering to remain hidden. Every technology that ever harmed anyone was commissioned and deployed by somebody. Perhaps commissioned with ill intent from the start, or perhaps only repurposed for evil. But it's not technology that does the damage - it's people, and these days, organizations, which includes both government branches and businesses.
Going back to agency and autonomy - technology doesn't have agency, but people do, and importantly, large organizations seem to have separate agency of their own. Absent AGI, no tech will turn on all humans on its own - but a corporation might, and corporations wield the most powerful of technologies.
I think Nassim Taleb used the analogy of studying an ant or a bee colony: it is not sufficient to study the ant or the bee in isolation, as it is the interactions between them and their respective colonies that shape the behaviour. Counterintuitive behaviours at the individual level (e.g. bees sacrificially stinging attackers) make sense when we shift the level of analysis up.
A corporation is just a group of humans. There's also clear governance. The CEO makes the decisions and the board of directors has oversight. The shareholders elect the board members.
It's ultimately still a group of humans making the decisions, and those are almost always rational decisions - they may just not look that way from the outside, with only a partial view.
> A corporation is just a group of humans. There's also clear governance. The CEO makes the decisions and the board of directors has oversight. The shareholders elect the board members.
This is true in the same sense that a human is just a group of cells. There, too, is clear governance. The brain cells together make the decisions and the endocrine system provides oversight. Or something.
A corporation is a dynamic system. There are roles with various responsibilities, but no one - not even the CEO - is truly free to make decisions. Everyone is dependent on someone else; there are feedback loops, both internal and those connecting the corporation to the rest of the economy. Then there's information flow within, and the capability of various corporate organs to act in coordinated fashion. All of that is mediated by a system called "bureaucracy", which, if you look at it, is nothing but a runtime executing software on top of human beings[0]. There are some good articles postulating that corporations are, in fact, the first AI agents humanity created. They just don't feel like it, because they think at the speed of bureaucracy, which isn't very fast. But it is clear that corporations can and do make decisions that seem to benefit the organization itself more than any specific individual within it[1].
--
[0] - You send a letter to a corporation, it is received, turned into a bunch of e-mails or memos traveling back and forth, culminating in the corporation updating some records about you, and you getting a letter in response. That looks very much like a regular RPC, except running on humans instead of silicon.
With that in mind, it shouldn't be surprising that the history of software is deeply tied to corporations, enterprise systems, office suites, databases, forms - all kinds of bureaucracy that existed before, but was done on paper. Software slots into these processes extremely well, because it's mostly just porting existing code, so it runs on silicon instead of humans, as computers are both faster and cheaper than people.
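To make the RPC analogy in [0] concrete, here's a toy sketch in Python (purely illustrative - every name in it is invented, not drawn from any real system) of the letter-handling loop written as an RPC-style handler:

    # Toy model of footnote [0]: a corporation's mail-handling
    # process expressed as an RPC-style handler. All names here
    # are made up for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Corporation:
        records: dict = field(default_factory=dict)  # the "customer files"

        def handle_letter(self, sender: str, request: str) -> str:
            # The mailroom routes the letter; internally this is memos
            # and e-mails bouncing between departments -- here, a call.
            outcome = self._route_to_department(request)
            # The corporation updates some records about you...
            self.records[sender] = outcome
            # ...and you get a letter in response: the RPC return value.
            return f"Dear {sender}, your request has been {outcome}."

        def _route_to_department(self, request: str) -> str:
            # Stand-in for the internal back-and-forth.
            return "denied" if "refund" in request.lower() else "processed"

    corp = Corporation()
    print(corp.handle_letter("Alice", "change of address"))
    # -> Dear Alice, your request has been processed.

The point is just that "receive request, route internally, update records, return response" has the same shape whether the runtime is clerks and memos or silicon.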
[1] - Compare the common observation about non-profit orgs, where the lack of a profit motive makes it clearer that, at some point, the whole organization comes to focus on perpetuating itself - even if that means exacerbating the problems it was trying to solve. C-suites and workers both come and go, replaced by new hires - yet the organization itself prevails.
That's not what's implied. The only thing that was said was that the average lifespan was 31. Could be any curve under that number, but it's bad however you cut it.
Isn't that the average lifespan? There was a lot of death, either in childbirth or during the early childhood years. If you filter that data out a bit, adults lived to a more comparable age. Still not as good as today, but not 31.
TBH I'm not sure the argument that only women and those under 5 were dying young in large numbers is a great rebuttal to the idea that we were better off in 1023.
Isn't it the opposite? People who are willing to say that a simple ELIZA bot is a form of AI think that something being AI isn't very important; it's a low bar to clear and doesn't imply much about the usefulness of the thing. To these people, AI is not intrinsically important. A chess bot is AI, it has very limited utility and just isn't important at all in any domain other than chess.
On the other hand, people who say that AI is an as-yet-unattained far-future technology are saying that AI is intrinsically very sophisticated and useful; so much so that nobody has succeeded in creating one yet. These people think that AI is intrinsically important. They think the term AI is so important that it must not be applied to systems with limited mundane usefulness.
> They think the term AI is so important that it must not be applied to systems with limited mundane usefulness
Yes? Thinking is hard. People will happily delegate as much cognitive load to a computer as possible - especially if they believe the computer is intelligent. With this comes 'computer says no' and 'we are not responsible; it was the AI that was behaving unjustly'. Until we have an artificial intelligence that can take responsibility for its actions - and remedy them if need be - we should hammer home that these systems have limited mundane usefulness.
People delegate their decision making to other systems all the time, whether or not those systems are termed to be AI.
There is no empirical test for whether or not a system is AGI, and 'AI' is a term which has been applied to all manner of mundane systems for decades already. It is not important that we restrict the term AI to only mean AGI (whatever it is that AGI even means!) The 'CPU' players in Mario Kart 64 were frequently called AI, and there was no problem with that. Anybody who gets bent out of shape over the casual use of the term AI is placing way too much importance on the term.
Seems like this is more about millennials getting old than it is about the internet changing. Especially since it embodies millennials' FOMO + fragile sense of self.
Sedevacantists have a massive internet presence; many sedes seem to be terminally online. I'll be watching a random YouTube video of someone playing a game and on the scoreboard I'll see someone with a Feeneyite sede website.
I'd love to see demographic data on sedevacantism, but it's probably pretty difficult to gather. It def seems like the phenomenon is enabled by the internet, and it demonstrates the same kind of ideological radicalization to be found in other political/cultural domains.
I'm not Pope Francis's biggest fan, but even a cursory look into ecclesiastical history reveals that the stuff happening nowadays is mild compared to what's come before. It breaks my heart to see people who are reacting so strongly against modernism embody modernism so thoroughly by insisting that the "now" is tremendously unique and set apart from "the past" - to the point that it's OK to ignore that history, because what is happening now is unprecedented. I think this willful disconnection from history/context/embeddedness/belongingness is essentially what modernism is, and people are low-key starting to figure out that the result of this mindset is mostly depression and misery. There's a lot of money and influence to be had by boosting this mindset, but that crap has never been known to make people happy.
That negative self-conceit (i.e. "nowadays is the worst") is still self-conceit, and manifests itself in a fetishization/idolization of "the past". It's all very strange, but I can understand the allure.
I am sympathetic to people who don't appreciate how Pope Francis has handled things, though much less so now. The vast majority of issues people take with him, my past self included, have turned out to be manipulations of the truth or outright fabrications. A lot of media in liberal western nations do everything they can to portray the pope as sharing their values when often that's not the case, and people who are motivated against him will believe it without actually looking into things. Nine times out of ten, when there has been a controversy, the reality is actually mundane. The curse of mass media, I suppose.
> ...but even a cursory look into ecclesiastical history reveals that the stuff happening nowadays is mild compared to what's come before.
I think a look at history is interesting because many sedevacantist arguments are built on believing in the "historical (traditional)" teachings, yet they often warp reality. Quite unfortunate, since it takes a massive amount of time to actually examine the claims that get made: examples and arguments spanning hundreds of years can be rattled off in a matter of seconds.
This is a list of ways that Pope Francis behaved differently from how a narrow section of Catholics would prefer for him to act. Sucks that Catholicism seems to be too difficult for people to swallow. There is always a temptation to dumb the message down, but that should be resisted.
Opinion falls into two camps because opinion falls into political camps.
The right-wing is tired of the California ideology, is invested in the primary and secondary sectors of the economy, and has learned to mistrust claims that the technology industry makes about itself (regardless of whether those claims are prognostications of gloom or bloom).
The left-wing thinks that technology is the driving factor of history, is invested in the tertiary and quaternary sectors of the economy, and trusts claims that the technology industry makes about itself. Anytime I see a litany of "in 10 years this is gonna be really important" I really just hear "right now, me and my job are really important".
The discussion has nothing to do with whether AI will or will not change society. I don't think anyone actually cares about this. The whole debate is really about who/what rules the world. The more powerful/risky AI is, the easier it is to imagine that "nerds shall rule the world".