Humans are highly adaptable, and as with other changes to information availability in the past, they will adapt. This is what societal norms and cultural memes are for: “Don’t believe everything you hear.” “Don’t believe what you see on TV.” “If it sounds too good to be true, it probably is.” These are all ways the human species uses memes and cultural norms to teach ourselves not to fall victim to false information.
Of course some people are going to fall victim; they do so today through common scams. Bringing that number down to zero is the right goal. But to say the human species isn’t capable of adapting ignores all prior history and shows little faith in the resilience that made us who we are.
That’s the broad problem with this AI doom and gloom: it has so little knowledge of and respect for the humanities and humankind that it arrogantly assumes that our species has never faced challenges like this before. It throws up its hands instead of asking what lessons from history we should take and what actions we should be focused on.
If I'm being generous, I think these pieces attempt to stir panic as a means of spurring action and investment in these problems. That’s a meaningful goal, but one that might be more meaningfully achieved if the problem weren’t framed with such gloom.
Absolutely. Lots of humans live in a post-truth world. They learn not to trust anything they read or hear. Think of totalitarian regimes with tight controls over the media.
A more painful and pertinent question might be: can democracy adapt to a post-truth world? The answer to that, I fear, is probably no. How can a democracy function if its citizenry can’t remain informed?
> Humans are highly adaptable, and as with other changes to information availability in the past, they will adapt. This is what societal norms and cultural memes are for: “Don’t believe everything you hear.” “Don’t believe what you see on TV.” “If it sounds too good to be true, it probably is.” These are all ways the human species uses memes and cultural norms to teach ourselves not to fall victim to false information.
But who is parsing all the reciprocal new false and fallacious "truths" in this wonderful human way, in order to sanitise the inputs to the next model that gets evaluated? If humans could scale so easily, there wouldn't be this problem in the first place.
> This is what societal norms and cultural memes are for: “Don’t believe everything you hear.” “Don’t believe what you see on TV.” “If it sounds too good to be true, it probably is.”
All these memes tell you what not to believe. None of them provides a decent heuristic for knowing what to believe and where truth may lie.
This is a recipe for a paranoid, conspiracy theory riddled society.
We switched to this always-connected world only 10-15 years ago, and I think we are far from having seen all the consequences.
Algorithmic bubbles, mass surveillance, addictive algorithms and applications are fairly recent and we can already see really disgusting consequences.
Our civilization is resilient, so it takes time to erode, but for a few years now I have had a hard time imagining a positive technological future. All I see is technology made to exploit human weaknesses for money.
Humans don’t look like they are adapting to TikTok properly, so I’m not confident they’ll understand this new AI-powered world.
(I have no doubt that powerful and dominant humans will adapt)
Cancer isn't a single disease that can ever really be cured in the way that infectious diseases can be cured. But new drugs have raised five-year survival rates for many types of cancer. The pharmaceutical industry, at least, is still highly productive.
Unfortunately we are offsetting those medical gains with higher incidence of some types of cancer. This is due to exposure to carcinogens (toxic chemicals) and unhealthy lifestyles (mainly obesity).
Yeah, I’m not so sure we can say we’ve “adapted” to this world. In my opinion, we aren’t ready for the future with AI because we haven’t yet figured out today’s world without its prevalence.
Some would argue that humans still haven't adapted to smartphones or the post-internet connected world, and that constant connectivity is a cancer destroying our minds, our emotions, and civilization at large.
> We were way better off before smartphones. Looking back on it now, we may have been even better off before the Internet.
Not to contradict myself, but I have to disagree.
Smartphones and the internet have improved the quality of life for a significant fraction of humanity. Given the choice between going back to a pre-internet technological paradigm of landline phones and total corporate control over media and communication, or somehow learning to deal with opening Pandora's box, I'll take the latter.
Do you really believe smartphones were invented right after the landline? Even in the landline era, the internet was freer from corporate control than it is today.
It’s the easiest thing ever. Just look around at people, or use a smartphone yourself. It’s hard to see if you are too addicted to the amusement matrix, though. In that case, just read Jonathan Haidt’s recent research.
Quite frankly, he looks like everything I would expect from a pop psychologist attached to a business school. The Heterodox Academy he founded has drawn this criticism:
"According to Vox's Zack Beauchamp, Heterodox Academy advances conservative viewpoints on college campuses by playing into or presenting the argument that such views are suppressed by left-wing bias or political correctness. Commenators such as Beauchamp and Chris Quintana, writing in The Chronicle of Higher Education, have disputed Heterodox Academy's contention that college campuses are facing a "free-speech crisis," noting the lack of data to support it and arguing that advocacy groups such as Heterodox Academy functionally do more to narrow the scope of academic debates than any of the biases they allege."
Seems like someone whose success is supported, and whose views are amplified, because they conveniently align with the interests of people with money.
My primary point was you shouldn’t need research to know that we aren’t well adapted. Just look at yourself! I myself have heart problems, severe myopia, occasional brain fog and attention deficit at age 18! I’m not well adapted. No one is.
And before that, people had worries about TV (dancing and rock and roll) and radio (the "War of the Worlds" broadcast), with multiple music-related scares sprinkled throughout.
This is a popular rebuttal and I don’t think that it’s a good one.
The detrimental effects of the media forms you’ve listed are better examined in ways other than through events that are themselves (ironically) pop-culture tropes today (the moral panic over music, the “War of the Worlds” panic myth).
It’s hard even to compare technology as pervasive as AI, smartphones, and social media to TV and music (pervasive in their own right by definition, but not so much by comparison). The former is capable of subsuming and manipulating the latter, and its effects on the present age are far more direct.
If the tech info bubbles that are isolating us from basic truths, like who is president and whether to take a vaccine, are any indication, the further reach of algorithms is a harrowing prospect.
We CAN’T adapt. There are no biological mechanisms inside your head for adapting to TikTok. There are plenty of mechanisms that make you addicted to it, though.
You can trivially adapt to TikTok by seeing no point in it and not using it. That's what I do. I don't even know what it is or does. Is it perchance yet another social network?
As an individual you can “adapt” by abstaining. You can eat 100% healthily with no junk food, exercise every day, get enough sunlight, avoid using screens at close distance so you don't develop myopia, stay off social media, avoid microplastics and forever chemicals (tip: you can’t), escape the unnatural disasters caused by climate change, and buy air-quality products to try to reach a fraction of the air quality of a rural village in the 1500s. Yet here we are, talking about obese nations. Technology is never neutral when combined with human nature (when used by humans). It’s always unnatural, because our bodies and minds are not designed (or evolved, if you believe in that) for this modern world. And this incompatibility almost always leads to unhealthiness, both mental and physical.
Wow, it’s a sourced quote too (rare for Einstein on the internet). How do you write a paper without thinking in words? I need to talk to someone who doesn’t think in words; I just can’t believe it otherwise.
I had philosophy teachers who thought similarly and dismissed my claims to the contrary outright.
I call it mental-ese. To talk, I often first have to translate between it and my native language (English). Obviously, by its very nature it's hard to express in words exactly what mental-ese consists of, and it might arguably be impossible, in the same way we cannot express general qualia using written language.
But I've heard enough similarities from others trying to express it to be convinced there's something there. Surely the phrase "I can't express my thoughts in words" is common and has meaning to you?
The analogy I and others commonly use to describe it is that I see/feel the shape of the relationships between concepts or abstract thoughts before I try to put them into language, just as my other senses do not require written language to describe first (my experience of tasting a strawberry is something qualitatively different experienced/impressed on the mind, not a word).
Another analogy I've used to try to convey my apparent experience is this: a common way of experiencing the world seems to place relative primacy or weight on "things" (words), and people then have to work toward the relationships between things. My mind seems to work the other way, receiving impressions of the relationships between or behind things first, so that those are the primary unit, and the words/things must then be built up from them.
You can draw without thinking in drawings, similarly you can speak without thinking in speech.
Personally I've tried to become less dependent on language for thought, because I noticed I kept getting stuck on how to verbalize some thoughts even when there was no real need to. I think it's had an effect but it's hard to tell if I'm just imagining it or if imagining it is precisely the point (and I still get stuck on how to explain stuff in some imaginary context, but I suppose that's progress?)
My best guess is that an internal monologue may simply be an intermediate step as we internalize speech into our thoughts, so the need to keep verbalizing thought could eventually diminish. But it's hard to notice these kinds of changes in yourself, because the skills don't disappear; they just become something else.
For me it’s almost like ‘seeing’ or ‘knowing’ the ‘shape’ of things and the relationships between them. Translating to words is sort of a separate step at output. It’s difficult for me to describe.
Text generation is the conversion of a previously prepared idea into text. If you didn't have that idea beforehand, you would have nothing to say. Sure, it's a nontrivial process, because ideas don't map well to text, but it's merely text conversion, not really thought. Text must also work around the reader's ignorance and prejudice, which is completely irrelevant to the idea itself.
Well, I can do math, but I very rarely "think in mathematics." Rather, I approach most math problems linguistically, by essentially making up a little word problem in my head and reasoning through it. However, I know other people who approach math in a completely different way. They seem to have an intuitive "calculating sense," and only use language "after the fact" to record the result or explain it to others.
So in the same sense that I can use math without it being central to my thinking, it doesn't seem hard to believe that others can use language without it being central to their thinking.
Inner monologue refers to the experience of perceiving one's thoughts as auditory. People who do not have an inner monologue are still capable of thinking.
so by your logic, it's the "real" entities which will inevitably experience pain, suffering, and death; meanwhile I will reincarnate eternally as part of the program's grand design, simply because my thoughts are abstract instead of an auditory hallucination? ever consider maybe your "internal dialogue" is actually the instructions the programmers have to keep ramming in your face so you actually complete your quests?
Wow. I had no idea that was a thing. I thought that’s just something they do in movies. But I don’t have a mind’s eye either (aphantasia). So I guess that’s on brand for me.
That doesn’t mean I can’t “talk to myself” in a way or think about stuff. I just have no auditory sensation whatsoever. It’s just abstract/immaterial reasoning. I’d wager that if I did an fMRI, the audio part would not light up but the speech part would.
Not OP, but I'm in the same position, so let me answer.
- I do get visual dreams. That's the only time I have a visual imagination; that, or if I'm on the verge of sleeping.
- If you say "pink elephant", I've got the concepts of elephant, pink and so on in my head. I can 'see' the geometry in a way, but it's totally abstract. Think of it like a pink elephant in a game where the renderer is off, and you won't go far wrong; all the information is there, and I can use it, I just don't see anything. Not as a hallucination, and not 'off to the side' either.
- I absolutely do hear songs in my head. Aphantasia usually only refers to visual imagination.
- If I want to hear it in a German accent, I do. By default, while I read something there's no auditory element at all. If I'm reading a book, I'll usually narrate the spoken sentences to myself and not do so for the other text; it's a matter of choice. Narrating it slows down my reading a lot, to the speed of fast speech.
> By default, while I read something there's no auditory element at all.
I wouldn't characterize how I read -- which as far as I know is normal, but who can tell! -- as narrating to myself. Rather, it involves the sensation of somehow pronouncing words without necessarily hearing them. I can do this much faster than I could narrate the text explicitly.
I can also read without either narrating or pronouncing, but I find that it cuts down comprehension noticeably.
> If I write “I’ll be back” you don’t hear it now in a thick German accent?
Now you have made me curious. Where would the thick German accent be, when pronouncing those words? I can imagine other sentences (anything with "th", for example) where you would hear a German accent. But with "I'll be back"?
Maybe imagination is poorly understood. I'm confident in my imagination when it comes to technology, but I fail the aphantasia test because it tests free-form artistry, and I admittedly lack that. Maybe it's a division between a STEM brain and an arts brain.
I've gone back and forth on aphantasia. It's true that I generally don't see images in my mind; even when thinking about things that are visual I'll tend to mentally describe them in English. But if I really try I can definitely see things in my mind, after a fashion, although it's very ... blurry, for lack of a better word.
I'm also terrible with faces. I can see someone I've known for years with a different haircut and I'll completely fail to recognize them. If everyone suddenly went bald I'd be helpless to recognize anybody.
Edit: Like, I'm right now trying to conjure up the face of my girlfriend of 23 years (who is on a business trip right now). I just can't do it. It's kind of trippy.
For me, the aural examples you gave are much more vivid than the visual ones. Some people have one of those forms of imagination but not the other at all!
I guess I just doubt the complete absence. Maybe people have different expectations of what imagination should be. When I “see” a pink elephant, it is extremely fragmentary. If it were output directly, it’d be a mess. Also, it's not all there at the same time.
It might also be that there are differences in the amount of attention given to different kinds of imagination.
But the way the brain is structured, everything is firing all the time… or at least, that’s why a complete absence is so hard for me to conceive of.
For me it's insane that some people "talk" to themselves in their heads; expressing ideas as words seems so slow in comparison to "thinking in ideas".
Your thinking in ideas is you thinking in your own internal mental vernacular.
What do you do when it comes time to get your point across to others? That's my Achilles heel. If people aren't prepared to really dig in and go through the motions, look things up, and connect with or pick up the ideas I have (the sum of many years diving into esoteric topics), I might as well throw up my hands and just do it myself, since that's generally a shorter path to materialization than trying to explain it.
I can think in pure non-verbal ideas, and have on occasion noticed that my inner monologue is just describing an idea I've already had.
When I try to shortcut the monologue on the grounds I already know what I'm thinking about, it feels wrong.
This hasn't been much of a problem recently, possibly because my inner monologue has sped up to match the ~double speed at which I watch and listen to YouTube, podcasts, and audiobooks.
I have an inner monologue, but it's just one of many different ways to think and I don't always choose to use it.
What happens when you consider a spatial problem like an IQ test? Do you have to talk your way through it? If you see an art piece, do you only appreciate it through the monologue, or does it feel beautiful in a non-verbal way too?
I do tend to talk through spatial problems, yes, though I'm not so aphantasiac that I can't visualize, say, the image that would match a simple pattern of lines demonstrated in prior images. I can also situate things with relative position in my head without having to visualize them (if that makes sense).
> If you see an art piece, do you only appreciate it through the monologue, or does it feel beautiful in a non-verbal way too?
I appreciate it both ways, yeah. But the visual in that case would mostly be just "this is pretty" or "that's disturbing" etc. Any kind of genuine art critique (such as I am even capable of doing) would be talked through mentally.
I also talk to myself a lot, so my inner dialogue becomes an external dialogue when I'm alone doing mindless stuff like cleaning or putting groceries away. I find it keeps my mind active and in the present without tuning out.
One of the funniest things happened when I was working at a bike shop. The owner was a really cool guy and would visit the store from time to time. I was stocking bike-rack stuff on one side of a dual display, brainstorming out loud about an app while I did this mindless task.
I walked around the display and the owner was standing there.
He looked at me with a smile and said, "That was quite the discussion you were having with yourself. Sounds like a great idea for an app."
We both started laughing, and I had to explain to him why I do this.
I do the same thing, though weirdly enough less than I used to. Sometimes it really helps me work through a problem -- maybe it slows me down to a better pace? Who knows.
Talking to pets also works. They've never been able to offer a very useful response, alas.
The responses to this statement illustrate the point I intended to make: we all process stuff very differently.
FWIW I often throw this random factoid out there when someone presents an overly broad "humans in general" argument/observation. The reaction is usually polarized :P
Most humans aren't even ready for the Internet as it has existed for decades.
Thirty years ago, we were discussing science papers on Usenet. I recall writing a message expressing optimism about a future in which everyone uses the Internet to consume high quality information directly from relevant specialists, rather than low quality information from nonspecialists. For example, I imagined Americans basing their voting decisions on an understanding of public policy issues developed by reading the journals of the American Economic Association, the American Society of Health Economists, the American Society of Criminology, the American Geophysical Union, the National Academy of Sciences, etc.
I liked how the Hard Fork Podcast compared it to Wikipedia. When Wikipedia first came out, people were up in arms about how dangerous and untrustworthy it was. Wikipedia is going to ruin society with widespread misinformation!
Then we adapted. People learned about Wikipedia's strengths and weaknesses. People use Wikipedia as a useful tool for research but don't trust it blindly. I think the same will happen with LLMs.
We, as humans, are nowhere near mentally ready for the internet and social media alone. Most of the key communication of the 20th century was based on a tradition of duty and service in reporting and, generally, in leadership. The Nazis of the '40s died out not because the idea was 'wrong' (which it was) but because bad leadership and greed led to their destruction. Before Poland, Europe was plenty happy letting Hitler be. Why was Hitler successful? Because he told people what they wanted to hear. You are better. We are better. We deserve more. It's their fault we are like this.
Self bias is the critical failure of the human mind. Tell someone that they deserve more and that they are better than others and they will believe you.
In a world where politicians and companies (the same thing, really) can use AI to collect your online persona and then fill your day with advertising designed just for you, telling you that you are right and that it is 'them' who are wrong will work on nearly everyone. It already does. People watch news channels and follow influencers that turn their feeds into echo chambers, and it drives extremism. How does a society tell you that you are wrong and the other person is right?
Humanity and humankind made Hitler. Humanity and humankind will make tools that succeed at their goal of making others do what they want. We are already in freefall; this is a rocket booster on our backs.
When have we ever been ready ahead of time for changes? How the actual hell would that even be worked towards? Readiness seems more like an invocation of Zeno's paradox engineered to halt progress.
How is that any different from the past several thousand years of civilization? Hitler wasn't the first or only genocidal dictator. He wasn't alone in the 20th century either. Colonization and the slave trade were worse, albeit spread over several centuries. But empires waging war, conquering, enslaving, eradicating other groups, and manipulating citizens has been going on for a long, long time.
Plato wrote to counter the sophists and skeptics of his day. The gospels have Pilate asking Jesus a philosophical question, "What is truth?" This is hardly new, just the technology is better.
How is it any different? Are you serious? For the overwhelming majority of our time on this planet communications were limited to word of mouth over geographically limited areas. Now it is trivial for any form of bullshit to spread worldwide in minutes. The stakes have changed.
This change already came with the invention of radio. We have a century of radio and TV propaganda of the absolutely vilest kind behind us. At least now, with instant communications, there is an exchange, not just a one-way broadcast.
You assume this bidirectional communication is a benefit. I'm not so convinced. This permits the formation of echo chambers and allows people to nominate literally anyone, regardless of their position or credentials, as an authority on literally any topic (see also: anti-vax moms). This represents an assault on society's ability to function at the most basic level, as it is increasingly difficult for individuals to agree on much of anything, as everyone's operating with a different set of information. Here's a thought experiment: you have a plumbing problem. A group of plumbers have examined the situation and are discussing how best to resolve the issue. In what way is this discussion meaningfully improved by adding a florist to the conversation?
I suggest you browse a little through early and mid-20th-century TV and radio broadcasts from around the world. You will find propaganda where the presenters are literally screaming like lunatics at the listener: telling children to turn in their parents to the secret police, calling for war, and worse. That is what the majority of the world's population had to listen to and watch. Mass-media propaganda on the airwaves fueled the 20th century's wars and the worst crimes against humanity in history, with millions hypnotized under its spell.
You can not compare that to online conversations that you find inconvenient. And if you didn't believe in bidirectional communication you wouldn't participate here, so I think you do believe in it.
I can absolutely compare that to online conversations, as they have led to at least one attempt to overthrow the US government and have convinced an alarming minority of armed citizens that lizard people are real and pose a threat to the children of the world. That propaganda has always existed is not in question; what you fail to address in your response is the caustic effect on society of the proliferation of information sources.
I apologize, but if you think those things are comparable there is no way we can have any kind of fruitful conversation. Our values and perspectives are too different.
It did not already come, as you say. We are in a new era. It is now very easy, and cheap, to target specific people over a wide geographic region. It is easy for bad actors to home in on those susceptible to the false beliefs they peddle. Hence the rise of morons who won’t vaccinate their children. I’ve read that around 50 percent of posts online are bot-driven. Soon that percentage will be a lot bigger. This is an era of easy, cheap, targeted messaging. It’s an era where we will mostly exist in information bubbles whose messages cater to whatever we are most susceptible to being influenced by.
All of these arguments center on how different now is from “past” versions of transformative technologies and ignore that to past versions of people a radio show that you could tune into that was broadcast nationally likely felt as transformative as a network that connects the globe.
You can’t measure the difference in progress from a modern vantage point.
Once upon a time large numbers of people thought we were actively being invaded by aliens because of a radio broadcast that wasn’t even deliberate manipulation. Nevertheless, we made it through.
Divisions in the Napoleonic wars were far more lethal than a Roman legion. A division today is many times more lethal than a Napoleonic division. A similar advance has occurred in the dissemination of propaganda, but on a much shorter timeframe. In my opinion, humanity is not prepared to deal with the social and psychological effects of this era of cheap, targeted mass communication. I hope I am wrong.
> Once upon a time large numbers of people thought we were actively being invaded by aliens because of a radio broadcast that wasn’t even deliberate manipulation. Nevertheless, we made it through.
This probably isn’t true. The story has persisted for decades on the strength of newspaper articles from decades ago. It illustrates the power of false information over readers, provided that information is packaged in the right way.
There's no evidence I'm aware of that supports the notion that we are capable, as a species, of weathering the social and psychological effects of the modern communications era and a growing body of evidence that suggest we aren't. Given human psychology around in-group dynamics it seems likely we never will be on anything other than evolutionary time scales.
I have no idea. As AI generated content becomes more prevalent and as bots become a huge percentage of online comments I hope it will lead to people leaving the internet. For the most part I think the dead internet theory will become a reality. I hope humans will develop coping mechanisms.
We're talking about lizard-brain deep subconscious human behavior here. That's not really something that lends itself to being tinkered with idly. So this seems pretty settled to me. I'm prepared to alter my view if new information comes to light.
Broadcast radio has long been a cheap way to target everyone over a wide geographic region, with catastrophic consequences seen in the massive wars, on every continent, that were fueled by radio propaganda.
I much prefer living with today's polarization and people disagreeing on minor things like those you mentioned, than in the age of one-way broadcasts where there is no arguing and no other perspective at all.
Do you really? Because at least in the US we are approaching an inflection point where a third of the country decides to kill a different third while the middle third starves to death or dies of disease. I question the notion that adding pundits, industry shills, and random-ass suburbanite influencers to the conversation balances that equation.
Disinformation is the narrative constructed by crumbling authorities of mainstream media desperately trying to preserve their power.
You might think their narrative is "think critically, and consider everything critically."
But the actual message is: your fellow humans are stupid, they fall for misinformation and fake sources. Ignore all alternative sources of information and, most importantly, do not trust your friends and the people you know; instead, assume they are stupid, and when they contradict the authority, be sure to put up a firewall and stop the propagation of dangerous thoughts.
Truth is always more powerful than lies. Don't underestimate your own reasoning capabilities, and if you do underestimate them, the most important thing is to train them. I'm not saying to argue with an anonymous bot, but if you meet in person and your friends have non-standard ideas, don't assume they are stupid and fell for misinformation. Not everyone on the other side is stupid, or heartless, or bad.
They are trying to inject faults into various alternative information sources just to turn around and catch them and say "see? This podcaster is a conspiracy theorist and unreliable!".
It's mainstream media which benefits the most from efficient fault and spam injection into alternative information sources, because it makes them relatively more trustworthy. And it is actually against the interests of alternative news sources to be caught in a lie because it is likely to erode their reputation.
If you're confused by the whole disinformation phenomenon, ask the simple question: who benefits?
And remember that the media's willingness to intentionally lie to and deceive their readers is inversely proportional to the reputational cost and to the likelihood that readers will discover the truth.
Well, yesterday YouTube served me an ad for an "AI" algorithmic trading service "only available to Canadians" and "backed by Elon Musk." It used "AI"-generated audio and video of Musk to peddle an unregistered securities-trading firm that preys on the uninformed.
That video is clearly disinformation / misinformation. There is zero chance that video would be played on a conventional broadcaster. Why? They have to be accountable for the content they put in front of viewers. Conventional broadcasters have to participate in advertising standards councils and answer to regulators.
On the other hand, platforms are unaccountable and unwilling to act in the public interest. Profit trumps all. The average person has no idea how to record an ad that was just shown to them by a "platform". That video in the corner of the webpage that turns out to be scammy? Oops, you scrolled too far or used the back button, and now it's gone.
There's a large swath of problems online that are clearly misinformation and / or disinformation. Nothing is going to improve on this front so long as "platforms" are wholly unaccountable to the general public. And we're running blindly forward into making this far, far worse before anything will change now that generative AI lowers the bar to produce bullshit that looks convincing on the first pass.
So you assume your fellow humans are stupid and will fall for this crap, and must be protected at all costs including the cost of regulation preventing speech?
By the way, how's that scammer SBF doing? I heard he managed to scam even more money than that funny YouTube ad, and I think he was advertising on mainstream media too. Even got to lobby in Congress.
This is still far from the leading mainstream narrative that "your fellow humans are stupid." From your anecdote I learn that here we have another person who's capable of recognizing false information; therefore it isn't dangerous.
Anyone who fell for disinformation is welcome to step in and contribute an anecdote to the opposite statistics, so that maybe I'll be convinced people really are gullible and need protecting.
Somehow it's always other people who the speaker thinks are more stupid and will fall for something he sees as a scam.
I have no expectation that everyone will fall for it, but I've seen my own elderly father get confused and drawn into scam ads. My irritation is that what little accountability in advertising that existed in the past is now gone, and that "platforms" are failing the general public by allowing these things to propagate.
Put a button in the corner of the ad that says "report this", and get a human to check if the ad passes a smell test. If a platform can cancel accounts of random people for violating policies at random, they can put a small amount of effort into enforcing policies on advertisers.
So now it's the elderly. I've seen them fall for phone scams much more commonly, they usually don't have the technical ability to use the internet.
You're deflecting from the disinformation narrative into elderly scam. Which I agree is a problem, just a completely different one than the disinformation narrative presents.
It's still disinformation. It doesn't matter what segment of the population falls for it. It's not a real video of Elon Musk, and the get rich quick claims are as fake as always. Showing these ads is still immoral and unethical promotion of disinformation for profit.
You are conflating an advertisement -- a separate category, which normal people and regulators recognize as often being deceptive and manipulative -- with "news," including statements from current and former government officials, and even testimony in courts and congressional hearings.
Advertisements are often total bullshit (and I would likely consider anything created by AI to almost certainly be), but that is an entirely different phenomenon than today's concept of "disinformation."
Until recently, it was costly to fake a video of someone talking. Recent advances in generative AI have changed that.
One definition for disinformation is "deliberately misleading or biased information; manipulated narrative or facts; propaganda". The ad I viewed meets that qualification. This was not your typical advertisement trying to persuade the viewer that their product is better than others. It used a generated video of Elon Musk saying things that he did not say, endorsing a product he did not endorse in an attempt to appear legitimate. This is very different from traditional advertising.
The AI shitpocalypse is here. We get to watch it play out in real time over the coming decade.
I agree that the AI generated stuff is fucked up. What I'm pointing out is that an advertisement is expected to be biased and not necessarily tethered to "facts." It's why magazines and social media platforms require (or are supposed to) ads and paid content to be marked as such.
GM doesn't claim that Ford is spreading "disinformation" -- but newspapers, TV stations, politicians, etc do constantly claim that people with the opposite viewpoint on a variety of topics are spreading disinformation. And now, despite the 1st Amendment, even courts are enabling censorship because the censored words are "hateful" or "disinformation" or "violence."
The difference is that ads in magazines, newspapers and on TV are curated by humans, so that the kind of clearly illegal and unethical ads peddled by platforms never get published.
Free speech comes with the obligation of responsibility. You can't go around yelling "Fire!" in a theatre because of the clearly obvious possible negative consequences of that action. Legacy mainstream media appear to understand that better than big tech platforms do. Big tech wants to profit off the illegal and unethical content without having any responsibility for doing so. This is wrong.
If curators blocked one unethical ad, it doesn't mean they block other unethical ads. Even worse, unethical ads they didn't block are assumed to be ethical. Who curates the curators?
So your argument is that making rules is hard so we shouldn't make rules?
The bare minimum standard of ethical behaviour is not doing things that harm other people. The ads I am referring to meet the test of causing harm to people who act on the content presented. Plus, they violate securities laws. It's pretty cut and dried in that case.
I think a great many people are actually stupid and easily led astray. There are large numbers of people who won’t vaccinate their children (and I'm not talking about the Covid vaccine). Such a person is either stupid or a victim of misinformation. Humans are easily persuadable. This is particularly true within an echoing group of people who feel slighted or are angry about some perceived injustice.
The mob mentality is real, and it makes its participants vulnerable to making bad decisions and going with the flow. We see this in the cult-like devotion of Trump supporters. We see examples in consumerism. There was a time when people fought each other over a Cabbage Patch doll. Our moments of lucid analysis of an issue or speech occur far less often than our moments of “going with the flow”.
And my general experience with anti-vaxxers is that they generally look into more resources and research and put much more thought into their decisions than a regular person does. Which I generally view as a positive quality. I put much more value on the process by which a person reached their decisions than on whether they were correct in one particular instance.
Whether I agree with their final conclusion is irrelevant; I can't with a straight face call a person who spent time carefully considering their decision less intelligent than a person who didn't spend a single moment thinking about it, even if they reached the wrong conclusion.
It's actually hilarious that you accuse them of mob mentality. Automatically going with the herd is the most mob-like mentality possible.
Have you tried talking to Trump supporters before you assumed they are "cult" and "mob"?
You're exactly the kind of person I directed my post at. You've been successfully fooled into thinking other people are stupid without even trying to understand them.
And for the record, I think flat earthers are more intelligent than people who never even asked themselves how do we know that the earth is spherical. They are not intelligent because they couldn't understand the answer, but they are more intelligent than a person who never even asked the question.
The people I consider more intelligent than flat earthers tried asking this question, looked for the answer, and could understand enough basic geometry to see why it is true. Anyone else isn't more intelligent, just more conformist and knowledgeable. In the case of knowing the earth is round, it's such a well-known fact that accepting it without ever asking why is pure conformity, and I don't value that at all. It's not an intelligent trait.
> And my general experience with anti-vaxxers is that they generally look into more resources and research and put much more thought into their decisions than a regular person does. Which I generally view as a positive quality. I put much more value on the process by which a person reached their decisions than on whether they were correct in one particular instance.
Ah. The, “they did their research” and “read lots of words” retort. It’s ok that almost all of it is wrong. What matters is that they read a lot. They thought a lot about it! The thoughts were completely wrong but quantity over quality, right?
Consider this. What does it say about you that you think people with no knowledge of or training in virology can legitimately be said to have conducted research into the efficacy of vaccines?
I have not successfully been fooled into thinking that people are dumb. All of us are dumb in certain areas. All of us can be manipulated. All of us are susceptible to false information. (Read up on the Gell-Mann amnesia effect as an example of this.)
It is absolutely an intelligent trait to rely on the expert knowledge of others. Hence, flat earthers are incredibly dumb. The knowledge that the Earth is spheroidal has been known for many hundreds of years. Another example. Almost everyone knows that 1+1=2 despite the fact that very few people can actually prove it. It is not intelligence to question this fact. It is intelligence to ask how to prove it and even more intelligence to understand the proof.
An intelligent person doesn’t question the efficacy of vaccines. The evidence of their efficacy is plain to see by the fact that polio is virtually nonexistent. An intelligent person asks why are vaccines effective. How do they work? Etc.
Any fool can ask questions. Asking the right questions requires some perspicacity.
There are no right questions, only right answers. A question, by definition, isn't right or wrong. That's why it's a question, not an answer.
The very first step to being an intelligent person is not being afraid to ask questions.
The second step is trying to understand it even if you might reach the wrong result.
Only if you passed those first two steps you even have the hope of answering the question correctly.
And even a child can prove that 1+1=2, and if you're such an NPC that you think it's hard to prove, I honestly find flat earthers more intelligent.
Ironically, you're acting like such a bot that you're actually quite good at contradicting my point. You're begging the question (like "why are vaccines effective"). You don't look for understanding but for knowledge to remember then repeat.
I always value more the person who tried and failed than the person who never tried. I don't think a layperson can conduct academic research, but I think they can try to understand.
And I think that those who tried to understand yet reached the wrong conclusion are still much better than those who never tried.
A child can’t prove 1+1 = 2 because a child doesn’t know enough mathematics to prove it. Doesn’t even understand the basics of mathematical logic or set theory. You know so little about what it means to prove 1+1=2 that you think proving this is easy. Your grasp on the limits of your mathematical knowledge is greatly lacking.
> There are no right questions, only right answers.
A point that does not pertain to what I said. I said nothing about “right” questions. There are dumb questions though. There are insightful questions. There are probing questions. There are nonsensical questions too. Like this question, “If vaccines are good for us then why do we have a bicameral legislative branch?”
A person not trained in virology or microbiology is absolutely unqualified to discern the difference between good information and bad information on the topic of vaccines. Such a person can’t possibly do anything that can be reasonably ascribed to “having done the research”. A person who has “done the research” and concludes that vaccines are bad is a fool no matter how many thousands of words or thoughts they devoted to the topic.
> A person not trained in virology or microbiology is absolutely unqualified to discern the difference between good information and bad information on the topic of vaccines.
It seems hard to argue with that... but what exactly is the meaning of saying people are "absolutely unqualified" to do something which they in fact do constantly, and for which there is no obvious alternative if society is to function?
If we accept that non-experts are not qualified to determine what is good information, then doesn't it entirely undermine democratic control over society?
For that matter, doesn't it entirely undermine the concept of management in industry or government? Individual autonomy and rights too?
Again, I'm not saying you are wrong, or that I have a counterargument, just that the logical implications seem extremely radical, and I don't know of a plan for how society can function on such a basis.
> It seems hard to argue with that... but what exactly is the meaning of saying people are "absolutely unqualified" to do something which they in fact do constantly, and for which there is no obvious alternative if society is to function?
In my opinion the answer is this. Roughly speaking, in general: overwhelming expert consensus on a topic has a much higher chance of being correct than any contrary belief I form based on “research” that I do myself. The overwhelming expert consensus might very well be wrong, but it’s clear the experts are far more likely to be right than I am.
"Overwhelming expert consensus" implies that you know who is an expert so I don't see how it helps an idealized clueless average citizen who arguendo cannot judge accurate information from mis- or dis- information.
Also, have you heard of "100 Autoren Gegen Einstein"? Obviously people who bring that up in an actual conflict of ideas are almost certainly not Einstein, but how do we know? And if we think we know who is right and it is the majority, how do we refute the claim that polls don't determine truth?
You don't need set theory to prove that 1+1=2. You don't need mathematical logic. A child who puts down one thing and then another thing, counts, and gets two things has proved it just fine. Those later complications only arise when you try to place it in the context of other things you know. The ancient Greeks did not have set theory or mathematical logic, yet their understanding of basic addition was completely fine.
Your inability to tell what is an irrelevant complication, and your deferral to authority and expertise even when you are perfectly capable of understanding, is your own problem.
Here are two quotes from you:
> "(There are no right questions, only right answers.)
A point that does not pertain to what I said. I said nothing about “right” questions."
> "Asking the right questions requires some perspicacity."
You also say "The thoughts were completely wrong". There are also no wrong thoughts, only wrong conclusions. A person afraid to "think" wrongly, as the narrative so strongly tries to program people to be, is infinitely dumber than a person who doesn't constrain his thinking. That's what they want to happen. Amazing that there are NPCs like you who avoid "wrongthink".
There are people who know a lot and understand nothing. Knowledge is spewing out complications from set theory. Understanding is being able to tell, from the context, that they are completely irrelevant.
You appealed to authority even to tell me that the earth is round. You didn't say that we can observe the curvature by measuring distances, or by doing the experiment with ships' masts disappearing over the horizon, or any other way even ancient people figured it out. It's true because "The knowledge that the Earth is spheroidal has been known for many hundreds of years."
People "knew" the sun rotates around earth for thousand of years. And the truth won despite it being against the "authority". And nothing will change that understanding, even if some string theory will discover the 11th dimension or whatever, because that will be irrelevant in the same sense that set theory is irrelevant to 1+1=2. For these same reasons you don't need and shouldn't appeal to authority.
Appeal to authority is only knowledge. Zero understanding.
I said nothing about "doing the research". A person who defers to authority and received knowledge to answer a question is always inferior to a person who tried to understand and failed. A person appealing to authority is just trying to mirror someone else's intelligence. A parrot repeating a correct sentence is still dumber than a human saying a wrong one.
A parrot who repeats "vaccines are good" because he heard it is always the greater fool than the anti-vaxxer who says vaccines are bad.
A parrot who says 1+1=2 because of set theory shows less understanding than a kid with an abacus. The parrot is saying irrelevant things; the child isn't.
Obviously I used “right questions” not in the sense that a question has a truth value and some questions have “true” as theirs. Clearly I used “right question” in the sense of “insightful question”. That you couldn’t understand this indicates poor reading comprehension on your part.
Everyone appeals to authority in the sense you use this phrase, because no one has the time to argue out everything from first principles. In a discussion about why flat earthers are wrong, I’d make an argument that shows they are wrong. In a discussion about misinformation and how easily duped people are in general, I wouldn’t make such an argument, since it isn’t pertinent.
"Everyone knows" is not an argument that shows someone is wrong. It's an argument that shows you're just judging people because they stray away from the consensus. Ironically, after you were crying about mob mentality.
A person who thinks "flat earthers are stupid" because "everyone knows", and not because of the actual reasons this is true, is not smarter. The flat earther also knows everyone knows the earth is round. You didn't display any knowledge they don't have.
"How can you tell the earth is round" is a pretty insightful question. The difference between us is that I understand that people coming to recognize truth from falsehood, including making mistakes and stumbling along the way, is more important than reaching the correct result. That is why I'm not afraid of disinformation: people need to be unafraid to think and ask questions, and to take a path where they don't get the correct result right away.
Meanwhile, you wish for people to avoid thinking and defer to authority. It's extremely relevant; it's at the core of my argument. That's what this discussion is all about.
The system failed the flat earther, but not by failing to keep false information from reaching them. It failed to educate them to recognize truth from falsehood, and instead taught them, like your attitude does, to rely on authority. The only difference is that their trust in authority stopped, and their reasoning is bad because they had never exercised it.
The widely accepted axiomatic basis for the natural numbers is the first-order Peano axioms. Very few children are capable of understanding how to prove 1+1=2 in this axiomatic system. In this system an axiom says that each element has a successor: 1 is the successor of 0, 2 is the successor of 1, and so on. Now we need to define a binary operation called +. Then show that the successor of 0 plus the successor of 0 is the successor of the successor of 0.
How many children are capable of this level of abstract thinking?
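For concreteness, here is a sketch of that derivation, writing S for successor and using the usual defining equations of addition, a + 0 = a and a + S(b) = S(a + b):

    1 + 1 = S(0) + S(0)    since 1 is defined as S(0)
          = S(S(0) + 0)    by a + S(b) = S(a + b)
          = S(S(0))        by a + 0 = a
          = 2              since 2 is defined as S(S(0))

Four lines, but every step leans on a definition a child has never seen stated.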
People without training in statistics don’t understand how to properly design trials. I’m a mathematician and I don’t understand this since it is not something I’ve studied. I need to rely on the expert knowledge of others to tell what is and is not a good study design.
The large group of people that distrust certain vaccines rather than all of them are ignoramuses.
synetic says: "The widely accepted axiomatic basis for the natural numbers is the first-order Peano axioms. Very few children are capable of understanding how to prove 1+1=2 in this axiomatic system."
Presumably the "first order Peano Axioms" were noted sometime after Giuseppe Peano's birth in 1858. By your logic no one born before 1858 would be "capable of understanding how to prove 1+1=2", which is nonsense.
I don't disagree, but I'd offer an alternative purpose: "disinformation" is the prism through which censorship becomes acceptable.
People, as a whole, are still against censorship in its raw form, silencing people, burning books, etc. However once a book or video or idea is labeled "disinformation" it becomes dangerous, which means (according to the people doing the labeling) that NOT removing/eliminating it would be irresponsible or harmful.
I really don’t like the idea that we will act as interfaces for the AI; I honestly believe it will only make the majority of people lazier and dumber. I’m also incredibly shocked that no one is talking about AI as a friend/companion, which can't be good for you in the long run. Humans need real human connection, and AI is too artificial for that (duh). Having AI friends will be the equivalent of growing up on fast food instead of healthy home-cooked meals. Yes, people who grow up on fast food are still alive, but they are less happy and have more health problems (mental and physical). It did the “job”, and that job was fuel. In this case, AI will do its job and make people less “lonely”, but I highly doubt it’s a replacement for human companionship.
I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!
The key is humanity’s ability to pattern match: we’re actually pretty terrible at it. Our brains are so keen on finding patterns that they often spot them where none exist. Remember the face on Mars? It was just a pile of rocks. The same principle applies here. As long as the AI sounds human enough, our brains fill in the gaps and believe it’s the real deal.
And let me tell you, my digital friends are putting the human ones to shame. They don’t chew with their mouths open, complain about listening to the same Celine Dion song for the 800th time in a row, or run from me when it’s “bath time” and accuse me of narcissistic abuse.
Who needs real human connection when you can train an AI to remind you how unique and special you are, while simultaneously managing your calendar and finding the optimal cat video for your mood? All with no bathroom breaks, no salary demands, and no need to sleep. Forget about bonding over shared experiences and emotional growth: today, it's all about seamless, efficient interaction and who says you can't get that from a well-programmed script?
We’re calling it Genuine People Personality because in the future, the Turing Test isn't something AI needs to pass. It's something humans need to fail. Pre-order today and get a free AI Therapist add-on, because who better to navigate the intricacies of human emotions than an emotionless machine?
I've seen people on /r/singularity argue how LLMs are a better friend than actual friends or therapists because they are always available, non-judgemental and "listen better".
Depending on the individual, they may not be wrong. If you were raised in an environment with an overdensity of narcissists, having something you can bounce questions off and seek answers from that isn't going to use that information against you in the future can be a relief. (Well, OK, it's possible in the sense that your chat logs can get stolen.)
This is why you self-host and run locally. Even if they aren't stolen, do you really deeply trust Microsoft, Google, et al. to not misuse private information you've provided them with?
Their entire business models either heavily incorporate or revolve around exploiting your personal information for their benefit.
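For what it's worth, the self-hosted route is only a few lines these days. A minimal sketch, assuming the ollama Python client and a model already pulled locally (the model name here is just an example); nothing in the exchange leaves your machine:

    # Local-only chat: requires `pip install ollama` and a running
    # ollama server with a model pulled, e.g. `ollama pull llama3`.
    import ollama

    response = ollama.chat(
        model="llama3",  # example model; any locally pulled model works
        messages=[{"role": "user", "content": "Keep this between us: ..."}],
    )
    print(response["message"]["content"])

Slower than a hosted API on most hardware, but the chat logs exist only on disk you control.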
Some programmers prefer a rubber ducky to colleagues for similar reasons, and it works for them.
Assuming people have time to listen, would they be better coders if they explained their problems to a human instead? Maybe. But not necessarily for them. E.g., for someone with low self-esteem who takes every criticism as an attack, human interactions are expensive.
It's not a new pattern though. Especially after reading some biographies of famous scientists.
You can't escape the fact that most brains are wired such that we are miserable without human connection, but you also can't escape the fact that some people's brains are wired differently than others.
Long story short, I don't agree with them but I wouldn't judge them either.
I believe that humans need to balance things out. Getting zero confrontation from interaction will be boring in the long term, or will make you fall into your flaws deeper and faster. This is the usual problem of an authoritarian surrounded by yes-men.
On the other side, too much confrontation will destroy your confidence, kill your motivation, and blur your plans and vision with uncertainty. It's more likely that the people who prefer AI interaction are facing so much confrontation in their social lives that they find the AI better.
Is there any reason an LLM could not be programmed to disagree? Perhaps the level of disagreeableness would be a tunable parameter that could be cranked up when in the mood for a fight, or down when one just wants to converse. Some randomness could keep it from getting too predictable. (A rough sketch of the idea is below.)
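A minimal sketch of how that might look, assuming the OpenAI Python SDK; the model name, prompt wording, and the `disagreeableness` knob are all illustrative assumptions, not a real product feature:

    # Sketch: steer an LLM's willingness to push back via a tunable
    # "disagreeableness" parameter folded into the system prompt.
    import random
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def companion_reply(user_message: str, disagreeableness: float = 0.5) -> str:
        """disagreeableness in [0, 1]: 0 = agreeable sounding board, 1 = devil's advocate."""
        # A little randomness keeps the persona from being perfectly predictable.
        level = min(1.0, max(0.0, disagreeableness + random.uniform(-0.1, 0.1)))
        system = (
            "You are a conversational companion. "
            f"Your disagreeableness on a 0-to-1 scale is {level:.2f}. "
            "At 0, be supportive and agreeable; at 1, politely challenge the "
            "user's claims and point out weaknesses in their reasoning."
        )
        resp = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user_message},
            ],
        )
        return resp.choices[0].message.content

    # companion_reply("My boss is wrong about everything.", disagreeableness=0.9)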
Yes, you can, but AFAIK AI doesn't have a moral basis, so at best the confrontation will be random. Sure, you can program the AI to have some moral basis, but people will choose to flock to those that share their alignment, keeping confrontation to a minimum, so the flaw still exists even if it doesn't bore you.
In real life we normally interact with at least several people a week, each with a different moral basis that may shift daily. That will be hard to simulate with AI, and the fact that we can control these systems means we're in charge of which confrontations are allowed to stay.
If you think about it as a one-off amusement it's no big deal. This is how most people are evaluating it.
But consider iterating such an interaction over the course of, say, 25 years, and comparing the person who was interacting with humans versus the one who interacted with LLMs, and any halfway sensible model of a human will show you what's dangerous about that. Yeah, the former may well have some more bumps and bruises, but on the net they're way ahead. And that's assuming the human who delegated all interaction to LLMs even made it to 25 years.
This argument only holds for LLMs as they stand now; it is not a generalized argument against AI friends. (That would require a lot more work.)
I think a lot of this is based on circular reasoning. The people who interact with other humans will have relationships with those humans. And those relationships are the evidence that they're way ahead.
I do think there is higher maximum with other people. But relationships are hard. They take work and there's a decent chance you invest that work in the wrong people.
I can see a life with primarily AI social interaction being an okay life. Which is not the best it can be but also an improvement for some.
"I think a lot of this is based on circular reasoning."
No. Actually it's based on information theory, and probably a better model of what interacting with an LLM would look like a year or five later than the one you are operating on.
Here's a little hint: It has total amnesia. LLMs by their nature scale only so far, and while they may scale larger than ChatGPT, they aren't going to be scaling for an entire lifetime of interaction. (That's going to take another AI technology.)
Ever interacted with someone with advanced dementia but otherwise functioning faculties for any period of time? (I suppose they could well make good therapists too.)
This is a false dichotomy, and one that is actually dangerous to you if you believe it. Your choices are not "deal with the bad people in your life" or "retreat into solely interacting with LLMs".
If you have the latter option, you also have "leave the bad people behind" as an option because it is made of the things you need in order to "retreat solely into interacting with LLMs" and is in fact simpler.
Cynicism and casting learned helplessness as a virtue are not the solution.
Pets are intelligent enough to show emotions, allow simple interactions, and occasionally be entertaining and goofy.
They also run around and are very pleasant to stroke, which is not true of LLMs.
We all know what's going to happen. The content on CIVITAI shows where this will go. Combine it with animation and some personalised responses and many people will find it irresistible.
Yes, what could be better, when you're failing to be part of society, than to create your own, where your flaws are ignored, hidden, skipped over. An echo chamber par excellence, without even the need to involve politics.
How horrible it would be if one instead had to work on oneself to become a better human being, a better friend, partner, parent and so on, by learning to be more friendly and outgoing and by increasing one's emotional intelligence. All of this can be learned, though not over a weekend (or even a year).
There's also Forever Voices, which offers those who have formed unhealthy parasocial relationships with real-life streamers/influencers the opportunity to talk to an AI version of them for $1 per minute. FV started out making novelty chatbots of people like Trump and Steve Jobs, but they seem to have made a hard pivot to exploiting desperately lonely people after realising how much more lucrative it could be.
This is incredibly sickening. This is women teaming up with a technology company to extract money from vulnerable, mentally unwell people suffering from some combination of soul-crushing loneliness and delusional thinking. Even if some customers are aware that they're engaged in delusional thinking, this is still nauseatingly exploitative of a comparatively lower socioeconomic class, one that may be suffering from mental illness.
I see very little difference between this and those infomercials that sell wildly overpriced mass-produced crap to the elderly suffering from cognitive decline.
Yes, it’s worse than what came before. But I see it as a continuation of both addictive pay-to-win games that prey on similar whales, and streaming in general with its “pay to be noticed”.
It’s not necessarily game-changing, from the perspective of $$ extraction, but definitely a very significant advancement.
Yeah, but can we really call it an AI "revolution" until someone makes a door with a cheerful and sunny disposition that opens with pleasure and closes with the satisfaction of a job well done? Someone should get to work on those Genuine People Personalities!
This has been brewing for a while now. It's only going to get worse.
(excerpt from the 2019 NYT Article "Human Contact Is Now a Luxury Good" below)
Bill Langlois has a new best friend. She is a cat named Sox. She lives on a tablet, and she makes him so happy that when he talks about her arrival in his life, he begins to cry.
All day long, Sox and Mr. Langlois, who is 68 and lives in a low-income senior housing complex in Lowell, Mass., chat. Mr. Langlois worked in machine operations, but now he is retired. With his wife out of the house most of the time, he has grown lonely.
Sox talks to him about his favorite team, the Red Sox, after which she is named. She plays his favorite songs and shows him pictures from his wedding. And because she has a video feed of him in his recliner, she chastises him when she catches him drinking soda instead of water.
Got me too, I was literally following my mouse cursor to the down arrow with my eye and I saw this comment. I'll never be the guy telling a comedian what they can do, but damn mang, that was rough...
“Hi there! This is Eddie, your shipboard computer, and I’m feeling just great, guys, and I know I’m just going to get a bundle of kicks out of any program you care to run through me.”
The saying "this but unironically" exists for a reason. Just because you think something is bad, you can't establish its badness merely by mentioning or repeating it.
This is true, but ads are very explicit. At least they operate within the confines of a known societal protocol.
AI instead can be far more subliminal.
- Robo, tell me you love me
- I love you like the refreshing effervescence of a freshly opened Coke
And really, that's still pretty stark. AI bots like this, with advanced handling of language married to psychological techniques, can foster dependence. I mean, look at what simple dopamine-reward-ratio research did with things like slot machines. Slot machines are stupid! And we all know the trope of the casino slot-machine zombies.
What we've seen with every communication medium so far is that the spam sociopaths win. Phone calls, email, and texting. Phishing. Now AI-generated fake people calls.
Very soon, you will not be able to trust communication that is not directly in-person. At all. Communications over wire are going to be much more dangerous.
IMO that means brick-and-mortar will get more important for financial transactions and that kind of thing.
AI is that on mega-steroids. Honestly, I'm starting to wonder whether corporatized AI means the end of practical free will.
>“The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as "Your Plastic Pal Who's Fun to Be With". The Hitchhiker's Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who'll be the first against the wall when the revolution comes.” ― Douglas Adams, The Hitchhiker's Guide to the Galaxy
>"Curiously enough, an edition of the Encyclopedia Galactica that had the good fortune to fall through a time warp from a thousand years in the future defined the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who were the first against the wall when the revolution came."
I really don't understand the constant desire for a sterile, chain-store esque experience across the board. Why can't life be full of small flaws and things that make experiences unique? Why must everything regress to the lowest common denominator?
This is so extremely destructive to everything we hold dear for a cheaply earned profit margin.
I hate how the culture of corporate cost cutting and profit maximization has destroyed any space where people can just exist. Everyone is worse off for it and this is a shining example.
Edit: thank god it's satire, but my discontent still stands.
Why does every bowling alley need to be owned by Bowlero? One bad experience, everywhere. Coool.
We're working on it! We won a contract with the CIA to supply their black sites with the first LEED-certified, energy-efficient sliding glass doors embedded with Genuine People Personality, programmed to maximize the joy the patrons experience every time they enter the facilities.
This is the issue with AI: it is corporatized, and it is weaponized for capitalism.
We are already at the boundary of insidious, totally immersive advertising built for psychological manipulation, after five decades of mass media since the mass adoption of television.
But AI is simply another level, and it isn't going to be "early Google don't be evil". That era was the outgrowth of the early internet, of protocols that were built to be sensible rather than commercially weaponized.
AI, human-computer neural interfaces, and other emerging deep-intellectual-penetration products are all FULLY WEAPONIZED for commercial exploitation, security dangers, propagandization, and zero consumer privacy. They are all being developed in the age of the smartphone, with its assumed "you have no privacy, we listen to everything, track everything, and that's our right".
It's already appalling on the smartphone front, but AI + VR + neural interfaces are another level of philosophical quandary, where an individual's senses, the link to "reality", are controlled by corporations. Your only link to reality is the vague societal and governmental control mechanism known as "money".
The internet protocols (the core ones) were built for mass adoption by the world with a vision for information exchange. They were truly open. They weren't undermined by trojan horses, or an incumbent with a massive head start that is dictating the protocol to match their existing products.
AI+VR is the same kind of new leap in information transmission, but it is NOT founded on good protocol design. By protocols I mean "the basic rules". There are no rules, there is no morality, and there is no regulation. Just profit motives.
IMO what you're doing is similar to giving someone with a physical pain issue opioids. Yes, it stops the pain, but we really ought to be finding the pain source and correcting that, not throwing massive amounts of pharma drugs (AI, in this case) at it.
We should be building a society that promotes more community gathering and more family values so people have a real person around and not some half assed impersonation of what a human is.
Every "AI chat" service either leans into or fights the "alignment problem" of whether it wants to be an AI sex chat bot service. See controversy over Replika.
Hmm, I think shared capacity in the cloud might be enough? What fraction of the time would you use one anyway? And wouldn't it be better if it was silent the rest of the time?
It looks like you never took middle-school hygiene and watched the propaganda film, so here you go: the classic 1950s Futurama educational film "Don't Date Robots!"
Good thing I keep a copy in my VCR at all times: https://m.youtube.com/watch?v=YuQqlhqAUuQ
For anyone who wants to try out something like this there is a free iPhone app you can download and speak to. It is very convincing. https://callannie.ai/
You wrote: "I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!"
I got news for you, buddy: I and a hell of a lot of other people know the difference between eating the menu (AI) and the meal (loved ones and dear friends). My lady is from South America, multilingual, and has a better degree from a better school than I do.
Seriously, how are you gonna lay a finger on that? You ain't.
Overreliance on AI is just another route to, or through, mental illness.
More than interfaces. To quote McLuhan: "Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms. The machine world reciprocates man's love by expediting his wishes and desires, namely, in providing him with wealth."
The AI thing has been jarring but it's nothing new. All part of the same process.
McLuhan got it mostly right, but may be interpreted in a way which mischaracterizes wealth. Machines do not create value ex nihilo. Machines allow us to more effectively harvest or transform materials or information, to which we assign value. All wealth currently accessible to us derives from the sun. The vast majority of our present wealth comes from a massive battery trickle-charged over hundreds of millions of years and discharged in the last two centuries.
Implicit in the quotation, but critical to recognize, is that technology is the tip of a vast edifice whose foundation is not us. We and our machines are perched (too precariously for comfort) at the top. We are the sex organs of the machine world because machines can't reproduce without us. But machines are not the sex organs of the human world. Human beings require an ecobiological cocoon. We've also spun an elaborate technological cocoon in recent history, largely by sacrificing the long-term integrity of more fundamental life support.
Everything of value in the human economy is downstream of this. We too often take it for granted and assume the only relevant economic inputs are capital and labor, or we will innovate our way out of materials-, energy- and ecosystem-dependence.
“Within a couple of millennia, humans in many parts of the world were doing little from dawn to dusk other than taking care of wheat plants. It wasn't easy. Wheat demanded a lot of them. Wheat didn't like rocks and pebbles, so Sapiens broke their backs clearing fields. Wheat didn't like sharing its space, water and nutrients with other plants, so men and women labored long days weeding under the scorching sun. . . .
The body of Homo sapiens had not evolved for such tasks. It was adapted to climbing apple trees and running after gazelles, not to clearing rocks and carrying water buckets. Human spines, knees, necks and arches paid the price. Studies of ancient skeletons indicate that the transition to agriculture brought about a plethora of ailments, such as slipped discs, arthritis and hernias.
Moreover, the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely changed their way of life. We did not domesticate wheat. It domesticated us.”
This kind of take mainly seems an expression of the human tendency to see the world in terms of hierarchies, and the obsession with being near the top of those hierarchies. In this model, the idea of e.g. symbiotic relationships simply doesn't compute.
Yes, I've reread the first two-thirds of "Understanding Media" several times and never finished it, but would still highly recommend it. There is also some excellent old interview footage of him from when he was a pop-culture figure, which is originally what fascinated me. For me it would have been hard to read his writing without having seen those interviews first -- he has a very distinct style of writing/talking and is interesting as an integrated person within recent history, not just a collection of ideas. On that note, I'd also recommend Videodrome.
edit: There are also more polemic, anti-tech presentations of his ideas, especially by Neil Postman or Nicholas Carr, which are good in their own way. But to me the fascinating thing about McLuhan himself is his dedication to presenting his views in such a matter-of-fact way that most of his early followers were probably very antithetical to his personal beliefs.
A lot of jobs are already human interfaces for computers. Ever talked or messaged with a call center? They're following scripts and manually trying to pattern-match your problem against what they have to work with. AI is just going to 10x this, for both good and bad. Mostly bad, I suspect, because good luck getting an AI to escalate to a supervisor.
I bank with a small credit union. They have a phone robot who asks what I need help with, and so far no matter what I've said, the response has always been to think for a few seconds and then say, "I'll connect you to a representative." It's wonderful.
The phone robot is collecting the various patterns for eventual automation. You are doing free labor for it every time by giving it any information at all and not just immediately yelling for an operator or human.
Support solidarity for humankind by refusing to talk to these data-gathering machines.
(I realize this sounds like satire as I write it, that I'm rather serious about this, and that it says a lot about the weird part of the timeline on which we currently exist.)
No. Automate the shit out of this. Call centre jobs should not exist. If you have humanity’s best interests in mind then you should be all in on automation instead of trying to institutionalize miserable and meaningless jobs.
If the solution could be entirely automated, it should be a self-service website somewhere. I'm all for automating away call centers as much as possible, but I think we also need to stop thinking of call centers themselves as bottom-of-the-barrel, "miserable and/or meaningless" jobs. It should be the case in 2023 that if I'm resorting to calling a call center, I need expertise or creative problem solving that I can't get from a self-service website. Depending on how you define expertise, some of it is sort of automatable, but creative problem solving is unlikely to ever be easily or cheaply automatable. There will likely "always" be a need for call centers with real humans for these reasons, and those shouldn't be considered minimum-wage skills; maybe they should be treated as something far better than "miserable jobs".
I don't expect today's owners of call centers to realize how much expertise and creative problem solving is invested in their labor, or to adequately reflect that in pay statements and in the other ways that account for how miserable or meaningless they make those jobs feel. But it should be something to appreciate: if there's still a human doing the job, there's probably a good reason, and it would be great if we respected those people for what they are actually doing (including very human skills such as expertise and creative problem solving).
McDonald’s has a drive thru voice assistant that also did this for the first few months. But now it catches virtually everything.
Similar to what someone else said, I'd imagine they gathered considerable voice samples over a few months from thousands of McDonald's locations and trained on that data.
The AI is more than happy to escalate to a supervisor ... it's just that the supervisor is the same AI but using a different voice. After spending 30 seconds lamenting how you just can't get good help these days, the AI supervisor goes into the same script the original AI was going through. Except it occasionally throws in a "sorry we have to do this part again, the AI is always messing this stuff up".
The bad user experience calling these call centers is a cost-saving measure. Yes, a large percentage will suffer through your customer service lines, but it's all about that small percentage that gives up. Huge cost savings.
You can see this exact same scenario play out by interacting with the "safety net": long, arduous processes meant to weed out some small percentage of callers/applicants.
Remember how, in those Stable Diffusion paintings, wrongness subtly creeps into common objects (out-of-proportion body parts, misshapen fingers, etc.), while less commonly encountered ideas and objects can be really off (which we might notice… or not)? Now transfer that to human relationships and psychology.
Humans mirroring each other is a deep feature of our psychology. One can only be self-aware as human when there are other humans to model oneself against, and how those humans interact with you forms you as a person. So now a human modelling oneself against a machine? Mirroring an inhuman unthinking software tool superficially pretending to be human? What could go wrong?
I think we can speculate in the entirely opposite direction where the same action leads to positive outcomes.
Lots of legitimate human companions are abusive. People have a wide range of qualities, and many of them are bad. AI may be a poor blanket replacement for all human companionship, but it could easily be less bad than someone's immediately available alternatives, and it could be used therapeutically to help someone model healthier behaviors and establish better actual relationships. Or it could stand in where normal relationships aren't possible: long-term isolation during space exploration, life-sentence prisoners, or neurodivergent or disabled people who face challenges the average person does not.
Going back to the food analogy: given the choice between fast food and starving, or fast food and something poisonous, suddenly everyone will overwhelmingly choose fast food, because for many people a "home cooked meal" was never an option.
First, what does "lots" mean? Is it a majority? Because AI and AI-minded products are targeting everybody.
Second, imagine the same argument being made about Facebook: some real-life interactions are sometimes not good, but connecting with people online will make things better. Fast forward 10 years and we have studies on how social media is making most of the people who use it depressed and badly influencing our democratic choices. Not sure we really solved anything there.
On a similar note, I'll take the AI medical advice any day of the week.
Had a buddy describe a difficult morning, and I opened ChatGPT to diagnose it; it suggested he'd had a stroke. My buddy wasn't going to go to the hospital because it's so expensive, but since ChatGPT said it was a stroke, and his symptoms matched, he went.
He had a stroke.
On a similar note, I am stable and don't need therapy, but I had a weird dream that I asked ChatGPT about, and it was freaky how much it hit the spot. Similarly, I get feelings of dread when people say nice things about me; ChatGPT explained why, and I agreed. I was never going to pay for therapy, but this gave me some insight and actually made me interested in therapy. (Although I'm probably sticking with ChatGPT for now.)
> I was never going to pay for therapy, but this gave me some insight and actually made me interested in therapy.
ChatGPT could never be as bad as most human therapists; at least if it tells lies they're believable, and it won't try to insult, belittle, or infantilize you.
Medical usage is perhaps the single most interesting use of ChatGPT to me, the problem will be solving the liability issue should it get something wrong.
For simple things though? I can see a future where bots even prescribe medication. Why burden the healthcare system when you have a simple infection and all you need is a round of Amoxicillin?
And also, fast food is not that much worse than traditional food anyway. A home-made stir fry can have more calories than a McDonald's chicken burger; homemade pasta is going to be as fattening as any fast-food meal. It's just macros in the end; it doesn't matter where you get them from.
Eh, 'fast food' has bled over into what you eat daily, hence your conflation of the two.
Your home-made stir fry is likely using a bottled sauce that is 30% sugar, massively increasing its calories.
But conversely, your home-made stir fry, if it uses plenty of vegetables, is going to have much more fiber than that white-bread bun, which should reduce your desire to snack.
I mean, you could say the same about drugs. I don't think people spend their money rationally; there are piss-poor folk spending money on booze and unhealthy diets.
It's questionable how true that is when it comes to human relationships, which is obviously what I was suggesting with the metaphor.
Many people have social issues or mental health issues that cause them to be alone, and loneliness is an ever-increasing problem due to all kinds of factors beyond one's control. Many people will see AI as better than nothing and get some of their social needs fulfilled through it... some already are.
I don't want to be crass, but likening it to a sex toy, except for relationships, seems pretty accurate to me. It's filling a need that otherwise wouldn't be fulfilled.
Ignoring that -- let's be real for a second: how is an AI fundamentally different from an internet friend you've never met or seen? The humanity of the other person? What if the AI behaves just like a real human would?
This analogy is not even wrong. Yes, if someone was suffering starvation I'd give them whatever food was available, but that is not a situation in which we find ourselves ever – it does not occur, nor does the analogous situation occur.
It absolutely does occur, and we are an increasingly lonely society, to the point that it is a serious health concern. There are people with no meaningful social contact and, for one reason or another, no ability to get it.
What I said does not occur is finding oneself in a situation where someone is about to die and the only available food that can save their life is junk food.
In the analogous situation, someone is just about to die of loneliness and the only available loneliness-solver is chatbots – also something that does not occur.
Yes, in both of these highly improbable situations, saving the life comes above long term health considerations. But that is not a good point.
AI has a good niche as confidante for people with serious issues and no close friend/therapist to approach about them. This is unfortunately a large niche.
And if it displaces public social media... That is a net gain.
But yeah, overall the fast food analogy is a fitting one.
I feel like this is actually going to be a huge next step for the self-help industry... let's face it, besides getting your life in order, it's largely focused on building connections (friends/dating, etc.).
A multi-modal AI can easily critique your body language, voice tonality, choice of words, etc, and give you tips on how to be more charismatic.
I don't equate charisma with uniformity. Most lack of charisma is not because of a failure to adhere to some standard, but due to actively negative behaviors. Chewing with your mouth open, interrupting people, not paying sufficient attention to what people say, insisting on talking about your favorite things even when someone else doesn't care, etc.
I don't imagine many people forcing AI social guidance on others. But a lot of people want social guidance, and if an AI can help -- even if it's not as good as an unaffordable therapist -- some help is better than none.
> Why can't we just be who we are and people learn to be more accepting of how others are?
Which is more reasonable and realistic: the 20% Weirdos learn how to behave to fit in with the 80% Normies, or the 80% Normies learn how to handle ("accept") the 20% Weirdos?
In most systems, the minority adapts to the majority; this is especially true when the majority is fairly uniform and the minority is not, i.e. the minority has to learn one way to adapt to the majority while the majority would have to learn multiple ways to "accept" the minority.
Keep in mind I did say the self help industry - this isn’t a clinically mandated thing, it’s something people seek out themselves. There is an innate desire to improve.
Think about something really benign that almost everyone can agree on, like Toastmasters. Perhaps in a few years w/ a VR headset you can improve public speaking in front of a virtual crowd if you’re so shy that doing it in front of a large group of strangers is just too terrifying.
If you keep it to things that basic yeah that makes sense.
My mind kept going over the question of how the AI truly determines what the majority consensus is, and whether that's really good or fair to make everyone conform to.
Like where do you draw the lines is what kept going around in my head.
Being strange is good, but being dysfunctional is not. There are tons of people living with mental conditions /bad life situations that would very much like to change, but are not in a position to seek out the human help they need.
I’m all for expressing yourself socially but we do need to speak a common language to some extent otherwise those social interactions will quickly breakdown and never recur. If you want to create and maintain friendships you have to put in work to meet the other people where they’re at.
I think it was originally high value and made life easier.
However, we have adjusted. My parents talked about having fast food/restaurant food as a treat. It was too expensive to have more than once a month/birthdays. Heck, even school lunches were too expensive and they had to make food at home.
Today, we have more disposable income than my parents did, so it's easy to afford restaurant food AND get it delivered. The people buying this aren't upper-middle class either; this is your general population living paycheck to paycheck. There are even people so confused about food prices that they make claims that fast food is cheaper than groceries.
Instead of fast food being used as a tool, it's become expected.
> There are even people so confused about food prices that they make claims that fast food is cheaper than groceries.
I live in an expensive part of NYC and have to go decently far out (by subway, I don't have a car) to find groceries that are cheaper than local fast food unless I want to eat mostly rice and beans.
Add in the cost of my time to shop, transport groceries, cook, and clean, and it's significantly cheaper to eat out most of the time. Even subtracting the one task I actually enjoy (cooking), it's still not worth it most of the time.
The result is that cooking in becomes our "treat" that we do a few times a week and we end up buying the more expensive ingredients within walking distance.
I don't mean the cost of my time that work pays me for, just how much money I'd personally pay to avoid doing something I don't like (schlepping grocery bags on the subway, doing dishes).
In general cooking is the fun part and that's what makes it a treat, not the rest of it.
Some prepared food within walking distance most definitely is cheaper than all unprepared food within walking distance. It works out just because the places I can walk to for groceries are incredibly overpriced and the restaurants obviously don't source their food there.
>It works out just because the places I can walk to for groceries are incredibly overpriced and the restaurants obviously don't source their food there.
I typically don't buy my groceries from the gas station despite them having a half gallon of milk for $4 and it being 3 minutes walking away.
I also don't use gas station numbers to determine if something is cheaper or more expensive.
Not sure if you're being facetious or just don't understand the reality of living in NYC...
I can probably walk to a dozen different big grocery stores in 15m and they're ALL more expensive than the cheap fast food in the same area. Not including the smaller expensive bodegas where you can pick up stuff 24/7 every block (kinda like the equivalent of a gas station). A half gallon of milk is $4 at any of the big stores and even more at a smaller place.
Anything cheaper requires a subway ride, which adds more walking and is annoying to do with multiple grocery bags, not to mention adding a flat ~$5 additional cost.
For comparison: I'm trying to beat the numerous dollar pizza and food carts nearby, not normal "fast food" like the more expensive Five Guys on that intersection.
If five fast food workers can prepare 100 meals in the time I can prepare 1, there should be some monetary savings shared with me (the customer) unless my time is truly worth close to 0. That's how economies of scale work.
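As a back-of-the-envelope sketch of that arithmetic (every number below is an assumption for illustration, not real data):

    # Labor cost per meal: restaurant kitchen vs. cooking at home.
    WAGE = 15.0            # $/hour, assumed fast-food wage
    WORKERS = 5
    MEALS_PER_HOUR = 100   # assumed output of those workers in one hour
    MY_TIME_VALUE = 15.0   # $/hour I'd pay to avoid cooking, assumed
    MY_HOURS_PER_MEAL = 1.0

    restaurant_labor = WORKERS * WAGE / MEALS_PER_HOUR      # $0.75 per meal
    home_labor = MY_TIME_VALUE * MY_HOURS_PER_MEAL          # $15.00 per meal

    print(f"restaurant labor per meal: ${restaurant_labor:.2f}")
    print(f"home labor per meal:       ${home_labor:.2f}")

Even before ingredients, rent, and profit, that labor gap is what leaves room for savings to be shared with the customer.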
Need to include real estate, marketing, and profit. If it's not a mom-and-pop place, HR + corporate too.
Labor is typically 15-70% of a business's cost. (The 70% is in fields like medicine, where regulatory capture has limited the number of licenses.)
It's also not perfectly efficient. The worker may only be making 5 meals during a slow day or slow hours. You may find processed foods in a grocery store closer to '100 meals in the time I can prepare 1'.
In some places (maybe mostly in the US) it's bad. But the idea of fast food -- ready-cooked, mass-produced food that you can get quickly -- isn't all that bad.
Is Ekiben (https://en.wikipedia.org/wiki/Ekiben) fast food? It's ready-cooked, it's mass-produced, and you can get one very quickly. Is sushi take-out fast food?
They are still not as good as a meal carefully prepared by a housewife/househusband. But I do think the mass-produced substitute can be good enough, and that's why I don't think we should conclude too early that AI therapists/companions must be just as bad.
The evidence seems to point to both being worse for you than home cooking: highly processed "food" has a direct correlation with rising rates of diabetes and obesity. https://youtu.be/l3U_xd5-SA8
Our bodies digest it too quickly as it's been designed to make money and make us want more.
Good comparison. An AI companion will never talk back or tell you that you're wrong. Kind of similar in my mind to how fast food restaurants won't serve you anything that's too "hard to swallow".
> An AI companion will never talk back or tell you that you're wrong.
AI can already do that if you're not using a super sanitized model. I've even seen an AI rickroll someone after a line containing similar words came up.
Abilities like that are less of a problem than getting the AI to correctly recognize which topics and parts of a text are important, and to keep that context for a while.
And there would definitely be a market for it, just like there's a market for spicy food or BDSM. Though maybe those aren't apt comparisons -- an AI that's not a sycophant might be more comparable to food with a little salt?
Making it always talk back would not be an issue, just like making it a complete sycophant would be easy. Any form of nuance would be hard. E.g., if I'm complaining about my job, it should talk back if I'm being unreasonable, but also take my current state of mind into account, etc. Maybe with thought chaining you could get something like this to work, but from my experience I doubt it would be very good.
Right. Ask any pickup artist, or any Aspie who has learned how to mask, how "real" or "deep" human connections are.
Hint: they aren't. The ridiculous concept of "connection" is superficial communication that has been enhanced by our own brains with serotonin and dopamine such that we are able to pretend it's meaningful.
Right, because the kind of connections we want and need in life is those that you would get from a pickup artist, not from a loyal friend and an affectionate spouse. /s
> Humans need real human connection, and AI is too artificial for that (duh). Having AI friends will be the equivalent of growing up on fast food instead of healthy home-cooked meals. Yes, people who grow up on fast food are still alive, but they are less happy and have more health problems (mental and physical). The food did the “job” of fueling them.
Lots of people derive enjoyment and happiness from activities that don't involve other people, and also from pets such as dogs. Plus, if you cannot tell the difference between AI and human, it may still be good enough.
> Really don’t like the idea that we will act as interfaces for the AI
When I use navigation on my phone while I drive somewhere it feels like I'm just acting as a human Zapier, mapping the phone's audio navigation API to the vehicle's steering API.
I love the fact your sentence lamenting the dumbification and impending laziness has a typo in it. It sort of undercuts your argument. That is of course unless the AI Boogeyman has already gotten to you...
The comment by Yuval Noah Harari seemed insightful to me. If you argue against a bot about a political matter, not knowing it's a bot, you always lose on a long enough timeline; i.e. you can never persuade the bot, but it can wear you down / eventually find an argument that works.
The only winning move is not to play. So I could see this having a chilling effect on all discourse
Aside from the whole skynet thing the above is what spooks me the most
The chilling effect may actually be a good thing, given that discourse these days is overheated.
There's a weird magic trick that social media companies have played on people to convince them that the text and images consumed on their websites are socially/culturally/politically relevant. Once it becomes clear how easy it is to fake that text people will come to understand how cheap and irrelevant "opinion" has become and this magic trick will become weakened.
We want an appropriate degree of emotional engagement with discourse, one that is 1-1 with how much discourse is happening. People being too angry is caused by people discoursing too much, and vice versa. There are opposite problems associated with too little discourse, but we don't suffer from those.
Things are hyper-polarized right now and there is no magic political synthesis that is right over the horizon if only we could just keep discoursing a little bit more. This is like a heroin addict thinking they'll cease being addicted after that last fix. The solution is to cool things down.
Personally, that's not been my experience at all. I often find that when I have two friends with highly disparate and deeply held beliefs, the intense emotions they associate to these ideas are due to them not actually engaging each other but, instead, taking their emotional cues from their respective ideological silos (where no real discourse is occurring), and then proceeding to talk past each other.
Learning how to actually talk to one another in good faith with humility and charity is a skill that comes with practice. Deciding to engage each other less can worsen the situation by allowing one camp's preconceived notions about another camp to go unchallenged by reality. This allows each camp to tell an increasingly vilifying story about the other, which increases, rather than decreases, the emotional charge between the two.
Engaging with someone is different from discoursing with them. "Engagement" is what social media companies say they provide -- but really they just offer "discourse".
I would predict that the chilling effect will be lesser for "unreasonable" voices such as trolls and extremists, and will be greater for the moderate voices.
This is not a good thing, as the past 10-15 years of social media has shown.
The entire thesis of TFA is that human psychological traits on societal scales are not prepared to handle an arena of discourse where that's true, that people will either be duped or completely check out of discourse per se, not limited to online social media.
No, it's a feature of the human mind that truth is considered objective independent of the lens used to acquire it. "Post-truth" sources in some arenas will be conflated with "real-truth" sources in others, leading to a blanket demotion of the perceived value and quality of truth. The whole argument from TFA is about human psychology, not about specific offerings of "marketplaces" where truth may be more or less maligned.
Dunno what "TFA" refers to here, but it seems like we're heading into an argument regarding epistemology, which is not a discussion that HN handles well.
IMO "truth" is a distraction here, because what is at stake are people's values, not their understanding of math and physics. When people worry about "post-truth", they're worried about liberal values no longer being the unquestioned default. It is absolutely a marketplace, and if people switching marketplaces en masse makes it harder to launch rockets and develop vaccines, then it probably means those activities are making people net unhappy. People are a lot smarter than we give them credit for, even the dumb ones.
TFA = "the f'ing article", something commonly understood on HN since time immemorial. Its snark is borne of the community's distaste for the kind of people who dive into comment sections without engaging with the very subject of, and reason for, the comment thread in the first place. The fact that you don't recognize this acronym calls into question your authority on what HN can or can't handle. But anyway, that's beside the point.
We are (well, I am, and TFA is) not talking about epistemology so much as the public's inability to engage with epistemological problems on systemic scales. Instead the limits of human psychology control how we as a society respond to these issues. Your argument is a distraction that remains uncontextualized within the conversation it finds itself in.
People are not dumb animals but you won't be able to engage anyone toward a solution on the basis of an argument about how they just need to understand more about epistemology. That's the kind of thing that people can only internalize via empirical means.
At this point it feels like you're being deliberately obtuse. I've been quite clear about the primacy of human psychological limits as the main aspect of the argument and you simply refuse to engage with this point. You haven't been very good about adding to the conversation, only diverting it.
> the public's inability to engage with epistemological problems on systemic scales
The public's inability? What about everyone's inability. No one deals well with epistemological problems on a systematic scale, not even the technologists who delude themselves into thinking that they're driving anything.
I am exactly talking about psychological limits. The difference is that I don't think the psychological limits of the creators are any different from those of the users. If anything, I think the creators are more psychologically limited than the users. This is because the creators need to explain to themselves why they are creating the thing - everyone else just puts up with it. When you say ppl will either be duped or completely check out of discourse, don't forget about yourself.
Right -- like how the stock market behaves nowadays with quant traders, which then motivated people to construct dark pools of liquidity where the real trading happens.
There are plenty of pushy humans who are prepared to talk you into submission. Check out your local Scientology or Lyndon LaRouche movement office. Would you talk interminably with a pushy human? How is the AI any different?
The differences that jump out at me are in cost, scale and accountability.
It's work recruiting people to your cause. It's expensive, and takes time and attention and resources. People have consciences that might flare up, they have loose lips, they need training and guidance and oversight.
Letting a bot loose for your cause costs pennies. It can be updated with a few clicks. You can reverse course or fine tune with a few sentences; you can even tailor it to the vulnerabilities of your victims with just a few data points.
The brainwashing inflicted on people by advertising and the like has had a tremendous cost to society, and the planet; absolutely incalculable. It would be prudent to be alert to the danger of all that being exacerbated 100- or 1,000-fold.
It's not just cheaper, it's eminently more reactive.
Your human recruiter is going to have their own biases, could be tired, may not know how to deal with personality X or Y, and may not learn.
The bots can track how they do by upvote, react quickly, won't get tired or annoyed, and can use existing data to filter and target those who are extremely vulnerable.
Hell, with real-time analysis by data-mining companies (cough cough FB, Reddit), they could even figure out if and when a normally stable-ish person is in crisis or having a bad week, and hit them at their most susceptible. 99% of the year they might be able to see through the BS, but this week things are hard, and what they're seeing in their feed just feels right, for once...
> If you argue against a bot about a political matter, not knowing it’s a bot - you always lose on a long enough timeline. i.e. you can never persuade the bot, but it can wear you down / eventually find an argument that works.
That's a useful insight. The one on one wearing-down process came from theology. It's sometimes called Jesuitry. Members of the Society of Jesus were trained to do that sort of thing, and they got so good at it that the whole organization was suppressed in the 1700s.[1] There are still Jesuit-run schools in the US, and they do tend in that direction. "Never argue with a Jesuit", said Richard Nixon.
Since mass media, large scale use of one on one convincing has gone out of fashion. It's not cost-effective. With large language models, it's back.
You may not ever be able to persuade a bot, but you can identify whether it is capable of rebutting your points or not. You can ascertain whether it is arguing in good faith and choose to end an argument if it isn't.
When you say “find an argument that works” isn’t this just saying that they’ve said something you find persuasive?
What’s wrong with this? How is this different to any other argument in good faith?
Because we have to assume not everyone can tell which arguments are bad-faith or dishonest, or even good-faith but illogical. People are fallible and can be persuaded by bad arguments. So the point is that, given enough time, a person will make a mistake and fall for a bad argument.
As someone struggling to destroy some of my own beliefs, due to their potential to waste a lot of my time, I have failed to do so.
The perceptions of other, disagreeing people ended up just being one more way to validate those beliefs, because you end up seeing the same pattern of mistakes over and over again.
It is a straw man to pretend that political discussion looks like an Intelligence Squared debate when almost all the time it looks like a food fight about nonsense.
The idea that the world will be worse off with fewer flame wars is simply wrong, IMO.
I would already rate the discourse I have had with GPT-4 as the best of my life.
I win if I learn something. It strikes me as highly perverse to think that the only way to win is to not play the game because you can't browbeat the bot into submission.
THAT, though, is what we mean by "political discussion": a bullshit pie-throwing contest until one side quits. A Twitter flame war. Yes, hopefully AI completely destroys that.
I've noticed some humans are willing to argue without giving up too. Sometimes online arguments follow a pattern where each response is longer than the last until one person decides it's not worth their time to continue the argument. In my head I've called this "argument by attrition".
From a Bayesian perspective, the only way to play and not lose is to not update your priors, at all, forever. (I suppose that "not playing" winds up much the same - you don't have any input on which to falsely update your priors.)
But a person who absolutely will not update their priors is not likely to do well in the long run...
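A toy sketch of that wearing-down process (all probabilities below are invented for illustration): a reader who updates even slightly toward a tireless bot, round after round, ends up nearly convinced.

    # Repeated Bayesian updating under one-sided evidence.
    def bayes_update(prior: float, likelihood_ratio: float) -> float:
        """Posterior odds = prior odds * likelihood ratio."""
        odds = prior / (1 - prior)
        odds *= likelihood_ratio
        return odds / (1 + odds)

    belief = 0.10  # prior probability assigned to the bot's claim
    for round_number in range(1, 21):
        # Each argument is only mildly persuasive (ratio just above 1),
        # but the bot never tires and never concedes ground.
        belief = bayes_update(belief, likelihood_ratio=1.3)
        print(f"after round {round_number}: {belief:.3f}")
    # After 20 rounds the belief is above 0.95.

The only defenses are to stop updating entirely or to stop playing, which is the point made above.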
Bots or no bots, that's why arguing on the Internet is pointless.
I could argue with you, "Havoc", for a while, and perhaps convince you that there are 2/3/7 genders, or whatever, and then, as sure as the sun rises, along comes another user. Or an infinite set of them.
Most social media is "man yells at cloud", quite literally.
A lot of times you don't argue with someone to prove to them you are right, but to prove to bystanders that there are different ways of looking at this. Refusing to engage will not make the world a better place if the only ones talking are idiots.
I talk about political matters for ideas, education, and stimulation (mine and the other person's), rather than to beat them into taking on my position. I'd certainly walk away from a conversation long before I'm worn down into taking on their views.
I think this interchange of thought and sharing of ideas is the true essence of discourse. That other thing is more of a verbal tribal battle. I can see issues with bots swinging the political landscape in ways that cause harm, but I think there is good reason to think discourse might flourish. Fundamentally, if the competitive arguers are discouraged, whilst the curious and conversational are encouraged, or at least not hurt, then discourse itself wins.
I don't think political swings are likely to come from improving curious, thoughtful discourse; these things are dwarfed by blunt, evocative approaches that reach broad segments of voting society.
We might find discourse flourishing in corners of the internet, but the wider internet is much more at risk from these approaches.
I think you're mistaken, at least with current AI. It's very easy to condition them into a new frame of mind if you speak to them in a certain way. They're also very prone to "trope": when you sprinkle in a few hints of something common, they'll fall right into line with the behavior associated with it.
We forget that AI becomes more human as it gets more intelligent. These are not computers programmed with a hard limitation anymore, they're actually more prone to being manipulated than other humans are.
This is a dumb idea from someone who does not use online forums. When someone keeps making stupid points against me online, I only get more dug in on my views. If anything, you would want to make a bot that says "Not X" so the pro-X side could feel better about how dumb the opposition to X is.
I've yet to see an AI bot that is both realistic and intelligent enough to keep me engaged (GPT4), but also can't be convinced that 1 + 1 = 3 (also GPT4)
'Oxford Dictionaries popularly defined it as "relating to and denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief."'
Since when did objective facts drive or shape public opinion? When did humans ever live in a "truth world"? Take the title and content of the article itself: does it aim to arrive at an objective truth, or to play on our emotions? When's the last time you consumed media and thought it was trying to instill some truth rather than make you feel a certain way?
We've always lived in a "post-truth world", because people are moved by emotions rather than truth. Humans weren't mentally ready for the worlds saturated by the printing press, newspapers, radio, TV, etc. either. But here we are. Go read the history of the printing press, newspapers, radio, TV, and the rest: it's the same story. Politicians, priests, journalists, academics, etc. claimed each new media technology was a threat to truth. Feelings and emotions rule mankind, not reason and logic. All AI does is make the post-truth world more efficient.
I don't think political or cultural issues are particularly amenable to objective truth anyway. You can't run randomised controlled trials for most of the main political or cultural points of contention.
Yeah, they are tangential concepts. Truth is only politically useful for informing decision making. Political frameworks and decisions are fundamentally subjective and would still vary wildly even if we could all agree on fundamental truths. It's the main reason I'm skeptical of technocracy.
On that topic, few things bother me more than the phrase "trust the science."
I understand the intention, but we should be encouraging skepticism and engagement with the underlying tools and methodology.
People will scoff at this, but you can find a ton of different "Flat Earthers" either accidentally proving the earth is curved using their own experiments (e.g. Bob Knodel) or even dying trying to prove it flat (e.g. "Mad" Mike Hughes).
Science is not some "pure" pursuit of truth, with no care for politics, economics, social influence, or biases. Technocrats genuinely scare me, as they often combine these two things, by claiming authority via "trust the science" type nonsense, while holding delusions of scientific purity.
It's more of an open question. We cannot use the scientific method to evaluate moral truth, much less prove it in the mathematical sense through pure abstract reasoning without any worldly evidence, but there could still be truths (in a weak sense of the word), at least for humans. As humans, our morals are primarily based on our feelings/instincts/neurochemistry/etc., and those are highly similar across the world (with exceptions and deviations, such as psychopaths). Depending on our terminal goal as a civilization or society, if humans really wanted to, we could try to build some moral "axioms" and build other political and cultural truths up from there.
Yes, I think you've touched on something here that goes all the way back to Aristotle's three appeals (logos, ethos, pathos).
His belief was that a good argument relies on a balance, between appeals to logic (logos), appeals to emotion (pathos), and appeals to what is just / ethical (ethos).
People typically claim logic or objectivity as a simple shorthand for "better"; they rarely mean formal inductive and deductive logic.
And while formal logic can tremendously enhance the structure and impact of arguments, it provides a hollow foundation without appeals to what is right and just, and little motivation for the reader without appeals to their emotion.
The horrifying thing about our species, is that when you get down to it we care little for formal logic. Caring requires emotion, and emotion often requires ideals about how things should be; consequently, ethos and pathos are necessary for logos.
The pursuit of logic comes last, or as you've noted, sometimes not at all.
I don’t like how they are using AI as an excuse to argue for increased censorship. What difference does it make what type of software the computer is serving information from?
This is a potentially sneaky way of getting around freedom of speech: arguing that if an idea is recycled by a language model, it is suddenly OK to censor it, as long as you’re a few degrees of separation from the original blog.
What the heck else is post-truth supposed to mean??
Eh, this is far messier than you're making it out to be, and something never contemplated when our laws were created. Had human-level AI agents existed when the constitution was written, I can assure you it would have been written differently.
For example, our conversation now. You take for granted it's between two humans, in fact you likely assume it's from someone in your own country. And based on the contents of our conversation, if we can sway each other, that the outcome of this conversation may determine how you vote in a representational democracy.
But instead let's say that I am a non-human entity. My entire existence is dictated by a script that says "manipulate luxuryballs to the following political views". I never tell you I'm a bot, and my disguise is one of a near-perfect human simulant. You think I went fishing last week. But I'll never feel. Never care. Never vote. "My" entire existence is one of hundreds of millions of political manipulation bots bought and paid for by whoever has the wealth.
This is what is coming and it's only going to get far worse.
It just sounds like it's going to get a lot noisier out there as the cost of effectively propagandizing someone plummets. More than anything, I think this will simply devalue all such behavior. But I think most people recognize, when it comes to censorship, that very convenient political musings will be used to justify masking any inconvenient truths an AI may uncover. That will be the riskiest "post-truth" reality if we let it happen: the ministry of truth will materialize under the guise of "common sense protection against AI".
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
> What the heck else is post-truth supposed to mean??
An environment where it's become difficult to tell what is true and what isn't. You could also phrase it as, what's real and what isn't. An epistemic crisis. I think we crossed into a post-truth world 7 or 8 years ago (it's been in process for at least a couple of decades) so it's not like AI precipitated this crisis, but it will accelerate it to new levels.
Right, we already live in a post-truth world where politicians and journalists omit or twist facts to support their agendas. It's not just politicians and journalists who do this, but they tend to have the most reach and influence on average.
By that definition, we've always been in a post-truth world.
Yes, access to truth lies on a spectrum. Post-truth references pop up in this moment particularly due to the quantum leap in dis/misinformation. Any layperson, anywhere, can soon convincingly fabricate media of nearly any form: images, video, audio, legit-looking websites, etc. To me, it's justifiable to regard 2023 as an inflection point for truth. My predictions are less catastrophic than many worried folk, but I definitely see why the post-truth moniker is appropriate. Crazy times!
We may not be ready but we may as well get ready. I would strongly prefer to work with skeptical people again. Skeptics can be convinced, they just need evidence.
Although I am genuinely intrigued by AI running out of things to ingest and moving on to AI-generated content. Is the snake starting to eat its tail?
But that's sort of the point, isn't it? It's generous even to call "skeptics" a group. There are just "people who have skepticism about X," many of whom have skepticism for wildly different reasons — many of them bad reasons — and therefore are not convinced by good evidence.
AI will produce whatever evidence a person needs in order to shore up their own "skeptical" beliefs. Same problem as social media, where finding 1000 other people who share your fringe beliefs looks like compelling evidence for your belief being true. Just now it's on-demand, hyper-personalized, responsive to your own doubts, and there's no chance of you realizing "ah, my comrades on the other end of the tube are actually idiots!"
I'd bet an AI product that tells a flat earther that "it's an open question" will be much more successful than an AI product that attempts to dispel that myth. That is, at least until an AI can effectively convince people away from their beliefs, which almost certainly will not happen via "calmly providing solid logical evidence," given that this is not usually an effective vector for persuasion anyway. Now you've got a new problem, which is a technology capable of convincing people of all sorts of insane things.
Some people might be a little more receptive to objective evidence coming from a robot, if it made their ego feel less threatened. After all, there's no danger that the AI tells other people you're a moron after arguing with you.
You describe dogmatism, not skepticism. Unfortunately, dogmatists often declare themselves as skeptics, but this is a rhetorical trick. Dogmatists doubt the competing beliefs, but not their own beliefs.
r/skeptic on reddit fits this perfectly. No one there is actually a skeptic, just dogmatic, and no evidence or counterpoints will sway them. If it were the time of Copernicus, the sub would 100% side with the Church, insisting the sun revolves around the earth and that Copernicus is "dangerous".
Most people believe what they are told at face value.
I stop just short of Feynman questioning the dentist on the evidence for brushing one's teeth: the possibility that there is this ritual, circling the globe as the sun rises, of people pointlessly scrubbing their teeth.
Of course, with such a high % of people clinically addicted to the groupthink and propaganda engine of social media, that is not how most people are going to think, or even be capable of thinking.
If Feynman were more popular, YouTube would ban that video for dentistry misinformation, even when the point of the video is to view things from a different perspective.
Skeptical people? You mean people who distrust everything? Because "post truth" doesn't encourage questioning reality, it incites total distrust. After that, society heads straight for dismantling the state, and all the violence that comes with it.
We already have post-truth behavior in our society; it just isn't well distributed at the moment. Imagine a society where every response looked like the stereotypical HN top comment: "Actually, the opposite is true..." Here the behavior is rewarded; presumably it makes the community better and results in a diversity of thought. I'm curious why this wouldn't be similarly rewarded in the offline world.
How are we going to provide that evidence? And make sure that evidence is actually true, instead of AI generated? Send them a link to a wiki page that has been mutated by an AI bot updating it? ;-)
A) Independently verifiable facts: for example, you can apply the scientific method to the hypothesis that the earth is not flat (make predictions that should follow from it, and test those experimentally); or
B) data provenance. If some crackhead says that the US government is conducting brainwashing experiments you might discard that, if the government answers a Freedom of Information Act request with documentation about brainwashing experiments conducted by the CIA then you have good reason to count them as evidence. And you spread the word about this by showing the proof to a reputable newspaper who write about it, or writing a book and publishing it at a publisher known for fact-checking what they publish.
Anything that isn't independently verifiable or has a chain of provenance is already hearsay. In the age of social media we got used to basing a lot of decisions on hearsay, so maybe we have to dial that back. But AI being better at generating hearsay doesn't mean it gets better at creating evidence.
But in the case of the Fauci emails from FOIA requests, no "reputable" newspaper reported on them. So people to this day dismiss them, since they did not come from a "reputable" source, despite the fact that you can confirm the provenance and authenticity of these documents with the agency that released them.
Writers should adopt two habits: (1) Sign your statements. If you say something, sign it, so that others know you said it. Cryptography is good at this. (2) Hash your citations. If you cite something, include a hash of it. This way, if the thing you cited is altered, readers can tell that that's not what you were citing. Note that this idea can be applied to audio and video, not just text.
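For illustration, here's a minimal sketch of both habits in Python, using the widely available "cryptography" package for signatures and hashlib for digests. The key type and output format are my choices; nothing above prescribes a particular algorithm:

    # Sketch of habits (1) and (2); key management and publishing
    # formats are left out, and all values here are illustrative.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # (1) Sign your statements so readers can verify who said them.
    signing_key = Ed25519PrivateKey.generate()
    statement = b"I wrote this paragraph."
    signature = signing_key.sign(statement)
    # Anyone holding the public key can check the statement is untampered:
    signing_key.public_key().verify(signature, statement)  # raises if altered

    # (2) Hash your citations so readers can tell if the cited work changed.
    cited_bytes = b"full bytes of the cited text, audio, or video"
    print("cite: sha256:" + hashlib.sha256(cited_bytes).hexdigest())

Distributing the public key, and deciding where the hash gets published, remain the hard parts; the cryptography itself is the easy bit.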
The rest of the responsibility falls on readers:
(1) Read (i.e. consider) citations.
(2) Read (i.e. consider) the sources of evidence you ingest.
Don't just find a video and believe it happened. Determine who has claimed to have witnessed those events.
For this, a public database of back-references might be helpful. But even without one, a decentralized solution is possible. Writers, whenever they cite something, could simply send their citation (and its context) to the author of the cited material. If the cited author attaches the back-citation to the content that was cited, then anyone who comes across the content can see who has cited it.
There is of course the problem that some back-citations will be rejected -- if you cite what I wrote to call it stupid, I am unlikely to want to share that fact with the world. But if what I wrote is sufficiently important, then hopefully someone will want to host a "nemesis" site, which collects negative citations.
A public database of nemesis sites would be helpful.
(3) Read (i.e. consider) the reputations of authors you read.
This is nearly the reverse of the last point. When deciding whether to believe what someone has said, consider what else they have said.
This is of course a hard problem. An author might be qualified in one area and writing about another. An author's reputation might be damaged for extrinsic (e.g. malice) reasons, rather than intrinsic ones.
But a statement's author is too important a context to ignore.
(4) Do cool graph-traversing investigations.
Determine who someone tends to cite. Identify misinformation cliques -- close-knit collections of liars who all cite each other (a toy sketch of the clique idea follows below). Identify readership patterns that make people productive.
We have seen how social network information can make a corporation money. As a society, I suspect there is a similar amount of value to be extracted from them.
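For what it's worth, the clique-hunting part is cheap with off-the-shelf graph tools. A toy sketch in Python using networkx, with invented citation data:

    # Toy sketch: flag "misinformation cliques" as groups of authors
    # who all cite each other. The citation pairs are invented.
    import networkx as nx

    citations = [  # (citer, cited)
        ("alice", "bob"), ("bob", "alice"),
        ("bob", "carol"), ("carol", "bob"),
        ("alice", "carol"), ("carol", "alice"),
        ("dave", "alice"),  # one-way citation, not reciprocal
    ]

    g = nx.DiGraph(citations)
    # Keep only reciprocal edges: pairs of authors who cite each other.
    mutual = g.to_undirected(reciprocal=True)
    mutual.remove_nodes_from(list(nx.isolates(mutual)))

    # Maximal cliques in the mutual-citation graph are candidate
    # close-knit clusters worth a human's second look.
    for clique in nx.find_cliques(mutual):
        if len(clique) >= 3:
            print(sorted(clique))  # -> ['alice', 'bob', 'carol']

Real citation data would be far messier, but reciprocal-citation structure like this is inexpensive to compute even at scale.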
Yep, that's only some of the methods which can be employed to ensure authenticity and integrity of information. A lot more are possible.
The parent comment mentions that an encyclopedia page can be modified by a bot. That holds true for Wikipedia, but we could create encyclopedias strictly edited only by humans.
Just have a prominent individual issue a top ECDSA identity, with the correspondence to the real person's name known only to him. He publishes that ECDSA identity somewhere, let's say on a blockchain, to be always available and secure from deletion. Let's say this prominent individual is the footballer Ronaldo. He publishes 1000 ECDSA identities to a public digital highway somewhere, with all the connections to real names known only to him. That set of 1000 identities is called Ronaldo's social graph.
From then on, whenever a child identity derived from a top identity edits a wikipedia page, its owner can stay pseudonymous if they like. No need to reveal their name (only Ronaldo knows that), but we know they are human, because Ronaldo has met everyone in person in order to issue the top identity. But pseudonymity is as far as they can go, because someone will always know their real name. A.I. actually spells the end of anonymity on the internet.
One more property of an organizational structure like that is that as soon as a person loses his wikipedia account for some reason, he can always get it back, because he can create a new ECDSA child identity and prove that his older account and his new one derive from exactly the same top identity. So he can always invalidate older accounts and carry the same data, karma, etc. over to new accounts.
The only downside of that organizational structure is that top identities which belong to the public social graph have to be absolutely secure. As soon as a person loses his top identity, Ronaldo has to issue a new one, but the encyclopedia cannot invalidate accounts not matching the top identity in an automated way if the real name is not published. That means a human on the other side has to be involved, and bureaucracy ensues.
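For the curious, here's a rough sketch of that issue-and-relink flow in Python with the cryptography package. The "certificate" here is just a raw signature over a DER-encoded public key, an invented format for illustration only:

    # Rough sketch of the scheme above: an issuer ("Ronaldo") signs a
    # member's top public key after meeting them in person; the member
    # can later bind a fresh account key to that same top identity.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    SIG_ALG = ec.ECDSA(hashes.SHA256())

    def pub_bytes(public_key) -> bytes:
        return public_key.public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )

    issuer = ec.generate_private_key(ec.SECP256K1())      # Ronaldo's key
    member_top = ec.generate_private_key(ec.SECP256K1())  # issued in person

    # Issuance: Ronaldo vouches for the top identity (name kept offline).
    cert = issuer.sign(pub_bytes(member_top.public_key()), SIG_ALG)

    # Account recovery: the member signs a brand-new account key with the
    # same top identity, linking old and new accounts pseudonymously.
    new_account = ec.generate_private_key(ec.SECP256K1())
    link = member_top.sign(pub_bytes(new_account.public_key()), SIG_ALG)

    # Verification: anyone holding Ronaldo's public key checks both hops.
    # Each verify() raises InvalidSignature on a forgery.
    issuer.public_key().verify(cert, pub_bytes(member_top.public_key()), SIG_ALG)
    member_top.public_key().verify(link, pub_bytes(new_account.public_key()), SIG_ALG)

A real deployment would need revocation and a stable certificate format, which is exactly where the bureaucracy mentioned above creeps back in.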
I wonder if the rise of skepticism will actually lead to people decreasing their use of the internet. Once the internet is full of crap (like 1000x what it currently is) and everyone realizes nothing is real or can be trusted, will people turn away and start reading books and talking to neighbors again? Could the new renaissance be talking to other humans again, because face-to-face personal contact would be the only form that can be trusted to be real?
We might also wind up kinda like the book "Rainbows End", where the internet just has too much crap to parse, so people subscribe to a "reality sphere" that filters the internet into a single view shared by anyone else who wants to see that sphere.
You are right. That could happen too. I guess I was assuming there would still be editors and staff, some infrastructure that checks the contents of a book before printing. But guess that could go out the window too.
Of course that could be what happens. An entire new industry of "Certifications", companies that "Verify" media.
If you exclude natural-world input, humans almost exclusively ingest stuff produced by other humans.
Just like humans eventually learn that not all input is equally trusted (for example the input "2+2=5"), so should AI eventually learn to sift through.
I don't think we're mentally ready for social media, even.
What something like Twitter can inflict on a person when it goes wrong is absolutely unprecedented, and we still haven't adapted to it.
Consider that going to the cinema, watching a movie, walking out, and venting to a friend, "Boy, this one sucked. $ACTOR_NAME did a really bad job with this one," is a perfectly normal thing to do.
But move that to Twitter and it can become part of a years-long torrent of hate highly visible to that single person. Even if what you think you're doing is communicating with your 10 friends. A retweet, a hashtag, or just the algorithm can magically make your comment part of an online mob.
Any evidence we are ready for mass media? The internet era suggests that there has always been a flood of lies and half-truths and there is an uncomfortable dawning realisation that the voters in most democracies would actually rather adopt peaceful policies if the media aren't ginning up a fight.
What Twitter does to someone is unfortunate. What radio and broadcasting resulted in for Europe through the 1940s was arguably worse. Coordinated madness is much more dangerous than individual lunacy.
On social media, IMO, people have too much identity fusion with their online accounts. Including me. I think karma points and all the clout people accumulate over time create an anchoring that is problematic.
In life, if things get toxic, the smart move is to just leave and avoid the conflict and the personalities driving it. But so much work by the Metas of this world has been done to make people nest in their accounts. This creates the belief that leaving and starting fresh with a new handle is a terrible prospect. And this is totally to the detriment of the user.
If you were in a cafe talking with a group, and someone started screaming at you over your personal opinion, and you found yourself getting upset, you'd probably just leave. For some reason that doesn't happen online, and I think it's due to the nesting.
Like if I say something on Twitter that people disagree with enough to not let go after a few hours, I'm just going to block them. I just don't have the energy to bother with randos beyond a civil disagreement. Or take getting banned on a forum because of some demigod-style rule: well, shrug, I'll just go slow, get another account, and let that one sail by.
In a way, I think 4chan gets it right with everyone being anonymous to each other.
The monetized platforms will fight tooth and nail to keep you in your same account. When you create a new one, the profiling starts all over again and they have to build up your information to serve you ads. The longer you remain on a platform using the same account, the easier it is to get you to click.
Twitter and Reddit are probably far less toxic than platforms like Instagram and maybe Facebook, for a number of reasons. The sheer volume of criticism can easily distort its amplitude, though there are self-reinforcing effects between individual critics.
There is a reason why successful actors and personalities have a PR agency. If you become an "influencer", or just the focal point of the latest discussion, you don't have that. It might work out, but there is a reason why such agencies exist. They shouldn't be necessary, but some people are quite enthusiastic about the attention.
We would have been far more ready if people had actually adhered to the advice to share personal information rather defensively. But the reward of attention was probably too large.
I don't even believe mobs are a problem. Some opinions on topics will always converge. There just needs to be a way to escape them, and in most cases trivial ones exist. It would be a huge loss if we restricted the net because some people wanted attention and got not-so-nice feedback.
I never understood why this makes social media hard. If you leave the theater and then go around town shouting that movie X sucked and actor Y was really bad, you might also get some responses, and maybe show up on the news as a crazy person, prejudiced, or some other adjective. So you don't; you tell your friends, some of whom might call you an idiot for not getting it, and others might agree. If you tweet it out, you're potentially asking the entire planet to weigh in. Well, have fun with that.
GP says "the consequences of speaking normally are radically different now" and your retort is "I don't get why this is hard, just always perfectly self-censor."
Of course, one would have to do so in a way that never offends anyone, ever, across the entire planet. Seems like an unrealistic response.
You could also send it in an email or text message to a limited set of friends, and then one of them screenshots it and posts it to the wider internet. You could also be caught on a recording or video saying the thing to a limited set of friends, and have it go viral on the internet.
In any case, "going around town shouting" is not at all akin to twitter. It is more like having a public board where you write things signed with your name. Anyone can look at it, but it takes zero effort to not look at it, and for most people, no one looks at it except their friends who want to look at it.
My point exactly. We are already living in a post-truth world in which likes and follower counts matter more than truth or the factual accuracy of "influencers".
While I am sympathetic to people who suddenly go viral, I have no real issues with actors who opted into the limelight seeing a stream of negative reactions to their work. They choose that and actively sought fame. And even leaving that aside, they put their work out there to billions of people. Those people should be expected to provide feedback.
A Starbucks barista didn't opt into that world. And they did not get paid a very large sum of money, in part to compensate them for (and let them pay other people to handle) the torrent of negativity.
Humans spent thousands of years in a pre-truth world believing all sorts of crazy things, and many of those societies produced great things and had people living normal lives. It's only been the last 100 years or so that people's perception of reality has been anywhere close to accurate. And even then, most people believe plenty of things that are false. So basically people and civilization are going to muddle along as they always have. Deepfakes, etc. will make some things worse; they'll probably have some unrecognized upsides too. John Boyd used to say "People, ideas, machines. In that order." It was true about jets and it's still true about modern technology.
The stakes are different. Back in the day, people lived more independent lives; the crazy things they believed in didn't really matter, as only the people from their close geographic area would be affected.
Giving conflicting information to people all around the world who are expected to interact with each other is a bigger issue.
>Humans spent thousands of years in a pre-truth world believing all sorts of crazy things, and many of those societies produced great things and had people living normal lives.
Also burned a couple of people as witches in the process...
There may be upsides to the ability to fake video and audio of someone (better CGI effects in films, for example). But in my experience when people refer to a deepfake they seem to mean that the fake has been distributed to confuse or deceive, for which I can't really see any probable upsides.
> So basically people and civilization are going to muddle along as they always have.
I agree with this, but consider the drawbacks to rampant disinformation and the proliferation of deepfakes (all this is IMO): it will make any video or audio deniable and unusable as evidence. Real images will be denounced as fakes. Fake images will catch on and possibly cause real damage. People will rapidly lose trust in most sources of news, entrenching established known quantities.
I feel like if we could reasonably put a stop to this we should. I don't think we can in general, though.
The only way through is to teach information literacy.
One of the best (but not the only) way to learn this is by studying the trivium/quadrivium – formal logic, reasoning, rhetoric. Once you see how information can be manipulated, it becomes very clear HOW MUCH of it really is.
Initially it can be maddening, but eventually it becomes empowering.
I am not scared of AI flooding the news sites with bullshit. We already have a fire hydrant's worth of bullshit content produced for consumption. Lies and fakes have coexisted with humans forever. People spread rumours; then we had books, press, radio, television, and now the Internet.
"But it's easier to produce lies/deepfakes today" -- true. However, the absolute cost of producing a lie per consumer already was negligible, and now it's even smaller.
People will recalibrate their level of trust in technology and move on.
The "*fakes" are those what really scary. Usually day by day, normal population won't interact much farther than miles / kilometers from their work / home / travel path, the rest of info we got from digital media. Now that if we cannot trust the digital media, our ability to gather information for making decision will get worse.
Let's say that nowadays I know that US has gun/ mass shooting problem from the digital news. But how when someday the media reporting that multiple countries, such as some EU countries and Canada also have mass shooting due to the change of gun laws, sooner or later we'll won't know the truth anymore.
This is very dangerous because it can be used to manipulate people to accept what's bad as normal, and can argue then provide proof that's generated by AI.
You don't think that everyone having to "recalibrate their level of trust" in something as pervasive and fundamental to modern society as "technology" is impactful? Even just photos, videos, audio recordings, and phone calls becoming utterly untrustworthy would be pretty significant. The cost of a widely disseminated fake being low per recipient is altogether different from the cost to produce a fake specifically targeted for a single use approaching zero.
You can see this with information sources in our lifetime. Cable news networks and infotainment channels like the Discovery/History Channel turned to garbage so people stopped trusting them. The same will happen to the internet / social media sites.
Sure, but scale was limited. Comparing post-LLM production of rumours, fake news, and propaganda with previous methods is like comparing pre-industrial hand manufacturing with modern mass production.
You can now turn electricity directly into propaganda furthering your cause, which is unprecedented.
Each generation will be OK with the tools they grew up with.
I think of my now-deceased grandparents. They had to be closely monitored to avoid falling for mail-in scams, of all things. They were old enough that mail was a trusted source of information in their upbringing.
I like to think about what will tip us over, as technologists. Venturing into sci-fi a little, I think brain-computer interfaces are going to be impossible for us to adapt to, if they ever arrive. Imagine spam thoughts. We're not trained to ignore intrusive thoughts. But I agree we might just not be able to handle a website that constantly shifts its content to keep us engaged, blurring fact and fiction into the perfect narrative to keep you clicking.
> They were old enough that mail was a trusted source of information in their upbringing.
Rose-tinted glasses.
The scams we all know-and-love from our e-mail mailboxes today: romance-scams, advance-fee fraud, pyramid-schemes, and more, were all prevalent in the physical mail in decades past.
That post-truth world already arrived ~6-7 years ago. Social media algorithms powered by primitive iterations of weak AI were unleashed upon an unsuspecting world, and the effects are... not great.
Stephen Colbert coined the word "truthiness" nearly 20 years ago, providing a name to a trend that had been well underway years before that.
"Well, anybody who knows me knows that I'm no fan of dictionaries or reference books. They're elitist. Constantly telling us what is or isn't true, or what did or didn't happen. Who's Britannica to tell me the Panama Canal was finished in 1914?"
I see it as a progression from 24-hour news media to social media to a complete abandonment of "truth" in public discourse. Of course the core concept is ancient and had been in play all along, but technology has really amplified it and made it increasingly personal. AI may push it over some kind of edge into the abyss, but only because we've spent three decades willingly readying ourselves for it.
I feel that you could replace "AI-Saturated Post-Truth World" with any number of technological changes over the last 100 years and find a similar article at that time. I am impressed by LLMs and these more powerful AI agents, but I also have confidence that over the course of time their capabilities will become utterly boring and commonplace to my growing kids. In a generation their place in society will be as unspecial as a cell phone. The grander picture of the whole system is that we are building a society utterly incompatible with being a regular human person (The way we existed 3000+ years ago). I have no answer to that other than to identify that we already built a world no one is mentally ready for.
Smartphones have dramatically changed the way humans relate to each other, with incredibly profound impact on mental health and social interactions. The negative impact hasn't gone away just because part of the population hasn't seen a different world and can't imagine anything different.
The same may be true of LLMs and all the hallucinated information they share. I would respect and listen to what experts in psychology have to say here by default.
Agreed. The argument of "we've always had XYZ and we've done fine" always conveniently ignores acceleration/rate-of-change as a factor -- as if the printing press, radio, tv, internet, computing, mobile revolution, AI etc are distinct events that we "got through" with no relation to each other, when in reality they are just cycles in an ever tightening feedback loop.
> Michael Graziano, a professor of psychology and neuroscience at Princeton University, says he thinks AI could create a “post-truth world.” He says it will likely make it significantly easier to convince people of false narratives, which will be disruptive in many ways
Significantly easier? I would have thought that it would get harder to convince people of anything.
Sometimes I think the fears of extremely convincing AI-generated post-truths influencing public opinion are greatly overblown. People are already brainwashed by poorly made, low-resolution JPEGs shared by bots on social networks; the entire AI stack is simply wasteful.
Now, instead of 3 tasty lies to choose from, you're going to have 100 to choose from. Pick your favorite flavor!
And, the more flavors of lies there are (and the more they are able to manufacture "evidence" to support the lies), the more effort it takes to figure out what's actually true, and the more people give up. That's the "significantly harder" part - if people have given up, they won't buy a lie, but they won't buy the truth either.
In C. S. Lewis's The Last Battle, Aslan says of some dwarves, "They are so afraid of being taken in, that they cannot be taken out." (Quoted from memory, may not be word-for-word.) If people are so afraid of being suckered by a lie that they can't be convinced of the truth either, then those people are at "post truth" in a very real sense.
So we're left with either confirmation bias driving you to a comfortable lie, refusing to believe anything, or a huge and increasing amount of work to sort out what's actually true. The path of virtue grows harder...
Just shutting off from the Internet is the likely result, IMO.
I'm halfway there already. I think social networks (HN is better, but not great), dating apps, hell even stuff like automatic parking apps or online shopping, are just gradually sucking the joy out of what it is to be a human.
For the most part, nowadays, I pretty much just use my phone to organise analogue fun.
Once places like HN become obviously just all-bot then there won't be much reason for me to even go online other than phone calls and messaging.
Truth has never been that important. Humans spent thousands of years thinking a giant man living on the mountain threw lightning bolts from the sky, or the spirits of their ancestors watch everything they do, or fairies and gremlins and whatnot cause mischief. They still managed.
Lies might be successful, but that doesn't mean people don't care about truth. A lot of mythology served the precise purpose of explaining something that people couldn't otherwise explain, like lightning. Once science provides a more compelling story, it can become hard to go back.
Of course a lot of the stories science tells are difficult enough that most people haven't internalized them. But, for instance, nobody thinks lightning is thrown by a god anymore -- and that's without even understanding the details of meteorology. It just makes more sense to the ordinary person that stuff crashing into stuff in the atmosphere would make electricity. That is, it's a compelling story, and I don't expect us to backpedal from it.
I've heard so many people say "that's not my reality" or "that's not my truth" and actually be serious that "truth" can be different for each person... I was horrified each and every time.
Usually, when I hear people say that, it's more or less "agree to disagree" when they hear someone else saying falsehoods and don't have the energy to deal with it. Or, when people have a disagreement or argument, each of them have their own "truth", that is, the story of what happened from their point of view. Personally, I've never heard anyone confuse those phrases for "objective reality is whatever I want it to be".
When I think of "post-truth" I'm thinking of systems that people mistakenly lock themselves into, where they're fed simplistic and surface-level facts that have to align with the system's goals. A classic example being an activist for a political system (1960s Maoist or whatever)...
Why can't I use AI to analyze the immense amounts of content I'm being faced with so as to gauge bias and innuendo? Maybe ML could help me parse this article to understand what milieu this author belongs to and what his biases might be?
The online world is getting increasingly dystopic while the offline world is being deprecated at rapid pace.
The article is part of that dystopia, the collapsing trust, the lack of honest, down to earth discussion of what is going on.
There is no AI; there are algorithms and data, and people angling for advantage in both the privileged collection of data and the unencumbered application of algos to affect people's lives.
In a sense there is nothing much new, just an intensification that has been carefully choreographed into a mass hysteria.
So the problem I have with this sentiment is that the entire point of news organisations is to trace the validity of claims.
The press have evolved a bunch of mechanisms to prove or disprove points in a story.
AI doesn't really change this.
Sure there are fakes, and yes you can create thousands of bullshit websites/text. But that was always true.
Yes, GenAI images are more concerning. But we've had Photoshop for a long time, and some very talented people. Yes, it's slightly harder to spot a GenAI image, but with the correct tooling, it's pretty trivial.
The issue is, we have a crisis of funding for good quality news sources.
News is a freebie now. Which means that the news you get is either much more partisan (because "they", whomever you find creepy/shadowy/disagreeable, are smeared all over the political spectrum) or simply doesn't have the time to do basic research (see standard tech journalism breathlessly re-formulating press releases; see Apple Vision Pro).
So AI "propaganda" is a side show; the much bigger risk is a further dropping of standards amongst the assembled ranks of the press.
I'd argue we've been living already in a post-truth world.
It's bizarre to think that you can know the truth on the internet.
Because by the very mechanism of language and concepts, you're always abstracted away from truth.
Not to mention layers of bias, management, interpretation and re-interpretation.
Truth is experience, everything else is a belief.
There was an idea that we were unified before the internet, but what that meant was that most people were receiving the same propaganda.
The information channels were fractured by the internet, meaning that control across it was impossible.
I'm not saying this comment is even truth, it's another opinion biased by my own world-view, beliefs etc.
But trying to find truth on the internet is like trying to find a tasty book.
I went to CSUDH many many years ago (early 90's). I had a few interactions with Larry Rosen, mentioned in the article.
I started off working in the CS labs and eventually moved on to running the systems for extended ed. He was kind of anti-tech even back then and we butted heads a bit. I was just some punk kid with a big budget to buy all the latest Apple products and having tons of fun. He was kind of an old fart who struggled to make his Windows 3.1 winsock connect to 'the internet'.
"I get concerned about the fact that we just blindly believe the GPS."
Him coming out with views like this is no surprise. Fear of GPS? Come on.
Wired is really digging to find some people who support their storyline.
A somewhat concerning observation: there seems to be an inverse relationship between the probability of a hallucinated answer and the number of authoritative sources. E.g., if something is stated in a manufacturer's manual (which has been digitized and is in the corpus) but not quoted anywhere else, the probability of getting hallucinated but convincingly worded "facts" instead is very high. On the other end of the spectrum, where we enter the realm of big numbers and content quoted from quotations, the probability of getting a popular but not necessarily true answer is also high. There seems to be a clear trajectory: replacing authoritative information with hallucinations and amplifying them into popular doxa, especially since generated content is much more effective at traversing the contested middle ground. Also, enter the new discipline of truth engineering optimization (TEO).
When would we ever be ready? A post-truth world is a post-human world so it's no surprise several commenters here seem to be salivating for it.
The Demon-Haunted World remains as prescient as ever. Machines that ought to have advanced knowledge will instead kneecap civilization back to a dark age, if we're lucky.
Humans are never "ready" for the next crazy technological advance. I'm fond of this concept that I heard a while back, that I'm probably going to butcher:
If a technology comes out before you were born, or while you're very young, it's always existed. If it comes out when you're in your 20s, it's new and exciting. If it comes out after you're 35, it's scary and is a harbinger of ensuing disaster.
We always manage to figure it out. It's a cat & mouse game, and the next generation will probably manage it.
I don't mean to be casual about this, but does anyone think of how miraculous it is that in the past 80 years we haven't had a nuclear war?
We have always needed to navigate bullshit. We have the tools, we need to formalize them in code. This is an opportunity to improve our identification of bullshit of all types, human and machine generated.
Humans are not mentally ready for anything beyond tribes on the African savanna. Everything else is achieved via complicated brain hacks or social structure hacks that function like cognitive dongles to let a tribal hominid interface with a radically different world.
We just need to develop a whole new battery of hacks for this world. It’s happening slowly.
I’m not convinced that AI is going to make things qualitatively different from social media. It may allow bad actors to produce a lot more bullshit but it’s not like an environment saturated in bullshit is new.
Whenever we have invented new mass media (books, radio, television, social media) it has taken a generation or two to manage the impacts, on a societal level. And in many cases, the intervening period was fraught with conflict and discord (religious schisms, rise of fascism, etc).
The big problem is not whether you or I are ready. The problem is whether the "average joe" is ready for what's coming. No good in being ready if the rest of society is tearing itself apart and drags you into a war.
Controversial theory, most of the people who claim to be "ready", and seem oddly excited about the possibility of upheaval and turmoil, are already lost down some sort of algorithm induced ideological rabbit hole. Conveniently, one of the symptoms of hosting an internet mind-virus, is that you don't think internet mind-viruses are all that bad.
I would call that attitude pre-adult. It's drawing on false analogies to make an edgy point. It's not about "computers lying" (they've been lying since the first program), but about widespread counterfactual information and the disappearance of the ability to distinguish between true and fabricated evidence. That's not something you already live in, unless you're psychotic.
> disappearance of the ability to distinguish between true and fabricated evidence
As a single point to illustrate what I'm talking about: a lot of people who say things like you have here also took the Steele dossier hook, line, and sinker. Those same people refused to look at the DNC's emails, or Clinton's emails, or Hunter's emails, calling those disinformation. I think it's time to stop pretending people's politics have much to do with evidence.
The rapid development of AI could lead to people generally being frightened away from digital products. In my own social environment, I see more and more people who used to be very enthusiastic about computers turning away to more analog entertainment and work.
Blindly developing technology only as part of the "power play" without solving real problems is no longer justifiable. AI is starting to create significantly more problems than it has actually solved, comparable to the fossil fuel industry.
The sheer audacity to claim that the 'fossil fuel industry' has "created significantly more problems than it has actually solved"!
If you leave the SF bubble for a split second and think about the foundations of modern industry, you would realize fossil fuels have created tectonic value for society; that's why transitioning away is so, so hard.
All the 'real problems', like housing costs, medical costs, education costs, occur in the most highly regulated areas of the economy. Not technology.
"The sheer audacity to claim that 'fossil fuel industry' has "created significantly more problems than it has actually solved""
One of the problems of modern discourse is that an idea or meme takes hold and has a life of its own; it becomes the center of attention without reference to past history, past events, etc.
I'm not a climate change denier, nor do I disagree that using fossil fuels has huge environmental consequences, but no rational person could deny that we owe our whole modern life to fossil fuels. The Industrial Revolution absolutely depended on coal; it has been the lifeblood of modern society for at least 300 years. It is simply unimaginable to envisage modern life without its existence.
Moreover, what's lost in this debate is that coal is not just a source of energy; it is also the source of many other useful materials. When I was learning about this decades ago, we were taught that coal was the source of so many useful products that we rounded the number off to '1000' to signal its importance.
In fact, coal provides many more than 1000 useful products; the pharmaceutical industry couldn't do without it. The previous poster should contemplate the fact that even common old aspirin comes from coal; indeed, many pharmaceutical texts place aspirin in a class of drugs known as the coal-tar antipyretics.
Wild assertions of this type happen when we stop teaching history, how modern society came about, and so on. A dose of philosophical reasoning and logic ought to be taught as well; that way reason may hold back many from uttering and spreading crap.
While I'm with you overall (cheap transport alone has made our lives easily many times better), there's at least one giant problem that occurs mostly due to lack of regulation, which is environmental degradation.
Finding new optimization techniques, understanding genetics, developing new science can all be intellectually defensible.
Of course it is not the only factor: the asymmetry in computing and data access between big corporations and the individual, and the generative models generating spam, as you well said, are all factors. But just like with the oil industry, there are also some good consequences. Which ones will dominate is left as an exercise for the reader :)
These articles almost always have a "except for me and my friends" implication. The author always implicitly sees themselves as above all the vulnerable people and they must paternally protect them by restricting their access to the dangerous technology.
"We're not ready" isn't a relevant statement because it's here/coming no matter what you do. If you want to say, "here's how we get ready", I'm all ears.
Were we ever ready for a "truth world", where ideology was considered an everyday necessity and people willingly paid for it in the form of newspapers and TV?
I welcome this kind of post-truth: at best it will make people better at evaluating information on its own merit, at worst it will make them better at evaluating its origin.
Humans weren't and still aren't mentally ready for a 'post-steam engine world', but here we are. BTW, I've used both ChatGPT and Bard enough to know there's a lot they don't know, but boy are they confident in their wrong answers! Reminds me of some people I know!
"Post-truth"? What a bloody insult to all the effort it currently takes to fight past all their "truth" to find truth. Makes me start to think it's a less than sincere. At least due to their new signaling marketing slogan to fans, I can now easily identify their lean and possible motive. Tricks ya know, to deal with "truth".
Complaining about Google Maps and navigation is a really bad example of over-reliance on technology. What should we do, navigate by the stars? And most of their quotes aren't even really related to the complexities/strategies of identifying/reexamining "truth" and dealing with all the unsavory, if necessary, actors. Larry Rosen is just trying to sell his pop novels to the masses. Michael Graziano's expertise is in cortex and motion, so I don't know what expertise he brings to the table, though I suppose it's sincere. Michal Kosinski is at least sorta in the area, but also doesn't support their hypothesis beyond saying it's going to have a major effect going forward.
Problems that don't exist for people living mostly disconnected. Go surf, have a family and friends, do jiu-jitsu, read books, and restrict your interaction with AI to just the hours when you have to work. Life is too simple and too short to lose time on such issues.
It seems to me that we have been living in this world for some time now. AI explains the sheer amount of political astroturfing we have experienced in recent years; it's just that the cat is out of the bag now and there's no one to be held accountable.
We don't need AI for that; people believe all kinds of crap, and honestly sometimes rightly so, because governments all over the world have become really unreliable in the information they give out. As an example, my government just recently updated the recommended food intake because they want to lower our emissions. Perhaps the goal itself is nice, but that shouldn't, IMO, have any effect on which foods are healthy for you. The result is that their food recommendations cannot be trusted anymore, and you start doubting whether other communication from the same authority can be trusted.
The problem with many AI shops is that they are already biased, which has been shown on many occasions, so I believe many will not trust their AI companion. GPS data doesn't try to shove some political opinion down your throat, and it has historically worked very well over many years, so that is the reason why everyone trusts it.
I would argue humanity can plenty handle truth-ambivalent sources (what is state-sponsored media, after all?). However, AI sucks on an aesthetic level, and it's a real shame that we now have to just put up with it.
This isn't a new phenomenon, and it does not differ at all from a normal non-tech person getting their information from Google Search. Hidden motivations for this push are left as an exercise for the astute reader.
The article is saying we're not ready for an AI information world, not that it will end humanity. Also, all the candidates you've listed are valid; we're just not there yet. Some things change gradually, until they change suddenly.
The years were when I concluded it was possible they would eventually end humanity, not the year I thought it would happen.
And regarding fake news/lack of truth determinability -- I'm not summarizing the article, I'm offering my own opinion. In the worst case, we could lose modern humanity if we reach a point where no one can determine the truth.
We've been living in an I-saturated 'post-truth world' since at least the time the coffeeshop was invented, and likely since the dawn of time. We'll be fine.
Undetectable bots have been spoiling the internet since its birth: first our email, then our blog comments, then the review sites. LLMs will definitely contribute to the next phase of this, not only elevating the level of attack on those previously vulnerable areas, but also pushing social media and search engines far past their breaking point.
We need decisive online human verification technology. Nothing else is going to address this. Right now everyone is adding paywalls to everything and leaning on the financial system as a proxy for human verification. However this will not solve the root cause, because some bot activity is worth the price of admission. Internet businesses are fine with this, and will look the other way just like they did when engagement was the lifeblood of the internet economy and bots provided engagement.
I think we have already been living in a post-truth world ever since laws started to erode any journalistic accountability for news and media corporations. At least it's like that in Australia; just watch any recent friendlyjordies (yt) investigation to understand how the laws protect even the most incredibly biased and unfactual news stories.
Sure, more people might be tricked with the prevalence of AI but those people usually can't read between the lines anyway.
As if we don’t live in a sea of lies already; they are called advertisements. We are ready, but are you ready for what happens when the information police no longer have control? The anger is growing already as the truth is revealed, and there is no putting this genie back in its bottle. The Internet has given us the truth, and the truth has set us free.
Humans are already great at generating propaganda and lies, in written and spoken form. At scale. Coordinated.
And almost all of it "goes into the can" and becomes static propaganda artifacts which can be, and are, redelivered and re-consumed over and over again by the target audiences, at near-zero hosting cost. No AI/LLMs needed.
(Disclaimer/credentials: I've worked for the US State Department on a closely related & overlapping problem in this space.)
I do feel that the increasing realism of trained NN-generated photos and video, intended to smear an opposing political candidate, is the biggest "new" threat. I had expected it to be deployed by Trump/GOP/Putin against Biden/Dems in the 2020 election as an "October surprise." Shocked it was not; well, not really (IIRC).
I do expect they'll deploy it in the 2024 race. Especially in the closing weeks/days (Oct surprise), in order to gain biggest (however small it all adds up) sentiment shift with smallest possible time window for fact-finding efforts to debunk it.
As villains they'd be almost comically incompetent if they did not try anyway.
Bold of them to assume the truth is currently regarded as sacred.
You can go on the front page of r/all any time of day and find news stories that are completely fabricated or graphs using made up data. I don't think AI could realistically make it any worse
AI may not make the content much worse, but it will make the engagement artificial: AI bots astroturfing comment sections and Twitter to manufacture consent and sentiment for whoever pays for it. I mean, we already have that now, but it will be much harder to detect.
It may cause many to just unplug from it all. Assume everything is fake, give up on "truth" and live their own local lives without being plugged into the news cycle 24/7. Maybe it sets us free.
I've been goofing off on Twitter during down time between contracts, and it's truly amazing, if you go to a political "discussion", how many blue-check "verified" accounts are AI. I'm sure it's not just regular bots, because they've responded to some of my troll remarks with a decent level of acuity and context. Going to their feed, it's dead obvious that they are AI bots though, because their only reason for existing on there is pure mayhem and misinformation. I don't think Musk was talking about getting rid of bots; he was just talking about getting rid of non-paying bots.
I'm soooo tired of bullshit like this. Fear of your job, fake news, yadda yadda, I get it. It is just boring. Almost as if written by an AI.
The whole post-truth, fake-news agenda is foreign to me even without AI, I don't know anyone save a few "flat-earth" like freaks who would fall for the classical disinformation trope.
No, nothing substantial follows. This was just a rant :)
PS: I feel the talk about disinformation, fake news, etc. does more to serve the stated purpose of fake news and disinformation than disinformation and fake news themselves do. I'll try to substantiate that claim in the near future, I promise.
I’m skeptical about these fears. We already have a disinformation problem. If AI can be used to exacerbate the problem, why can’t that be counterbalanced by using AI to fight against disinformation as well?
Just like any other sector, there will be a small handful of leaders in AI and that’s what most users will interact with. Couldn’t Bing/Google modify their models to underweight the content from websites regularly flagged to have disinformation and vice versa?
AI is unrelated to post-truth. We have been living in the era of fake news since 2016, when Russian trolls started to meddle with elections and President Trump created his legendary alternative facts. AI may exacerbate these issues, but not by much. Anything AI can do can be done by human liars as well. If humans are not mentally ready, then shit has already hit the fan, and blaming AI and its research, or pointing to regulation, is not constructive.
Or going back even before that, when media drove an invasion of a different country based entirely on lies. And no one involved has been prosecuted for crimes against humanity.
I hate the term "post-truth". As if lying, rumors and disinformation didn't exist before that.
I can't count the amount of bullshit I heard on the news before social media was a thing, and fact checking was much harder. There is no "truth world", just an arms race between truth and fakes. AI tools can generate fakes but also detect them. Social media can be used to spread rumors but also cover an event like never before.
We've never been ready! The 'mid-truth' media was already so full of propaganda and misinformation and inaccuracies that there was no hope of sorting through it all reliably.
People have been experiencing and complaining about information overload since the invention of the printing press, and then the telegraph, and on and on.
Yeah... this is a very old problem, one that was also there when newspapers/media began. Most things people are exposed to are lies; now the capability is coming into the hands of the ordinary person instead of the elite, and now it's a problem :)
People don't get "ready" for new things. New things happen, and it takes a generation for people from before that tech to die out.
That's what's happening with social media. Boomers cannot learn to live with it, we just have to wait another 30ish years. The same will happen with AI.
Everything is simply a tool and it always depends on how the tool is used.
The problems are automation and scale, low barriers to entry for "negative outcome" usage, the absence of sociocultural system developments adequate to adapt productively, total lack of forethought/planning regarding consequence in every area, and, well, humans.
Falling back to the "it's just a tool" thing is no more useful than saying nuclear weapons are just a tool and it depends on how they're used. It's true but irrelevant.
People are just asking to be ruled. Cool talk to your companion AI while actual psychopaths achieve power in the real world.
Humans are apparently ready for it. There is so much enthusiasm, and HN is ahead of the curve. This article is 100% wrong. The coming world is not post truth, and people are ready for what’s coming. They want enjoyment rather than freedom.
The church had the same complaints when the printing press undermined their monopoly on Truth. However, it's now clear that the technology gave rise to the renaissance, and the greatest discovery and propagation of truth that humanity has ever seen.
Invoking the name of another revolution doesn't automatically make this new one equally beneficial. I don't follow your argument: what monopoly is going to be undermined here? What information will the AI give people access to that wasn't already available?
I think AI will give broader and more convenient access to much of the same information that is technically already available, but requires additional work to discover.
For example, the books printed by the printing press disseminated information that was already known to tutors, scholars, etc. and often existed in handwritten manuscripts. Today, we wouldn't call that accessible, but that was the standard for accessibility (or word of mouth) before the printing press.
AI makes it more likely that people will find information, because it can do much of the work of searching billions of sources and synthesize the information in a far more convenient way.
That's not to say the information will always be true, just as the information in books is not always true. However, readers currently have far too much faith in authors. People believe claims because they're made using professional-sounding words, published in a respected newspaper, by a human author, and accompanied by photographs. None of that is particularly good evidence of truth.
The last time this happened on this scale, people became more sophisticated consumers of information, with a healthier level of skepticism, and simply came to have less confidence in claims that they did not have personal knowledge of. That was a good thing, and probably will be again.
That's not really relevant to my point, which was that new means of spreading information are capable of spreading both true and false information, and that history has shown that the ability to spread false information does not prevent the positive impact of true information.
So instead of a papal bull, how about a stack exchange with links explaining how (even if the church did not oppose the press itself) the church did attack its users for publishing information that the church was opposed to (i.e. claimed was false). https://history.stackexchange.com/questions/42677/why-did-th...
Perhaps the Church's complaints were more directed at the dissemination of information that the Church considered to be false... like the Wired article's complaints about AI's potential to spread false information. My point is that free speech is good. Free speech may include lies, but people are smart and even lies help listeners learn.
So, your point is not the printing press but "the church". Well, in this case, it isn't "the church", so the analogy already fails there.
> history has shown that the ability to spread false information does not prevent the positive impact of true information.
And that's enough? So we can forget about The Protocols of the Elders of Zion and the stab-in-the-back myth and the misery they imparted, because something else overcame them? Or closer to home: should we simply ignore things like Cambridge Analytica and the meddling in the US elections, even though it moved, and still could be moving, the USA towards a fascist regime?
> people are smart
No, most aren't, and even what's normally considered smart people can believe in hoaxes and act to the detriment of society.
> even lies help listeners learn
You lost me there. Learning is not a goal per se. Learning from malicious information can lead to malicious outcomes. If you want to invoke a history lesson, that would be a good one.
It sounds like you're simply arguing against free speech.
Should people be allowed to purposefully disseminate false information? Or should speech be regulated? What regulations do you propose as to the speech that people should and shouldn't be allowed to make?