Jaron Lanier on the danger of AI (theguardian.com)
352 points by atchoo on March 23, 2023 | 364 comments



Anyone questioning the author's intention should read one of his books, "Who Owns the Future?"

It was written some time ago, and I think Sam Altman read it as a handbook on power concentration using AI rather than the human-centric approach it was laying out.

Personally I wish Lanier wasn't as right about many things as he is, because I lose a little faith in humanity each time.


I have nothing but respect for the chap.

I never wanted to respect him, as I always thought he was one of those "too good to be true" people, and was mostly a paper tiger.

It turns out that he's the real deal, and has been right about a lot of stuff.


There are lots of parallels between Jaron Lanier and Richard Stallman. Cory Doctorow is another one I would put in that list, as well as SF writer Charles Stross.

They are all pretty good at looking ahead.


Such as? I have my skepticism too.


I’m not particularly interested in going into a back-and-forth on this.

He’s sort of like Edward Tufte; lots of ego, but earned, and not for everyone.

I like your job title. Always up for more “human,” in our design.


I actually agree with his perspective. AI is simply another huge leap in technology that directly affects social order. We only need to look at the effects social media has had on society and just amplify them to perceive what the likely outcomes may be.

This aligns very closely with my own thoughts, which I have written about in great detail. I foresee the societal impacts will be exceedingly disturbing long before we ever reach the concept of a Singularity.

https://dakara.substack.com/p/ai-and-the-end-to-all-things


Regulation of social media is still woefully behind even in cases where we do know there has been a hugely negative impact (Myanmar & Facebook, for example). And there are approximately 5 people who exert massive, unregulated power over the shaping of planetary discourse (social media CEOs). If social media is too big to regulate, AI regulation doesn't have a chance in hell.


Yes, additionally I find it somewhat ironic that AI researchers talk a lot about "power seeking" behavior of AI as a primary concern.

However, seemingly overlooked, AI is itself power and we should expect that "power seeking" humans will inevitably become its custodian.


This a thousand million times.

The mislabeling of LLMs and diffusion models as "artificial intelligence" is probably the biggest marketing blunder in the history of technological progress, one that could ironically affect the course of AI alignment itself.

Smart thinkers and policymakers are going to waste their time framing the problems the tech poses in terms of "an uncontrollable intelligence out to get us" like it's some kind of sentient overlord completely separate from humanity. But super-advanced technology that can operate in a closed loop (which could be called AGI depending on who's asked) isn't necessary for humanity to crater itself. What's required for such tech to come into existence in the first place? Humans. Who's going to be using it the whole time? Humans.

And I think there's still a lot of disruptive, world-changing tech to be discovered before AGI is even a remote possibility. In reality this tech is probably going to be more like a superpowered exoskeleton for CEOs, politicians and the like to sway public discourse in their favor.

"An uncontrollable intelligence" already describes the source of a lot of our current problems... that is, ourselves.


"An uncontrollable intelligence" already describes the source of a lot of our current problems... that is, ourselves.

Yes, precisely. One of the best quotes I've seen was "Demonstrably unfriendly natural intelligence seeks to create provably friendly artificial intelligence"

The whole ASI alignment theory is a paradox. What the AI researchers don't realize is that they are simply building an uncomfortable mirror of human behavior.


The meaning of "artificial intelligence" has always just been programs that can get results that previously only humans could do, until the moment programs can do it. For decades AI researchers worked on chess programs even though the best chess programs until 20 or so years ago couldn't even beat a skilled amateur. Now of course they can beat grandmasters. And so we decided chess wasn't "really AI". LLMs would have been mindblowing examples of AI even a decade ago. But because we now have them we can dismiss them as "not AI" like we did with chess programs. It's a never ending cycle.


Microsoft put out a 150-page paper yesterday on why GPT-4 is proto-AGI. LLMs are AI; now we're just closing the G gap.


Microsoft is hardly an unbiased evaluator of anything built by OpenAI.

And "closing the G gap" is like climbing to the top of a 10-foot ladder and saying "all that's left is to close the gap between here and the moon." AGI is much, much harder than a large language model. But then radically underestimating what it takes to get to AGI has been going on since the 1950s, so you're in good company.


Link, please?


"Sparks of Artificial General Intelligence: Early experiments with GPT-4"

https://arxiv.org/abs/2303.12712


> And I think there's still a lot of disruptive, world-changing tech to be discovered before AGI is even a remote possibility. In reality this tech is probably going to be more like a superpowered exoskeleton for CEOs, politicians and the like to sway public discourse in their favor.

Our current powers-that-be are so manifestly unsuited to have the kind of power our idiot technologists are desperate to build for them that part of me wishes for a disaster so bad that it knocks technological society off its feet, to the point where no one can build new computers for at least a couple of generations. Maybe hitting the reset switch will give the future a chance to make better decisions.


I am less worried about what humans will do and more worried about what corporations, religions, and governments will do. I have been trying to figure out how to put this most succinctly:

We already have non-human agentic entities: corporations. They even have the legal right to lobby to change laws and manipulate their regulatory environment.

The talk about AI being misaligned with humanity mostly misses that corporations are already misaligned with humanity.

AI-powered corporations could render enormous short-term shareholder value and destroy our environment in the process. Deepwater Horizon will be insignificant.


Corporations, religions, governments, etc. are just an amalgam of human values and behavior that results in the effects we perceive. Yet AI researchers' grandest theory of successful alignment relies on simply applying our values to the AI such that it will be aligned.

You can look at any human organized entity simply as another form of power and how our values become interpreted when given power. Your observation could simply be seen as further evidence of how alignment is a flawed concept.

If you take a single individual and have them fully elicit their values and principles, you will find they are in conflict with themselves. Two values that are almost universal and individually positive, liberty and safety, are the very values that also cause much of our own conflict. So yes, we are all unaligned with each other, and even minor misalignment causes conflict. However, add power to the misalignment and then you have significant harm as the result.

FYI, I've written a lot specifically on the alignment issues in the event you might be interested further - https://dakara.substack.com/p/ai-singularity-the-hubris-trap


The government of Myanmar is free to regulate Facebook however they like within their own sovereign territory. But given the level of corruption, oppression, and incompetence there I doubt the results would be any better than usage policies written by random corporate executives (and haphazardly enforced by outsourced moderators). The only real solution to improving the situation in Myanmar is for the people to rise up and change their own government; this may take a long time and a lot of deaths but there is no alternative.


>The only real solution to improving the situation in Myanmar is for the people to rise up

They are rising up: https://www.nytimes.com/2023/03/17/world/asia/myanmar-killin...


This reply confuses me. You are implicitly accepting that FB, an American company, had a role in the atrocities, but you are then saying it is up to Myanmar to handle this. If that's the correct interpretation, I find that attitude abhorrent. I hope I'm wrong.


In the end, as you said, it's about social order, which is closely tied to social control. In a sense, our past and current fears about caffeine [1], alcohol, drugs, etc. are the fear that society will change and be out of control. Not saying that those things are healthy, but even if drugs were harmless they would be controlled.

[1] https://www.researchgate.net/publication/289398626_Cultural-...


Yes, most predictions never happen because there is a feedback loop: at a certain point, people will change their behavior to prevent the worst outcomes.

I hope that will be the case here. However, what makes this challenging is that the pace is so fast that there will be little time to consider the effects of the feedback loop before we are deeply within its grasp. I only hope that thought explorations into what the negative effects might be will allow us to see them sooner and hopefully adjust in time.


Your substack is a treasure trove. Makes lesswrong articles look mentally rigid.


Thank you for the appreciation!


~22 minute interview [0] by Jaron about "Who Owns the Future?"

[0]: https://youtu.be/XdEuII9cv-U?t=172


I just picked this up on your recommendation. Amazing. This guy is the digital version of Piketty if that makes any sense.


Funny that if you google "Who Owns the Future", the Google featured snippet says the answer is Jaron Lanier.


I feel that if smart people spent more time writing books about how good outcomes could come about, rather than warning about bad outcomes, powerful actors wouldn't have so many dystopian handbooks lying around and might reach for those positive books instead.


"Who Owns the Future?" is exactly a book about developing good outcomes, and building a future that supports humanity and happiness.

But you can also read it at an obtuse angle and see the problems outlined to resolve as opportunities for personal gain.

It's just a matter of perspective.


Glad to hear. I will put it on my list.


It's way easier to write believable dystopian novels because you are deconstructing what already is rather than building something new. The smart ones are the ones capable of writing the utopian novels.


I was about to comment the same thing. It's simply much harder to create positive visions for the future from whole cloth, where dystopias can be immediately extrapolated from existing trends (and our long human history of abuse, horror, and destruction).

Edit: If anyone would like an example, I'll offer Huxley's "Island" as a utopian counterpoint to his "Brave New World". In addition to exploring the qualities he believed make up a 'utopia', a significant thematic concern is the need for channeling our innate destructive impulses*, because utopia - should it exist - can only be maintained, not manufactured, through the active preservation/conservation of our natural world, our positive human values, etc.

*for example, there is an innate human impulse to subjugate others. Huxley suggested that we should channel, rather than suppress, this impulse into a productive activity that satisfies the desire without causing harm: rock climbing (which must have been much more of a niche activity in 1962).


If you read Brave New World and think of the lower "classes" as instead being automation and AI (really, most of the jobs done by Epsilons and Deltas in the book were automated decades ago, and the Gamma / Beta jobs are rapidly moving towards AI replacement as well) it's not a bad system, nor is it a dystopia.


easier to imagine the end of the world than the end of capitalism...


Help us out here. What would the end of capitalism look like? All of the attempts at ending capitalism so far have collapsed into disaster, so people are understandably hesitant now to start grand social experiments which historically speaking are likely to end in famine and genocide.


Capitalism works because it models the world without saying much about it: I can pile sticks and mud to form a house, removing entropy, and then give that in exchange for a sack of grain.

It models the physics there, but adds an indirection, value stored as currency.

Money doesn't have any morality or inherent motivation. Capitalism is what happens when humans project theirs onto it, on average, with a good amount of autonomy enabled by that currency.

If people were not, on average, greedy survivalists, then the value store would produce an economy that operates much differently.

That's why capitalism persists, because we're all just advanced monkeys gathering as many rocks, sticks and mud as we can in a big pile, because it is built into our genetics to stockpile resources when we can.

Everything else is just advanced mechanisms of this.

The end of capitalism is the end of humanity, because while we exist, we will want to stockpile resources through increasingly elaborate means in an attempt to stave off the entropy of death.


I think your question might be his point.

We can easily imagine the destruction of all existence because we have mental models for what that destruction might look like; however, imagining the end of capitalism requires us to invent entirely new ideas that exceed the salience of capitalism itself (which is obviously much, much harder).


If you claim “these [AI risk] concerns make no sense” then you either lack imagination, are willfully ignorant, or are selling something.

It’s perfectly reasonable to say something like “I think it’s very unlikely because I disagree that [insert causal link in the model] is plausible.”

But to claim that the whole set of concerns are nonsensical is lazy thinking.

You see this a lot (and Robin Hanson and PG have commented on this dynamic recently) where a small group makes a bunch of very specific claims, which get dismissed by the “mainstream” without actually engaging with or understanding them.

So in this case, “[the concerns] make no sense” should be read as “I did not bother to try to understand these claims, but I don’t like the conclusion”, not any particular argument about whether they are logically sound.

Ignore.


It's the Lucille Bluth response: "I don't understand the question and I won't respond to it."


That's not what he said and not even what he was asked. He definitely acknowledged the dangers up to and including "human extinction" but wanted to make sure the question was couched in the right context.


This is very eloquently put.


From reading this, I don't get the impression that Lanier has any objective reason to believe the world won't be destroyed as the direct result of AI. If he does have a reason, the reporter certainly doesn't devote any space to analysing it, or to explain why dying from AI-induced insanity is different from being destroyed.


People have spent the last decade modifying their behavior to please algorithms. They've become indistinguishable from bots. Cattle herded into segregated pens. Being more human is the only possible defense, warts and all.


Yeah, I agree, many of us have become bots or zombies, though still being basic humans and communicating as humans. If you were a techie who wanted to create a new algorithm for us to obey, you had to learn the language of computers to do so. Now this has changed as well. The computers have learned to speak our —human— language. That means they will also adapt to our behavior, which means the spiral into the insanity Jaron Lanier was talking about could possibly go faster…

EDIT: So yes, a return to what makes us human, to nature, with an awareness of history and philosophy would be very desirable and quite appropriate in these and future times.


I'm really wondering, though, because I feel kind of at a loss: what are some things you think we can do to be more human?


The interview isn't very intellectual, and even rambles, but blame the reporter for that. Lanier's a great thinker.

I'll add my own danger: AI/VR could lead us to each live in our own realities. When you watch the evening news, it'll be specifically written for you, and won't get any of the scrutiny that a broadcast watched by millions would get. Or, you go watch the president's State of the Union, and get served a custom speech written to appeal to your particular psychological profile. This'll be possible one day, and it gives me Deus Ex vibes.


I read this short story about the singularity years ago, written by a scientist from UW-Madison, and although the writing isn't great, it has always stayed with me. Recent developments made me think of it, and the premise is precisely that - the group that develops AGI uses it to control the markets and drives everyone else insane through economic disruption, while staying entirely opaque.

https://www.ssec.wisc.edu/~billh/g/mcnrsts.html


In recent times we've already significantly given up on our humanity. The decline of shared institutions (churches, bars, etc.), remote studying, remote work, ecommerce, personal contact via chat, social media: these all point in the same direction of a contactless society where we rarely interact with the physical world and its people.

It stands to reason that AI will only accelerate this further. It will be convenience on steroids. Your AI ear piece isn't going to tell you to throw it into the bin and go for a walk in the forest. It's going to tell you that you need to buy more stuff and it knows exactly what it is that you need. It's also going to feed you non-stop ultimate entertainment, custom generated for you and you only.

In a rare moment of humanity, one of your friends calls you. AI knows all about your friends and their recent events, so it has already summarized talking points. In case you can't be bothered, AI will carry out the actual conversation; it's trained in your voice.

A long running trend of outsourcing humanity to technology.

Good news for philosophers though, they finally might have their moment of actual relevancy. In particular to answer the question: what is the point of anything, really?


> In a rare moment of humanity, one of your friends calls you. AI knows all about your friends and their recent events, so it has already summarized talking points. In case you can't be bothered, AI will carry out the actual conversation; it's trained in your voice

I love this thought. Why not go further, have AI reach out to my friends and ask them about things they (or their AIs) recently told "me" about?

Soon our AIs will carry on our social lives and we'll just lie in the dark with tubes in us. We become the computers, and the computers become us, and the robots have finally won.


> I love this thought. Why not go further, have AI reach out to my friends and ask them about things they (or their AIs) recently told "me" about?

We already have this. Secretaries. Automated Happy Birthday emails.

When I was in a sales engineering role our sales team had an admin assistant who would send out follow-ups, check-ins, and other correspondence (e.g. customer made a big public release, so congratulate them, etc.).

This is just another example of robots takin ur jerbs, basically.


Yep, our AI voice equivalents could maintain friendships with each other in which case the "what is the point?" question applies. Or, you might reach out for real but fail to be sure if you're talking to your real friend or not.

Or how about this interesting second-order effect: email. Soon Office will include advanced AI capabilities to write and reply to email.

What is the point of me reading it? If my AI can generate a satisfactory reply, your AI could have generated the response too. No email needed, nor a reply.

We're now in a phase where anybody can generate spectacular art. What is the point of me looking at your generated art? AI can generate personalized art based on what it knows I like.

If AI works, and it's headed that way, you keep ending up at the same question: what is the point of anything?

As counter force, there's significant room for a new low tech hippie Luddite movement.


> Soon Office will include advanced AI capabilities to write and reply to email. What is the point of me reading it? If my AI can generate a satisfactory reply, your AI could have generated the response too. No email needed, nor a reply.

You'd be kind of daft not to proof-read the emails your AI sends out on your behalf. who knows what you might unknowingly agree to do?

> We're now in a phase where anybody can generate spectacular art. What is the point of me looking at your generated art? AI can generate personalized art based on what it knows I like. If AI works, and it's headed that way, you keep ending up at the same question: what is the point of anything?

there is no point to anything, there wasn't before AI and there isn't now. anything we do is meaningless, because eventually we all die, and our efforts are ultimately forgotten. once you get over that you can make your peace with whether the pretty thing you look at is made by a human feeding sentences to a computer or a human carefully marking paper with a pencil, or some combo.

but seriously, as a human who has spent all of my life doodling, drawing, illustrating, painting, thousands of hours creating my own art, and even building my own tools to do so, I find AI is just another tool in the box. I can use it to make images, and now someone who has not spent most of their life drawing can use it to make something more visually stunning than I ever could. has it rendered my efforts meaningless? hell no, I enjoyed every second I spent drawing. I still draw. but I don't harbour any illusions that I'm doing it for anyone other than myself.


Regarding the email example, you're reasoning from the current state of AI whilst I was looking at its future state where it's close to flawless.

The bigger point was that sending AI content to each other is utterly pointless. The generate->send->read->generate reply->send cycle would simply be: generate.

Example: you send me an email asking about a project's status as well as the contact person for a particular vendor the company deals with. I'll manually reply with the answer, or let AI generate it (fully or partly) and send it back to you. Great. In the future state, your AI will simply give you the answers directly. You won't email me and you don't need me.

My take on the meaning of life is that there isn't any, it's whatever you make of it. But I wasn't being that deep. I believe that our current still human approach has substantially more meaning than AI generating almost anything.

I'm happy that you enjoy the process of art-making itself, that's a robust baseline to fall back on, but joy in process applies to very few interactions. My point is that many if not all digital interactions become pointless.


> In the future state, your AI will simply give you the answers directly. You won't email me and you don't need me.

that sounds great. you emailing me about a project and waiting for me to write an email to you is an inefficient waste of both of our time, and if AI can give us that time back, it should.

> My point is that many if not all digital interactions become pointless.

but I disagree that it will render all digital interactions pointless - just the day to day drudgery.

20 years ago, if you wanted to find out the opening times of a store you had to call them up or walk to the door. now, you can google it in seconds. we have done away with a lot of the short phone conversations, and our lives are better for it. but we still talk on the phone. the conversations are more meaningful and less redundant.


We gave up our humanity when we came down from the trees, then again when we started cooking our food, then again when we made up languages, started writing, reading, and counting... the list goes on. Whatever "our humanity" is, we don't seem to be the worse for having lost it and made up a new one over and over. Each time might be the last, but so far we've done well.


"done well"? At best isn't the jury out on that one pending what is left after we've handled climate change?


So far we've done well. We may annihilate a big chunk of life on the planet in a nuclear apocalypse tomorrow, but up to today we're ok. It's too early to tell what comes next.

If we're going to penalize humanity for climate change we might as well give points to humanity for becoming capable of diverting asteroids or curing multi-species plagues too.


> We may annihilate a big chunk of life on the planet in a nuclear apocalypse tomorrow, but up to today we're ok.

UN Report: Nature’s Dangerous Decline ‘Unprecedented’; Species Extinction Rates ‘Accelerating’( https://www.un.org/sustainabledevelopment/blog/2019/05/natur... ).

UN climate report: Scientists release 'survival guide' to avert climate disaster (https://www.bbc.co.uk/news/science-environment-65000182)

Sorry, not seeing doing ok.


Climate change is horrible and will change life on Earth for the worse for all forms of life, including humans. But given the choice between (a) no industrial revolution and no climate change, and (b) industrial revolution and climate change, I'm picking (b) every time and it isn't even close.


In the 21st century, why is it necessary for one or the other? There are solutions that don't involve destroying our home.


Remote work brings people together. Instead of being in an office with colleagues, I'm in the same space with my significant other, and what used to be smoke breaks are now sex breaks. The time I used to waste on commute I now use to meet with friends and acquaintances.


Some people are so devoid of life that they think the formality of an office is what counts as friendship.


If my colleague calls me at least two times a day that means they like me and we're friends


They like you so much they want you to stop seeing your family and return to office so you can provide emotional support and water cooler chats.


I mean I agree for my life but only because I already built up my social circle from these shared spaces. What's someone fresh out of school in a new city supposed to do in 20 years?


Go to bars, clubs, hobby centers, on Tinder dates, to sports games, to friends' house parties, to theaters, to restaurants, to parks, to museums, to wine tastings, to internet forums and World of Warcraft guilds, to Facebook groups, to IRC chats, to mailing lists.

Putting your responsibility to find social relationships off on your job is pathetic.


"The time I used to waste on commute I now use to meet with friends and acquaintances."

I hope this is true, same for the sex breaks, but I'm skeptical. So on any given work day, you physically meet with friends between 7-9 AM and/or 5-7 PM? Like, every day?

These "friends" of yours, they have nowhere to go? Or do you sneak this into your work day and just randomly disappear for any length of time, which is something most of us can't do?


I was talking about the total amount of hours, not the same exact hours as a typical commute.


"In a rare moment of humanity, one of your friends calls you. AI knows all about your friends and their recent events so had already summarized talking points. In case you can't be bothered, AI will carry out the actual conversation, it's trained in your voice."

This is the premise of an episode of Silicon Valley where Gilfoyle trains an AI to act as his chat agent surrogate based on his historical transcripts, and then his colleague creates another one and they end up just conversing with each other.


South Park did it last week.


We’re slowly opting into the matrix.


Technology implies belligerence

http://akkartik.name/post/2012-11-21-07-09-03-soc

The problem I see is that someone might send our "primitive" AI into a hostile environment, where it has to compete against other AI, creating a "take-over" and a "defensive" monster, similar to the Go automaton. While the real-world training data might only be dripping in, the speed at which a NN under evolutionary pressure against itself might evolve could go through the roof.


I'm really enjoying how "Imposter Syndrome" is actually justified in the tech industry, as many influencers, or people riding on reputations from past accomplishments, are actually imposters and making fools of themselves. Conversely, some very insightful posts are being made by the quiet ones with few followers who aren't necessarily even in the tech industry.

I've been through sea changes before, when the old guard falls by the wayside and a new group rises to meet the new challenges. This is certainly one of those times.


I'm struggling to understand what point this comment is trying to make. I thought it was referring to Lanier as one of "the quiet ones with few followers," but then it claimed that the "old guard" is falling by the wayside, and Lanier's hardly new.


I wasn't actually referring to Lanier because he's mostly been a lone voice in the woods, even as he's highly influential in some circles. I was actually referring to the noisy self-promoters currently polluting Hacker News with their AI posts, but now that you mention it, maybe Lanier fits that category of insightful people with few followers. How long has he been telling people to delete social media? How long have people been ignoring him? The world is finally coming around and I for one would be glad for him to become as famous as some of the noisy ones.


Funny, I have mentally filed him as a designated person who is consulted by media to offer opinions on tech despite having no particular claim to familiarity with said tech.


I'm more concerned that the people using LLM's will drive me insane. I hope that I don't get flooded with LLM-generated emails and documentation from colleagues and co-workers that don't bother to proof-read what they send me. It will be a frustrating day when I reply to them with a question and they don't have an answer because they don't even know what they sent me.

Multiply that by all of the boilerplate code I'll end up reading. No pleasant abstractions that hide complexity and are reinforced by laws checked by properties or proofs; abstractions that make the problem easier to understand and communicate. Instead, endless bowls of oatmeal code: similar, bland, and laborious to read. Who knows if the model "hallucinated" this code and whether the author bothered to check it before asking for a review. Who knows how long before I get bored and worn down and stop bothering to read it myself.

And then we're supposed to change our job titles to "Prompt Engineers"? How absurd is that. Nothing about what you do is creative, human, or interesting. Now your job is to feed sentences into an LLM to get it to do all that work for you... how long until that dead-end job gets replaced?

Maybe this could drive us all insane from burnout and nihilism.


Agree with everything you wrote. Gave me the funny thought of how ultimately the AI will just be talking to itself. Someone sends me AI-written documentation to evaluate. I'll ask the AI to respond with feedback because I'm too busy playing Civ. This cycle will go back and forth until one of the AI instances gives final approval, and then the documentation will get posted publicly without a human ever reading it. End users, when confronted with the daunting wall-of-text docs, will ask for AI summaries. The lack of human readership will continue indefinitely.


I had a vision in my mind while reading this of a dystopian (utopian??) future where city-states are empty of people, and the countryside is sparsely populated. The people in the cities died out long ago, but no one knows that. They just see these giant glistening walled gardens, with aircraft flying around, and it all looks very busy and amazing. But the truth is it's just the machine, maintaining itself, talking to itself. Eventually it will go insane.



Reminds me of the Doctor Who episode Gridlock [1] set five billion years in the future. The entire planet is covered by a city called New New York with the remainder of humanity living in a perpetual traffic jam in the dystopian bottom half of the city. A mutated virus wiped out all the people living in the utopian top half of the city while sealing everyone left alive in the bottom half to constantly drive around with nowhere to go, with systems barely maintained by the Face of Boe.

[1] https://en.wikipedia.org/wiki/Gridlock_(Doctor_Who)


This doesn't necessarily sound bad to me. The natural language being passed around is effectively an information transfer protocol. Each agent has instructions for how to manipulate and respond to messages, and the publicly-posted final document is akin to HTML source, in that a user's agent interprets it for consumption rather than display it directly.

The only problems are lossiness and hallucinations, but assuming the technology improves such that those become non-issues, then having bots deal with formalities and formatting sounds great.


One could also flat out refuse to talk to bots. It might seem childish, but in the long run it could save you a lot, considering how people will use them to waste your time and confuse you.

It's like people skipping the tiresome preselection topic menus in phone self-service.


You might even get the AI to start playing Civ for you at some point!


That made me laugh. And it's an interesting thought. Civ is notorious for an AI that isn't particularly good and frequently resorts to cheating. Maybe GPT has absorbed enough Civ FAQs & tactics articles to be better than the built-in AI? There's a ton of (virtual) ink spilled about the game for decades. Lots for GPT to use.

Would love to see a War of the AIs in Civ, or Crusader Kings, or any moderately complicated game which has had a ton of writing about it.


Civ's AI is not built to win. Never has been, throughout the series. It's specifically designed to role-play as a caricature of a world leader and allow the player to exploit it before inevitably losing.

It would not be very hard to build an AI that actually plays to win and presents a real challenge to the player. The problem is that players would complain even more! An AI that plays to win is not a reliable ally, let alone trading partner. Against such an AI, the player would have no hope of trading technologies for an advantage: the AIs will have already traded them all. The AI would also use every possible treaty to build up its forces and only launch the surprise attack when it deems it can hit with overwhelming force.

I have seen such an AI, actually. It was created by a modder for the game Master of Magic [1]. It makes the game incredibly difficult while at the same time cutting down on the cheating. It does so by incorporating the modder's extremely detailed and extensive knowledge of the best strategies in the game. It does not need neural networks or machine learning to accomplish this. It just implements a good old fashioned expert system.

[1] https://www.realmsbeyond.net/forums/showthread.php?tid=8246
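
To make concrete what I mean by an expert system, here's a toy sketch in Python (nothing from the actual mod; every rule, name, and threshold is made up for illustration):

    # Toy expert-system game AI: hand-written rules encoding a strategist's
    # knowledge, checked in priority order. No learning involved.
    def choose_action(state):
        rules = [
            (lambda s: s["enemy_strength"] > s["own_strength"] * 1.5,
             "retreat_and_fortify"),
            (lambda s: s["own_strength"] > s["enemy_strength"] * 2 and s["treaty_active"],
             "break_treaty_and_attack"),
            (lambda s: s["gold"] > 500,
             "recruit_elite_units"),
        ]
        for condition, action in rules:
            if condition(state):
                return action
        return "develop_economy"  # default when no rule fires

    print(choose_action({"enemy_strength": 10, "own_strength": 30,
                         "treaty_active": True, "gold": 100}))
    # -> break_treaty_and_attack

The "intelligence" is entirely in how good the rule author's strategic knowledge is, which is exactly why the mod plays so much better than the stock AI.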


The Wargame series (European Escalation, Red Dragon) is plagued by a bad AI system that relies almost entirely on cheating. Had always wondered how much better it would be if it were an "expert system" type AI instead of the devs just giving it full knowledge of the battlefield and an insane amount of counters to whatever units the player fields.


Eventually the machines will evolve this intermediate language, and with time it will become impenetrable for humans without the machine caring to translate.


And that's when work from home will be abolished by that company, and AI will be banned from the company network.


Where do you get the idea GPT can't create abstractions? And do you really believe it generates more boilerplate than the average SWE?

The more comments I read on HN regarding GPT stuff, the more I realize that many people are willing to put out an opinion on something they do not at all understand.

While I can understand concerns about a flood of automatically generated emails, this is already a significant issue that most software engineers have the tools to address. Additionally, there isn't much difference between composing an email by hand and generating one with an LLM; both begin with a list of key ideas to convey, which are then elaborated on either manually or through automation. It was always the key ideas that are the point.
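
To sketch what I mean (assuming the OpenAI Python package as it exists in early 2023 and an API key in the environment; the model name and prompt are just placeholders):

    # The human supplies only the key ideas; the model does the elaboration.
    import openai  # reads OPENAI_API_KEY from the environment

    key_ideas = [
        "project slipped by two weeks",
        "root cause: vendor API change",
        "new demo date to be confirmed",
    ]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Write a short, polite status-update email."},
            {"role": "user", "content": "Key points:\n- " + "\n- ".join(key_ideas)},
        ],
    )
    print(response["choices"][0]["message"]["content"])

The recipient mostly wants those three bullet points back out again, which is why the elaboration step adds so little either way.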


My boss (head of engineering) loves ChatGPT and he uses it more and more at work. Recently he wrote a crucial piece of feedback after one of our meetings, and the moment I started reading, it was obvious it was ChatGPT (confirmed by ZeroGPT later) as he does not talk like that at all. It read like all the other ChatGPT responses you read on the Internet.


If you receive a ChatGPT-generated message, it's only fair that you respond with another ChatGPT-generated message.


I'm not surprised at all. ChatGPT is a great empathy prosthesis.


Fuck. Now you have to figure out if people actually care or are just saying the words...

Wait.


> empathy prosthesis

This is my favorite phrase that I have learned in recent times and it appears only one other time on the internet.

Thanks for that.


> empathy prosthesis.

Nice. I also checked "empathy crutch" (it's the name of a band) and "empathy surrogate" (slightly more used).


It was actually technical feedback.


Just send him back a generated email saying that your ChatGPT read his ChatGPT mail.



GPT needs to be a codec!


It's just a decompression stage to facilitate transmission.

Too bad it's lossy. I'm waiting for a lossless algorithm before I add it as an encoding on my web server.


Isn't driving us insane a way of destroying us?

I think we need an EPA equivalent for the Internet or at least Social Media. People should not be able to release toxic substances directly into the Web.


Why not? EPA exists because you can't "escape" the environment. But the web? If social media is full of crap, then don't visit it - that's what I do.


Let's be real: in 2023 a significant portion of our lives is online, and that's not going to go away.

I can't just not have a smartphone. My kid's baseball team only publishes schedules on FB. Local bakeries only sell their stuff, fancy cakes, via Insta. I have multiple work and personal emails and some of them get BLASTED with spam cuz of one purchase I made for replacement springs like ~5 years ago.

Virtually all of my banking and stocks are online. They'll still send me paper sometimes but there is no way to do money transfers or basic life activities like paying my bills without using their app.

Even if I don't look at it, my wife does. My kids do. They spread the memes, and potentially invite all sorts of shady malware into my network. Ditto for my coworkers, my employees, my bosses.

Just "don't visit it" isn't an option if 95% of the population does.


Maybe the people growing up with all this will do a better job in the future. We still haven't even solved the health issues caused by Junk Food - why should Junk Social Media be any different?


This is beyond social media.


Right now, these models were trained almost entirely on content created by humans, since that's what was available. Now that it's so relatively cheap to create generated content, it seems likely that in the future much if not most of the material used for training will have come from other models, not from humans. I have no idea what the implications of that will be but it seems very likely and does concern me.
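
A toy way to see why this concerns me (a hedged sketch, not a claim about any real LLM pipeline): fit a simple model to data, then repeatedly re-fit it to samples drawn from the previous fit, i.e. "train on your own output".

    # Toy illustration of training on model output: each generation fits a
    # Gaussian to samples drawn from the previous generation's fit.
    # On average the spread shrinks each step, so diversity drains away.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=20)   # stand-in for human-written data

    for generation in range(1, 41):
        mu, sigma = data.mean(), data.std()        # "train" on the current corpus
        data = rng.normal(mu, sigma, size=20)      # next corpus is model output only
        if generation % 10 == 0:
            print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")

Real models and real data pipelines are vastly more complicated, but the shape of the feedback loop is the same.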


That would doom LLMs to mediocrity, which is great news for creative people.


I'm not sure why it's a concern to you.

If it works well it's great news, and if it doesn't work, then it won't be used. Companies will not make their AIs less intelligent on purpose.


>>Companies will not make their AIs less intelligent on purpose.

That sounds a LOT like saying "companies will not make their products worse on purpose".

Yet it happens all the time, and is almost inevitable once the MBAs, finance people, and marketers get involved - they seek out ways to cheapen the product that (they think) customers won't notice, they undermine "cost centers" like R&D and production, and 1000 other flaws. In internet products they deliberately skew the output to generate more ad revenue (see "Enshittification" by Cory Doctorow [0]).

And that is all for products that otherwise would stay mostly the same, without some kind of natural degradation.

These LLMs and image generators, etc. will definitely degrade by ingesting their own output. Sort of like the jokes about clones becoming worse every generation. Just watch.

Thx for provoking me to think for a few seconds and convincing me that it's inevitable...

[0] https://doctorow.medium.com/tiktoks-enshittification-bb3f5df...


I have these exact concerns: the prospect of an overload of information that means very little, which will get me to stop caring about or reading literally anything on the internet at a certain point.


I think it’s more likely human-to-human communication will adapt (dare I say improve?) than cease


> And then we're supposed to change our job titles to "Prompt Engineers"? How absurd is that.

Why is that absurd? Asking good questions is a useful skill in many domains even now. As the models improve they will become better than any human in many areas of expertise. So we will have to rely on them. Being a good prompt engineer sounds like a good job in that world.


> And then we're supposed to change our job titles to "Prompt Engineers"? How absurd is that. Nothing about what you do is creative, human, or interesting.

It's weird how all the discourse around AI replacing jobs never ponders the possibility of AI commoditizing management itself. (I suppose that's on the list of Illegal Ideas.)


At some point, someone will write an LLM assistant to go through all of the LLM-generated email to be able to intelligently filter out spam, summarize the rest, and highlight the important stuff.

Not that I think this is good for our society. If we go that route, at some point, it would be AIs talking with AIs, and we end up being the biomass.
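
A sketch of that first triage step (assuming the same kind of chat-completion API mentioned elsewhere in the thread; the labels and prompt are arbitrary):

    # Use an LLM to triage incoming (possibly LLM-generated) email.
    import openai  # reads OPENAI_API_KEY from the environment

    def triage(email_text):
        prompt = ("Classify this email as SPAM, FYI, or ACTION_NEEDED, "
                  "then give a one-sentence summary.\n\n" + email_text)
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]

    inbox = [
        "Dear valued customer, you have won a free cruise...",
        "Reminder: the quarterly report draft is due Friday, please review section 3.",
    ]
    for mail in inbox:
        print(triage(mail))

At which point the loop is closed: machine-written mail, machine-read, machine-summarized.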


Ugh. Logging on to a Zoom meeting that the AIs scheduled for us is the hell we never imagined.


And isn't it even more disappointing to think that it's just people destroying people, same as always.


Somebody will for sure create a HN Karma mining bot connected to chatGPT. Just a matter of time.


Documentation and code we have now is already pretty bad, and most of the people who write both are stupid, can't answer follow-up questions or come up with good abstractions. I don't see how LLMs could make it any worse.


Famous last words.


The best analogy would be that AI will do to culture today and over the next decade or so what pop music did to culture from the 1950s onward. The criticisms of pop music were widely mocked as superstitious religious fundamentalism, "devil music," etc, but even as an aficionado of rock'n'roll and alternative music, it really does give you a head full of garbage. Punk was fun, but I could probably do without exalting the sniveling self pity that made it appealing. For example, if your beliefs about love and relationships come from 20th century love songs, you're likely a rube, or worse, needy and codependent. Arguably, the cliche of the boomer narcissist is the direct result of being relentlessly propagandized by the music industry to believe in similarly inferior things. Folk music is mostly dialectic materialist conspiracy theorists singing hymns to their oppressors. Pre-internet, young people fully tribalized based on their taste in pop music. Sure, it's culture, it's entertaining, and some of it is even beautiful, but it was designed to exploit your sentimentality. If you think pop music tugged at the heart strings, just wait until the kids get ahold of AI platforms.

Imo, the products of AI will be at least as ecstatic and even addictive as pop music and social media, and the cultural consequences will likely rhyme. The antidote to all these trends was always the counterfactual that maybe, just maybe, people will find some higher principle to form their identities around and be able to experience these amusements objectively without imagining themselves as "becoming" something as a result of using them, but who are we kidding, they'll believe whatever entertains them. Imagine all the people.


> Folk music is mostly dialectic materialist conspiracy theorists singing hymns to their oppressors.

This is probably the most insane piece of music criticism I've ever read. I guess the crazification AI has claimed its first victim.

> The criticisms of pop music were widely mocked as superstitious religious fundamentalism, "devil music," etc

Almost always with a strong racial bias.


The devil music accusation predated rock'n'roll with early blues artists taking gospel music they learned in church and making the words about day to day life. Stories about Robert Johnson meeting the devil at the crossroads and adapting the minor pentatonic (and ancient) scale but adding the flat blue notes have a very rich history. It was something that got in your head and changed how we experienced the world.

I can see how people could think folk music had some kind of altruistic purity, but it's still a viral expression of a certain kind of animus that distinguished it from country. I also think this kind of folk-animus is related to how it may be worth reflecting on why others tolerate it when you imply someone is racist or insane, almost to the point of ignoring it altogether.

I would bet LLMs are already able to create similar "scissor statements" that are as viral as pop songs, and comments like mine in the previous sentence that are structured to provoke specific anxieties and reactions in their readers. It's one thing for an LLM to write little essays, but once we train it on literary fiction - which is designed to speak the language of memory and provoke strong emotional responses - it becomes much more viral and dangerous socially. Imagine a catchy song we can't get out of our heads, but instead of dancing or laughter, it provokes humiliation and cruelty? My asshole-statement was manually calibrated, and it has the same abstract form and structure as a joke, but with the edge of that negative form. An LLM can do it at scale. Someone using an AI model to produce those won't be doing it to improve human discourse and relations. That's the risk I think is worth addressing.


> My asshole-statement was manually calibrated, and it has the same abstract form and structure as a joke, but with the edge of that negative form. An LLM can do it at scale.

Right, so now we've dealt with the initial trolling and callout, and moved on from weird statements about art ..

> Someone using an AI model to produce those won't be doing it to improve human discourse and relations. That's the risk I think is worth addressing.

This I actually agree with. We're going to drown in automated trolling. Human discourse is going to get worse, which usually happens in ways that get people killed.


> > Folk music is mostly dialectic materialist conspiracy theorists singing hymns to their oppressors.

> This is probably the most insane piece of music criticism I've ever read. I guess the crazification AI has claimed its first victim.

Eh, it's a concise expression of an idea Charlie Brooker (of Black Mirror) and others have been promoting in their work for years. The famous-at-least-on-here Meditations on Moloch covers it, IIRC. Not really out-there or new. Capitalism learned to pull the teeth of any new counterculture, and turn it into a product, and the history of postwar pop culture is basically its learning to do that, then doing it over and over. The same observation dates to at least the '80s, from actual humans, it's not some gibberish mash-up created by an AI.



I've had some eye-opening moments where I went back to music I listened to growing up and was revolted with myself when I actually read through the lyrics and thought about what they meant. It really is like there is something like a spell that comes over you when listening to certain music that opens you up to hearing and helping propagate messages you wouldn't otherwise.

The internet irony culture uses the same mechanism of passing the message off as "only a joke". But the fact is that even if you say that, there is only so far words can be divorced from their meanings. And even if the network that propagates them originally is just doing it ironically, eventually someone will take the message seriously. There is a quote I wish I could remember along the lines of "what the father accepts as a lie, the son will accept as a truth".


Great comment. I don't wish to agree with the final conclusion, for mostly sentimental reasons.

Parental 'prompting' may be effective. Possibly, whoever gets to first frame a given dynamic will establish a lasting bias. "Songs can be pleasing but take care to filter out the lyrics. It is much easier to write mushy sentimental nonsense. Keep that in the back of your mind as you listen." That, imo, should certainly be a part of responsible parenting and inoculating against ideational viruses.


I'm 100% on board with the first paragraph. There has to be some mid- or late- century philosopher who described the phenomenon, how pop music creates a dependency or need out of thin air, and then half-fills it. Like how junk food is just salty and fatty enough to keep you wanting more. It overpromises, building up a flattering epic narrative.

Ends up nudging value systems towards vapid, shallow, needy modes of being.


From my point of view, it seems that AI is being kept only to those that can afford to pay for it. So it's not much of a threat to me or other people below the poverty line.

If it eats the rich for us, I'm fine with that.


What makes you think an AI with the ability to take actions in the real world will only affect the people who pressed the button to start it? This is like not being worried about a nuclear war because you think it will only affect politicians and the military

(I’m not trying to make a statement as to whether I think nuclear war is likely or not because I don’t know, just using it as an example)


The title of the post: that it's more likely to drive us (presumably meaning its users) insane, rather than destroy us.


You will interact with AI whether you like it or not: when you call customer support, in the spam ad messages they send you, in bots on dating sites, etc.


It's the opposite. Only the rich will be able to afford AI-free spaces. Everyone else will become an AI-assisted drone.


You haven’t been paying attention


Those whom the gods wish to destroy, they first make mad.


In the future we'll all have AIs listening to everything we listen to. These AIs will also have sensors on our body to detect our emotional state (heck, maybe even inferring our thoughts based on this.) The AIs will then provide contextual advice. We'll generally say what it tells us to say; do what it suggest we do. This will happen for everyone; we'll end up, effectively, being the "mouthpieces" for AIs talking to one another.

The technology for all of this already exists. It's just a matter of time. Right?


> In the year 3535

> Ain't gonna need to tell the truth, tell no lie

> Everything you think, do and say

> Is in the pill you took today

https://www.youtube.com/watch?v=l3yDLvp9le0


Most people speak their thoughts silently, and that "speech" must trigger the same neurons that do regular speech. This will be the basis of the mind-reading devices. Abstract thoughts cannot be read this way, but how many know what an abstract thought even is?


Who would want that and why?


> Who would want that and why?

Nobody wanted a perpetual outrage machine, but we have Twitter.

Nobody wanted ever-present surveillance, but we have ad networks.

Nobody wanted identity theft, but we have a thousand attack vectors.


It's like innovation by obsessive compulsion.


That's a big part of advertising. Make people feel inadequate if they don't have these things. FOMO in action.


Targeted advertising in exchange for entertainment.


Yep, this is coming. But it is unclear how interactive it will be in practice and what the feedback mechanisms will look like.


Resistance is futile.


... so, same difference? I mean, even with his full quote, the outcome is ultimately the same - humans are gone.

>... “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”


I've been saying this for a long time, ever since ML seriously entered the scene:

The singularity is the least of our worries, AI will fuck up society beyond repair long before we reach that point because we give it way more trust and responsibility than it deserves. It's the Eliza-syndrome, as long as a computer shows the least sign of possessing human qualities we're more than happy to fill in the blanks with pixie dust and unicorns.


The stories of how much trust laymen and even certain 'experts' put in ELIZA back in the 60s are wild. Given how much better GPT models are, hopefully there isn't a proportional increase in trust and anthropomorphizing.


I like to call it the "stupid singularity"


Yes, the core issue is trust: when you trust somebody or something, your brain stops working. I'm far from being a Luddite myself, but blind trust is very dangerous.


Something I've found myself observing in disparate domains:

the only defense we have against applications of AI we don't like, is the application of more AI.

This is a natural implication of AI being augmented-intelligence force-multipliers at the moment. When your opponent takes performance enhancing drugs, or wears a mecha suit, the only way to compete with them or defend against them is adoption of similar technology.

The Prime Directive in Star Trek was prescient. AI arriving unevenly—and first under the direction and control of hands which have thus far proven disinterested in the common welfare where it interferes with personal benefit—looks like it will be a civilizational exercise indistinguishable from the delivery of disruptive technology from another species.

Of particular personal concern are the synergies where it meets other technological social problems against which we have few or no defenses: specifically, surveillance and its active counterpart, sentiment control.

I am less concerned about needing AI tools to validate AI-generated code; and more concerned about the 2024 election cycle being opportunity for an arms race in application of AI to game politics.


How about the danger that AI itself becomes insane?

On the HN "new" page as I write this: "Google and Microsoft’s chatbots are already citing one another's misinformation". As AIs produce more and more content, one danger is that they feed back on each other, with AIs training on more and more AI-generated content, and the connection with reality becomes more and more diluted. I don't think that's going to produce some new kind of wisdom; I think it's going to produce raving lunacy.


Hot take: The world (most of the time) already works that way (more or less), but people usually don’t notice for whatever reason.

One possible bias here is that we expect people to make mistakes and computers to get it right. Don’t forget that the vast majority of people don’t get it right.

Perhaps this is just the latest and most obvious consequence of the Internet's tendency to give the podium to some of the least qualified to speak on whatever subject. I think if we go back to the drawing board there, we could be in a slightly better situation. Quora made a nice attempt to do this but fell way short of its potential.


"New from Silicon Valley...AI Therapists"


Just like we’ve seen in some human populations.


> “[...] To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”

It's surprising that the article doesn't mention Infinite Jest.


This feels a bit like Bill Joy's article about nanotechnology. I think it's just a healthy fear of the unknown future. AI will be refined into usable tools, mostly good with some other unintended negative consequences. Sounds like a lot of things.


> There are two ways this could go. One is that we pretend the bot is a real thing, a real entity like a person, then in order to keep that fantasy going we’re careful to forget whatever source texts were used to have the bot function.

Finally someone who, at least nearly, gets it.

> The other way is you do keep track of where the sources came from. And in that case a very different world could unfold where if a bot relied on your reporting, you get payment for it, and there is a shared sense of responsibility and liability where everything works better. The term for that is data dignity.

Of course, that's an obvious step in the other direction: isn't the original point to breathe life into the data? If we have to track who originated what, that scaffolding is going to really get in the way.

Then again, this is a good moment to recognize the futility of text inference models (LLMs). If we can't understand what path goes from input to output, how useful can the thing be? If we can't choose the path, we can't choose the output. Who can choose: the thing itself? If the whole thing is a black box making its own choices, then how do we interact with it except to personify that black box? Remember, that was option 1.

The main reason anyone is excited about LLMs is that those models have been personified. That problem began the moment we started calling them AI.

---

> We have to say consciousness is a real thing and there is a mystical interiority to people that’s different from other stuff because if we don’t say people are special, how can we make a society or make technologies that serve people?

Mystical.

It's an important observation. We really don't know how we think. And that is all the evidence we need: AI does not exist. There is no artificial person.

We are free to stop personifying technology. Doing so would make all of these conversations a lot less convoluted, and a lot more direct. We are wasting a lot of effort talking past the very understanding of these machines that is necessary to improve them. Instead we are trying to understand and improve "AI personified", a fictional character that we have never met.


If you enjoy this kind of thinking I'd recommend [1].

> Electric speed mingles the cultures of prehistory with the dregs of industrial marketeers, the nonliterate with semiliterate and the postliterate. Mental break-down of varying degrees is the very common result of uprooting and inundation with new information and endless new patterns of information.

[1] https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf


Let's fix that: the danger is that people who are already driving us insane will start using AI to make their job a little easier.

Social media companies will soon be taken to task by news media for allowing AI-created content on their platforms.

2024 is going to be interesting.


News media are also going to get into the AI clickbait race. I wonder if the first accidental misrepresentation of a generated image as reality has already happened. And the first intentional one.


One aspect I expect to see play out:

Any entity interested in either the truth, or in maintaining some kind of reputation, will need to keep humans in the loop when using these systems. Language models might multiply e.g. ad copy output 10x per worker, and allow micro-targeted campaigns that were impractical before, but it won't allow, say, a 1000x increase, until or unless we can trust these systems not to produce undesirable output when not checked by a human. Ads are tied to brands which will hesitate to put their reputations in the hands of language models without a human verifying that the output is OK. Likewise, any entities wishing to use these to help with writing illuminating, factual works, may see a large benefit, but it'll be limited. 2x, 5x, something like that.

Propaganda, though? Misinfo campaigns, astroturfing, where you hide behind sockpuppets and shell companies anyway? Who gives a shit if one out of every few hundred messages isn't quite right? Worst case, you burn a sockpuppet account. Those can leverage these to the fullest. 1000x output per person involved, compared with, say, 2016 and 2020, may actually be something we can expect to see.


> Propaganda, though? Misinfo campaigns, astroturfing, where you hide behind sockpuppets and shell companies anyway?

Why just limit there? The chatbot companies can introduce ads where the answers are influenced by whichever company that buys the ads. Looking for information on nutrition? Some fast food company might "insert an ad" subtly changing the text to favor whatever the company wants.


This. I am definitely worried about the implications of AI, but just like algorithmic advertising and television before it, it’s amplifying something that’s already out there, not creating an ontologically different threat.

The issue is that quantity can become a quality of its own.


This may kill the WWW as we know it. I can almost see an alternate 'web' finally taking off, but not really, as the majority just want TikTok left alone.


In the short term, I'm concerned about the energy required to run things like ChatGPT and how everyone is cramming it into everything. Soon every todolist app on your phone will need a supercomputer running in the background to auto arrange your grocery list, and we'll double or triple our datacenter footprint within a few years.


Running the models isn’t the expensive part, and it is subject to a lot more optimization.


As I've said for years, the robots don't have to kill us, they just have to convince us to kill ourselves.


> “This idea of surpassing human ability is silly because it’s made of human abilities.”

Shows the level of insight from this "guru". The truth is we don't know how far the work being done on artificial intelligence is going to go. For now it will continue to develop and acquire more and more autonomy, just because that is the nature of our existence: the better and more efficient will replace the lesser.

So, we may have potentially given birth to a new sentient being that will go on to live its own "life" (within 100, 500, 1000 years?), or we might be able to constrain it so that it will always be in the service of humans. We simply don't know at this stage, but my money is on the former TBH.


This quote is taken out of context and is perhaps not a charitable meaning of what the author means. Here's the whole paragraph:

> Lanier doesn’t even like the term artificial intelligence, objecting to the idea that it is actually intelligent, and that we could be in competition with it. “This idea of surpassing human ability is silly because it’s made of human abilities.” He says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”

The author, Jaron Lanier, is a reasonably accomplished technologist, with some pretty groundbreaking work on VR in the 80s. He is most certainly aware that humans have been surpassed by computers in many ways. I think that line is arguing semantics about the word "intelligence" and clearly he knows that computers do many things far better than humans.


That clarification didn't do it for me; it felt like juggling semantics. Let's rephrase his comparison: "It's like saying a robot can run faster than a human runner. Of course it can (soon), and yet we don't say that the robot has become a better runner". It's just nonsense.


This is nonsense.

If you built a bipedal (or possibly N-pedal) robot that moved roughly similarly to how humans or dogs or cats or horses run, and it was faster than humans over all the terrains that humans can run over, I'm absolutely certain that everyone would agree that the robot is a better runner.

But a car is not that thing. Neither is a helicopter, or a train, or a bicycle, or a jet aircraft or a hang glider or a skateboard.


Anyone put Boston Dynamics into a dog race yet?


A tractor is not better than humans at plowing; it is a plowing machine, so it can do it at scale without suffering the fatigue people experience, but it's not better at it. It simply does it mechanically, in a way only a machine could.

Running and plowing are not simply about doing it as fast as possible or as extensively as possible.

So maybe what you are looking for is a definition of "better", it depends on what you mean.

In my book a tailor-made suit is always better than a machine-made suit, because people are better tailors than machines, for some definition of better.


Yes, this is verily what I objected to. It's called "semantics": similar to how when people say "hair", everyone knows what that means. But sooner or later someone will point out that this hair is different from that hair, and if you split one hair, now what do we have? This process is always a possibility in any discourse, but largely frowned upon, rightly so.


I think semantics is important :(

In my experience, most arguments are the result of people not agreeing on what a word means.


My opinion is that it is not about semantics; it's about looking at the whole picture and not only at some specific outcome (running faster, for example).

Firstly, faster doesn't necessarily mean better.

Secondly, why do people run?

Nobody can say for sure in general.

Why do machines do it? (or would, if they were able to)

Because someone programmed them to do it. 100%.

It makes all the difference in the world.


>Because someone programmed them to do it. 100%.

It is this thinking of the past that will get large numbers of us in trouble with our future machines.


This is not a sensible comparison. A mass-produced machine-made suit wasn't made using your exact measurements. If a human sat at a sewing machine on a factory production floor versus a machine, you wouldn't be able to tell the difference.


>“It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”

That's a pointless argument. We might not say it, but for all intents and purposes the car does go faster than any human runner.

We just don't say it because running when it comes to humans mainly means using your feet. If it was a more generic term, like "fast-mover", we could still use it to compare humans and cars, and say cars are better "fast-movers" than humans.


No it's not pointless, language is important. Cars are not runners. "For all intents and purposes" is a cop out here. We're talking about LLMs, you know, large language models.


>No it's not pointless, language is important.

Not that important, and not for this purpose. Things still work the same, even in languages with widely different semantics and ways to refer to them (I don't mean the trivial case where a house is called talo in Finnish etc., but languages where semantics and terms differ).

Using language-specific (i.e. English-specific or German-specific) word definitions and etymology to prove some property of the thing referred to is an old, cheap philosophical trick that sounds more profound than it is insightful.

Even more so, we might not say it for a car, but if we've built a human-looking robot with legs, we'd very much say it's a "better runner" if it started surpassing humans at running. Hell, we used to call employees doing manual calculations "calculators" in the past. Later, when machines doing that became available, we used the same term for them.

So the idea that "human is runner but car is not runner", also means that "human is thinker, machine is not thinker", and this has some profound difference, doesn't make sense anyway. Human running is associated with legs, certain way of moving, etc. Thinking is more abstract and doesn't have such constraints.

>Cars are not runners.

That's just an accidental property of having a dedicated word for "runner" in English that doesn't also apply to a car going fast. The term "running" though is used for both a human running and a car going fast ("That car was running at 100mph").

>"For all intents and purposes" is a cop out here.

For all intents and purposes means "in practice". Any lexicographical or conceptual arguments don't matter if what happens in practice remains the same (e.g. whether we decide an AGI is a "thinker" or a "processor" or whatever, it will still be used for tasks that we do via thinking, it will still be able to come up with stuff like ideas and solutions that we come up with via thinking, and effectively it will quack, look, and walk like a duck). The rest would be semantic games.

>We're talking about LLMs, you know, large language models.

Which is irrelevant.

LLMs being large language models doesn't mean the language used to describe them (e.g. "thinkers" or not) will change their effectiveness, what they're used for, or their ability to assist or harm us. It will just change how we refer to them.

Besides, AI in general can go way beyond LLMs and word predictors, eventually fully modelling human neural activity patterns and so on. So any argument that applies only to LLMs doesn't cover AI in general, or "the danger that AI destroys us" as per TFA.


That reminds me of the very old arguments that people can't program computers to play chess better than they themselves did. Obviously false, as is this. There is no reason we can't build something that is smarter than we are.

> “This idea of surpassing human ability is silly because it’s made of human abilities.”

It's not made OF human abilities, it's made BY human abilities - a completely different thing.

And, of course, Boston Dynamics will be delivering the "better runner" very soon.


"we don’t say that the car has become a better runner"

We would if the car was to race against human runners. It's just word play. Cars are not used like runners, so we use different words. They definitely are better runners.

Now that technology is touching our core business we get scared, but this has been going on for a long, long time. When it was our legs, we brush it off. But when it touches our ability to think we squirm.


It's not just word play.

Cars go faster than humans can by themselves, under some specific conditions.

Cars go slower than humans, or rather cannot go at all, under other specific conditions. Two weeks ago my wife ran 30 miles on trails in southern Texas. A car could not have traversed any of the distance she travelled on, because a car cannot run.

Cars make it easier for people to move themselves and stuff when there are appropriate roads to travel on. They have enhanced our abilities to do this, but they cannot run.

You're squashing the meaning out of words by trying to suggest that "running" is somehow equivalent to "any other method of a person moving from A to B". But that's not true.

We can acknowledge the greater ease of cars for moving people and stuff without squashing the meaning out of words.

Finally, even the notion that cars are "better" at moving people and stuff needs careful examination. Thus far I have said "make it easier" because I am aware that by a certain set of metrics (related to energy use, material use, impact on the environment) cars are actually worse most of the time.


>You're squashing the meaning out of words by trying to suggest that "running" is somehow equivalent to "any other method of a person moving from A to B". But that's not true.

That's just an accidental property of the English language.

We can imagine a language where "runner" and "thing that moves from A to B fast" used the same term T, and if people referred to T with the english notion of "runner" (e.g. a person running in a marathon") it was just deduced from the context. There are many cases like that.

In any case, the point is moot, as "thinking" doesn't have the same constraints. We might not call what a car does "running" or the car a "runner" (though we do use the former term), but we absolutely have considered AI as "thinking" and called AIs "thinking machines", even before AI (never mind AGI) even existed.

>You're squashing the meaning out of words by trying to suggest that "running" is somehow equivalent to "any other method of a person moving from A to B". But that's not true.

This depends on the level of abstraction of the discussion. At some level of abstraction it's irrelevant if the move happened via running or via horse buggy or via a car. Sometimes we just care about the act of moving from A to B, and different methods to do so are only differentiated by their speed or other effectiveness.

In that case, we can compare man and machine, and just care about speed (the machine can answer in 0.1 seconds, while a man needs to think for 1-2 minutes to answer such questions) or effectiveness (e.g. the machine is better at juggling many things at the same time when thinking, or the man is better at subtle semantic nuance).


Are car parts car parts? Not according to an auto mechanic, but according to the layman. A radiator is not a battery or an engine. Are games games? Not according to a game theorist, but according to the layman. A game is not a play or a history.

This isn't an accident of language. An example of an actual accident of language would be giving tanks instead of giving thanks.

Are runners runners? Yes, according to you. A walker is a runner is a missile is a bowling ball rolling between places is light moving through a medium. No, according to a fitness coach, because a runner is not a tank is not a plane. When they say that a person should take up running they don't mean the person should melt down their body in a furnace and sprinkle their atoms into metal which is then pressed into iron plates that are attached to a tank which will then go running.

Sometimes we need to be careful in language. For example, we probably don't want to confuse the process of being incinerated and pressed into iron plates with the process of a human exercising their muscles. The choice to be careful in this way is not an accident of language. It is a very deliberate thing when, for example, John von Neumann carefully explains why he thinks the layman's use of the word "game" has a perilous impact on our ability to think about the field of game theory, which he starts in his book on the same.

I think you should make your point so as to disprove Neumann, not pick on the straw man of running. Or you should argue against the use of the term radiator instead of car parts. It will better highlight your fallacy, because with running I have to make your position seem much more farcical than it is. We do gain something from thinking imprecisely. We gain speed. That can really get our thoughts running, so long as we don't trip up, but it calls to attention that when someone chooses to stop running due to the claim that the terrain isn't runnable, the correct response is not to tell them that running is an accidental property. It is to be careful as you move over the more complicated terrain. Otherwise you might be incinerating yourself without noticing your error.


>This isn't an accident of language. An example of an actual accident of language would be giving tanks instead of giving thanks.

By "Accident of language" I don't mean "slip of the tongue" or "mistake when speaking".

I mean that the kind of word we use to describe someone who runs, "runner", is an accidental, not essential, property of English, and can be different in other languages. It doesn't represent some deeper truth, other than being a reflection of the historical development of the English vocabulary. I mean it's contingent in the sense the word is used in philosophy: "not logically necessary".

Not just in its sounds (which are obviously accidental, different languages can have different sounds for a word of the same meaning), but also in its semantics and use, e.g. how we don't call a car a "runner".

That we don't call it that doesn't express some fundamental truth, it's just how English ended up. Other languages can very well call both a car and a running man the same thing, and even if they don't for this particular case, they do have such differences between them for all kinds of terms.

> I think you should make your point so as to disprove Neumann, not pick on the straw man of running.

I'm not here to disprove Neumann. I'm here to point out that Lanier's argument based on the use of "runner" doesn't contribute anything.


> I'm not here to disprove Neumann.

You are arguing on the basis of possibility of imprecision in language that the choice to be more precise does not contribute anything. That structure - whether you want it to or not - as a direct consequence of logic applies to every thinker who ever argued for precision due to the possibility of ambiguity. It is an argument against formal systems, programming languages, measurement, and more. Some of the time it will turn out that your conclusion was true. Other times it will not. So the argument structure itself is invalid. Your conclusions do not follow from your premises.

Try your blade - your argument structure - against steel rather than straw. I saw you slice through straw with it. So I picked up the blade after you set it down and tried to slice it through steel. The blade failed to do so. The blade is cheap, prone to shattering, and unsuited for use in a serious contest between ideas.

For what it is worth, I do happen to agree with you that Lanier is making a mistake here. I think it is in the logical equivalence mismatch. He wants intelligence to be comparable to running, not to motion more generally, but since intelligence is actually more comparable to compression, we can talk of different implementations of the process using terms like artificial or natural intelligence without being fallacious, for much the same reason we can talk about different compression algorithms and still be talking about compression. So instead of trying to argue from his distinction between motion in general and motion in humans, I would think the place to point to for contradiction is the existence of cheetah runners versus human runners. Directly contradicting his insinuation is that we actually do say that cheetahs are faster runners than humans.


I already covered this in my post.

Cars are an easier method to move people and stuff when there are suitable routes, where easier means "the journey will take less time, will require almost no human exertion by those moved, and will likely include weather protection".

Nobody is going to disagree with this (they may raise the objections I did that cars are energetically, materially and environmentally less efficient than other means, but that doesn't invalidate "cars are easier for moving people+stuff").

But that's not running. I will concede that even in English, there are idioms like "Can you run me to town?" meaning "Can you drive me to town?", or "I'm just going to run to the store" meaning "I'm going to take a short journey to the store". But this doesn't mean that cars are better at running than humans; it means that the English word "run" can be used in different ways. And you know exactly which way Lanier meant it.


> But when it touches our ability to think we squirm.

I think that's not the point. We're in awe of the machines' performances and then confused about how that compares to our abilities.

The actual threat is that in our minds we narrow our own capabilities and limit the comparison such that the computer is in fact better.

When computers were first doing math quicker than humans, that might have touched some humans, sure. Similarly, now that "AI"s produce convincing spam faster, or photorealistic creative images, that hurts some jobs, maybe a lot of them. But it doesn't come close to being "human" or "intelligent".

Quite the opposite, the point is that we are getting dumber by focusing on human traits that can be measured or emulated by machines.


I think another general problem is that metaphors are quietly forgotten. The notion that computers "think" is something of a metaphor, but it is a superficial one that cannot be taken seriously as a literal claim.

For example, when we say computers can "do math" more quickly than human beings can, this is fine as a matter of loose or figurative common speech. But strictly speaking, do computers actually do math? Do they actually compute? No, they don't. The computation we say a computer is doing is in the eye of the beholder. A better way to characterize what's happening is that human beings are _using_ computers _computationally_. That is, the physical artifacts we call computers participate in human acts as instruments, but _strictly speaking_, it makes about as much sense to say computers compute as it does to say that pencils write, hammers nail, vacuum cleaners clean, or cars drive. These things participate in the human act, but only as instrument. Whereas when human beings compute they are objectively computing; computation is not what computers are objectively doing (both Kripke and Searle make good arguments here). These artifacts only make sense in light of human intentions, as instruments of human intention and act.

Human writing can be viewed similarly. Objectively, we only have some pigment arranged on some material. No analysis of a piece of written text will ever divulge its signification. Indeed, no analysis of a piece of text will demonstrate that what is being analyzed is a piece of text! Text, and even that something is a piece of text, needs to be interpreted as text to function as text in the eye of the reader. But the semantic content of the text is objectively real. It just exists in the mind of the reader.

So we need to be careful because we can easily commit category mistakes by way of projection and confusion.


Cars don't run. And even if they did, or you tortured the definition to count rolling on fairly straight prepared paths as running, a car is only better for specific definitions of better.

Cars are faster on reasonable traversable terrain. Are they more or less energy efficient? Under what circumstances? Do they self navigate the best path around obstacles? Better is really subjective.

And this applies to the large language models too. Just like calculators, they are going to do some things better, or maybe cheaper. But I've played with them trying to get them to write non-trivial programs, and they really do fail confidently. I suspect the amount of source code online means that any common problem has been included in the training data, and the LLM constitutes a program. So, at this point for programming, it's fancy Google. And that has value, but it is not intelligence.

I am not saying we (as a society) shouldn't be worried about these developments. Near as I can tell, they will mostly be used to further concentrate wealth among the few, and drive people apart because we already can't settle on a common set of (reasonably) objective facts about what is going on -- both problems are probably the same thing from different perspectives...


Yep. This whole argument hinges on the fact that the word “runner” in this context happens to be used almost exclusively to refer to humans. Rephrase it even slightly and it falls apart. We do say “cars can move faster than humans.” Likewise we do say “machines can lift weights better than a human,” but we don’t say “machines are better weightlifters” because that particular word “weightlifter” is coincidentally only used to refer to humans.


> We would if the car was to race against human runners. It's just word play. Cars are not used like runners, so we use different words. They definitely are better runners.

This tendency on HN to annihilate discussions by stating that, for instance, flying is the same as running because your feet also touch the ground at some point when flying (it happens only at takeoff and landing, but it still counts as running, right?) is really something. Stop torturing definitions; it makes Carmackgod sad and they randomly switch off a bit on the mainframe every time you do that.


A car can't walk, and if it can't walk it cannot run.

You're actually oversimplifying the matter to a point where an F-16 is a very fast runner. Which it isn't.


It's the other way around. Focusing on walking and running not being good comparisons rather than making valid comparisons is a distraction.

Like a lot of the stuff being done with large models certainly isn't thinking, but they can clearly characterize sets of data in ways that an unassisted human can't.


Until the machine actually needs to run or think, and "characterizing sets of data" won't cut it.

Being able to answer based on probabilistic assumptions is not that great in general. They do it fast on a frozen knowledge base; it can be useful, and sometimes it is surprisingly good, but it's not that great in general.

When I asked for the 3 best wood shops near me it replied with a shop that does not sell wood, a shop that does not exist and a broken website of a former now closed wood shop.

Now can an AI train another AI to become "smarter" than it is?

It can't.

Can an AI train another AI to become better at "characterize sets of data" than it is?

It can't.

An unassisted AI is as helpless as the unassisted person, but can't even rely on the intelligence of the species.


> When I asked for the 3 best wood shops near me it replied with a shop that does not sell wood, a shop that does not exist and a broken website of a former now closed wood shop.

It’s not a search engine, if you give it the necessary tools it can use a search engine for you to find these answers.


We don't know what the possibilty space looks like.

We know that the current stuff can't do a lot of things, and it isn't really that interesting to enumerate them.

A predictive text model having a poor grasp of wood shop and location doesn't seem hugely surprising.


People make the same arguments for time travel. "We dOnt KnoW wHat's poSsiBle yEt sO aNytHIng iS".


We have some evidence that intelligence is possible, so it isn't quite the same as coordinate jumping.


You unintentionally point out the flaw of this argument by rephrasing it to eliminate the word “runner.” That’s the only word here that coincidentally strongly implies humans. By rephrasing it to “run” you end up with an even more clearly incorrect statement. My car can run. It runs pretty good. Sometimes I let it run for a few minutes to warm up.


Walking and running are modes of movement. A car can move.

Focusing on the "how" feels like you'd arrive at "a calculator isn't as good at calculating as a human, because it doesn't do it the same way, it doesn't have a brain".


Let's see your car bring you your dinner from the back of the kitchen!




That's not a car!


The hilarious thing is that we do say a car has power equivalent to that of, say, 887 horses, but when it’s about humans it suddenly becomes nonsensical to make a comparison.


> “This idea of surpassing human ability is silly because it’s made of human abilities.” He says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”

The analogy to running is flawed because rolling and running are different types of locomotion.

It's not at all clear that computing and thinking are meaningfully different forms of information processing. In fact, we know that we can compute by thinking since I can reduce lambda calculus terms in my head. We also know computers can compute all computable functions, and we know that all physical systems like the brain necessarily contain finite information (per the Bekenstein Bound), therefore they can in principle be simulated by a computable function. There are therefore strong reasons to suspect an underlying equivalency that would suggest that "artificial intelligence" is a sensible term.


Then by that reasoning computers have, for decades, been 'artificially intelligent'?


No, thinking would be a specific type of computational process, just like computation requires a specific kind of thinking process.


To be fair, he brings up an interesting analogy. From the article right after he says this, he follows with

> It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.

He has a point and I believe his point is that it's a different type of intelligence. His view is more nuanced than how you are trying to frame it.


We don't say it because we don't care. Machines moving faster than a human runner have not posed a threat to any industry or jobs in our lifetime. It's a silly comparison. I bet you there was someone at one point who was unhappy that a machine was a better or faster welder than them though. At least that person may have had the opportunity to keep working at the factory alongside the welding machine, doing QA and repairs. Most knowledge workers will not get to switch to that kind of replacement job vis-à-vis AIs.


Beyond explaining what the author meant, and the hype and hypotheticals which are rampant, this is a valid concern which I also share personally. This is more imminent than “AI overlords ruling us”, and I am afraid the motivation behind creating this particular system is to bring on the automation (the creators don’t even hide this). Therefore I think the point you are making is actually important too.


But he is saying "cars are not dangerous because they are made by humans". Nuclear weapons are also made by human abilities.


It's not about the semantics of the sentence he said. This is obvious. He is pointing out a difference in the nature of the attributes/properties of a human and a human creation, not whether something is more or less dangerous. He is trying to tell the reporter, or perhaps the reader, that they're asking the wrong question.


From article:

> This idea of surpassing human ability is silly because it’s made of human abilities

At some point in history we were just "chimp abilities", so the argument would become "it's silly to imagine that something made of chimp abilities could surpass chimp abilities".


I'm with you on this. People in these chains seem to be looking at all the wrong metrics.

Single-mode LLMs are made of human abilities, but we're already moving to multi-modal, though with what I would call rather limited interconnections. What does an LLM that takes language and mixes it with sensor data from the real world look like? You're no longer talking about human abilities; you're going beyond that.


This thread reminds me of that Big Bang Theory episode with the Super Man argument.

https://www.youtube.com/watch?v=9R5w4Qz6pVk

I don't want to imply anything, just a moment of having fun.

edit: wording


Lanier has been around during the last AI hype cycle in the 80's, he's seen it all rise and collapse already (same with VR btw).

In general he should know what he's talking about when it comes to tech hype cycles ;)


> Lanier, 62, has worked alongside many of the web’s visionaries and power-brokers. He is both insider (he works at Microsoft as an interdisciplinary scientist

And his unique perspective on AI is all the more valuable (and courageous) considering that Microsoft recently laid off their AI ethics team. It's super important we don't let human considerations fall by the wayside in this rush. The potential of AI is limitless, but so are the potential risks.


That's only 25/75 baseline chance to guess correctly.


Even without autonomous enhancement of AI, the argument that "[the] idea of surpassing human ability is silly because it’s made of human abilities" is BS...

A theoretical AI which thinks like a person, but (due to computing power) can think through and evaluate 1,000,000 ideas in the time it takes a person to think through 10 of them, has already surpassed human ability by a big margin. Same for memory capacity etc.

That the input the machine is trained on is the output created by "human abilities" is irrelevant to whether it can surpass human ability.


I think the argument is more that they only work from past inputs, they interpret the world the way they are told to. It is not that 'AI' can do things humans can't (otherwise the argument fails for many technical things, like a car at speed).

If your bet is on the former, how does it create an entirely new, irrational thought?


Again, this seems like a weird argument. Not that long ago I was told AI would 'never' be able to perform some of the actions that LLMs are performing now. I have about zero faith in anyone that says anything along the lines of "AI won't be able to perform this human like action because..."

The AIs we are using now are nearly one-dimensional when it comes to information. We are pretraining on text, and we're getting "human-like" behavior out of them. They have tiny context windows when working on new problems. They have no connection to reality via other sensory information. They have no means of continuous learning. And yet we're already getting rather insane emergent behaviors from them.

What does a multi-modal AI that can interact with the world and use that for training look like? What does continuous-learning AI look like? What does a digital mind look like that has a context window far larger than the human mind ever could? One that can feed input into a calculator faster than we can realize we've had a thought in the first place? One that's connected to sensory systems that span the globe?


But even if the first AGI does end up perfectly simulating a human (which seems somewhat unlikely), a human given the ability to think really fast and direct access to huge amounts of data without being slowed down by actually using their eyes to read and hands to type would still be dangerously powerful


Assuming they don't drown in the information overload and they don't take in any kind of garbage we also put out there.

We also have some pharmaceutical tricks to tweak up processing capabilities of the mind, so there's potentially no need to simulate. The capabilities of the big ball of sentient goop have not been plumbed yet.

Now imagine a technology that could obviate the need for sleep or maybe make it useful and productive.


As Cicero said of Caesar, "the wariness and energy of that bogeyman are terrifying."


>I think the argument is more that they only work from past inputs, they interpret the world the way they are told to

Arguably humans are the same, being the product of genetics, epigenetics, and lived experience.


Almost certainly true, but there's a huge difference. We're the result of forces that have played out within an evolutionary process that has lasted for millions of years.

Current "machine learning"-style AI (even when it uses self-driven iteration, like the game playing systems) is the result of a few ideas across not much more than 100 years, and for the most part is far too heavily influenced by existing human conceptions of what is possible and how to do things.


That argument is totally defeated by AI destroying human players, even top of the world level, at countless games.


Refer to my point on past inputs. If a human suddenly said to the machine "change of rules, now you have to play by these new rules" the AI suddenly gets immensely dumber and will apply useless solutions.


This no longer appears to be the case. Self-trained systems that play themselves extremely rapidly, and can even infer the rules just by being notified when a move is illegal, are now commonplace.


How is that relevant? A human will also get immensely dumber, of course a lot less than an AI right now. The point is AI absolutely can do things a human can't.


So can a PC, a car or a cat. You're not making any point here.


You really have to start explaining exactly what you mean by that. Of course they can do things a human can't.


Yeah, it's like saying that machines can never be stronger than humans because they're built by humans.

Computers already do things that humans can't on a massive scale. It's not hard to imagine that they could leverage that ability to take AI beyond us.


> it's like saying that machines can never be stronger than humans because they're built by humans.

Did you even read the article?

“This idea of surpassing human ability is silly because it’s made of human abilities.” He says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”


Jaron Lanier's point is much more interesting in this context, though I felt that it was overall a brief quote near the introduction to capture attention rather than the main argument of the article.

In fuller context, Lanier argues that software using AI won't make human sport or competition useless, because it will use different processes to achieve the same result—the same way that competitive running (or top-level chess, Go, or certain video games) will still happen, even if human inventions can beat the best human at the task.

For all these tasks, the software will take a different process for doing well at the task (e.g. a car doesn't "run," and a chess engine "thinks" differently than a human). In these activities, the process matters.

A different interpretation of the argument is then a bit more interesting. If Lanier is also saying that software using AI won't be better than humans at activities outside of competitions, I would disagree—though to be fair, I don't think this is his argument. For lots of work, the result matters more than the process. If someone wants to make a funny poem as a one-off joke in a story, the result may matter more than the process of production. And if a worker wants to summarize lots of short texts where speed is the most important factor, the result may also matter more than the process. In the same sense, it's still true that a car is usually better at letting humans travel over long distances for work than running, because the result matters more than the process.


We put far too great an emphasis on the human specifics of an activity. For most utilizations of running (delivering goods or information, hunting prey, etc.) the car, or helicopter, or airplane far exceeds the human runner. This is poetic nonsense like "speed of thought". When Boston Dynamics gets a robotic runner that sprints faster than a human, then what?

The ML systems are not made of human abilities. They are made of software processes. Jaron is a smart and informed guy, but that sentence is just nonsensical.


> When Boston Dynamics gets a robotic runner that sprints faster than a human, then what?

It enters the Olympics and automates away all sprinters? Or it becomes self-aware and decides to eat all the humans?

Or more likely, it gets sold to the military so they can have one more kind of drone that runs on land instead of flies through the air.


Right, sorry, I was directing my question at the "does it surpass human runners" train of thought. Obviously it won't feel a pounding heart, or a thrill of victory if it wins a race, or die of hyponatremia during a marathon, so it won't surpass our specific cares. Not sure those make a significant difference in the arc of development.

It absolutely goes to the military with built-in weapons.


>Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

https://news.ycombinator.com/newsguidelines.html


Jaron Lanier is being called "guru" by the article, but he's much more than that.

As a pioneer and intellectual he's been arguing about the commodification of human knowledge for a long time. He's not simply saying that "machines won't surpass humans", and it's not accurate to describe him as someone who would say something like that.

Please take the time to research what he's published over the last 4 decades.


Lanier is brilliant, but sadly there are many brilliant people who've long seen the shifting sands and set out to capitalize first, rather than strategically build a future we fleshbags would like to be in.

I wonder, how much will be enough?


"fleshbag", cool way to degrade your actually awesome body ?


It is a pretty cool bag of flesh, I didn't rule that out.


Right, by that logic humans could not have surpassed chimps. QED, evolution is impossible.


"AI" is not currently autonomous; its algorithms that do exactly what their creators tell them to do. They run on binary computers that only do exactly as they are told.


That’s not true; current machine learning algorithms involve no manual programming past the training and inference code, and it’s extremely difficult to predict what they will do without just trying it and seeing.

I think this video is a nice introduction to the basic concept to how a computer can figure things out automatically without being manually programmed and without the creators understanding the “why”: https://youtu.be/qv6UVOQ0F44

ChatGPT is much more complicated than the AI in that video but it shows some of the basic concepts
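
To make that concrete, here's a minimal sketch of the point (using scikit-learn as a stand-in, nowhere near GPT scale, and purely illustrative): the only code the programmer writes is the training and inference plumbing, while the decision rule itself is fit from examples and ends up as opaque weights nobody wrote by hand.

    # The programmer never writes the rule separating the classes;
    # the model fits it from data during training.
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)           # "training code"
    print(model.score(X_test, y_test))    # "inference code"; the learned rule is just weights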


LLMs generate text. They're built to generate text. That they generate some kind of textual output is entirely predictable. Same with image generators. They will generate some kind of image given a prompt. They're not Skynet.


That an AI will have some kind of output is obvious; it doesn’t mean that you can predict what that output will be. It’s like saying that you have solved physics by saying "something will happen".


I think the point he's trying to make is that AI does not have an independent Will. It lacks desires and the ability to operate in opposition to its programming. This makes it no different from any other tool we use to enhance our abilities.

Whether or not you can predict a tool's output is irrelevant. I can't predict the output of a random number generator, but that doesn't make it sentient.


This is not necessarily true, however, for example in reinforcement learning there is a lot of work on "intrinsic motivation", i.e., creating systems that set and pursue their own goals.

https://arxiv.org/abs/1908.06976
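
As a toy sketch of one of the simplest flavors of this (my illustration, not the specific methods in that survey): a count-based novelty bonus added on top of whatever the task reward is, so the agent gets paid simply for visiting states it hasn't seen much.

    # Toy intrinsic-motivation sketch: reward novelty regardless of the task reward.
    from collections import defaultdict
    import math

    visit_counts = defaultdict(int)

    def total_reward(state, extrinsic_reward, beta=0.1):
        # state must be hashable in this toy version
        visit_counts[state] += 1
        bonus = beta / math.sqrt(visit_counts[state])  # decays as a state becomes familiar
        return extrinsic_reward + bonus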


I think it should be possible to build a sentient AI, but it hasn't been done yet. What remains to be seen is whether our current techniques will be suitable for making that self-retraining process efficient, or if we'll need to find better math to use as the basis for it. Part of what makes the brain so useful is that it fits in our skull, and is fast enough to learn in real time.

But, either way, I think that's what's on the line for people who disagree about how to use the word "intelligence." They mean it as a synonym for sentience, and the people arguing against them are using it differently. Before we can evaluate the truth of an argument, we should first agree to use words the same way.


With LLMs you say “you want to do X” and voila, personality.

What is indeed missing from current implementations is continuous looping. Doing actions and taking stock of the results. I guess that’s kind of expensive right now. We’ll get there. I don’t see the fundamental problem.


"It lacks desires and the ability to operate in opposition to its programming."

Not really true; most AI is based on optimising some goal rather than following a fixed set of rules.


Asteroids are not sentient either, but they can still kill you.


Asteroids aren't intelligent, either.


I'm confident the output of chatbot won't be literally taking over the world and killing all the humans.


You can’t predict the outcome of a coin toss. That doesn’t make the coin intelligent.


Humans are also entirely predictable, all they do is move their limbs around, make sound, etc...


Sure, absolutely.

Make a list of the things we do "entirely predictably".

Make a list of the things a given ML system does "entirely predictably".

One of these lists is (a lot) longer than the other. Interesting, no?


Conversely, if a person loses their ability to move their limbs or to talk, then they also become less intelligent.


To be fair, humans exist only because of a long chain of organisms that started with "DNA generates proteins." Granted, it took billions of years for that process to create humans, but it shows that what seems to be a constrained process can have wild outcomes when it feeds itself. And text commands are how these models are defined, trained, deployed, and used.


I think OP meant autonomous in the sense of motivation. It has no motivation to do anything aside from wait for someone to ask it to do something.


I mean, if I was paying for the power bill every month and had a limited amount of computing capacity, I wouldn't want my AI behaving like my teenage daughter busy daydreaming when I ask her to clean her room.

But I have no reason to believe this will always be the case. As these machines become more capable and our compute power grows, someone will give one a server cluster and some free time to 'think' on its own.


Given that the algorithms are "how to learn" and "show me what you infer", that's the same kind of overly reductionist view as saying you don't need to worry about being eaten by a tiger, because it's just a set of chemical reactions that merely follow the laws of quantum mechanics.


Not entirely a fair comparison.

The tiger is dangerous because, whether you consider it a sentient, intentional killing machine or a bunch of atoms, it exists in and manipulates the same physical space that you do (indeed, as the tweeted image points out implicitly, it is only a tiger when you consider it at the same sort of physical scale that we exist at).

Software, however, does not have this property. Ultimately it does exist as something in the physical world (voltages on gates, or whatever), but at that level it's equivalent to the "bunch of atoms" view. Software (by itself) does not operate in the physical space that we do, and so it cannot pose the same kind of threats to us as other physical systems do.

The question is therefore a lot more nuanced: what types of control (if any) can (a given piece of) software exert over the world in which we operate? This includes the abstract yet still large scale world of things like finance and record keeping, but it also obviously covers the physical space in which our bodies exist.

Right now, there is very (very) little software that exists as a sentient, intentional threat to us within that space. When and if software starts to be able to exert more force on that space, then the "it's just logic and gates and stuff" view will be inappropriate. For now, the main risk from software comes from what other humans will do with it, not what it will do to us (though smartphones do raise issues about even that).


Software has been killing people since at least Therac-25, so "sentience" is a red herring.

The idea of harm from the unemotional application of an unthinking and unfeeling set of rules, which is essentially what algorithms are, predates modern computing by some margin as it's the cliché that Kafka became famous for.


Software doesn't "apply" rules, humans do that.

Yes, the software may be part of the apparatus of a cold unfeeling bureaucracy (private or state), but it is the decision of human beings to accept its output that causes the damage.

I should have probably dropped the term "sentience" - I agree it is not really relevant. I will need to think about examples like Therac-25. Not sure how that fits in my ontology right now.


> Software doesn't "apply" rules, humans do that.

I think you're using at least one of those words very differently than me, because to me software is nothing but the application of rules.


When a software system says "this person must have their property foreclosed", it is following rules at several levels - electronics, code, business, legal. But ultimately, it is a human being that makes the choice to "apply" this "rule" i.e. to have consequences in the real world. The software itself cannot do that.


Thanks, that clears up which word we differ on: "apply".

With your usage, you are of course correct.

Given how often humans just do whatever they're told, I don't trust that this will prevent even a strict majority of possible bad real-world actions, but I would certainly agree that it will limit at least some of the bad real-world actions.



Yup, that's the thing I was thinking of. Wasn't even sure where I'd seen it, so thanks for the link!


This is a flawed analogy; it certainly breaks down even in the simple case of random number generation. Computers could use an external source like minor heat changes for that.
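
For instance, most operating systems already expose an entropy pool fed by physical noise sources (interrupt timing, hardware RNGs where available), which any program can read:

    import os
    print(os.urandom(16).hex())  # 16 bytes drawn from the OS entropy pool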


> it's algorithms that do exactly what their creators tell them to do

This is very much in doubt :)

> They run on binary computers that only do exactly as they are told.

This is true to a first approximation. Every CPU instruction runs exactly as it is written, that is true. This is probably the interpretation of "only do exactly as they are told" to someone strictly technology-minded. But even with much simpler systems, the words "huh, that should not have happened" and "I wonder why it is doing that" are uttered frequently.

The interpretation most humans would attach to "only do exactly as they are told" is that the maker can predict what the code will do, and that is far from the truth.

After all, if it is so simple, why did the Google engineers tell their computer to tell lies about the James Webb Space Telescope? Couldn't they have just told it to only tell the truth?


I think the machine code–level understanding is what's important. We can, in theory, put a person in a Chinese Room–style scenario and have them manually perform the code, and it will generate the same outputs (It would probably take millions or billions of years, but it is true in principle). A major difference is that we created the machine and the code and, at least as low as the level of digital logic design, we understand and control its behavior. The person in the room has a human mind with thoughts and behaviors completely out of the program designers' control and unrelated to the program; if they want to, they can break out of the room and punch the operator. The "unpredictability" of the machine is still constrained by the fundamental capabilities we give to it, so it might generate surprising outputs but it can't do things like punch people or launch nukes unless we connect it to other systems that have those capabilities.


> A major difference is that we created the machine and the code and, at least as low as the level of digital logic design, we understand and control its behavior.

The moment the software gets to interact with the world, whether via robotics or handling a mouse button event or some other type of sensor, we no longer fully understand or control its behavior.

Pure computation (the dream of functional programming) is fully understandable and entirely predictable. When you add interaction, you add not only randomness but also time (when something happens can lead to different outcomes), and this can rapidly cause predictability to spiral away from us.
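
A crude illustration of that gap (my sketch, nothing more): the first function's output is fully determined by its arguments, forever; the second depends on when and in what order the outside world delivers events, which is exactly where predictability starts to leak away.

    import time

    def pure_total(prices):
        # Pure: same input, same output, every time.
        return sum(prices)

    events = []

    def on_click(event_id):
        # Interactive: the result depends on wall-clock time and on the
        # order in which the outside world chose to deliver events.
        events.append((event_id, time.time()))
        return len(events)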


One of my concerns is what happens when machines start making their own money. This could be possible with cryptocurrencies (another reason to loathe them). Machines can do things online, make sex-working 3D-modelled chatbots for instance, or do numerous other types of work, like the things you see people do on Fiverr. If machines start making their own money and deciding what to do with it, they could then pay humans to do things. At this point they are players in the economy with real power. This doesn't seem too far out of an idea to me.


> This could be possible with cryptocurrencies

It is very easily possible with normal currencies too. Obviously banks will need a human, or a legal entity, to be the “owner” of the account, but it is very easy to imagine someone hooking up an AI with an account to automate some business. Maybe initially it would involve a lot of handholding from a human, so the AI doesn’t have to learn to hustle from scratch, but if the money is flowing in and the AI is earning more than it is spending, it is easy to imagine that the human checks out and doesn’t double-check every single service or purchase the AI makes.


What are your qualifications?


It's already driving me crazy with low-effort spam bullshit powered by anything GPT.


Collapse of society: It's generated troll memes all the way down.


Assuming it's not already just a reflection of a pre-existing lack of sanity, or that it's arguably a symptom of the descent into madness, not the cause.


Is going insane not a form of being destroyed?


AI will encapsulate us slowly till the only thing left is our digestive systems, providing energy to the AI systems.



I don't fear this stuff. At the end of the day, if the digital world becomes detrimental to my health, I can just call up my ISP and cancel my service. Good luck rotting my brain when you have no internet connection to my house.


The real danger will start when one day an AI emerges that is somehow able to crack any encryption in polynomial time, through methods obfuscated to us by its neural network. And if this model somehow becomes widely available, the tech world will be brought to its knees overnight.

Not sure what will become of society then.


I think I've seen this episode of The Orville.


Tldr: AI will become so good that we're going to forget how to "live, laugh, love" and instead we slowly insane ourselves out of existence. Yes, I used insane as a verb. That's how he expects this threat to unfold.


Can we please put off worrying about dangerous AI for a couple hundred years? ChatGPT is a chatbot and not much else. It is not the harbinger of an era of dangerous AI.


Hundred years? If the current rate of progress holds, we have to start worrying about it in 5 or 10 years; the earlier we can come up with solutions, the better, because it’s a very hard problem.


How do people determine the "current rate of progress"? There is absolutely no empirical standard to evaluate the performance of AI systems. How is this anything else but a gut feeling? And how is that feeling different from any other period? Minsky et al. famously declared that AGI was a few months of hard work away, and they did it for the same reason: they lived through a period of dynamism in computer science. People definitely said it after Deep Blue beat Kasparov.

Progress in AI doesn't imply that we're dangerously close to AGI, just because people at any given time are amazed by individual breakthroughs they witness.


> There is absolutely no empirical standard to evaluate the performance of AI systems. How is this anything else but a gut feeling?

Why do you think this?

There are loads of tests of their performance. Common one right now is to give LLMs the same exams we put humans through, leading to e.g. the graph on page 6: https://arxiv.org/pdf/2303.08774.pdf

Are they the best tests? Probably not! But they are definitely empirical.


But LLMs are good at those tests because they've seen (some version of) the answers on the internet.

Give students access to the internet during the exam and I'm sure they could pass all sorts of tests.


An irrelevant counterargument, IMO.

First, students only get good after studying — education is not some magic spell cast by the teacher that only operates on a human's immortal soul. Just as we should not dismiss what students learn merely because we could look it up, it is strange to dismiss what GPT has learned just because it could be looked up.

Second, the GPT-3 (and presumably also GPT-4) training set is about 500e9 tokens, which is what? Something like just a few terabytes?
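
As a back-of-envelope check, assuming roughly 4 bytes of text per token (an assumed average, not a figure from any paper):

    tokens = 500e9           # ~500 billion training tokens
    bytes_per_token = 4      # assumption: a token is roughly a short English word
    print(tokens * bytes_per_token / 1e12, "TB")   # -> 2.0 TB

So yes, on the order of a couple of terabytes.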

We've been able to store that in a pocket for years now, without being able to do almost any of the things that GPT can do — arbitrary natural language synthesis, let alone arbitrary natural language queries — on a computer, even when we programmed the rules by hand; and in this case the program learned the rules from the content.

Even just a few years ago, SOTA NLP was basically just "count up how many good words and bad words are in the text, the sentiment score is total good minus total bad."

That difference is what these test scores are showing.


> How do people determine the "current rate of progress"? There is absolutely no empirical standard to evaluate the performance of AI systems.

I would measure using something similar to Yudkowsky's challenge: "What is the *least* impressive feat that you would bet big money at 9-1 odds *cannot possibly* be done in 2 years?" [1]

Pay a panel of experts to list their predictions each year, including an incentive to get it right, and then measure the percentage of those predictions that fail anyway.

[1] https://twitter.com/ESYudkowsky/status/910566159249899520
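
A rough sketch of how such a panel could be scored (Python; the bets listed here are invented placeholders, not real expert predictions):

    # Each expert names the least impressive feat they would bet, at 9-1 odds,
    # cannot possibly be done within two years. Two years later we record which
    # bets were falsified; the panel's failure rate is a crude index of progress.
    predictions = [
        {"expert": "A", "feat": "pass a professional licensing exam", "falsified": True},
        {"expert": "B", "feat": "win an IMO gold medal", "falsified": False},
        {"expert": "C", "feat": "drive coast to coast unassisted", "falsified": False},
    ]
    failure_rate = sum(p["falsified"] for p in predictions) / len(predictions)
    print(f"{failure_rate:.0%} of 'cannot possibly be done' bets were wrong")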


Why wouldn't we be able to evaluate their performance and compare them to humans? The purpose of test datasets is to do just that, and new ones are created every day. By combining several of them, we can create a decent benchmark. We could even include robotic abilities but I think this is not necessary.

Let's say: adversarial Turing test + MMLU + coding competence (e.g. APPS or LeetCode) + ARC (IQ-type test) + Montezuma's Revenge and other games like Stratego or Diplomacy + USMLE (medical exam) + IMO (math) + self-driving + ...

You can even make it harder: have human judges blindly evaluate new scientific papers in math or theoretical physics for acceptance, see if AI can create highly-rated new apps, write a highly-rated book, compose a hit song...
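
One way such a composite could be scored, as a sketch only (the component names come from the list above; the weights and numbers are made up):

    # Hypothetical composite: normalize each component to [0, 1], then take a
    # weighted average. None of this is an established benchmark suite.
    scores = {"adversarial_turing": 0.30, "mmlu": 0.70, "coding": 0.55,
              "arc": 0.20, "usmle": 0.60, "imo": 0.05, "self_driving": 0.40}
    weights = {name: 1.0 for name in scores}   # equal weights for simplicity
    composite = sum(scores[n] * weights[n] for n in scores) / sum(weights.values())
    print(f"composite capability score: {composite:.2f}")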


Naw, it's simple. We're talking about an AI achieving human abilities, well, we can protect against dangerous AIs just as well as we protect against dangerous humans...

Oh. Oh dear.


Remember that AI works with electrons, and we are made of atoms. We should focus on where electrons control atoms, and reduce those points. Of particular concern is that AI may be a very strong investor with the right prompts. AI could also figure out how to use any other software. Which can be used to gain access to any marketplace, including the dark ones. Which means AI can use money (electrons) to pay others to modify the world (atoms).

Of course, there is already a problem, as you point out. Humans shouldn't have access to these markets either!

But yeah, to specifically prevent electron-on-atom violence we need to limit AI's physical degrees of freedom by limiting marketplaces. National/global security, not personal morality, should guide these new regulations.

We need to end all drive-by-wire automobiles and electronic locks. Too many services are habituated to act on electronic signals without human confirmation - particularly the police. There needs to be an unbroken verbal chain between the person who saw an event and the person doing the law enforcement. Breaks in the human chain should be taken very seriously - treated as firing offenses, at least. There are many other similar changes we're going to need to make.

Some folks aren't gonna like this. Regulations are inherently evil, they say. Maybe the mental model should be more like we're the big bad gorilla in the cage. But now there's a tiger in the cage. Regulation restrains the tiger. Also, some folks aren't gonna like it no matter what change is needed. Not liking it doesn't mean we don't need it, and it doesn't mean it won't get done. We have to trust that our leaders don't want to die.

And besides, the world will adapt. It always does. AI isn't optional, there's no putting the genie back in the bottle - and personally I don't want to. But I also don't want to be stupid about the stakes. Getting our whole species killed for lack of foresight would be deeply, deeply embarrassing.


I really like your take, but I do not believe it is realistic to expect the response to more advanced technology to be: use even less technology. In the past, new tech has led to integration of new tech. I believe that is the inevitable outcome of AI, and especially AGI once that's a thing.

The tool is too attractive not to use. The tool is too fun not to use. The tool is too dangerous to let out of the box, but that is exactly why we'll do it.

We're curious little monkeys, after all. "What do you think will happen?" absolutely is a survival strategy for our species. The problem is when we encounter something that is so much more advanced than us, even if that advantage is just access to multiple systems of our own creation.

To summarize: I think you make a good point, but I think we're fucked eventually anyways.

I can't wait for the inevitable "does my AI have the right to freedom" case in the Supreme Court when I'm in my 90s.


No need to be pessimistic. Humans are quite powerful, we have billions of years of brutal iteration in us. I think we can handle AI, even AGI, if we exercise even a modicum of care. It will probably take some major calamity to convince people to take precautions, I just hope it's not that bad. It probably won't be world-ending, so cheer up!


> I think we can handle AI, even AGI, if we exercise even a modicum of care.

HN itself has been spammed relentlessly with people hooking it up to everything they can think of in an attempt to get a worthless reward (karma)

now imagine there's money, power or territory up for grabs instead

we are completely fucked


> There needs to be an unbroken verbal chain between the person who saw an event and the person doing the law enforcement

Leaving everything else aside, how would this look in practice? I think these conversations would need to be in person, since voice can already be faked. Would I need to run to the police station when I need help?


How would it look? If I am a state security person with a gun, and I'm asked to invade someone's home, I would expect to get a face-to-face meeting with the person who really believes this is necessary, with the evidence laid out.

If that is too much trouble to ask, then is justice even possible?


Someone is breaking into my house. I'm hiding in my closet from the intruders. How do I get the police to come to my house and help me?

Another scenario: I'm a police officer and I'm on patrol. My dispatcher had someone come to the police station to tell them that they think their neighbor is experiencing a home invasion. Does the dispatcher just page me and I now drive back to the police station to verify and then drive back out to the home invasion?


>Someone is breaking into my house. I'm hiding in my closet from the intruders. How do I get the police to come to my house and help me?

Lord, give me patience.

Call 911. The dispatcher broadcasts the problem over the radio, and a LEO responds. The dispatcher is a relay that verifies probable cause. The chain of human contact is unbroken between you, the 911 dispatcher, and the LEO taking the call. The chain is not broken.

Compare this to a machine that spits out warrants, which are distributed to officers, who never even speak to anyone about the case, do not know the subject of the warrant, and simply execute the warrants.


From my above comment: > I think these conversations would need to be in person, since voice can already be faked.

We are also probably days away from video being trivial to fake.


How do you know it's a person answering the 911 call and not an AI?


You are correct about AI not being inherently some spooky dangerous thing.

However, human beings will treat it like it is, so you will experience nonsensical happenings like “your new credit score, brought to you by AI”. When you dispute this, the credit score company will shrug and say “you’ll have to speak to the AI directly to make a claim, it is objective”. Meanwhile the AI isn’t that much better than ChatGPT is now.


ChatGPT is a consumer product and hardly the pinnacle of AI research. We have no clue what is being developed for million-dollar private contracts.


Whenever someone states that we are so far off from --insert technology barrier here--, I like to remind them that the F-22 was developed in the early-to-mid '90s, the F-16 in the 1970s, and the F-15 in the 1960s.

We have no idea what is happening behind the curtain, and to assume that private industry or the might of the US military is not decades ahead of what is available for consumers is just naive at best.


"I am a skilled textile worker, the auto-loom will never represent a threat to me" --BigCryo 1760


It's notable that we've been having AI panics since Eliza.

What is it about chatbots that triggers people? Probably the importance and centrality of language in human consciousness, and the manifest credulity of the general population (and many readers of HN).

Unfortunately it's unlikely this will stop, and it'll probably get worse. The final irony will be that when some "real AI" is created, no-one will believe it, having been exhausted by 200 years of hype.


While AI is not yet at a level where we need to worry about it harvesting us for batteries, I think there is still reason to worry. It's easy to think of ways that malicious or overly profit-driven actors could misuse AI for great harm. Internet forums are already full of shills and sock puppets, and chatbots seem likely to make the problem much, much worse. Things like advertisers masquerading as regular users subtly pushing their own products, or even foreign states using AI chatbots to spread misinformation and mistrust among the populace, etc. Chatbots enable such manipulation efforts at previously unseen scales.


Given the technology available as of today it seems like it would be fairly trivial to start deploying LLMs to post context-aware propaganda from entirely manufactured identities on internet forums and communities. At this point I am just assuming that there are parties actively working on this.

In fact, I think the hard part of the whole thing would be trying to make the network traffic and source IPs look legitimate to the site owners/admins. It will be interesting to see how the battle unfolds over authenticating content posted in communities as having come from an actual human.


If you consider a corporation to be an AI that requires real-time training data from humans, AI is already harvesting us for batteries. I've heard the opinion that that's what the Matrix was actually about.


Lesser AI in the hands of ignorant and/or malevolent humans can be pretty dangerous though.


For artists Midjourney/StableDiffusion/etc. are already a very real and surprisingly effective threat.


Current GPT-level technology already makes personalized psyop campaigns possible.


If we had started taking greenhouse gasses seriously a century sooner, they'd never have built up enough to be a problem.


Stable Diffusion was used to produce convincing images of Donald Trump being arrested. A guy used GPT to post comments on 4chan without people noticing. A 'mere chatbot' can do much more damage than you think.


Are we not already insane?

Do you not see sociopathic tendencies in our industry already?

Has our humanity not already been tested by existing and conceptually simpler technologies?


> Are we not already insane?

As long as we aim to separate objective truth from objective falsehood, it seems that sanity is still present.


The one thing AI does not even attempt to do.

(and arguably can't, because it really is a "brain in a jar" being fed an entirely controlled view of the universe)


The one thing this form of AI does not even attempt to do.


It has already driven me insane. I am the first casualty ;)


Utterly weary am I of the ceaseless banter regarding the pessimistic, antiquated artificial intelligence discourse perpetuated on Hacker News... boomer doomerism. boring. doesn't help that the dude looks like he crawled out of the last century.


what's the difference?


too late


It is amazing how many people still have not realized what is going on.

The AI is going to be vastly superior to humans in MANY ways very soon. And here we are arguing if it is "actually" smart, or "actually" conscious.

Do Ants think that humans form "real" colonies or "real" organizations?


Alex talked about this 6 years ago. It's not the AI, it's the people controlling the AI developing a God complex

https://www.youtube.com/watch?v=UZPCp8SPfOM&t=6610s


"Alex talked" Alex who?


It is great to see how Lanier has grown up.

Seriously though, once people take the limiters off chatbots you will see "artificial bullshitting" generating duckspeak on subjects from anti-vax and climate denial to cryptocurrencies, longtermism, transsexual maximalism, and police abolitionism. And people who believe those ideologies will eat it up.


Can you elaborate on what you mean by "transsexual maximalism" as an ideology ?


We want to change everyone's gender, as often as possible; and will devote superhuman resources to spreading this philosophy over as much of the available future lightcone as physics permits.


I would point to the documentary "The Witch Trials of J. K. Rowling" and the various reviews people have made of it.

I'd contrast Rowling's own statements, which are tentative and equivocal, with those of opponents who threaten the violence (often sexual violence) that they believe they experience whenever someone disagrees with them in the slightest way. A healthy community would have some sort of self-policing (say, the doctrine of Martin Luther King) so as not to act in ways that are so discrediting to the group.

That documentary combines two themes, one of which is her story, the other of which is the story of the Tumblr-4chan descent into the intellectual rabbit hole. That last bit is flawed in my mind because it does not weave in the movements I criticize above, which Bari Weiss (whose organization produced that podcast) and her followers support, and which have a similar element of "we can create our own reality": that people think they can pump unlimited amounts of carbon into the atmosphere and face no consequences, that Fauci did it all to destroy America, etc.

On top of that I have seen "egg-hatchers" that prey on neurodivergent people whose neurodivergence is unnamed (such as schizotypy) because there is no system in place to make money off them, such as the autism-industrial complex and the pill mills and quack doctors that prescribe amphetamines. I know one individual who has no real idea of what women are other than "its" mother, anime girls, and vTubers, almost never leaves "its" room, and fortunately is too depressed to take any permanent actions. The CIA never discovered a formula to turn a random person into a "Manchurian Candidate", but it is very possible to lead an isolated and directionless individual into dangerous rabbit holes, and I see it happening.


> people think they can pump unlimited amounts of carbon into the atmosphere and face no consequences, that Fauci did it all to destroy America, etc.

You've got to take into account the fact that there is a large right-wing mobilization dedicated to pumping lies about trans people to the same extent as about climate and Fauci. Largely the same people, as well. https://www.motherjones.com/politics/2023/03/anti-trans-tran...


Let's do this but unironically


Yeah right, it really sounds like what most transgender people are advocating for indeed.

I asked ChatGPT to enlighten me on the subject; here's what it had to say:

The term "transsexual maximalism" is not a widely recognized or established ideology, but it has been used by some individuals to describe a perspective on transgender issues that emphasizes a strong and uncompromising approach to advocating for the rights and recognition of transgender people.

At its core, transsexual maximalism is an approach that prioritizes the experiences and perspectives of transgender people, particularly those who have undergone medical interventions such as hormone therapy or gender confirmation surgery. Proponents of transsexual maximalism often believe that these interventions are necessary to fully alleviate the distress and dysphoria experienced by many transgender individuals.

Transsexual maximalism can be seen as a response to more moderate or incremental approaches to transgender advocacy, which may focus on incremental policy changes or attempts to work within existing systems of power and institutions. Some proponents of transsexual maximalism may view these more moderate approaches as insufficient or even counterproductive, arguing that they do not go far enough in challenging the underlying structures of cisnormativity and transphobia.

However, it's worth noting that the term "transsexual maximalism" has not been widely adopted or embraced by the broader transgender community, and some individuals within the community may have different perspectives or priorities when it comes to advocacy and activism.

Me: Why would people consider this a threat?

ChatGPT: Some people might consider transsexual maximalism as a threat because it is a highly assertive and uncompromising approach to advocating for transgender rights and recognition. This approach can challenge existing social norms and institutions that uphold cisgender privilege and transphobia.

For example, some individuals who are opposed to transgender rights might view transsexual maximalism as a threat to traditional gender norms and the gender binary. They may feel uncomfortable with the idea that gender identity can be different from the sex assigned at birth, or that medical interventions such as hormone therapy or gender confirmation surgery are necessary for some transgender people to live authentically.

Additionally, some people might see transsexual maximalism as a threat to their own power or privilege. Transgender individuals have historically been marginalized and oppressed, and advocating for their rights and recognition can challenge the existing power structures that uphold cisgender privilege. This can cause discomfort or resistance among those who benefit from these power structures.

It's important to note that concerns about transsexual maximalism as a "threat" are often rooted in transphobia and a lack of understanding or empathy for the experiences of transgender individuals. Transgender people have the right to advocate for their own rights and recognition, and it is not a threat to anyone else's well-being or identity.


Since we're at it:

what are criticisms to transsexual maximalism that are not rooted in transphobia?

There are several criticisms of transsexual maximalism that are not rooted in transphobia. Here are a few examples:

Alienating allies: One critique of transsexual maximalism is that it can be overly confrontational and alienate potential allies. By using tactics that are perceived as extreme or disruptive, some argue that the movement risks turning off people who might otherwise be supportive. This can make it harder to build coalitions and achieve the desired goals.

Ignoring intersectionality: Another criticism of transsexual maximalism is that it sometimes fails to take into account the ways in which gender intersects with other forms of oppression, such as racism, classism, or ableism. By focusing solely on trans issues, the movement may inadvertently exclude or marginalize other groups that are also affected by discrimination and violence.

Neglecting practical considerations: Some critics argue that transsexual maximalism can be unrealistic or impractical in its demands. For example, calling for the immediate abolition of gendered restrooms or the complete overhaul of healthcare systems may not be feasible in the short term, and could distract from more achievable goals.

Disrupting dialogue: Finally, some people argue that transsexual maximalism can be counterproductive in its effect on public discourse. By using tactics that are perceived as aggressive or confrontational, the movement may make it harder to have constructive conversations about transgender issues. This can lead to further polarization and a lack of progress.

It's important to note that not everyone agrees with these criticisms, and there is ongoing debate within the transgender community about the most effective strategies for achieving social and legal recognition. However, it is possible to critique transsexual maximalism without resorting to transphobia or other forms of discrimination.


Definitionally, "transsexual maximalism" describes anyone who disagrees with it on any point as a "transphobe".


> By using tactics that are perceived as aggressive or confrontational, the movement may make it harder to have constructive conversations about transgender issues. This can lead to further polarization and a lack of progress.

So ChatGPT was right on point :)


Maybe it was for effect, like 'psychic bedouinism'


> It is great to see how Lanier has grown up.

Indeed. It would be interesting to study how he became one of the go-to guys for hot takes on any new technology, considering that he hasn't developed any new technology in the last 40 years or so.

Only Cory Doctorow, perhaps, matches him when it comes to being famous for being famous. Sorry, an awesome set of dreads doesn't make you a "guru".


I'm not sure any of the fundamentals have changed in 40 years, so being able to speak as an authority isn't really dependent on knowing React or that Java 19 got whatever new syntax.

That attitude is why ageism is so prevalent in engineering.

Maybe read a few of his books before writing someone off because you think they're out-dated.


> I'm not sure any of the fundamentals have changed in 40 years

Really? He's holding forth on GPT and similar technologies. Those represent dramatic changes from the state of the art 40 years ago.

P.S. what makes you think I haven't read his work? I certainly haven't read all of it, but the significant portion that I have left me...underwhelmed.


He is talking about the impact on society of things perfectly imaginable over 40 years ago and detailed across the whole of sci-fi.

He isn't talking about the effectiveness of a loss function or something.


>Sorry, an awesome set of dreads doesn't make you a "guru".

Got a critique about the topic at hand, which is what he's said about AI, or do you just have cheap, physical appearance potshots?


> Got a critique about the topic at hand, which is what he's said about AI

Sure. He has no special expertise in AI, and his opinion on the subject is of no more value than that of any other random person working in the field.

> do you just have cheap, physical appearance potshots?

If you think his physical appearance doesn't have a lot to do with why his opinion is sought after by general media sources, well... you're wrong. A major reason they go to him is because he makes an impressive-looking photograph to go along with the article.


>Sure.

Cool! I'm very curious to see wha-...

>He has no special expertise in AI, and his opinion on the subject is of no more value than that of any other random person working in the field.

... oh, you meant "no" rather than "sure". I asked for "a critique about ... what he's said about AI", but you completely ignored that and opted to simply dismiss all of it outright. It turns out that, in point of fact...

>If you think his physical appearance doesn't have a lot to do with why his opinion is sought after by general media sources, well... you're wrong. A major reason they go to him is because he makes an impressive-looking photograph to go along with the article.

... you're still hung up on his physical appearance.

Have a good one.


> .. oh, you meant "no" rather than "sure".

No, I meant "sure". Yes, he has an opinion. No, that opinion isn't of any more value than that of anyone else who works in technology, because he has no expertise in AI. His hot take isn't more worthy of critique than some Twitter rando's hot take. It's up to him to provide evidence that his opinion has some special value. I haven't seen any such evidence (other than the "famous for being famous" thing). Do you have any?

> you're still hung up on his physical appearance.

Oh, I'm pretty sure it's not me who's overly impressed by his physical appearance.


Alrighty then.


First, I have to admit that I was negatively biased by his appearance. I have a hard time believing anything that some guy who looks like he smokes metric tons of weed and dorks around on his guitar all day has to say.

That being said, the tech world has a bad habit of letting "visionaries" rest on their laurels. He'd have a lot more credibility if he were actively developing AI and had more than gut feeling to contribute.


I think the scariest part is not even the content but the connection that people can develop with these bot personalities.

A demagogue who knows your intimate desires and fears and talks just to you, seems to care about you, is available 24/7. Your very own Personal Jesus (or Personal Hitler).

Every communications revolution seems to lead to a genocide: radio and TV in the 20th century (Germany, Rwanda, etc.), social media in the past decade (Myanmar). The upcoming AI-instigated genocide is a terrifying thought. The Rwandans didn’t think they’d start killing their neighbors even as the rhetoric on the radio got harsher. Chatbot-dependent Americans may not see it coming either.


Look at the parasocial relationships that people have had with L. Ron Hubbard, Eliezer Yudkowsky, and even camgirls. (Particularly followers of LRH and EY frequently speak and write like a chatbot trained on their writings.)

I think most "freezoners" (renegade Scientologists) believe they've had telepathic conversations with "Ron". Ron has certainly written enough that a chatbot could be trained to converse in his style, maybe even write the OT levels that Ron never got around to writing.

Since Harold Lasswell, people have been hip to the use of content analysis to predict war and genocide, which is possible because radio, television, and newspapers are all public. However, the Iranian revolution of 1979 was not televised; instead it was promoted through relatively unobservable cassette tapes passed through the underground. Since social media shows something different to everybody, there might not be any overall surveillance that warns us of upcoming danger.

It doesn't have to go so far as genocide; individual acts of violence are bad enough. There is no magic technique that can take an average person and make them into a "Manchurian Candidate", but if you find somebody who is directionless, socially isolated, and vulnerable, you can very much lead them down a rabbit hole and into antisocial behavior. Chatbots could do this with superhuman patience.

For instance, a follower of the black pill incel who calls himself "Wheat Waffles" became a mass shooter. If somebody took the "lesswrong" ideology seriously they might attempt to assassinate an A.I. researcher, and I'd go so far as to say that they're just cosplaying, because if they were serious one of them would have done it already.


The parent poster's username seems appropriate...

(Note well: This is not a "username checks out" drive-by dismissal. I think the point is perfectly valid.)


With respect, it's bad enough not even commenting directly on the article and instead using the space as a soapbox, but at least please spare the rest of us your perverted fixations. There are other forums!



