
Anyone questioning the author's intention should read one of his books, "Who Owns the Future?"

It was written some time ago, and I think Sam Altman read it as a handbook on power concentration using AI rather than the human-centric approach it was laying out.

Personally I wish Lanier wasn't as right about many things as he is, because I lose a little faith in humanity each time.




I have nothing but respect for the chap.

I never wanted to respect him, as I always thought he was one of those "too good to be true" people, and was mostly a paper tiger.

It turns out that he's the real deal, and has been right about a lot of stuff.


There are lots of parallels between Jaron Lanier and Richard Stallman. Cory Doctorow is another one I would put in that list, as well as SF writer Charles Stross.

They are all pretty good at looking ahead.


Such as? I have my skepticism too.


I’m not particularly interested in going into a back-and-forth on this.

He’s sort of like Edward Tufte; lots of ego, but earned, and not for everyone.

I like your job title. Always up for more "human" in our design.


I actually agree with his perspective. AI is simply another huge leap in technology that directly affects social order. We only need to look at the effects social media has had on society and amplify them to see the likely outcomes.

This aligns very closely with my own thoughts, which I have written about in great detail. I foresee the societal impacts to be exceedingly disturbing long before we ever reach the concept of a Singularity.

https://dakara.substack.com/p/ai-and-the-end-to-all-things


Regulation of social media is still woefully behind even in cases where we do know there has been a hugely negative impact (Myanmar & Facebook, for example). And there are approximately 5 people who exert massive, unregulated power over the shaping of planetary discourse (social media CEOs). If social media is too big to regulate, AI regulation doesn't have a chance in hell.


Yes, additionally I find it somewhat ironic that AI researchers talk a lot about "power seeking" behavior of AI as a primary concern.

However, what seems overlooked is that AI is itself power, and we should expect that "power seeking" humans will inevitably become its custodians.


This a thousand million times.

The mislabeling of LLMs and diffusion models as "artificial intelligence" is probably the biggest marketing blunder in the history of technological progress, one that could ironically affect the course of AI alignment itself.

Smart thinkers and policymakers are going to waste their time framing the problems the tech poses in terms of "an uncontrollable intelligence out to get us" like it's some kind of sentient overlord completely separate from humanity. But super-advanced technology that can operate in a closed loop (which could be called AGI depending on who's asked) isn't necessary for humanity to crater itself. What's required for such tech to come into existence in the first place? Humans. Who's going to be using it the whole time? Humans.

And I think there's still a lot of disruptive, world-changing tech to be discovered before AGI is even a remote possibility. In reality this tech is probably going to be more like a superpowered exoskeleton for CEOs, politicians and the like to sway public discourse in their favor.

"An uncontrollable intelligence" already describes the source of a lot of our current problems... that is, ourselves.


"An uncontrollable intelligence" already describes the source of a lot of our current problems... that is, ourselves.

Yes, precisely. One of the best quotes I've seen is "Demonstrably unfriendly natural intelligence seeks to create provably friendly artificial intelligence."

The whole ASI alignment theory is a paradox. What the AI researchers don't realize is that they are simply building an uncomfortable mirror of human behavior.


The meaning of "artificial intelligence" has always just been programs that can get results that previously only humans could do, until the moment programs can do it. For decades AI researchers worked on chess programs even though the best chess programs until 20 or so years ago couldn't even beat a skilled amateur. Now of course they can beat grandmasters. And so we decided chess wasn't "really AI". LLMs would have been mindblowing examples of AI even a decade ago. But because we now have them we can dismiss them as "not AI" like we did with chess programs. It's a never ending cycle.


Microsoft put out a 150-page paper yesterday on why GPT-4 is proto-AGI. LLMs are AI; now we're just closing the G gap.


Microsoft is hardly an unbiased evaluator of anything built by OpenAI.

And "closing the G gap" is like climbing to the top of a 10-foot ladder and saying "all that's left is to close the gap between here and the moon." AGI is much, much harder than a large language model. But then radically underestimating what it takes to get to AGI has been going on since the 1950s, so you're in good company.


Link, please?


"Sparks of Artificial General Intelligence: Early experiments with GPT-4"

https://arxiv.org/abs/2303.12712


> And I think there's still a lot of disruptive, world-changing tech to be discovered before AGI is even a remote possibility. In reality this tech is probably going to be more like a superpowered exoskeleton for CEOs, politicians and the like to sway public discourse in their favor.

Our current powers-that-be are so manifestly unsuited to have the kind of power our idiot technologists are desperate to build for them that part of me wishes for a disaster so bad that it knocks technological society off its feet, to the point where no one can build new computers for at least a couple of generations. Maybe hitting the reset switch will give the future a chance to make better decisions.


I am less worried about what humans will do and more worried about what corporations, religions, and governments will do. I have been trying to figure out how to put this most succinctly:

We already have non-human agentic entities: corporations. They even have the legal right to lobby to change laws and manipulate their regulatory environment.

The talk about AI being misaligned with humanity mostly misses that corporations are already misaligned with humanity.

AI-powered corporations could generate enormous short-term shareholder value and destroy our environment in the process. Deepwater Horizon will look insignificant by comparison.


Corporations, religions, governments, etc. are just amalgams of human values and behavior that produce the effects we perceive. Yet AI researchers' grandest theory of successful alignment relies on simply applying our values to the AI such that it will be aligned.

You can look at any human-organized entity simply as another form of power, and as a demonstration of how our values become interpreted when given power. Your observation could simply be seen as further evidence that alignment is a flawed concept.

If you take a single individual and have them fully elicit their values and principles, you will find they are in conflict with themselves. Two values that are almost universal and individually positive, liberty and safety, are also the very values that cause much of our own conflict. So yes, we are all unaligned with each other, and even minor misalignment causes conflict. However, add power to the misalignment and you have significant harm as the result.

FYI, I've written a lot specifically on the alignment issues in the event you might be interested further - https://dakara.substack.com/p/ai-singularity-the-hubris-trap


The government of Myanmar is free to regulate Facebook however they like within their own sovereign territory. But given the level of corruption, oppression, and incompetence there I doubt the results would be any better than usage policies written by random corporate executives (and haphazardly enforced by outsourced moderators). The only real solution to improving the situation in Myanmar is for the people to rise up and change their own government; this may take a long time and a lot of deaths but there is no alternative.


>The only real solution to improving the situation in Myanmar is for the people to rise up

They are rising up: https://www.nytimes.com/2023/03/17/world/asia/myanmar-killin...


This reply confuses me. You are implicitly accepting that FB, an American company, had a role in the atrocities, but you are then saying it is up to Myanmar to handle this. If that's the correct interpretation, I find that attitude abhorrent. I hope I'm wrong.


In the end, as you said, it comes down to social order, which is akin to social control. In a sense, our past and current fears about caffeine [1], alcohol, drugs, etc. are the fear that society will change and be out of control. Not saying that those things are healthy, but even if drugs were harmless they would be controlled.

[1] https://www.researchgate.net/publication/289398626_Cultural-...


Yes, most predictions never happen because there is a feedback loop: at a certain point, people will change behavior to prevent the worst outcomes.

I hope that will be the case here. However, what makes this challenging is that the pace is so fast that there will be little time to consider the effects of the feedback loop before we are deeply within its grasp. I only hope that exploring the possible negative effects will allow us to see them sooner and adjust in time.


Your substack is a treasure trove. Makes lesswrong articles look mentally rigid.


Thank you for the appreciation!


~22-minute interview [0] with Jaron about "Who Owns the Future?"

[0]: https://youtu.be/XdEuII9cv-U?t=172


I just picked this up on your recommendation. Amazing. This guy is the digital version of Piketty if that makes any sense.


Funny that if you google "Who Owns the Future", the Google featured snippet says the answer is Jaron Lanier.


I feel that if smart people spent more time writing books about how good outcomes could come about rather than warning about bad outcomes, powerful actors wouldn't have so many dystopian handbooks lying around and might reach for those positive books instead.


"Who Owns the Future?" is exactly a book about developing good outcomes, and building a future that supports humanity and happiness.

But you can also read it at an obtuse angle and see the problems outlined to resolve as opportunities for personal gain.

It's just a matter of perspective.


Glad to hear. I will put it on my list.


It's way easier to write believable dystopian novels because you are deconstructing what already is rather than building something new. The smart ones are the ones capable of writing the utopian novels.


I was about to comment the same thing. It's simply much harder to create positive visions of the future from whole cloth, whereas dystopias can be immediately extrapolated from existing trends (and our long human history of abuse, horror, and destruction).

Edit: If anyone would like an example, I'll offer Huxley's "Island" as a utopian counterpoint to his "Brave New World". In addition to exploring the qualities he believed make up a 'utopia', a significant thematic concern is the need for channeling our innate destructive impulses*, because utopia - should it exist - can only be maintained, not manufactured, through the active preservation/conservation of our natural world, our positive human values, etc.

*for example, there is an innate human impulse to subjugate others. Huxley suggested that we should channel, rather than suppress, this impulse into a productive activity that satisfies the desire without causing harm: rock climbing (which must have been much more of a niche activity in 1962).


If you read Brave New World and think of the lower "classes" as instead being automation and AI (really, most of the jobs done by Epsilons and Deltas in the book were automated decades ago, and the Gamma / Beta jobs are rapidly moving towards AI replacement as well) it's not a bad system, nor is it a dystopia.


easier to imagine the end of the world than the end of capitalism...


Help us out here. What would the end of capitalism look like? All of the attempts at ending capitalism so far have collapsed into disaster, so people are understandably hesitant now to start grand social experiments which historically speaking are likely to end in famine and genocide.


Capitalism works because it models the world without saying much about it. I can pile sticks and mud to form a house, removing entropy, and then give that house in exchange for a sack of grain.

It models the physics there, but adds an indirection, value stored as currency.

Money doesn't have any morality or inherent motivation. Capitalism is what happens when humans project theirs onto it, on average, with a good amount of autonomy enabled by that currency.

If people were not, on average, greedy survivalists, then the value store would produce an economy that operates much differently.

That's why capitalism persists: we're all just advanced monkeys gathering as many rocks, sticks, and mud as we can into a big pile, because it is built into our genetics to stockpile resources when we can.

Everything else is just advanced mechanisms of this.

The end of capitalism is the end of humanity, because while we exist, we will want to stockpile resources through increasingly elaborate means in an attempt to stave off the entropy of death.


I think your question might be his point.

We can easily imagine the destruction of all existence because we have mental models for what that destruction might look like; however, imagining the end of capitalism requires us to invent entirely new ideas that exceed the salience of capitalism itself (which is obviously much harder).



