Hacker News

> It is written in a very accessible way

Many have expressed my sentiments far better than I can, but Superintelligence is quite frankly written in a very tedious way. He says in around 300 pages what should have been an essay.

I also found some of his arguments laughably bad. He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.




He did write an essay [0]. Because it was very short and couldn't go deep at that length, he later wrote a longer book developing the concepts.

[0] https://nickbostrom.com/views/superintelligence.pdf


> frankly written in a very tedious way.

Ok? I don't see the contradiction. When I say "It is written in a very accessible way" I mean to say "you will understand it". Even if you don't have years of philosophy education. Which is sadly not a given in this day and age. "frankly written in a very tedious way" seems to be talking about how much fun you will have while reading it. That is an orthogonal concern.

> He says in around 300 pages what should have been an essay.

Looking forward to your essay.

> I also found some of his arguments laughably bad.

Didn't say that I agree with everything written in it. But if you want to understand what the heck people mean by AI safety, and why they think it is important then it has the answers.

> He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.

So wait. Is your problem that the argument is bad, or that it doesn't cover everything? I'm sure your essay will do a better job.


> He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.

I've not read the book, so I don't know the full scope of that statement.

In isolation, that's not necessarily a big issue or an existential threat; it depends on the details.

For example, a handful of trillionaires where everyone else is "merely" as rich as Elon Musk isn't a major inequality, it's one where everyone's mid-life crisis looks e.g. like whichever sci-fi spaceship or fantasy castle they remember fondly from childhood.


Haven't read the book either, but a handful of trillionaires could mean that the "upper 10,000" oligarchs of the USA get to be those trillionaires, while everyone else either starves or simply can't afford to have children and dies of old age a few decades later.

Right now, in order to grow and thrive, economies need educated people to run them, and in order to get people educated you need to give them some level of wealth so their lower-level needs are met.

It's a win-win situation. Poor/starving people take up arms more quickly and destabilize economies. Educated people are the engineers, doctors and nurses. But once human labour isn't needed any more, there is no need for those people any more either.

So AI lets you handle poor people much more effectively than in the past: an AI army helps prevent revolutions, and AI engineers, doctors, mechanics, etc. eliminate the need for educated people.

There is the economic argument that consumption drives growth, a real effect that powered the industrial revolution and created the wealth of some of today's rich people. Of course, a landlord has an incentive for people to live in his house; that's what gives it value. The same goes for a farmer: he wants people to eat his food.

But there is already a certain chunk of the economy which only caters to the super rich, say the yacht construction industry. If this chunk keeps growing while the 99% get less and less purchasing power, and the rich eventually move their assets into that industry, they have fewer and fewer incentives to keep the bottom 99% fed/around.

I'm not saying this is going to happen, but it's entirely possible to happen. It's also possible that every individual human will be incredibly wealthy compared to today (in many ways, the millions in the middle classes in the west today live better than kings a thousand years ago).

In the end, it will depend on human decisions which kinds of post-AI societies we will be building.


Indeed, I was only giving the "it can be fine" example to illustrate an alternative to "it must be bad".

As it happens, I am rather concerned about how we get from here to there. In the middle there's likely a point where we have some AI that's human-level in ability and needs 1 kW to do in 1 hour what a human would do in 1 hour. At current electricity prices, humans would have to go down to the UN abject-poverty threshold to be cost-competitive with that. At the same time, 1 kW per worker is roughly four times the current global per-capita electricity supply, which would drive up prices until some balance was reached.

But that balance point takes the form of electricity being much more expensive, with a lot of people no longer able to afford to use it at all.
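The back-of-envelope arithmetic here can be sketched out. The 1 kW figure is from the comment above; the electricity price (~$0.15/kWh) and the UN extreme-poverty line (~$2.15/day) are rough illustrative assumptions that vary by country and year:

```python
# Rough cost comparison: a hypothetical human-equivalent AI drawing 1 kW
# continuously, vs. a human worker near the extreme-poverty income line.

POWER_KW = 1.0               # assumed draw of the human-level AI (from the comment)
PRICE_PER_KWH = 0.15         # assumed electricity price in USD; varies widely
POVERTY_LINE_PER_DAY = 2.15  # approximate UN extreme-poverty threshold, USD/day

ai_cost_per_hour = POWER_KW * PRICE_PER_KWH  # USD per hour of work
ai_cost_per_day = ai_cost_per_hour * 24      # if it runs around the clock

print(f"AI cost: ${ai_cost_per_hour:.2f}/hour, ${ai_cost_per_day:.2f}/day")
print(f"Extreme-poverty income: ${POVERTY_LINE_PER_DAY:.2f}/day")
# The AI's daily running cost lands in the same band as an income at the
# extreme-poverty line, which is the comparison the comment is gesturing at.
```

Under these assumptions the AI costs a few dollars per day, so a human can only undercut it on price by earning near the poverty line.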

It's the traditional (not current) left vs. right split: rising tides lifting all boats, vs. boats being the status symbol that proves you're an elite while the rest drown. We may get well-off people who task their robots and AI with making more so the poor can be well-off too, or we may get exactly what you describe.


Or imagine if AI provides access to extending life and youth indefinitely, but that doing so costs about 1% of the GDP of the US to do.

Combine that with a small ruling class having captured all political power through a fully robotic police/military force capable of suppressing any human rebellion.

I don't find it difficult to imagine a clique of 50 people or so sacrificing the welfare of the rest of the population to personally be able to live a life of ultimate luxury and AI-generated bliss that lasts "forever". They will probably even find a way to frame it as the noble and moral thing to do.


What do AI, or even post-singularity robots, do for the 50 richest people? They already live like it's post-singularity. They have the resources to pay people to do everything for them, and not just cooking and cleaning, but driving and organizing and managing pet projects while they pursue art careers.


People 300 years ago would not be able to imagine what life today is like, even for the working class.

Multiply that difference by 100, and a post singularity world might be so alien to us that our imagination would not even begin to grasp it.

What individuals (humans, post humans or machines) would desire in such a world would be impossible for us to guess today.

But I don't think we should take it for granted that those desires will not keep up with the economy.


> Or imagine if AI provides access to extending life and youth indefinitely, but that doing so costs about 1% of the GDP of the US to do.

That's a bad example even if you meant 1% of current USA GDP per person getting the treatment (i.e. 200 bn/person/year), because an AI capable of displacing human labour makes it very easy to supply that kind of wealth to everyone.

That level is what I suggested earlier, with the possibility of a world where everyone not in the elite is "merely" as rich as Elon Musk is today ;)

> I don't find it difficult to imagine a clique of 50 people or so sacrificing the welfare of the rest of the population to personally be able to live a life of ultimate luxury and AI-generated bliss that lasts "forever". They will probably even find a way to frame it as the noble and moral thing to do.

I do find it difficult to imagine, for various reasons.

Not impossible — there's always going to be someone like Jim Jones — but difficult.


> That's a bad example even if you meant 1% of current USA GDP per person getting the treatment (i.e. 200 bn/person/year), because an AI capable of displacing human labour makes it very easy to supply that kind of wealth to everyone.

Clarification: I meant 1% per person of the GDP at the time the wealth is generated. NOT present day GDP. Medicine is one area where I think it's possible that costs per treatment may outpace the economic development generated by AI.

Any kind of consumption that the ultra rich may desire in the future that also grows faster than the economy is a candidate to have the same effect.

It's the same as for ASI X-risk: if some entity (human, posthuman, ASI or group of such) has the power AND desire to use every atom and/or joule of energy available, then there may still be nothing left for everyone else.

Consider historical wonders: the Pyramids, the Palace of Versailles, the Terracotta Army, and so on. These tend to appear in regimes with very high concentration of power, not usually in democracies.

Edit, in case it's not obvious: Such wonders come at tremendous costs for the glory of single (or a few) individuals, paid for by the rest of society.

Often they're built during times when wealth generation is unusually high, but because of the concentration of power, median wealth can be quite low.


Once the police and military do not need a single human to operate, the basis for democracy may be completely gone.

Consider past periods of history where only a small number of soldiers could dominate a much larger number of armed citizens, and you will notice that most of them were ruled by the soldier class (knights, samurai, post-Marian-reform Rome).

Democracy is really something that shows up in history whenever armed citizens form stronger armies than such elite militaries.

And a fully automated military, controlled by 0-1 humans at the top, is the ultimate concentration of power. Imagine the political leader you despise the most (current or historical) with such power.




