Hacker News

The corporation as superintelligence — NVIDIA, Apple, Microsoft, Facebook, Google, Amazon

Fortunately, none of these qualify as paperclip maximizers




Several of those are human-time collectors, maximizing the time humans spend looking at their content/ads.


You assume that profit maximizers are somehow less harmful.


They self-evidently are. Profits are at some stage related to fulfilling a demand. No matter what, in the end the corporation has given a group of people what they wanted. If you think there is any scenario where that is worse than consuming all the matter in the universe to make paperclips, you must not be human.

Just to clarify, I do mean what I say. Even if the corporation produces for the most reprehensible people you can imagine, how is that worse than everything ending for no reason?


> in the end the corporation has given a group of people what they wanted.

Has given an entity with spending power something that (it thought) it wanted. Context considered, there might be no humans involved.


> there might be no humans involved.

the corporation is not autonomous. Some humans decided what it wanted - mostly profits.


> Context considered

The "context" I'm referring to, which you've omitted from your quote, is that this is a discussion about the book "Superintelligence". In this context, it's entirely possible that a corporation could be autonomous.


> No matter what, in the end the corporation has given a group of people what they wanted.

For example, environmental destruction and labor abuse. There is always "a group of people" that want that kind of thing. Not a majority, but that doesn't matter.


Yes, profits are the result of fulfilled demands, but maximized profits turn the whole thing into a net negative deal for all other parties involved (and, on a long enough time span, for all of them), as well as for those who are not involved at all.


Yes, but that is still better than the paperclip maximizer ending it all. That was all I was saying.


I got that part. Here's my issue tho: the paperclip maximizer turns its programming, its vision, into a net negative for everyone else, and is thus indistinguishable from the many people who, while potentially sharing A - or THE - greater goal, turn the achievement of their sub-goals into a net negative for everyone, including themselves.

But an 'advanced' artificial intelligence wouldn't do that anyway, because 'advanced' means that you 'understand' - are aware of - the emergence and self-organization of 'higher-dimensional' structures that are built on a foundation.

Once a child understands Legos, it starts to build more and then more out of that ...

A lot can be built out of paperclips, but an 'advanced' AI would rather quickly find the dead end and thus decide - in advance - that maximizing the production of paperclips is nonsense.


Arguably every corporation that pollutes is a variation on the paperclip maximizer


> Fortunately, none of these qualify as paperclip maximizers

I think it's weird that the maximum-paperclip hazard of super intelligence receives wide credence given that the purpose of burying the world in paperclips is so obviously stupid.

And even weirder, that the maximum-paperclip hazard can only serve as a bone of contention over what constitutes the nature of intelligence within discourse for a discipline which by definition continually begs the question of intelligence.

To rephrase this into its obvious fallacy:

A hazard of super intelligence is that it will be super stupid. And not only this, we are truly worried about the malevolence of this stupidity.

Sounds like a guaranteed income for life!

And there is a legion of ivory tower prognosticators who wholly ignore this idiocy as they trouble everyone over its implication...

https://cepr.org/voxeu/columns/ai-and-paperclip-problem

//The notion that artificial intelligence (AI) may lead the world into a paperclip apocalypse has received a surprising amount of attention. It motivated Stephen Hawking and Elon Musk to express concern about the existential threat of AI. It has even led to a popular iPhone game explaining the concept.//

...but there's another legion who build institutional academic careers by foisting such idiocy upon a credulous technogentsia:

//Joshua Gans is a Professor of Strategic Management and Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management, University of Toronto//

But it's not fair to pick on a few grifters, because there's a societal pattern of diseased thought. The AI maximum-paperclip-hazard fallacy is not an example of isolated lunacy among a few crazy outliers; it's an example of a large class of contradictions that are going unchallenged to the point of risking the fate of organized human activity:

Synthetic currency that contrives value via a proof of work that requires large, exponentially increasing commitments of energy to express;

Investor-driven, corporately mediated disruption of markets and workforces (implying an enormous range of negative determinations from such disruptions);

Mutually assured destruction, whereby enormous activity is committed creating the most dangerous known processes and substances to make ready for a purpose of cataclysm which must never be realized;

Growth economics in the face of a world already so terraformed that its thermodynamics have been disrupted to the point of threats to global ecological cycles;

A great democracy in which the federated will of the people is hamstrung by an utterly contrived and irrelevant contest for leadership between two men who represent the same policy.

The glare of these contradictions is so bright there's widespread blindness, yet we keep staring at the sun.


> so obviously stupid.

It's not stupid though, because there are no objective universal values that you can intelligently deduce. It's stupid to you and me because we don't want to bury the world in paperclips, we want to fill the world with art and laughter and adventure and kindness, and burying the world in paperclips is a stupid way to fail to achieve that. But if someone did want the paperclips then there's no argument you could use to change their mind, except to explain how it might deprive them of something else they want even more.


I don't really understand, "paperclips" is a stand-in for anything that would make the universe have near-zero value (when tiled/converted to this substance/pattern) when evaluated as a hypothetical in a public poll. If you can't break 50% on global control via AI, no matter how you phrase the question, what chance do you have for getting democratic support to tile the universe in microscopic patterns that vaguely resemble office supplies?



