
> Fortunately, none of these qualify as paperclip maximizers

I think it's weird that the paperclip-maximizer hazard of superintelligence receives such wide credence, given that the goal of burying the world in paperclips is so obviously stupid.

And weirder still, the paperclip-maximizer hazard can only serve as a bone of contention over what constitutes intelligence within the discourse of a discipline that, by definition, continually begs the question of intelligence.

To rephrase this so the fallacy is obvious:

A hazard of superintelligence is that it will be super stupid. And not only that: we are seriously worried about the malevolence of this stupidity.

Sounds like a guaranteed income for life!

And there is a legion of ivory-tower prognosticators who wholly ignore this idiocy even as they trouble everyone over its implications...

https://cepr.org/voxeu/columns/ai-and-paperclip-problem

//The notion that artificial intelligence (AI) may lead the world into a paperclip apocalypse has received a surprising amount of attention. It motivated Stephen Hawking and Elon Musk to express concern about the existential threat of AI. It has even led to a popular iPhone game explaining the concept.//

...but there's another legion who build institutional academic careers by foisting such idiocy upon a credulous technogentsia:

//Joshua Gans is a Professor of Strategic Management and Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management, University of Toronto//

But it's not fair to pick on a few grifters, because there's a societal pattern of diseased thought here. The AI paperclip-maximizer fallacy is not an example of isolated lunacy among a few crazy outliers; it's one instance of a large class of contradictions that go unchallenged, to the point of risking the fate of organized human activity:

Synthetic currency that contrives value via proof of work, which requires large, exponentially increasing commitments of energy to express;

Investor-driven, corporately mediated disruption of markets and workforces (with the enormous range of negative externalities such disruption implies);

Mutually assured destruction, whereby enormous effort is committed to creating the most dangerous known processes and substances in readiness for a cataclysmic purpose that must never be realized;

Growth economics in the face of a world already so terraformed that its thermodynamics have been disrupted to the point of threatening global ecological cycles;

A great democracy in which the federated will of the people is hamstrung by an utterly contrived and irrelevant contest for leadership between two men who represent the same policy.

The glare of these contradictions is so bright there's widespread blindness, yet we keep staring at the sun.




> so obviously stupid.

It's not stupid though, because there are no objective universal values that you can intelligently deduce. It's stupid to you and me because we don't want to bury the world in paperclips, we want to fill the world with art and laughter and adventure and kindness, and burying the world in paperclips is a stupid way to fail to achieve that. But if someone did want the paperclips then there's no argument you could use to change their mind, except to explain how it might deprive them of something else they want even more.


I don't really understand. "Paperclips" is a stand-in for anything that would make the universe have near-zero value (when tiled with or converted to this substance or pattern) when evaluated as a hypothetical in a public poll. If you can't break 50% on global control via AI, no matter how you phrase the question, what chance do you have of getting democratic support to tile the universe in microscopic patterns that vaguely resemble office supplies?



