
They self-evidently are. Profits are at some stage related to fulfilling a demand. No matter what, in the end the corporation has given a group of people what they wanted. If you think there is any scenario where that is worse than consuming all the matter in the universe to make paperclips, you must not be human.

Just to clarify, I do mean what I say. Even if the corporation produces for the most reprehensible people you can imagine, how is that worse than everything ending for no reason?




> in the end the corporation has given a group of people what they wanted.

Has given an entity with spending power something that (it thought) it wanted. Context considered, there might be no humans involved.


> there might be no humans involved.

The corporation is not autonomous. Some humans decided what it wanted - mostly profits.


> Context considered

The "context" I'm referring to, which you've omitted from your quote, is that this is a discussion about the book "Superintelligence". In this context, it's entirely possible that a corporation could be autonomous.


> No matter what, in the end the corporation has given a group of people what they wanted.

For example, environmental destruction and labor abuse. There is always "a group of people" that want that kind of thing. Not a majority, but that doesn't matter.


Yes, profits are the result of fulfilled demands, but maximized profits turn the whole thing into a net negative deal for all other parties involved (and, on a long enough time span, for all of them) - not to mention those who are not involved at all.


Yes, but that is still better than the paperclip maximizer ending it all. That was all I was saying.


I got that part. Here's my issue though: the paperclip maximizer turns its programming, its vision, into a net negative for everyone else. In that respect it is indistinguishable from the many people who - while potentially sharing a greater goal, or THE greater goal - turn the achievement of their sub-goals into a net negative for everyone, including themselves.

But an 'advanced' artificial intelligence wouldn't do that anyway, because 'advanced' means that you 'understand' - are aware of - the emergence and self-organization of 'higher-dimensional' structures that are built on a foundation.

Once a child understands Legos, it starts to build more and then more out of that ...

A lot can be built out of paperclips, but an 'advanced' AI would rather quickly find the dead end and thus decide - in advance - that maximizing the production of paperclips is nonsense.



