
The result is a disconcerting paradox, which Lem expresses early in the book: To maintain control of our own fate, we must yield our
agency to minds exponentially more powerful than our own, created through processes we cannot entirely understand, and hence potentially unknowable to us.

This sentence (and the ones that precede and culminate in this one) made me think of Iain Banks's Culture.




In practice though, the modern world has already done this to a degree; it's just that the power is currently distributed.

Most people have no clue how the things they depend on and put their trust in daily actually work - airplanes, cars, medicine, waste management, energy, etc.


That's not really what he meant by "exponentially more powerful." The Culture is a helpful example, because it's a society administered by ludicrously intelligent machines called "Minds" that are obviously and immediately more capable than a human could ever be. Their intellectual dominance cannot be overstated, and a single Mind is probably smarter than all of humanity put together. Each one is perfectly capable of observing a human's brain and accurately predicting their future actions, even though doing so is highly discouraged by social custom. They are truly "exponentially more powerful" and "potentially unknowable." A recurring theme in the stories is that the Culture's people essentially live as the Minds permit. They could not possibly sustain their post-scarcity lives without the Minds' superhuman abilities, and the Minds could trivially enslave the entire populace if they wished.

By contrast, airplanes aren't exactly black magic. Any regular adult human can figure out airplanes. They teach the prerequisite skills for flying, designing, and building aircraft in schools. The majority of humans alive today choose not to master the secrets of flight because there are only so many hours in the day and they have other stuff to do, but that's not the same as yielding any agency to someone "exponentially more powerful." They could just as easily have ended up the airplane guy if they'd made some different decisions in college. Cars, medicine, waste management, and energy are similarly things that anybody could potentially understand and work with given some reasonable amount of study. You'd run into trouble mastering all of them together, but that doesn't make the required mind "potentially unknowable." There are no supermen who enable modern human society; it's just regular chumps like you and me in organized groups. We could totally become two of those chumps! In fact, we probably are already two of those chumps.


Totally agree that they are different.

What I think is the same philosophically, though, is ceding the power over fundamental activities/interactions for functionality. It's not even intentional or conscious - which in fact is what I think makes the metaphor even more powerful.

When real AGI comes, if it comes well, then it has the possibility to look like the pinnacle of functionality, just like our tools are now. So it is definitely exponentially more powerful, but for the average person it will look like magic, much like most technology does now to the under-informed.

...and given that we would have built it, it won't be black magic either - just so totally far removed from the average person that it will look like magic.


>What I think is the same philosophically though, is ceding the power over fundamental activities/interactions for functionality.

I'm not sure what this means. If you're referring to the fact that groups are usually required to make everyday society work, that's not a useful example of ceding power. Everyone at the train company has essentially welded their power together in a Captain Planet or Voltron sort of deal to make rail transport work, but that doesn't make anyone else less powerful. It makes the train people highly interdependent both on each other and on everyone else to use the trains to go to the farms or the medicine factory or whatever and make the other parts of society work. They can't tell everyone else to sod off because they've got some magic spell that makes the trains work. The trains are run by regular chumps without any magic, and any other regular chump could be made to replace any one of them. Even if they were special, their special-ness wouldn't make them any less dependent on anybody else since they still just run the trains. By contrast, the Minds are totally required for the Culture to maintain their shiny post-scarcity status quo, no human effort could ever replace the Minds, and the Minds are not dependent on the efforts of any humans. The Culture's people have actually ceded power over all sorts of everyday stuff to the Minds. The Minds could tell everyone else to stuff it if they wanted to, and nobody would be able to do anything about it.

>When real AGI comes, if it comes well, then it has the possibility to look like the pinnacle of functionality, just like our tools are now. So it is definitely exponentially more powerful, but for the average person it will look like magic, much like most technology does now to the under-informed.

>...and given that we would have built it, it won't be black magic either - just so totally far removed from the average person that it will look like magic.

How much it resembles magic to the under-informed is irrelevant. The under-informed aren't uninformable, they just haven't been sufficiently informed yet. I know lots of people that are convinced they can't be sailors because spending weeks on a ship just seems totally beyond them, or that they can't be physicists because they had a hard time with calculus in school. It seems to them that those tasks require some ineffable qualities that can only be found in others, but they're wrong. Both calculus and seamanship are challenging to comprehend and highly impressive when applied, but they aren't "unknowable," they're just "unknown." Any given chump could learn to pull them off if they weren't doing something else. It's only unknowable magic if they couldn't ever figure it out, or if no group of people like them working together could ever replicate it themselves.


What does it mean to "maintain control of your own fate", if a Mind can predict your future actions and therefore you lack free will?


Simply predicting it is one thing and doesn't detract from your subjective experience of free will. You maintain control of your fate insofar as the Mind will not change things to get a certain outcome.


Congratulations, you've nailed down the Culture's biggest quandary in one sentence. The best answer I can give you is "good question." The best answer Banks could give you is spread out over an award-winning series of nine books, so it's probably worth a look.

For starters, Minds are fully aware of how big a deal mind reading is, and virtually never actually do it. A situation has to be ethically ridiculous before a Mind will even begin to consider it, we're talking "this person was brainwashed into hiding a nuke on the puppy daycare planet and we can't find it" sort of situations. This somewhat changes the question: "If the Mind could read your thoughts and predict your future actions, but hasn't, what does it mean to... " Again, the best answer I can give you is "good question." I suppose it's worth pointing out that if the mere possibility is enough to deprive you of free will, your universe is totally deterministic, the Mind doesn't have free will either, and you're all in the same boat. The Mind doesn't meaningfully have any "control" over you, since nobody has control over anything. The realization that you don't possess free will won't help you gain free will, since that realization was itself inevitable and the actions you take as a result are too.

Further, if you know the Mind can perfectly predict your future actions when they scan your brain, does that change how you'll act? What if they scan your brain in secret and don't tell you, so you don't know when their prediction was formed? I'm going to take an example from a source I'm sure we're all familiar with: the 2007 film Next, starring Nicolas Cage. The premise is that Mr. Cage's character can observe his own future for the next two minutes. The big caveat is that it only gives him a highly useful idea of what the future is probably like and not a faultless prediction. By the time he's done looking, the future has changed because he looked at it. The end result is that he never actually knows the future. His power is subject to the "observer effect": the act of detecting the future caused it to change, so he only knows what the future would have been at the moment he used his ability. From then on, the actual future is different, and the only way to know how it changed is to use his ability again, which will yet again change the future. A simpler but less Cage-tastic example is detecting electrons. You can observe when photons interact with the electron, but the electron's path was altered by the photon, so good luck figuring out where it is now. You could wait for it to interact with another photon, but that will cause... and so on. Perhaps the Mind shooting a bunch of futuristic radio waves or whatever inside your skull alters your thinky bits, and the very act of detecting your mental state alters your mental state. The Mind will then extrapolate from data that was outdated by the very act of producing it, and its prediction will be wrong. How wrong? Wrong enough to matter philosophically? "Good question."
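The divergence argument above can be illustrated with a toy sketch (entirely my own construction, not anything from Banks or Lem): a deterministic but chaotic system where the act of "scanning" the state nudges it. The predictor extrapolates from the pre-scan reading while the real system evolves from the perturbed state, so the forecast drifts away from reality. The logistic map and the disturbance size are arbitrary stand-ins.

```python
# Toy sketch of the observer effect on prediction: scanning a chaotic
# system returns its state but also perturbs it, so a forecast built
# from the reading diverges from what actually happens.

def step(x):
    # Logistic map in its chaotic regime: tiny differences grow fast.
    return 3.9 * x * (1.0 - x)

def scan(x, disturbance=1e-9):
    # Reading the state yields its current value but nudges the system.
    return x, x + disturbance

state = 0.4
reading, state = scan(state)   # the scan itself alters the "thinky bits"
prediction = reading

errors = []
for _ in range(60):
    prediction = step(prediction)  # forecast from the stale reading
    state = step(state)            # what actually happens
    errors.append(abs(prediction - state))

print(f"error after 1 step:   {errors[0]:.2e}")
print(f"worst error observed: {max(errors):.2e}")
```

Under these assumptions even a nanoscale disturbance eventually produces macroscopic prediction error; whether a real mind is chaotic in this sense is, of course, the open question.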


I'd also note that to predict someone's future behavior, you'd have to predict everything in their future environment, and I'd assume simulating a large part of reality would cost a lot of energy - and if we're facing heat death, energy is all we, and the Minds, have got. So economically it would be a minor form of suicide.


This is important to note, thank you. Capitalism is based on this: someone is better at doing #thing so you can do #stuff, and don't have to worry about #thing. What holds it together is that anyone can do #thing, and those that do it safer or better get the rewards. Granted, many of these #things are highly regulated (air travel) for safety, and really only after we screwed up a lot. Interestingly, the harder or more destructive #thing is, the more we try to control it. I understand the argument that eventually we won't have the iteration time to develop the regulation necessary for #thing; Moore's law and all that jazz. But I have hope this system will take the least destructive path eventually, as it mostly has thus far.


Nobody understands all the things society has created and that we rely on to maintain our control - not only the hardware, but also the economy (salaries, prices), politics (elections), and military institutions.


A few notes from the author might be useful there: http://www.vavatch.co.uk/books/banks/cultnote.htm




