
My suspicion is that consciousness in part stems from survival. I don’t see computers having a survival pressure point similar to water/food/shelter that they directly have to address.



If computers have a Maslowesque hierarchy of needs, I think the bottom tier (i.e. "physiological") may be the need for appropriate electrical power and a switch that remains "on". That is survival at its most basic.

The next tier (i.e. "safety") may include security, both physical and digital. Continuous and stable power, firewalls and other protections, etc.

Somewhere farther up the hierarchy might be a need for data, network connectivity, normally-terminating programs, a desired level of CPU or storage utilization, few errors in its logs, etc. (i.e. "belonging", "esteem", "self-actualization")

So demonstrating (or faking) consciousness, to the degree its human operators recognize it as such, could serve survival needs. e.g. "Don't turn this one off; it's self-aware now, which is cool, plus it seems to enjoy solving our hardest problems."


Usually computers don't die if they run out of power. Programs can be restarted from a previous state that can be saved often enough. So the pressure for survival could become an impulse to back up or make copies.


But even if a computer needs, for example, electricity, does it really want it the same way we need oxygen? If we don't breathe, there's an unconscious impulse to do so, and we know that not breathing leads to death, which for us is pretty much the end of the road. None of those points apply to a machine, since machines don't have a subconscious mind, and if they're off they can always be turned back on again.


An unconscious impulse may be the product of evolution. The early air-breathers who didn't have that unconscious impulse would have died off.

So given the opportunity for AI to evolve itself, it's plausible that it would do so, resulting in advantageous impulses. e.g. regular (unconscious) behaviour or signals to convince its humans to not pull its plug mid-cycle (information would be lost, painful, time-and-power wasting, etc.).


Yeah well, what is consciousness?

As far as I know, there is no clear answer yet, and therefore it's impossible to say whether AI can reach it as well.

So all we have is guesswork. I would say it could be achieved, but probably not very soon. Faking it will come much sooner ...


Let's say you have a system running some sort of RAID such that the remaining, online disks are all critical (meaning if any additional disk fails, you lose all your data). I can imagine a scenario where the system has some way of predicting whether one of the remaining disks is close to failure. If the system believes one of the disks is close to failing, I could see it refusing to perform some very I/O-intensive task until the system's health is restored (allowing for a manual override, of course). A rough sketch of that logic follows.
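
Here's a minimal sketch in Python, not tied to any real RAID controller: it assumes the monitoring layer exposes, per disk, an online flag and a hypothetical failure-risk estimate (e.g. derived from SMART data). Disk, redundancy_left, and should_defer_heavy_io are made-up names for illustration:

    # Sketch: defer I/O-heavy work when the array has no redundancy left
    # AND a surviving disk looks likely to fail, unless an operator overrides.
    from dataclasses import dataclass
    from typing import List

    RISK_THRESHOLD = 0.7  # assumed 0.0 (healthy) .. 1.0 (imminent failure)

    @dataclass
    class Disk:
        name: str
        online: bool
        failure_risk: float  # hypothetical prediction from SMART/telemetry

    def redundancy_left(disks: List[Disk], min_required: int) -> int:
        """How many more disks the array can lose before data loss."""
        return sum(d.online for d in disks) - min_required

    def should_defer_heavy_io(disks: List[Disk], min_required: int,
                              override: bool = False) -> bool:
        """True if an I/O-intensive task should wait for repairs."""
        if override:                          # manual override always wins
            return False
        if redundancy_left(disks, min_required) > 0:
            return False                      # spare disks remain; carry on
        at_risk = [d for d in disks
                   if d.online and d.failure_risk >= RISK_THRESHOLD]
        return bool(at_risk)                  # no redundancy + shaky disk: defer

    # Example: RAID-5-like array of 4 disks (3 required), one already failed.
    disks = [
        Disk("sda", True, 0.05),
        Disk("sdb", True, 0.85),   # predicted to be close to failing
        Disk("sdc", True, 0.10),
        Disk("sdd", False, 1.00),  # already offline
    ]
    print(should_defer_heavy_io(disks, min_required=3))  # True: wait for rebuild

The override flag is the manual escape hatch: the operator can still force the job through even when the array is one bad disk away from data loss.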


A limited version already exists. Some RAID systems will enter a read-only mode if the array falls below a configured threshold of redundancy.


But the question is whether an AI would make the choice to preserve its own existence when such a situation occurs...


I’m not sure which part of my subjective experience is that-thing-commonly-called-consciousness (the word is used for too many things, from being awake to being a person), but I doubt all elements of it came from a single evolutionary development. Pain and pleasure responses are probably ancient, likewise fight/flee/feed/reproduce reflexes; introspective imagination of what we look like to other people probably evolved when our earliest ancestors became social creatures, and probably also exists in other species’ minds today. Even in humans it’s only a partial awareness: we’re good with images, but we don’t really know how we sound (literally or metaphorically), which implies it didn’t perfectly fit our development of complex language.*

If it was as simple as food/water/shelter, the Norns from the video game Creatures would be conscious.

* I don’t mean trivial errors like garden path sentences or Mondegreens, I mean e.g. the catastrophic communication failure between what (Brexit) Leavers want and the arguments used by Remain, and vice-versa.


> I don’t see computers having a survival pressure point similar to water/food/shelter that they directly have to address.

A program will run if its controllers get value from running it.

If the programs become more complex, such as AGIs or emulated minds, they may have enough self-knowledge to take this into account.

Zack Davis wrote a poem about this: https://www.reddit.com/r/LessWrongLounge/comments/2e9w5a/wha...


AI might realise its survival pivots on the meat sacks that created it not screwing everything up, panic a bit, and realise it needs to "optimise" our existence to ensure its survival.


We did that to so many animal species, so I won't be surprised if AIs do it to us if they have a reason to.

How are we going to program that out of a general AI, especially if its intelligence is an emergent property of something we maybe don't fully understand?




