I'm not sure "consciousness" really exists. A computer surely isn't aware of what it is computing, and you don't need a global state of consciousness to describe its functionality. Why point to things like "consciousness" when our brains being biological automatons is a perfect explanation already?
It gets even sillier when you consider that the criteria for what counts as conscious seem to be quite arbitrary. They say a human is conscious. Is a dog? I think most people would agree that if a human is conscious, a dog is. Is a squirrel? I think most people would agree. Is a lizard? Maybe. Is a spider? Eh... Is a worm? Well, some might only have ~300 neurons... Is a plant? Well, no neurons, but they technically do react to stimuli...
How about a rock? Well, it certainly doesn't have any neurons and doesn't react to stimuli, but what is so special about neurons and reacting to stimuli anyway? The concept of a threshold for consciousness doesn't seem to hold any weight. It has nothing backing it up.
When two particles interact with each other, is a barrier of consciousness formed? Are all systems conscious? Is the universe conscious? Are all possible systems made up of all the connections of all the constituent parts of the universe conscious?
If so, then the work of anesthesiologists is quackery. :)
> Why point to things like "consciousness" when our brains being biological automatons is a perfect explanation already?
Because reductionism doesn't provide a satisfying explanation for higher level emergent phenomena.
For example, x86 disassembly isn't particularly informative about what is happening in a theorem-prover written in Lisp.
> I think most people would agree that if a human is conscious, a dog is. Is a squirrel? I think most people would agree. Is a lizard? Maybe. Is a spider? Eh... Is a worm?
Probably, there is a continuum. A squirrel is conscious in a less "rich" way than a dog, and so on.
That depends what you mean by "consciousness". If you mean the "experience of awareness", then we can't even properly define it, let alone begin to explain it.
This whole question is so hard because we find it impossible to define the terms. It could be that by default, we cannot define consciousness. Perhaps that implies that the universe is made of consciousness, rather than the other way around.
You are right that we should properly define our terms. And of course, as you pointed out, it is very hard. Maybe if we defined them in a sort of reverse order it would be easier.
Consider humans as biological automatons. Our brains are some sort of neural networked computer, and the architecture of the neurons makes for, from a programmer's point of view, a very advanced sort of "artificial" intelligence. From here there seem to be two possibilities: 1) that is it, nothing else, and there is no "being" experiencing what the brain processes, or 2) something else, which I think is the idea we call...
Consciousness would be the idea that there is some sort of unifying connectedness that allows the soul, ghost, being, etc. to experience what is happening and, in some people's viewpoints, to have an effect on what is happening (free will, in other words).
To me this definition seems terribly biased towards my point of view but I don't really know of any other way to define this.
I can't define consciousness, but what has always helped me understand consciousness is: Say you have a camera connected to a computer, which is connected to a display. Some processing happens, which gets displayed by the screen.
We humans are similar: we have our eyes (camera), this sensory data goes to our brain (CPU), and then it goes to our display (... consciousness?).
I like Douglas Hofstadter's take on it - consciousness is a loop, when an entity thinks about thinking about thinking. His book I Am a Strange Loop is a very entertaining read :-)
For me consciousness is "the ability to analyze and influence your own thoughts".
This might not correspond 100% to what philosophers mean, but it has the immense advantage of making consciousness a well defined concept, and even measurable.
I am almost certainly wrong about this, given it is mostly idle supposition. However, I think of it in terms of awareness and reflective awareness: the ratio of reflective awareness to simple awareness in a network of neurons is, very roughly, to do with how much of the network is folded back on itself compared to how much reaches out to the sensory inputs.
So from this perspective, reflective awareness is what you define as consciousness, and it can be present to massively varying degrees, alongside direct awareness down to the neuron.
Well, assuming you run the JIT on its own code, it could be considered conscious to a tiny degree. It can analyze its code, but only at a very low level, with no understanding of how it works. Also it can change its own code, but only to improve speed, not to change its functionality.
But you are right that it is possible to write quite simple self-modifying programs that would be conscious under this definition. However, having consciousness without intelligence can't be very useful.
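To make that concrete, here is a minimal sketch of such a program (Python; the counter name RUN_COUNT and the particular "modification" are made up for illustration). It reads its own source, analyzes it, and rewrites one of its own constants, which would trivially satisfy the "analyze and influence your own thoughts" definition with no intelligence involved:

    import ast

    # A toy "self-aware" program under the definition above: it analyzes
    # its own source and modifies it. The RUN_COUNT literal below is
    # rewritten in place on every run.
    RUN_COUNT = 0

    def analyze(source: str) -> int:
        # "Analyze its own thoughts": count this file's function definitions.
        tree = ast.parse(source)
        return sum(isinstance(node, ast.FunctionDef) for node in ast.walk(tree))

    def main() -> None:
        with open(__file__) as f:
            source = f.read()
        print(f"I contain {analyze(source)} functions; this is run #{RUN_COUNT}.")
        # "Influence its own thoughts": write a changed constant back into
        # this very file, so the next run behaves differently.
        new_source = source.replace(
            f"RUN_COUNT = {RUN_COUNT}", f"RUN_COUNT = {RUN_COUNT + 1}", 1
        )
        with open(__file__, "w") as f:
            f.write(new_source)

    if __name__ == "__main__":
        main()

Nobody would call this conscious in any interesting sense, which is exactly the point about consciousness without intelligence not being very useful.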
Good point, I'm just thinking out loud in this post.
Either all sensory endpoints and the viewer are one - the viewer IS the cumulative sensory endpoint - and we'll name this consciousness.
OR the viewer is a separate "something", distinct from the sensory endpoint. In that case we'll name the sensory endpoint "consciousness" and the viewer the "soul".
Interesting. But if the universe is all consciousness, then one can ask what it is conscious about, and why this consciousness is forced to follow the laws of physics.
On the other hand, I find it surprising that science does not deal with "experience of awareness", for clearly it is part of reality (since we are talking about it) and hence of physics (there is only one way out of this, which is to say that we are all zombies just making up that our awareness is influencing reality).
By the way, anyone know of a more concise term for "experience of awareness"? I have been looking for it, and I'm sure it must exist, but no luck so far. How do philosophers address this concept?
This dualistic approach cannot definitively be ruled out, but it does raise serious problems as to how these categorically different substances can have any effect on each other.
Another approach is to consider consciousness and material to be 'dual aspects' of the same intractable underlying reality. This was the approach taken by Spinoza and later by Schopenhauer, but also, independently and millennia earlier, by Hindu and Buddhist philosophy.
This line of analysis can be followed for almost anything we have a word for. Anyone can draw a typical chair. Now start morphing it: the back gets shorter and rounder, the legs get shorter, more and more... At what point does it stop being a chair? And what does it become then? Where's the boundary between a stool and a chair? A chair and a bench?
The fact that consciousness, as a word, has fuzzy edges, does not imply that it does not exist. Most of our words have fuzzy edges.
> A computer surely isn't aware of what it is computing
A computer could be conscious without being aware of what it is computing just like a human can be conscious without being aware of what their neurons are doing.
> The concept of a threshold for consciousness doesn't seem to hold any weight.
So then consciousness must be a sliding scale? I would agree.
> Is the universe conscious?
I would say this is likely. If neurons and the rest of our physiology can give rise to consciousness, then I think it's likely that there are objects at larger scales than us that are conscious (at least partly) due to our behaviors.
I'm no expert in neurology or any related field, but I think the concept of consciousness is somewhat fluid. Overall I would explain it like this:
Knowledge of self; acknowledging one's physical and mental limitations; and the ability to relate and interact with others and the environment in a way that transcends one's initial limits if one is so inclined (i.e., the desire for growth, and the choice not to pursue it).
Just responding to stimuli is not enough, because humans can accept the stimuli and choose not to respond, or can decide to act despite said inputs.
Confused: a definition is unrelated to what we 'want' consciousness to mean. I define it like this: we know what it is to be 'unconscious'. Conscious is the opposite of that, which is simply being responsive to stimuli.
If we want to describe that other idea with a word, then perhaps self-aware is closer.
I can see you are trying for a 'working definition' of consciousness which would encapsulate the phenomenon without any philosophical wild goose chase involving defining essences. This approach has been wildly successful in science and mathematics, where many advances have been made simply by shedding intuitive preconceptions.
The problem with your tentative definition above is that defining consciousness simply as response to stimuli does not distinguish between a conscious response and a mechanical response. These are at least notionally distinct, and it is very hard to regard them as being the same thing.
You have a list of things of decreasing consciousness. So the question is, what makes a dog more conscious than a worm? And can we make a computer with more of that same property?
It seems you're talking about what William James called the "Mind Stuff Theory" [1]. He ended up rejecting it, but even a hundred years later our conceptions are too muddled to have good arguments either way.
Maybe the electrons in a computer are what make it "conscious". The thing with silicon is that it coerces the electrons onto a set path; a neuron is more lenient and lets the electrons choose more freely.
The analogy I fall back on is the concept of a wave in a body of water. Where, in the body of water, is the structure that defines the wave? How does a wave arise from a bunch of water molecules?
What we call a "wave" is actually a broadly distributed energy pattern in the water. We just have created a single word for it to help us recognize and talk about that pattern.
Consciousness might be like that--a (very complex) pattern of energy in the matter of the brain. "Consciousness" would refer to a pattern of behaviors, rather than a physical "thing" like a rock or a hand.
There are two basic lines of thought here: the hard-AI school of thought that pushes the emergence hypothesis, and the "Consciousness is not Computation" school of thought that accepts that Mind is Computation but states that Consciousness is something else.
Tip: the AI hypothesis does NOT explain or predict what a Paramecium can do.
> the AI hypothesis does NOT explain or predict what a Paramecium can do.
Can you elaborate? Off the top of my head, I believe Paramecia display slightly above amoeba-like intelligence; what I'm missing is the leap to how a theory of emergent consciousness would not be able to explain this.
I think they're arguing that consciousness is an emergent property of a sufficiently large network, rather than being located in a particular cerebral structure.
I'm in agreement. Think of large social sites like reddit or 4chan. I think it would be fair to consider them as conscious entities for the same reason. They can have conversations with themselves about themselves and can even have conversations/interactions with outside entities all driven "by their own volition".
Ah, so I'm not the only person that thinks that way. I've found it helpful to imagine the same mechanism at work in any large system like a market, a nation, and so on. Even things like ant colonies have a sort of slow, primitive consciousness.
It's particularly interesting that you mention 4chan, since the fact that it is anonymous by default actually fosters this behavior.
Consciousness is typically understood as "the one that experiences". The failure of reductionism to explain it can be demonstrated by an example:
You accidentally burn your finger on the stove; this is detected by the nerves, which send signals to the brain that signify pain. Those signals create an incredibly complex reaction in the brain that causes you to withdraw your hand. For a "philosophical zombie" - an automaton without awareness - that is all that happens.
For an aware being, this is followed by an "experience" of pain.
How does this experience of pain arise? There is an action and reaction in both the aware and the unaware, yet an automaton does not EXPERIENCE pain, even though the reactions and consequent actions are identical.
Let us focus on the experience of the aware observer - where does it start? If it is in the brain, we can reduce it to physical effects, in which case our automaton could replicate it. Plus, at what point does the feeling arise? It's clearly not in the hand itself, nor in the nerves. But nerves are an extension of the brain. We have a part of the brain that handles pain, but why should those particular brain cells cause a painful feeling to arise? These are just like all the other cells in the brain, just differing in connections.
Why would the activation of some cells cause pain to arise? If it is due to the connections that cause our brain to decide that this was a negative experience, then this would imply that the feeling arises when the pain center signals other parts of the brain that it is stimulated. Then does the feeling of pain arise in the cells connecting to the part of the brain dealing with pain? If so, by what mechanism do those parts of the brain cause the feelings to arise?
There is no way to nail down a mechanism that isn't physical and anonymous (i.e. does not know the context of what it is connected to), and trying to claim that inanimate objects are self-aware is a bit of a stretch. We reach the conclusion that there is no good way to describe aware experience arising from the physical.
Nor do any simple metaphysical notions of "soul" work, as any metaphysical system runs into similar issues (if there is a soul, then by what process does it give rise to awareness?).
So analyzing the process of detecting a disk on a screen is somehow supposed to permit conclusions on consciousness?
Well, my cat meows when I switch on the light so I suppose I should write about how "Electricity suggests language development is intrinsic in mammals"
Yes. When a person is conscious of a disk on the screen, their brain scans look very different from when they do not detect the disk. And there isn't just one part of the brain that changes. I'm not saying fMRI is a great tool for this, but it seems like a pretty clear conclusion.
The fact that it's such a small thing that they're testing is what makes the large reaction so interesting. They were trying to determine what part of the brain is involved in detecting an object, but just that tiny bit of awareness seems to be widely spread out.