"They've realised words don't mean anything in the absolute sense (they all rely on each other, cyclic referential) and are just part of a game, but are still neck deep in useless words instead of using evolutionary and RL concepts to concretely model consciousness and the game it plays."
That's full-on nihilistic postmodernism. The fact that words mean something only in reference to other words doesn't have to mean that they are useless. Quine and other pragmatists (Buddhism makes a similar move) argued otherwise - that concepts/theories derive meaning or truth-value from how useful they are in the real world (as a network, rather than individually).
Treating all philosophers as one camp opposed to science is mistaken. Whatever any particular scientist or engineer says, there will always be philosophical assumptions behind it. It's always better to make them explicit than to stay in the dark about them. The best scientists in history were pretty deep in philosophy as well.
E.g., Tononi is both a philosopher and a scientist. He's clearly on Chalmers' side philosophically: he perceives consciousness as something fundamental, MUCH more fundamental than learning. He posits that even stable systems (so with no learning at all) can be conscious, which makes a lot of sense from a phenomenological point of view.
He also adds a theory of how specifically consciousness may be causally related to the physical world. That's the scientific part.
Silver, on the other hand, and the whole RL enterprise, is not concerned with consciousness AT ALL! It's a completely different problem. It may well be the case that most of the learning processes in the human mind are unconscious!
"There is no consciousness in itself, just consciousness of something. Learning is what ties together agent and environment..."
Exactly - if you define learning as a relationship between a system and its environment, you don't need anything else (like consciousness), just the actual and potential interactions.
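To make that framing concrete: here is a minimal sketch of learning defined purely as an agent-environment interaction loop, with nothing resembling awareness anywhere in it. The environment, reward rule, and parameters are entirely made up for illustration - it's a toy bandit-style value update, not anyone's actual system.

```python
import random

# Hypothetical toy setup: 'learning' modeled purely as repeated
# agent-environment interaction. States, rewards, and parameters
# are illustrative only.

def environment(state, action):
    """The 'actual interaction': returns (next_state, reward)."""
    reward = 1.0 if action == state % 2 else 0.0  # arbitrary reward rule
    return (state + 1) % 4, reward

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    random.seed(seed)
    # value estimates over state-action pairs
    q = {(s, a): 0.0 for s in range(4) for a in range(2)}
    state = 0
    for _ in range(episodes):
        # epsilon-greedy: the 'potential interactions' get sampled too
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])
        next_state, reward = environment(state, action)
        # the entirety of 'learning' is this update over interactions
        q[(state, action)] += alpha * (reward - q[(state, action)])
        state = next_state
    return q

q = train()
# After training, the agent prefers the rewarded action in each state -
# a full account of the learning, with no appeal to consciousness.
```

Nothing in the loop refers to experience; the whole story is the history of interactions and the values they shape, which is the point being made above.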
Late Wittgenstein, Heidegger, Merleau-Ponty and others would be on the same page with you here, so again, let's not throw the baby of philosophy out with the bathwater. These observations were made in the first half of the 20th century. They apply perfectly to the naivete of old-school symbolic AI (and the logical positivist stance behind it), as captured by Hubert Dreyfus, who described all its problems from a philosophical (specifically phenomenological) standpoint in "What Computers Can't Do" and his more recent paper ( http://cspeech.ucd.ie/Fred/docs/WhyHeideggerianAIFailed.pdf ). From this perspective, RL seems to be a step in the right direction. However...
"[Learning] ... it's the building force of consciousness."
Well, this part just doesn't make sense. You want to focus on explaining learning? Fine. Do some work on RL, it's enlightening for sure. I completely agree that it's fascinating how new concepts emerged in AlphaGo around some specific board configurations - it changed people's understanding of the game. But please, don't conflate it with consciousness. And if you do, be open about it and name your position in terms of Chalmers' recent paper. Is it some form of illusionism? Only then can we have a meaningful conversation about your actual position on what consciousness is.
Whatever the relationship between concepts and sensations, and however these two aggregates relate to each other and evolve in the mind, consciousness seems to be something more fundamental. Are you saying that AlphaGo is already conscious? If not, can it be made conscious? How? By adding more CPU? A webcam? We can't escape these questions.