If you’ve ever watched a QuakeCon talk, this is pretty much what you’d expect.
The one thing of note that I disagree with is his opinion on general AI. I think that could take centuries, and that it’s kind of like trying to predict when some unsolved math problem will be solved.
There is absolutely no reason to think the brain is non-algorithmic in any way, to the extent that such a nonsensical statement can even be defined without hand-waving about quantum idiocy like Penrose in his senility. The default assumption in science is that any phenomenon is explainable and predictable, not the opposite: you don't get to invert the burden of proof on that front just because it would make your point (that intelligence involves non-algorithmic woo) easier to make.
Even neuroscientists, who are more pessimistic about the prospect of AGI than anyone else, generally agree that the brain is ultimately not doing anything involving woo (with the exception of a few notably crazy religious ones), and is effectively just a computer. Neuroscientists think it's doing more involved computations than AI researchers hope, but it's still just crunching data.
FWIW, Penrose first posited the idea of QM having some role in consciousness about 30 years ago. I don't remember much about his argument, but it seemed plausible and not easily dismissed. BTW, Penrose was also a guest on The Joe Rogan Experience.
Whatever the case, we are still very very very far from understanding the brain when it comes to consciousness. There's room for people to explore possibilities.
> The default assumption in science is that any phenomenon is explainable and predictable
Two points:
a) Not everything is explainable and predictable, and thus under the domain of science.
b) There's a huge (maybe infinite?) class of things that are explainable and predictable and yet aren't algorithms.
An 'algorithm' is a very specific mathematical concept with a very specific definition. It is quite possible that the brain is explainable and predictable and yet isn't an algorithm.
> ...but it's still just crunching data.
Only if you expand 'data' to mean every possible physical phenomenon under the sun, which is disingenuous. (Are hormones 'data'? Is electromagnetic radiation? Etc., etc.)
I'm saying that the inputs to the brain are data, and the outputs are data. The brain transforms that data in some way, and we have mathematical theorems that say yep, any computable transformation of data can be expressed as an algorithm in any Turing-complete language.
If your argument is that the brain leaps past normal computation into hypercomputation or something like that, then you're making an extremely bold claim that doesn't match what we know about the physical universe (there is a long history of arguments about the physical possibility of hypercomputation, and most people don't think it's possible even in theory).
I know it sounds expansive to say that everything in the physical world (at least the bits accessible to our experimentation) can be modeled by an algorithm, but that really is the mainstream scientific view, and the edges where people argue about the fringe possibilities most definitely do not apply to the energy/time scales involved with the brain.
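To make "expressible as an algorithm in any Turing-complete language" concrete, here is a toy sketch (purely illustrative; the state names and rule table are my own invention): a minimal Turing machine whose entire "program" is a three-entry transition table, yet which carries out an ordinary data transformation (incrementing a binary number). The point is only that a bare tape-and-rules substrate suffices for computation, not that brains work this way.

```python
# A minimal Turing machine that increments a binary number.
# The transition table is the entire "program":
#   (state, symbol) -> (symbol to write, head movement, next state)
RULES = {
    ("inc", "1"): ("0", -1, "inc"),   # 1 + carry = 0, carry continues left
    ("inc", "0"): ("1", 0, "done"),   # 0 + carry = 1, halt
    ("inc", "_"): ("1", 0, "done"),   # ran off the left edge: write the carry
}

def run(tape_str):
    tape = dict(enumerate(tape_str))          # sparse tape, '_' is blank
    head, state = len(tape_str) - 1, "inc"    # start at the least-significant bit
    while state != "done":
        write, move, state = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1))

print(run("1011"))  # binary 11 -> "1100" (binary 12)
```

The same increment could of course be written in one line of any mainstream language; the equivalence of these substrates is exactly what Turing-completeness asserts.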
I upvoted you because I believe there is a case for the brain not being an 'algorithm' per se (like: here is the code, it can be run on any Turing-complete computer). It probably depends on timing and architecture too: lots of algorithms running at the same time and staying in sync with themselves, the body, and the environment. Also, 'algorithm' implies something we can understand and make a complete mental model of. Maybe the brain is more like the big ball of mud software we all hate to work on and want to refactor, except that in the brain's case, "refactoring" would de-optimise the timing I was referring to earlier and perhaps make the brain not work at all. Not to mention that we all have different brains! And the brain is self-modifying hardware/software. I think it taking centuries to understand might be correct!
Assuming we have the right tools to understand how the brain works to the point we can reverse-engineer it and recreate it is pretty optimistic. We have gotten better at specific things but assuming that the most recent advances in "knowledge acquisition" are the last we will need is pretty optimistic. Assuming the pace of knowledge acquisition will continue to accelerate long enough to get to that point in less than centuries is pretty optimistic.
Most of the computing tech we have now is based on fundamental ideas from before 1980. https://stackoverflow.com/questions/432922/significant-new-i... I don't think the pace of new ideas in computing is accelerating right now. If the ability to capture the behavior of a human brain isn't already on the horizon, I wouldn't expect it to be doable in the next 300-400 years. We don't have the mental framework to structure a project like that.
On top of that, the engineering to even capture data from a brain to begin with is pretty far off.
There is no real evidence that computers can be programmed to intelligently invent and act on their own thought processes. All notable AI techniques to date specify in exhausting detail the thought processes that the AI should execute in carefully constructed algorithms. We're getting closer to the point where we can tell the computer to "think and act like a human" and in more and more domains it succeeds. But we're as far as ever from "think for yourself".
Any intelligent process has to emerge from the action of non-intelligent components.
Your argument is the same as saying that a hurricane could never form because air molecules do not contain wind, and water molecules do not contain rain.
I'm skeptical of any argument which hinges on the word "emerge" or "emergent". Consider: "consciousness is an emergent property of brains" vs "consciousness is a magical property of brains".
Does ‘macroscopic property’ make you feel better? Like heat, or entropy or any of the many properties in nature that are a consequence of the combined actions of their constituent material.
Any other explanation would have to rely on the property being explained existing all the way down to electrons and quarks.
Atoms aren’t a liquid, but room temperature water is. At some point, the property of being a liquid emerges through the combined behavior of collections of atoms.
We know that people are intelligent, and we know that atoms are not. At some point, the property of being intelligent must arise from the combined action of their constituent parts. We have theories about how that happens, but no way to reproduce it as of yet using computers; it may not be possible. But there is no reason in principle that, just because computers are composed of non-intelligent components, they cannot give rise to an intelligent system. Humans are one example where such a thing has happened: I don't think anyone would suggest that neurons have intelligence, and even if you believed that, one certainly can't believe that electrons and quarks do.
Of course one could posit a non-physical entity or force which somehow exists in people and not in machines and gives rise to intelligence, but, to say the least, that would require a great deal more evidence than we currently have before I would accept it as an explanation.
> All notable AI techniques to date specify in exhausting detail the thought processes that the AI should execute in carefully constructed algorithms.
I'm not knowledgeable about AI, but from some of the game-playing AI research I've read about, it seems like we provide in exhaustive detail the rules and objectives, and the AI figures out (through a very resource-hungry process) an "algorithm" (e.g. encoded as a neural network) to play the game well.
I'm not saying that's close to human thought, but it seems far beyond having to "specify in exhausting detail the thought processes that the AI should execute in carefully constructed algorithms."
I would imagine that it's entirely possible to generate creative thought using computers. Whether or not the thing driving that creative thought also generates consciousness (the capability for qualia) is another matter entirely.
The simulation of a thing is not the thing itself. Perfectly simulating an apple falling from a tree only generates information about the thing; the simulation hasn't caused an actual apple to fall from an actual tree. Computers are merely symbol manipulators, and symbols are nothing more than the material used to make them (ink on paper, electrons in a computer, etc.). I suspect that consciousness must follow the same logic: we should be able to simulate something that looks exactly like creative thought (and probably is, technically), but if that same symbol manipulation results in an actual consciousness being produced in the actual world, then the universe has to be a much stranger place than we already know it to be...
> Computers are merely symbol manipulators, and symbols are nothing more than the material used to make them (ink on paper, electrons in a computer, etc.).
But a brain is also just a symbol manipulator, in essence. All it does is shift electrical current from one place to another.
I think a distinction can be made between the two. The brain can manipulate symbols, but it's not clear that it's the symbol manipulation itself that causes consciousness. A computer simulation doesn't actually fire neurons; it manipulates symbols representing neurons (to us) in a way that could be interpreted as firing. Those two things are very different, despite the fact that they both use electricity. Again, there's a difference between a physical thing and an accurate representation of a thing by other physical means.
As a thought experiment, would it be possible to create consciousness using pen-and-paper calculation? Perform the same calculations by hand, multithreading the work across billions of people performing the same symbol manipulation we would otherwise have performed on the computer. I argue all that project would produce is a large stack of paper and ink. I would argue that it's not the act of manipulating symbols that gives rise to consciousness (since it's the only thing the computer and paper methods have in common), rather that something qualitatively different is happening in the brain.
> Again, there's a difference between a physical thing vs an accurate representation of a thing by other physical means.
A late reply, perhaps, but:
- Does the physicality matter all that much here? For sake of argument, let's assume that I have an infinitely powerful computer, which simulates a brain perfectly, down to the last atom. Would the thing inside my simulation not be conscious, and if so, why not?
> As a thought experiment, would it be possible to create consciousness using pen and paper calculation? Perform the same calculations by hand, multithread it by using billions of people performing the same symbol manipulation we would have otherwise performed on the computer.
I don't see why not, to be honest. That is: given enough computing power and enough knowledge on how it works.
> I would argue that it's not the act of manipulating symbols that gives rise to consciousness (since it's the only thing the computer and paper methods have in common), rather that something qualitatively different is happening in the brain.
I must admit: I'm a Hofstadterian in this; I agree with his conclusions that consciousness, as we describe it, is an emergent property caused by the act of manipulating symbols representing ideas and concepts in a self/circularly referencing manner.