The argument that artificial intelligence requires understanding how intelligence works is an argument that natural intelligence requires Intelligent Design. (It's also an argument that fortuitous discoveries -- such as pharmaceuticals that turn out to treat conditions they were not designed for, via mechanisms we do not understand -- cannot occur.)
Obviously, understanding intelligence better would promote more effective directed research toward artificial intelligence. But if we can identify it (which the Turing Test is about), then it is quite possible that we can develop it -- and know that we have -- without understanding it. (And it may only be through developing it that we end up understanding how it works.)
I upvoted you for the insightful reasoning but I disagree with your stated premise that
> The argument that artificial intelligence requires understanding how intelligence works is an argument that natural intelligence requires Intelligent Design.
I think that statement sounds plausible when phrased that way, which makes it an attractive idea. However, I don't think it's true. From an evolutionary standpoint, biological intelligence developed naturally because biological components are natural. Furthermore, machines do not develop when left in isolation, while biological organisms do. If you leave a large population of simple machines running in an environment, it is overwhelmingly unlikely to result in a machine intelligence millions of years later. Machines were developed by humans and do not develop in the same way; comparing the two as in your Intelligent Design argument doesn't make much sense.
I do not think artificial intelligence can arise naturally because its components are not natural. This could also foster a discussion on two other interesting questions:
1. Does intelligence require organic components that operate in a deterministic way (i.e. the brain) to elevate it beyond being merely a "machine"?
2. If intelligence requires biology, where do you draw the line between creating an intelligence through natural human reproduction and creating an artificial intelligence another way?
Personally, I believe artificial intelligence does not require biological components, and I believe that under certain circumstances, it could develop unintentionally from a relatively advanced computer, but that is not the same as naturally.
> Furthermore, machines do not develop when left in isolation, while biological organisms do.
Biological organisms are machines.
Entities which meet the necessary requirements for Darwinian evolution (which are, approximately, that they can pass on their traits from generation to generation, have a source of variability, and are subject to selective pressure from the environment) change over successive generations, and all kinds of machines change individually over time in response to interactions with the environment (in fact, pretty much all physical objects do). Biological organisms obviously meet the requirements for Darwinian evolution (since that's where it was first observed and described), but there's no magic "biological sauce" required.
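To make that concrete, here's a minimal sketch (not from the original comment; the bit-string genome and fitness function are arbitrary illustrations) of those three requirements -- heredity, variation, and selective pressure -- operating on something with no biology in it at all:

```python
import random

TARGET = 0b101101   # arbitrary "environment": fitness is closeness to this pattern
GENOME_BITS = 6

def fitness(genome):
    # selective pressure: more bits matching the environment -> higher fitness
    return GENOME_BITS - bin(genome ^ TARGET).count("1")

def mutate(genome, rate=0.05):
    # source of variability: occasionally flip a bit
    for bit in range(GENOME_BITS):
        if random.random() < rate:
            genome ^= 1 << bit
    return genome

# heredity: each generation's members are (mutated) copies of the previous one's,
# with fitter individuals more likely to be copied
population = [random.getrandbits(GENOME_BITS) for _ in range(100)]
for _ in range(50):
    weights = [fitness(g) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=len(population))
    population = [mutate(p) for p in parents]

print("mean fitness:", sum(fitness(g) for g in population) / len(population))
```

Run it and the population's mean fitness climbs toward the maximum over generations, with nothing designed into any individual.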
> I do not think artificial intelligence can arise naturally because its components are not natural.
The whole natural/artificial divide is unsound, since humans, and hence all of their products, are part of and products of nature. Nothing not-natural exists.
> Personally, I believe artificial intelligence does not require biological components, and I believe that under certain circumstances, it could develop unintentionally from a relatively advanced computer, but that is not the same as naturally.
The whole thing about artificial intelligence arising naturally is a strawman you've constructed. You've just agreed with and illustrated my argument -- which is that artificial intelligence does not require understanding intelligence.
No, I think we're arguing two different things here. I don't mean machines in the sense of a deterministic mechanism, I mean machines in the sense of a non-biological computer.
Machines do not have generations. Machines are not alive. I can simplify this: we do not yet have computers which are alive, thus they fundamentally do not operate the same way humans do in terms of evolution. This circles back to what I said about leaving machines alone to develop in an evolutionary sense - they won't, they'll die.
Machines do not yet improve themselves after a certain "age" of maturation.
The natural/artificial divide is not unsound, because artificial means something which is created by a human, and natural means something which is not. There are definitions of these terms and philosophical schools of thought that make this divide unsound, but colloquially, I don't mean those in this context.
Here is the basic point I'm trying to make - human beings arose naturally with no apparent intelligence to guide them into existence. Machines did not do so. You can argue they did because "everything is natural", but that's not my point here. They are fundamentally different from human beings and only exist because human beings existed first.
You can't compare the relationship between an Intelligent Designer and the human species to the relationship between humans and machines, because they are opposites. Humans apparently have no guiding force that opted to create them or guide their existence, whereas machines do -- us.
That is what I mean by natural. I think an intelligence can arise naturally, but only if it follows the natural conduits that would form conventional intelligence, which is via biological organisms.
In order to replicate what happened through evolution using a completely different "container" if you will, you'd need to be able to understand intelligence completely, or at least enough to implement it.
Ultimately, intelligence never arises from non-organic components (at least not on Earth). To make it do so where it would not ordinarily happen is what I mean by artificial versus natural. And to do that is to essentially reverse-engineer intelligence itself, which would require understanding it.
I apologize if I'm still not being clear, but does this make my point any better?
You keep positing an invalid distinction between biological and other mechanisms. There is nothing limiting generations to biological mechanisms. Consider von Neumann's universal constructors or, for purely software mechanisms, the agents within Avida.
We do, in fact, now have machines -- in the software sense -- that have all the features necessary for Darwinian evolution to operate. We don't yet have that for hardware devices, but we're fairly close, and we certainly don't need to understand intelligence to build such self-replicating hardware/software systems (though the universality of computation suggests we don't need hardware/software systems at all, since anything they can exhibit, pure software systems can as well).
The problem isn't that you haven't explained yourself well, it's that your argument rests on a fundamental distinction between natural biological organisms and all other machines which does not exist.
I think you have to address the parent's argument that machines are not "alive", while biological organisms are.
Of course, first we need to define "alive", and then we should ask: can we build something that will be "alive"?
Do you consider the entities in Avida simulation to be alive? How about biological vs computer viruses? Is there a fundamental difference between them?
If we simulate a biological organism on an atomic level, together with its immediate environment, so that it behaves exactly like its real-world counterpart would, do we call it "alive"?
You can "downvote" my response, but here's a challenge:
You have no idea about how your own ecology works.
You barely understand how 'society' / 'interlinked communications' work between individuals (and your social / psychological sciences are laughable).
The first thing an (unbounded) AI does is link directly into the largest power source it can find (usually a Star) and then start re-organizing QM / magnetospheres to support itself.
First rule of dumb people not understanding evolution:
You can be lucky, you can be crappy, you can have some terrible design flaws (hello, mammal eyes), but that's enough as long as you survive. Also, evolution works over POPULATIONS, which means that if 10% are ninjas, 60% are the bog-standard model and 30% are the crappy models -- if your environment is 'fitted', YOU ALL SURVIVE.
Yep, even the 30% crappy ones!
Hands up who doesn't understand the difference between ADaption and ABaption!
Selection only occurs on a massive time-line or when environmental changes occur or there's a predator/prey war going on. And even then, the crappy crappy 10% can get lucky and still breed. That's why ecology / evolution is fitted over POPULATIONS, not individuals.
AI doesn't work like that, at all (once you get into the real stuff). All AI works on limited models (at the moment).
>Hint: this guy is a muppet, as shown by his attempts to "downvote" counter-arguments.
HN: land of the muppets.
>>On a more serious note, anyone actually working in the field of AI development knows this already. HN proving that... it doesn't. #Dragonswhoarenotdragons
Continuing on both the comment you were responding to and your own comments, I think it can be stated even more simply as:
You don't have to understand how intelligence works to create a machine with intelligence if you can create a machine subject to the same or similar evolutionary forces that worked to create intelligence in humanity, and are willing to wait long enough for it to happen. This is a totally viable approach to creating an intelligent machine, as you can certainly create a machine that can mutate extremely rapidly.
The problem is, that doesn't help us figure out what intelligence is, unless the process of watching this happen somehow gives us insight into the matter, or we manage to create a cooperative machine intelligence superior to our own that's more up to the task.
You're applying evolutionary forces to a simplistic binary plateau.
This isn't intelligence, it's a subset of maximal solutions to bounded problems.
Hint: your dog bounds you as much as you bound your dog. Your dog has some input into your modern Homo sapiens sapiens mind. The same goes for your atmosphere, your gut fauna, your planet's iron core and so on.
I do think that understanding intelligence is critical for developing AI, because computers are not subject to the same evolutionary circumstances. If they were, then it could be otherwise. Now if you are talking about an organic computer, and you mean to include existing animal brains, well, people already knew that, and they meant to talk about the inorganic computer. Maybe after (or concurrently with) inorganic AI, people will take on the challenge of designing intelligent organisms that fit in the ecosystem.
I think intelligence may be composed of several sub-modules, the combination of which, working in concert, produces the sought-after effects people often talk about when they talk about the Turing test.
I do think tests which propose to pin AI on human characteristics are not as useful an investigative avenue as they could be. I think the first real test of AI is the identification of causal factors in a phenomenon. I think another major marker of intelligence would be if AI could perform arbitrary analogical mapping. I think all of math is arbitrary analogical mapping, where you start with a set of capricious but useful building blocks, build a big structure and then... analogically map the math onto a phenomenon.
I think these two ingredients make for the kind of mental abilities that people have been craving in AI, abilities like a computer developing its own software to use hardware, or a computer which models and reasons about phenomena.
I don't feel like the author is familiar with modern approaches to AI. For instance, he mentions "creativity" as a stumbling block for AGI, but there is a whole class of existing algorithms that exhibit prototypical creativity, namely generative models.
Essentially, the idea is that "creativity" is the act of sampling from a distribution over the kind of thing you are trying to create. Learning algorithms like Boltzmann Machines will learn a distribution over the inputs they see. One thing you can do with such a distribution is check the probability of a given input under it, which can be useful for classification. Another thing you might want to do is generate representative samples, i.e. generate an example E with probability X iff P(E) = X under the model. The latter is what I would call "creativity".
Under this definition, creativity depends on both the learned distribution (which should only assign high probability to meaningful data) and of course on the sampling algorithm. As it turns out, it is very hard to write good sampling algorithms for non-trivial distributions (naive MCMC will often get stuck). So creativity is hard, but so are a lot of other tasks, so I don't think it's fair to single it out.
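For what it's worth, here's a rough numpy sketch of the sample-from-a-learned-distribution idea using a tiny restricted Boltzmann machine. The weights are random stand-ins (a real model would have learned them from data), and the naive Gibbs chain below is exactly the kind of sampler that gets stuck on harder distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny RBM: 6 visible units, 4 hidden units. Weights here are random placeholders;
# a trained model would have learned them so that meaningful inputs get high probability.
n_vis, n_hid = 6, 4
W = rng.normal(scale=0.5, size=(n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(steps=1000):
    """Alternate sampling hidden-given-visible and visible-given-hidden.
    After enough steps the visible vector is (approximately) a draw from
    the model's own distribution -- the 'creative' generation step."""
    v = rng.integers(0, 2, size=n_vis)
    for _ in range(steps):
        h = (rng.random(n_hid) < sigmoid(v @ W + b_hid)).astype(int)
        v = (rng.random(n_vis) < sigmoid(h @ W.T + b_vis)).astype(int)
    return v

print(gibbs_sample())
```

The "creative" part is just the return value: a visible vector the model itself considers probable, rather than one copied from its inputs.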
I don't think generative models are sufficient to model the kind of creativity that the author is thinking about. See this quote from the article:
> The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible.
He is looking for solutions outside of the distribution that has been previously observed.
What the author is looking for is a kind of induction. If I'm seeking a new theory of gravity, I'm not going to start looking at the colors of hats. Why not? Because I very strongly suspect it will lead absolutely nowhere. But if I think some ideas have potential and some others don't, isn't that a distribution of sorts?
Theories that have relevance or potential obey a certain distribution, and it is this distribution that you are trying to sample from. Sure, theories may not be directly derived from the extrapolation of sense data, but they are nonetheless derived from the extrapolation of theories in general. So what you're looking for is not a fundamentally new paradigm, it's more like an additional level of indirection. But there's no point to experiment with multiple induction levels if we can't even make a single one work well enough.
I agree that higher-order induction might very well suffice. But just to clarify the author's argument on this point: our rational justification for induction -- which is what you have just offered -- cannot itself be inductive, because there cannot be an inductive justification for induction, and therefore induction cannot produce knowledge (which must be justified).
I don't think induction requires justification at all. I mean, if you do it and it works, great! If you do it and it doesn't work, well, what else were you supposed to do anyway? Ultimately, I use induction because I have no better idea, not because I think it necessarily works.
Also, generative or inductive mechanisms have a wider scope of application than prediction. They can be used to inspect your own belief systems and pinpoint inconsistencies: the easiest way to know if your model of the world is inconsistent is to generate ideas and examples that fit the model but trigger contradictions.
> if you do it and it works, great! If you do it and it doesn't work, well, what else were you supposed to do anyway?
But, as the author explains, epistemology doesn't work this way, and is certainly not inductive (maybe high-order inductive -- whatever that means). We don't treat physics as simply "something that works", but as knowledge based on assumptions (codified in symmetry laws) which are not inductive by any means. The laws of symmetry (assumptions, really) are a justification for induction, but can't be a result of induction alone. In fact, all of mathematics is a set of justifications for inductions that humans have developed.
Sorry, but forming hypotheses based on observations, which is what assumptions means to me, is induction.
They are fallible, so they need to be proven in order to be accepted. Usually that's done by proof by contradiction.
But you don't need to prove them to use them. Many people go around believing unproven theories.
In the sciences the preference is for verified theories (and theorems in math), and we prove them by deduction.
But it seems that you know that. I don't understand why you don't like induction in a general algorithm. We don't have to restrict ourselves to a single type of reasoning. Induction, deduction, abduction are all valid and used by humans for generating new knowledge.
I might be misunderstanding something in which case please correct me.
The article says that current research focuses on achieving intelligence by means of induction alone, but induction cannot explain all of intelligence, because we reason in ways that contradict induction (although maybe they're a result of higher-order induction).
This author makes a good case that current approaches toward producing AGI are misguided:
> The Skynet misconception likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality.
But I don't think he's done a very good job of supporting his assertion that thinking (in the AGI sense) is a computational process. The closest he comes is:
> But that’s not a metaphor: the universality of computation follows from the known laws of physics.
In any case, I think that's a point very much relevant to the analogy: when you do more of the same, but at a scale a few orders of magnitude larger, you do get qualitatively different results.
That's very much applicable to AI. As an illustration, when we started to use GPUs for training neural nets with the exact same algorithms as a decade earlier but with an order of magnitude more parameters, we got dramatically better results (see Ciresan et al. 2010 [1]).
Building the same dumb skyscrapers but making them 100 times taller might in fact get them to fly.
Your "skyscraper" would need to have its top in geostationary orbit, or else it would be wrapping up around the earth as its top would be in LEO or HEO and thus would be moving at a different speed than its base. I.e. your skyscraper (or cable) would be crashing down.
So that's a space elevator. But can a space elevator be said to be "flying"? It stays above a single spot the whole time.
First of all, before a (hypothetical) "skyscraper" could fly, it would have to have its top beyond geostationary orbit. How so? Because a skyscraper is geostationary, and if its center of mass is below geostationary orbit then it's simply not orbiting. It doesn't have enough kinetic energy.
Secondly, there's no law that an object revolving around earth must do so at "orbit" speed/altitude. You and I are perfect examples of that: we're going far too slow. The only consequence is that earth's gravitational pull must constantly be counteracted by a supporting force. If an object were to revolve around earth at a speed faster than "orbit", that's also fine, as long as there's a force keeping it down.
So, no, this "skyscraper" could have its top well below geostationary orbit as long as it rests firmly on mother earth. Once its center of mass is beyond geostationary orbit, though, it would need to be held down.
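For a sense of the scale being argued over, the geostationary radius follows from setting gravitational acceleration equal to centripetal acceleration over one sidereal day; a quick back-of-the-envelope (standard constant values, nothing specific to this thread):

```python
# Geostationary radius: GM/r^2 = (2*pi/T)^2 * r  =>  r = (GM * T^2 / (4*pi^2))^(1/3)
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.1              # sidereal day, s
R_EARTH = 6.378e6        # equatorial radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"geostationary radius:   {r / 1e3:,.0f} km")            # ~42,164 km from Earth's centre
print(f"altitude above surface: {(r - R_EARTH) / 1e3:,.0f} km")  # ~35,786 km
```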
tl;dr Human intelligence is special. Computer accomplishments like being good at chess are not real intelligence. In fact, anything computers ever do is not real intelligence, because they aren't like us and possibly never will be, because there is some divine truth and wisdom that only humans possess.
How many times do we have to hear these arguments to realize they are just hot air? Either computers are capable of intelligence, or they are not. The answer to that question depends entirely on how you define intelligence. If you define it as the set of things humans are capable of and computers are not, then the answer will always be no. But just like humans, computers learn and are taught new things every day, and as time goes on the set of things humans are uniquely capable of grows smaller and smaller.
The article stands in direct contradiction to your supposed summary of it. To wit:
> Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation.
> [Turing] concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written. This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken.
etc etc etc. He literally spends more than 60% of the (very long) essay arguing against your "summary".
Anyway, a real tl;dr is right at the top of the page: "Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough." and then the last sentence: "it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever."
In other words, AGI is provably possible, but the author believes that we're going about it all wrong (behaviorist-inspired neural nets running on training sets, etc.) and need a philosophical (specifically: epistemological) breakthrough to move forward.
The last thing that AGI needs is more philosophical thinking. Philosophy and the Turing Test have done enough damage to the field.
Philosophers can not seem to comprehend how a mass of meat can create consciousness and take leaps to try to rule out the mass of meat, regardless of the fact that they all know that when bad things happen to said mass of meat bad things also happen to said consciousness.
The Turing Test could not have been a more wrong direction to define AGI. Looking like something does not make you that thing, and it did not at all address capability, just the appearance of capability.
Insight into intelligence will come slowly, and it will come from examining, categorizing, and eventually understanding the internal workings of the human brain.
This is such a short-sighted and close-minded view of philosophy, it boggles the mind.
> Philosophers can not seem to comprehend how a mass of meat can create consciousness and take leaps to try to rule out the mass of meat, regardless of the fact that they all know that when bad things happen to said mass of meat bad things also happen to said consciousness.
You seem to be referring to dualism, a philosophical idea that has been largely discredited among philosophers since the early 1900s. Functionalism, emergentism, materialism, and all other leading serious philosophies of mind that implicate the brain were first formalized and studied by philosophers, not neuroscientists.
> The Turing Test could not have been a more wrong direction to define AGI. Looking like something does not make you that thing, and it did not at all address capability, just the appearance of capability.
And philosophers are among the most vocal critics of the Turing Test. The Turing Test was made by Turing, who was first and foremost a mathematician & logician, not a philosopher of mind.
> Insight into intelligence will come slowly, and it will come from examining, categorizing, and eventually understanding the internal workings of the human brain.
Which, of course, is what modern philosophers of mind do.
> Philosophers can not seem to comprehend how a mass of meat can create consciousness and take leaps to try to rule out the mass of meat, regardless of the fact that they all know that when bad things happen to said mass of meat bad things also happen to said consciousness.
Neither can anyone else, so far.
> Insight into intelligence will come slowly, and it will come from examining, categorizing, and eventually understanding the internal workings of the human brain.
Insight -- yes. AI -- not so certain. Here's why: it's plausible that intelligence in the human mind emerges as a consequence of an intractable process. We might not be able to understand that intractable process. But suppose that instead of understanding it, we mimic it: we build a replica of the human brain, and it works! But this would not quite be artificial intelligence; it would be a replica of natural intelligence. This is not just a semantic difference: even if this mimicry works, we may not be able to direct it in any way. For example, we won't necessarily be able to give this brain replica super-human intelligence without, say, giving it a mental illness at the same time.
But most AGI researchers don't throw their hands up and say, "Oh damn, it's impossible (intractable), it's magic because we don't understand." Forgive me if I feel this is a less-than-useful sentiment.
As for the possibility of intelligence being intractable, what are the chances of this? Is it likely? If we don't even know what AGI entails, how would we possibly put odds on this? If we don't have the odds, what use is there in thinking about it? Should we stop researching AGI because of the possibility that it might be too much for us to handle?
> But most AGI researchers don't throw their hands up and say, "Oh damn, it's impossible...
Neither does the author of the article.
> As for the possibility of intelligence being intractable, what are the chances of this?
I think the odds are very good. Every known complex system -- the weather, for example, or an ant colony -- is intractable, let alone the human brain.
> Should we stop researching AGI because of the possibility that it might be too much for us to handle?
Intractability doesn't mean it's too much for us to handle. We can simulate intractable processes, and we do it all the time (that's how the weatherman knows if it's going to rain tomorrow), so of course it can be studied. The problem with intractable processes is not that they can't be artificially created, but that they can't be artificially controlled (or predicted) beyond a very short time frame. So it's very possible we could build an AGI, but won't know how to make it any smarter than us.
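A toy illustration of that simulate-but-can't-predict distinction, using the logistic map as a stand-in for a chaotic process (my example, not the article's): each step is trivial to compute, yet a billionth-of-a-unit error in the starting state wrecks long-range prediction.

```python
# Two runs of the chaotic logistic map starting from almost identical states.
# We can simulate every step exactly, yet the tiny 1e-9 difference in initial
# conditions grows until the two trajectories have nothing to do with each other.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |a - b| = {abs(a - b):.6f}")
```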
>So it's very possible we could build an AGI, but won't know how to make it any smarter than us.
Make it as smart as the smartest of us and then it'll figure out how to become smarter on its own. Then ask it to explain to us how we work.
Tongue in cheek, but once an almost-as-intelligent-as-a-human level is reached, the next step is not that far-fetched -- quantity, learning multiple times faster than humans, and we have progression on a different scale. Or we can always try mutating the thing to see what comes out -- crude, but it worked at least once.
> once an almost-as-intelligent-as-a-human level is reached, the next step is not that far-fetched -- quantity, learning multiple times faster than humans, and we have progression on a different scale.
But this doesn't necessarily follow. The experience of a mind working at a higher speed will be a slowing down of time -- imagine what you'd do if time slowed down. The machine will not necessarily learn more (we might have a limit of how much we can learn), and will probably experience boredom and frustration (as everything around it will be slow) that might drive it crazy.
I don't expect professional philosophers will contribute much either. But it's plausible that a philosophical advance (a new way of thinking) within an existing field such as computer science will lead to the breakthrough. Deutsch has claimed elsewhere that all great advances in science are like this. For example, Darwin's Theory of Evolution represented a fundamentally new mode of explanation.
Deutsch discounts the possibility that we create AGI the same way the universe did: as a sum of emergent behaviors honed by an evolutionary process. Nobody developing an AGI is going to code every behavior, line by line. They're going to code an infrastructure within which an AGI could work.
The interesting thing about this is that the AGI will probably have about as good an idea of how its brain works as we do. While I agree this is likely to be the way AGI comes about, I suspect it will impact its ability to improve itself. What are the ramifications of (let's say) human-level intelligence running on different hardware?
Well, if AGI evolves from infrastructure that naturally fosters intelligence, then it is probable that most intelligence gains are actually a function of better infrastructure. This suggests that individual intelligences are actually dead ends: if a new, better infrastructure comes along, you can't just port an existing AGI to it, because the whole of it is organized suboptimally; training a new AGI from scratch will always be faster than trying to improve the old one. It's similar to how new software projects can innovate more easily than existing ones because they don't have legacy burdens.
>In fact anything computers ever do is not real intelligence
No, he doesn't say that. On the contrary he names the principle ('Universality of Computation'; see paragraph 4) which guarantees that computers are capable of true intelligence, since, if programmed correctly, they can simulate the behaviour of any physical object, including human brains.
>there is some divine truth and wisdom that only humans possess.
He also explicitly repudiates supernatural explanations (see para immediately before the one mentioning John Searle).
Why is it so hard to remove ourselves from that equation? Why should intelligence be a skill only humans acquire? It is not only about machines: there is a constant stream of papers revealing that animals are not as dumb as previously thought either. We survived recognizing we are not the center of the universe; maybe we are also not alone at the top of the intelligence pyramid.
The latest Edge question, "What do you think about machines that think?", provides a good and very broad overview of many aspects. I go with George Church: "What do you care what other machines think?"
Every time an AI system beats a human at a task that was previously thought hard (chess, jeopardy, face recognition, etc) many people immediately dismiss it as not reflecting real intelligence any longer.
When that process is complete, and every task has been eliminated from those reflecting "real intelligence", we will be certain that we have created artificial intelligence by any measure that matters.
I've not read much about this definition, but it seems that a problem with focusing on the ability to predict is that to get useful work done, you have to reduce everything to a decision problem. You have an ideal fitness function, but how are you generating ideas, plans or theories to evaluate?
I personally prefer (although it's certainly less measurable) Ben Goertzel's definition - the ability to achieve complex goals in complex environments. It's still wishy-washy enough for people to write off any given achievement of AI though, I guess.
The brain as a prediction and inference engine (the Bayesian brain) has been accepted by many in neuroscience. I think this article is novel because he rebels against this idea. I do think that many of the things we consider intelligence are prediction, image recognition, speech comprehension, understanding others' theory of mind, and many games (e.g., chess).
I think Deutsch's argument is that there is also creativity (and creative reasoning) that is omitted from today's approaches to AI.
> Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable.
This is where I gave up. Deutsch dismisses thousands of years of thought in a sentence. Not to mention that "justified true belief" is a phrase you find much more often in a textbook or an encyclopedia article than in a real work of philosophy.
The idea that we ought to have a better philosophical underpinning for AGI makes a lot of sense. Unfortunately the author blows past this and starts making a lot of tortured claims that don't entirely make sense.
The example of years that started with 20 seemed quite odd, given that the easiest way to understand numbers, at least for me, is inductively. Of course, if you lop off a bunch of information, such as the digits after the 20 or 19, it sounds like an impossible problem.