> I've since changed my mind and currently do not believe we can achieve AGI (ever).
Considering we (as in humans) developed general intelligence, isn't that already in contradiction with your statement? If it happened for us and is "easily" replicated through our DNA, it certainly can be developed again in an artificial medium. But the solution might not have anything to do with what we call machine learning today and sure we might go extinct before (but I didn't have the feeling that's what you were implying).
It is not a contradiction as I meant "achieving" in the context of creating it (through software).
The fact it happened to us is undeniable (from our perspective), but the how/why of it is still one of the many mysteries of the universe - one we will likely never solve.
FWIW, this is the same argument once made against human flight. In the late 19th century, there were a lot of debates of the form:
> Clearly flight is possible, birds do it
> Sure but how/why is one of the many mysteries of the universe, one we will likely never solve.
"Man won't fly for a million years – to build a flying machine would require the combined and continuous efforts of mathematicians and mechanics for 1-10 million years." - NYT 1903
The real answer to how birds fly is that they're extremely lightweight, so that wing muscles can lift them. A common pigeon weighs under a pound, and a large gull two or three. The heaviest flying birds, like condors and bustards, top out around 30-40 pounds, and anything much heavier is flightless. A 150-pound human isn't getting anywhere on wing-muscle power.
The largest pterosaurs are estimated to have had wingspans of more than 9 m and to have weighed up to 250 kg (550 pounds), and we believe they were able to fly. [1]
But that's not the most relevant point here. The fact that humans did achieve flight, but through a different method than birds, is exactly a supporting argument that we might achieve AGI with a different approach than the one our brains use.
There are countless similar examples. We see a natural phenomenon, we know it's possible, and we find a way to replicate the desired effect (not the whole phenomenon) artificially. I haven't heard any argument here for why it will be any different for intelligence, except that we don't know how yet.
The chain of reasoning that everything observable in nature is replicable by humans would also imply that we can replicate the creation of a living cell from non-living material and then endow that organism with consciousness.
Furthermore, it would also imply that we can replicate the birth of stars, black holes, and the Big Bang itself.
I am not qualified to say whether any of these is fundamentally impossible, but being able to do all of them would basically make the human race "God".
> we can replicate the creation of a living cell from non-living material and then endow that organism with consciousness.
Afaik we are very close to artificially creating living cells. This is one recent example [1]. The consciousness part is similar to AGI.
> Furthermore, it would also imply that we can replicate the birth of stars, black holes, and the Big Bang itself.
Some things might be a logistical challenge rather than one of knowledge. Fusion energy attempts to replicate the way stars produce energy, and we have already managed to replicate the effect; we are just (many years) shy of sustaining it long enough to produce net-positive energy.
But you might be right that some things are impossible to replicate. I'm much more inclined to believe we can't replicate the Big Bang than general intelligence, as Mother Nature replicates general intelligence millions of times each day. And by now we've started having a discussion about beliefs rather than knowledge, which is a much healthier way to put it, as we indeed don't know.
> Afaik we are very close to artificially creating living cells. This is one recent example [1].
I beg to differ. It may look impressive on the surface, just like GPT-3 looks impressive on the surface, but it is far from the real thing. It is just another extension of the ladder to the Moon.
The effort described in the article is nowhere near a living cell. It lacks protein-building and DNA/RNA mechanisms. They basically describe a group of nanomotors.
I can recommend watching James Tour on this very topic [1] and Stephen Meyer on the related topic of intelligent design [2]. Those two lectures were eye-opening for me in learning more about this field. Note: both of them are self-confessed theist scientists, which to me was not a problem (my views are agnostic, and it only made things more interesting, as you rarely get to hear views on these matters other than pop-sci ones).
> The consciousness part is similar to AGI.
It is not clear what you mean by that. It is one thing to build computer code and then have it manifest 'intelligence'. It is a whole other thing to do the same with organic matter, which cannot be 'programmed' even if we knew how to do it (and there is no evidence that 'programming' is responsible for consciousness at all to begin with).
This is also known as the 'hard problem of consciousness', and David Chalmers is considered one of the leading experts in the field [3]. Basically, the smartest scientists in the world are clueless about it and do not even know where to begin - in many ways similar to AGI.
> Some things might be a logistical challenge rather than one of knowledge. Fusion energy attempts to replicate the way stars produce energy, and we have already managed to replicate the effect; we are just (many years) shy of sustaining it long enough to produce net-positive energy.
I can see why one can hold this position, where it seems like we are making progress on everything we talked about, but that is the main punchline of the ladder-to-the-Moon analogy. Indeed it is imaginable, and indeed every step brings us closer. But that does not mean we will ever reach it.
I agree with you that the discussion ultimately boils down to direction and strength of one's beliefs.
I’m curious why you think that. Do you think it’s a fundamental problem with the discrete nature of traditional computers? Or a problem with scale and computational limits? If it’s the latter, if a hypothetical computer has unlimited time and memory capacity, why do you think AGI would remain impossible?
Machines are good at computation, which is not the same as reasoning, but rather a subset of it.
And not only are they good at computation, they are exceptionally good at it - I have no illusion of competing with a machine at taking square roots or playing chess. And increasingly hard problems are being expressed as computation problems, with more or less success - most famously, probably, self-driving.
But at the end of the day it feels like using an increasingly longer ladder to reach the surface of the Moon.
While imaginable, and while every time we extend the ladder the Moon does get closer, it is fundamentally impossible.
Ever since Gödel we’ve had a pretty convincing proof that there is nothing that you can do in terms of reasoning that can’t be expressed using computation. And since Turing we’ve got a framework that shows there’s nothing computable that you can’t compute using a universal computer.
So unless there’s something mystical beyond the realm of mathematics to ‘reasoning’ it can’t be a superset of computing.
If a finite amount of matter in a brain with a finite amount of energy can do it, then a universal computing machine with a finite amount of storage and a finite amount of time can do it.
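The universality point can be made concrete: one fixed simulator program can run *any* Turing machine handed to it as data. Here is a minimal sketch in Python (the rule encoding and the `run_tm` helper are invented for illustration, not taken from any library):

```python
def run_tm(rules, tape, state="start", max_steps=10_000):
    """A minimal Turing machine simulator: one fixed program that runs
    any machine given as data (a dict of transition rules).

    rules maps (state, symbol) -> (next_state, symbol_to_write, move),
    where move is "R" or "L". "_" is the blank symbol."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            # Read off the tape contents, dropping surrounding blanks.
            return "".join(
                cells.get(i, "_") for i in range(min(cells), max(cells) + 1)
            ).strip("_")
        symbol = cells.get(head, "_")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step budget exceeded")

# A machine, expressed purely as data, that flips every bit and halts
# at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "1011"))  # → 0100
```

The same `run_tm` never changes; only the data (`flip`) does - which is the sense in which a universal computer can compute anything any other machine computes.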
There are actually a lot of well-defined things beyond the power of a Turing machine (for example, a Turing machine plus a halting oracle that only works on Turing machines without halting oracles), but in terms of finite amounts of electrons doing normal low-energy electronic stuff, you are quite likely correct. Humanity may go beyond computability if, as some papers have suggested, quantum gravity requires solving uncomputable problems.
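As a toy illustration of why halting sits beyond any Turing machine: the obvious candidate "decider" - simulate for a fixed number of steps and guess - can always be fooled by a program that halts just after the budget runs out. This is only a sketch (the `step_limited_halts` and `countdown` helpers are invented for illustration); Turing's diagonalization argument makes the impossibility rigorous for *every* candidate decider, not just this one.

```python
def step_limited_halts(program, state, max_steps):
    """Candidate halting 'decider': simulate the program for up to
    max_steps steps, then guess 'never halts'. The guess can be wrong."""
    for _ in range(max_steps):
        state = program(state)
        if state is None:  # the program signalled that it halted
            return True
    return False           # guess - fooled by slow-halting programs

# A toy program: counts its state down and halts at zero.
def countdown(n):
    return None if n <= 0 else n - 1

print(step_limited_halts(countdown, 5, 100))     # True: halts within budget
print(step_limited_halts(countdown, 1000, 100))  # False, yet it does halt
```

No finite step budget fixes this: for any `max_steps` there is an input that halts one step later, so this style of decider is inherently incomplete.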
Even if our brains reason based on quirks of quantum mechanics (which seems unlikely given the scale at which neurons operate), what stops us from creating non-biological machines that interact with QM in the same way to produce artificial reasoning?
I am not saying that anything more than a really big computer is necessary for reasoning, only that one day physics knowledge may reach beyond the Turing machine (quantum computing does not).
Do you believe human brains contain a halting oracle? Or the moral equivalent of one - something that enables our brains to accomplish some non computable reasoning task?
It's semantics at this point, but we did not create ourselves; it was a complex process that took billions of years to create each one of us. Something being conceivable isn't the same as it being practically possible. I can imagine what you propose, but the same goes for traveling to distant stars or a time machine for visiting the future. All perfectly possible in theory.
Intelligence is an abstract concept; it depends on what exactly one means by it. I have watched rockets take off from Earth. I have never seen a self-aware machine.