The singularity is close? (mkaic.substack.com)
170 points by mkaic on March 31, 2022 | 666 comments



I'm surprised by the number of "is AGI even possible" comments here and would love to hear more.

I personally think AGI is far off, but always assumed it was an inevitability.

Obviously, humans are sentient with GI, and various other animals range from close-ish to humans to not-even-close, but still orders of magnitude more intelligent than any machine.

I.e. GI is a real thing, in the real world. It's not time travel, immortality, etc.

I certainly understand the religious perspective. If you believe humans/life is special, created by some higher power, and that the created cannot create, then AGI is not possible. But, given the number of "is AGI possible?" comments, I assume not all of them are religiously based (HN doesn't seem to be a highly religious cohort to me).

What are the common secular arguments against AGI?

Are people simply doubting the more narrow view that AGI is possible via ML implemented on existing computing technology? Or the idea in general?

While the article does focus on current ML trajectory and "digital" solutions, its core position is mostly focused on a "new approach" and AI creating AGI.

I'd consider a scenario where highly advanced but non-sentient ML algorithms figure out how to devise a new technology (be that digital or analog, inorganic or organic) that leads to AGI as an outcome that is consistent with this article.

Is that viable in 20 years' time? No idea, but given an infinite timescale it certainly seems more possible than not to me, since we all already exist as black-box MVPs that just haven't been reverse-engineered yet.


It's not - in my experience reading HN - that AGI-sceptics (like me) don't think it will happen; it's that we have developed a healthy contempt for the hype-merchants and their wide-eyed followers who, despite years of "machine learning this" and "deep learning that" and "gpt-3 the other", have nothing to show for it in terms of something resembling human-level intelligence, or even any sign of understanding how we think.

It's perfectly reasonable to see that and still be confident AGI will happen - just not with current models.


That is a reasonable skepticism, I think.

But I will say: by the time anyone has “something to show for it,” AGI will, pretty much by definition, already be here. I don’t think we’re gonna be able to see it coming, or know what it will look like, except through very speculative predictions. So I do think that while skepticism is warranted, we should still evaluate everything in good faith, and not automatically jump to accusations of shilling. (Not saying you’re doing that, btw - just commenting.)


I think the flight analogy for AI is a good one; we have machines that let us fly, but we don't fly the way birds do. In many ways, the ways that we fly far exceed the capabilities of birds, but there are ways in which birds exceed the capability of our machines.

The issue with AGI as it's often framed is that it implicitly assumes a general intelligence like us, without any metric that can relate human intelligence and machine intelligence. We have machines that beat us in Go, so if we took those machines and somehow combined them with machines that beat us in Chess, Starcraft and Dota, are we any closer to AGI than we were previously? What about if that same AI can also drive a car? What's the standard, and where does it end? Where do we fall on that same metric?

There are also some fairly deep philosophical questions in this realm, too. For example, to what extent is the human experience (and intelligence) linked to our physical bodies? Cooking by smell and writing by sound are things we do on a daily basis that have no machine analogue. How important is the embodiment principle to AI? Certainly we don't do things by plugging our minds in directly -- we drive cars using the same limbs that we use for everything else, which probably has enormous efficiency advantages. To what extent can you separate the intelligence from the body and the environment?

Frankly, I believe we're a lot further away than we think. The last decade has taught us that relatively simple methods can be applied in surprisingly powerful ways, which is an important start, but it doesn't tell us anything about how close we actually are to a given goal, and how we might go about reaching it.


The question is: we are further away from what? The goalposts differ wildly, depending on who you ask. From a 1:1 copy of the human mind we are very far, because we don't even have an idea of what a human mind is; heck, we don't even have a common definition or understanding. Is a soul part of the mind? Should the AI have a soul too? And the rabbit hole goes only further down. So while for some the singularity might be around the corner, for others it's light-years away. I guess the only way to have everybody agree is if the singularity hits us Skynet-style, and I'm definitely not looking forward to that.


> we are further away from what? The goalposts differ wildly, depending on who you ask

literally anything

the intelligence of a wasp; the intelligence of a cat; the intelligence of a Downs syndrome person; the intelligence of a 3 month old baby

AI is very far from all of the above


> have nothing to show for it in terms of something resembling human-level intelligence

We keep moving the goal post; a common theme when I studied ML in uni in the AI winter of the 90s was that beating Go surely would mean human level intelligence. And many ML models we see now would have been considered human-level a few decades ago; we moved our goalposts and definitions, which is fine.

However, ‘resembling’ is vague; I find most stuff on social media (most notably TikTok and Instagram) not human-level intelligence either, or, in reverse, people easily would (and do) believe comments and posts produced by something like GPT-3 were written by humans. That is because the level is so low, of course. I know GPT-3 is not intelligent (by my definition anyway, which is intentionally vague), but resembling in some cases: definitely.


> a common theme when I studied ML in uni in the AI winter of the 90s was that beating Go surely would mean human level intelligence

The goalposts haven't moved. If AI researchers thought that general intelligence would be required to beat a human in Go, then they were simply wrong.

AlphaGo is not a general-purpose intelligence. It only does one thing. It plays Go.


Is the Turing test "simply wrong"? Or do we need to further qualify it, as in "a GPT-3 chatbot is human-level only if it can fool any human for a long period of time, not if it can fool _some_ humans for a relatively short period of time"?

I think it's fair to say that the goalposts are moving (as they should).


Or maybe the Turing test is the right test but interpreted more literally: it is passed when the society that invented it genuinely believes it to have been passed. Hammering out the specific criteria ahead of time will only lead to frustration as it becomes clear that is not what the surrounding society actually holds as qualifying. The reason the goalposts move is that we fundamentally misjudge what it is we find to be human intelligence, and given that we have a nasty habit of denying this quality to other humans, let alone animals, it's entirely possible we will never genuinely grant it to technical artefacts that we devise.


> then they were simply wrong.

Sure; the point is that every definition we come up with gets beaten, and then it turns out we were wrong with that definition in the first place; defining what AGI means seems hard.


That isn't "moving the goalposts", though. That phrasing is commonly used to imply that people who are skeptical of the imminence of AGI are being unreasonable. The fact that we can't yet define general intelligence convincingly is a large part of the reason I'm skeptical that we're close to achieving it.


That reminds me of Moravec's paradox: What we thought was hard and difficult (and indicative of intelligence), such as playing Chess and computing derivatives, is actually quite easy, while a lot of things that we consider trivially easy (distinguish a dog from a cat, pick up an egg, read some squiggly text) are quite hard.

https://en.wikipedia.org/wiki/Moravec's_paradox

One of my favourite demonstrations of that was the 2015 DARPA Robotics Challenge, where one robot after another failed at such difficult tasks as walking and opening a door.

https://www.youtube.com/watch?v=g0TaYhjpOfo


I think we are very far off achieving it as well. But I do feel the goalposts were moved, and continuing to move them might be how we achieve it without noticing; like alien intelligence, we might create something we don't recognise as general while it already is. But not soon either way, I would think. Like finding aliens, I hope it happens within my lifetime, but I would put the odds of both at close to 0.


Defining intelligence in a way that captures everything relevant and isn't self referential is extremely difficult.

Dr. Marcus Hutter's AIXI is a solid mathematical treatment that reduces intelligence to the concept of information compression, and you can exhaustively construct a logical extrapolation from AIXI to any particular feature of intelligence at higher levels, but it's similar to string theory in that it's all-encompassing in scope. It's not useful in narrowing the solution space if you want to build a high-level intelligent system.

https://en.wikipedia.org/wiki/AIXI


The Turing Test still stands, IMO. The Chinese room argument against it is not convincing to me.


I do think some known weaknesses make it problematic though. Some intelligent behaviour might not be human: maybe the player is a generally superintelligent computer, but its answers are so out there that the other players do not recognise it as intelligence. Or the human player is so weird that their answers are mistaken for a computer's. Or, most likely (and I have played this many times, lately with GPT-3), human behaviour often is not intelligent at all. And while computers are not intelligent, in a scenario against someone who just responds with emojis and such (so, a currently normal convo on the internet), the interrogator might assign humanness to the wrong side many or all of the time. We already know that people who do not realise they are chatting with a bot think it is a human.

While I do not think that is intelligence, it does make testing for it slightly broken. It will come down to ‘I will know it when I see it’ by elitists, for exactly the reasons above.

So, in short: I do think the Chinese room argument against it is quite a good one, and one that even the elitist people can fall for if the ‘con’ is elaborate enough (as in: if GPT gets more data, more efficient learning, and learns to know when it cannot answer and has to look something up, like calculations). Or maybe then it is intelligent? (To be clear, I don't think so; I think we need better tests and definitions.)


Which one? Turing himself revised it, like, four times. The version where humans can't tell which chat member is non-human has been beaten several times (with tricks like mimicking a 13-year-old foreign boy, IIRC). The version where you replace an arbitrary member of a chat with a human is mostly about speech-pattern matching and plausible speech; I can't imagine that GPT-3 or whatever comes next is too bad at that. If you do it like in Ex Machina, as in chatting in person, it's mostly about building a very expensive robot that overcomes the uncanny valley.

I think the sad state is that the Turing Test is too diluted to be useful as a marker...


I expect "passing the Turing test" not to be a binary thing, but as AI improves it can hold longer and more nuanced discussions before the human can detect that it's a machine. So I'm not surprised that we already have chatbots that can fool some people some of the time.


Yes, but that makes it a pretty elitist thing, no? Maybe elitist is the wrong word (not a native English speaker), but I mean: I would always be able to make the test fail for the computer if I am the interrogator, though I cannot quite say why. Something like you say, and something where I think there is a very large % of the world population who will not be able to make that distinction.

An interesting variant would be to do the Turing test with a timer; the interrogator gets 30s for each session, has to say who is what within those 30s, and then continues to the next batch. I think computers would go very, very far if you do that. But that is the attention span considered normal currently for human/human social media interaction; often it's even less.


You need to distinguish humans from computers; you don't know whether you're talking to a computer or not, so failing everybody doesn't work. Maybe I'm not getting what you're trying to do.



> beating Go surely would mean human level intelligence

With the benefit of hindsight, this just seems like a bad goalpost. Go is hard for computers, but why did anyone think the smarts required to beat a human at it would transfer to understanding or generating text, or anything else a human can do?


The reason for Go was that much of the research was based on AI algorithms like Monte Carlo tree search.

What was "solved" with AlphaGo was using deep learning machine learning which are effectively black boxes. There was a certain assumption in the question for AI researchers academically that it would be an understood algorithm as an AI agent like a Prolog application, not a brute forced model. That's still not the case that we have a "solved" strategy and all we can do is watch it play as if it is a deaf mute player.

So there still is no "tic-tac-toe" known winning strategy to Go or anything.

That doesn't make AlphaGo any less impressive or any less practical, but it even has its own readout issues. It can't even read ladders without hard coding it in, for instance, because it becomes a long enough depth search. This is one of the first things a newbie would learn.

It's just a 19x19 board, so it was always known that if you could read out all the possible outcomes, you could see all the possibilities and win. This is just looking at all possible outcomes and picking the best one, not knowing how to play. Creating models of data that is 2, 3, or even 4+ dimensional is always possible; it just depends on how much computing power you can throw at it. The created models are essentially aggregate simplifications that let it play more quickly.

Generalized intelligence is so much different. You have to define the problems themselves that you are trying to solve, figure out what the variables are, and then solve them. Then you have to operate and run the machinery to create those experiments. Outside of a scenario where the intelligence has taken actual physical territory, I can't see how it would get there (think Terminator or BSG; it doesn't have to be malicious, but it would have to be in control of the physical area autonomously).

But the hardest part is defining the problems independently given the sheer number of problems they'd need to define second to second just to solve basic tasks, and they'd likely have millions of variables with millions of possible values.


No, the reason was that the search space is insanely large.

> It's just a 19x19 board, so it was always known that if you could read out all the possible outcomes, you could see all the possibilities and win.

All possible outcomes is not something you can iterate over in our universe.

> There was a certain assumption among academic AI researchers that the answer would be an understood algorithm, an AI agent like a Prolog application, not a brute-forced model.

I never heard this, and the result is really not brute-forced. You can't brute-force Go.

> That doesn't make AlphaGo any less impressive or any less practical, but it even has its own readout issues. It can't even read ladders without hard coding it in, for instance, because it becomes a long enough depth search. This is one of the first things a newbie would learn.

Only in early versions. AlphaZero didn't have any built-in knowledge and can learn different games, and the later developments in MuZero went further to make it a more generalised learner.

Removing the hard-coded logic, and even removing exposure to how humans play, it got better. It found strategies and ways of playing that experts had missed in an ancient game.

> This is just looking at all possible outcomes and picking the best one, not knowing how to play.

"It's just doing X, it doesn't really know how to Y" is a common refrain. It looks at options, and explores "what if" scenarios in a guided sense with a feeling about how good any particular potential board is. I find it hard to say that it doesn't "know" how to play.


>>> We keep moving the goal post

It's more like climbing a mountain and seeing higher peaks. Researchers may have been thinking that creating a human-beating program might get them insights into how to create an AGI, but they just find more problems to surmount.


Hard disagree. I have no contempt; however, as a person of science, I see that we are in the technological equivalent of the Stone Age when it comes to AI. "Cavemen" probably had some outlandish ideas in their own era as well.

To give you an example of how lost we are, we don't even know how human genetics interact with the human brain. It is one thing to make a machine that can pretend to think, but making a machine that not only actually thinks but also communicates and is able to act in a 'human' or 'intelligent' fashion is exponentially harder.

Just my 2 cents. I hope to be proven wrong, but we haven't cured cancer or discovered the secret to fusion yet, so...


Not sure your analogy holds?

Building planes that fly didn't require us to understand how birds or insects fly.

Getting directions from Google Maps didn't require them to figure out how hamsters navigate a maze.

Now, if you want to build a computer that passes the Turing test, perhaps you need to understand how humans work. Maybe? But it's not clear that this knowledge is necessary to build something smart enough to drown the universe in paperclips.

The latter reminds me of Edsger Dijkstra's aphorism: "The question of whether machines can think is about as relevant as the question of whether submarines can swim."

(Now, it might turn out that we need to understand how humans tick and how genetics interact with the brain in order to build a successful paperclip optimizer. Probably not, but it might turn out that way.

I am just saying that this would be a surprising empirical fact to learn. Not something that we can just assume based on armchair reasoning from analogy.)

Now to get slightly off-topic:

> I hope to be proven wrong, but we haven't cured cancer or discovered the secret to fusion yet, so...

Oh, we can totally build fusion reactors right now!

First, a fusor is a bench-top nuclear fusion device. The main downside is that no one has figured out how to get more useful energy out of it than we put in. So probably not what you had in mind.

See https://en.wikipedia.org/wiki/Fusor

Second, we can build a nuclear fusion device that does generate useful energy:

You take a huge tank of water, some steam turbines, and a supply of fusion bombs.

You take one of the bombs, explode it in the water, and use the turbines to generate electricity. Repeat as needed.

It's a very simple system, and we had the means to make this work since the 1950s. Of course, it's also a completely ridiculous design that approximately no-one would want to use in practice. Especially when you already have more conventional nuclear fission reactors.

But something along very similar lines was seriously considered for spaceship propulsion. See https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...
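
For a sense of scale, here is a back-of-the-envelope sketch of that scheme in Python; all the numbers are assumed round values (a 1-megaton device, a one-third-efficient steam cycle), not a real design:

    # Back-of-the-envelope for the bomb-in-a-water-tank scheme above.
    MEGATON_J = 4.184e15      # energy released by 1 megaton of TNT, in joules
    bomb_yield_mt = 1.0       # assumed yield of one fusion device
    efficiency = 0.33         # assumed thermal-to-electric efficiency
    plant_watts = 1e9         # a typical large (1 GW) power plant, for comparison

    electric_joules = bomb_yield_mt * MEGATON_J * efficiency
    days_of_output = electric_joules / plant_watts / 86_400
    print(f"~{electric_joules:.2e} J of electricity, "
          f"i.e. ~{days_of_output:.0f} days of a 1 GW plant per bomb")

So roughly one bomb every couple of weeks, under those assumptions.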


> First, a fusor is a bench-top nuclear fusion device. The main downside is that no one has figured out how to get more useful energy out of it than we put in. So probably not what you had in mind.

> Of course, it's also a completely ridiculous design that approximately no-one would want to use in practice. Especially when you already have more conventional nuclear fission reactors.

Right, and that's the point.

There are three logical leaps here, and proof is missing for all three.

- One, that AGI is something we can replicate in a Turing Machine.

AGI might require a specific effect in quantum mechanics to work, for example. Light refraction is completely understood, but still extremely difficult to solve for a single case -- computing power is getting there, but it took about 60 years from it being understood to us being able to compute it reasonably, and even then it's only an approximation -- our best render farms manage around 5 f-stops. That's nowhere near the human eye's ability.

Another example: the three-body problem solved for N bodies. Or how about protein folding? Folding@home is remarkable, but even with the combined GPUs of hundreds of volunteers, it still takes months or years to calculate the folds of a single protein. The brain has billions of them.

- Two, that we will figure out the solution to Turing machine simulation in our lifetime.

This can stand on the arguments of point one. I'd just like to add that there are many really simple conjectures that are as yet unsolved within mathematics. The Collatz conjecture is a good example here (see the sketch at the end of this comment), but there are hundreds of thousands of others. Despite it being probably the simplest problem to teach, it's been about 90 years and even proof-theory machines haven't made much headway. Erdős, probably the greatest mathematician to ever have lived, stated "Mathematics may not be ready for such a problem". In other words, subject experts doubt our ability to solve it within our lifetime.

Why should this be any different for an exponentially harder problem?

Perhaps a reference point: it took about 150 years to go from "neurons are in the brain and influence our actions" to "we can do brain surgery, kind of...". And probably about 200 more years until we can say we actually "understand" the brain in a meaningful way?

- Three. That the resulting device will be practical, useful, and will be able to understand itself enough to replicate a better version of itself, within a reasonable timeframe.

I think you argued this sufficiently yourself! The fact that we can build something doesn't make it useful. We can build machines that extract energy off of graphene; at no point does that make it useful to do so, however.

And yet, despite all of these unsolved problems, we are supposed to be able to simulate a machine to replicate the brain? In the next 20 years, no less?! Is it not reasonable to think that this is a preposterous assertion, whichever dimension you look at it through? That so many people have thrown themselves into wishful thinking, and cults at that, is beyond me.
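
Aside, for anyone who hasn't met the Collatz conjecture mentioned under point two: it is trivial to state as code, which is part of what makes its unsolved status so striking. A minimal sketch in Python:

    # The Collatz conjecture: starting from any positive integer n and
    # repeatedly applying n -> n/2 (n even) or n -> 3n+1 (n odd), you
    # always reach 1. Conjectured, never proved.
    def collatz_steps(n: int) -> int:
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps, after climbing as high as 9232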


I'll try a reply, but you might get more out of reading some of Gwern's essays. Especially https://www.gwern.net/Complexity-vs-AI and https://www.gwern.net/Scaling-hypothesis

First, what makes you think that solving the Collatz conjecture is easier than AGI? Doesn't that beg the question? Just because a challenge is easy to state doesn't mean that we should expect it to be easy to answer. Fermat's Last Theorem was easy to state; open a random math textbook for many theorems that are harder to state, but easier to prove.

Second, quantum mechanics is totally amenable to simulation on a Turing machine. In fact, all of known physics is. You rightly point out that the question is rather how good modern hardware and software is at the task. (Btw, quantum computers would be really good at simulating quantum mechanical systems. And that would probably be their main use; right now we don't really know much else they are better at than classical computers.)

Third, you bring up the point that figuring out how arbitrary proteins fold is still rather difficult. I agree. So an atom for atom simulation of the human brain would presumably be also rather difficult.

That doesn't mean that AGI is impossible. It only means that AGI via atom-for-atom simulation of a human brain would be rather difficult.

Simulating a human brain atom-for-atom is only one approach you can try to take to reach AGI. There are other approaches people are working on. Only one of them has to work.


Sure, but all this misses my point that these are comparisons. This is what has happened with other technology in a similar ballpark of "we know how to do this, we can (technically) do this". You're arbitrarily selecting for a specific outcome in the space of all possible outcomes, when not only do we not know enough about any outcome to favour it, but we do have knowledge of how other technology developed in a similar space, with similar constraints, turned out -- i.e. it didn't turn out at all how we expected in any specific way.

So let's be honest here: in the space of all possible outcomes, the negative outcomes where "it doesn't work" or "it works but is too slow to use" or "it doesn't understand how it works either" form a larger order of infinity than the outcomes where it does work, at least based on what has happened with pretty much every other technology that we have records of predictions for. (Is there a technology that we have predicted where it wasn't wildly different or handicapped compared to our dreams? I'd honestly like to know, TBH.)

There is literally no reason for pontificating more over this. All of this, the best guesses, the worst guesses, is all just a variety of wishful thinking. The only true answer is we have no fucking idea.


And why should the AI copy the human mind anyway? By that definition, extraterrestrial life cannot be intelligent because, hey, they are not human.


Human-level intelligence? Dog-level intelligence would be quite the achievement, and we're nowhere close! Will we be there in 20 years? I seriously doubt it - not without a radical departure in processor architecture. We're within striking distance of hitting the limits with our current architecture yet we're nowhere close to being able to achieve human-level intelligence.


I think there are two kinds of AGI skepticism. There's skeptics like us that think we're nowhere even close to beginning to understand how to build an AGI, but see no reason to suppose such a thing is impossible.

Then there are those who think AGI is impossible in principle.

The people I get most frustrated with are those that use the first argument - we're nowhere close to implementing an AGI - as an argument that therefore AGI is impossible. Sometimes they don't even realise or recognise that this is what they are doing.

The argument is often that "we do X and computers don't do X therefore computers can't be like us in that way". Well, no, computers don't do those things yet because we haven't developed AGI yet. That's not a reasonable argument that developing AGI is impossible.


> Then there are those who think AGI is impossible in principle.

Yes. I don't believe that it is impossible in principle either, but neither is it trivial to argue that it is possible. An argument that annoys me to no end is this: "well, AI continues to get better (there is no regression in our capability), and therefore it must sooner or later exceed any given threshold, for example human-level AGI. QED." That argument is just plain wrong. The world record in the 100-metre dash is only ever improving, but that doesn't mean that humans will soon run Mach 1.


Interestingly enough, the man who coined the term "AGI" (Ben Goertzel) has always shared that same skepticism about ML, Deep Learning, and other forms of "Narrow AI".


Well, sure, but now they are also working on integrating neural networks and symbolic learning.


>have nothing to show for it in terms of something resembling human-level intelligence, or even any sign of understanding how we think.

Well of course they have no idea how we think. They're making no actual effort to study how we think. That's for cognitive scientists and neuroscientists, and AI people often sort of dismiss it as having too long a time-horizon or overly high compute requirements before it's applicable to "real world" problems.


'AI people' is a fairly broad term, and the field is so hot these days that there are plenty of people trying all kinds of different approaches.

So there are plenty of scientists and engineers trying to understand how the brain works and how to combine that with, e.g., deep learning.


On the one hand, sure, I'm one of those scientists. On the other hand, we're still considered a subfield whose work isn't always considered entirely relevant to the rest of the field, because we're measured by the yardstick of task-specific performance on benchmark datasets. I just reviewed a paper yesterday that was powerful and general, but wasn't really trying to hit SOTA on any one test task its architecture could perform. It was trying for generality. This likely got it dinged by the other reviewers.


> On the other hand, we're still considered a subfield whose work isn't always considered entirely relevant to the rest of the field, because we're measured by the yardstick of task-specific performance on benchmark datasets.

That attitude seems entirely reasonable to me.

> I just reviewed a paper yesterday that was powerful and general, but wasn't really trying to hit SOTA on any one test task its architecture could perform. It was trying for generality. This likely got it dinged by the other reviewers.

That seems like an interesting paper!


I think basically it requires machines that work in fundamentally different ways that can't be reduced to statistics, and that we'll get better at approximating intelligence for a while until we hit a wall where no amount of scale makes any big difference.


I will continue dismissing AGI until someone can tell me what GI is. I don't believe that it's just statistical inferences run in parallel (Machine Learning).

If we don't know what GI is, then I don't know how we are supposed to replicate it.

It's like if we wanted to produce power on Earth the same way as the Sun does, but without knowing that was fusion. So we were just trying to make things look like the Sun. Someone says "Look, I made a ball so hot you can't look at it, we must be close!"

I believe we will have fusion eventually because it is a process that we deeply understand, it's just hard to do.

I don't believe we are on a path to AGI right now because no one understands even a little bit of GI.


> I will continue dismissing AGI until someone can tell me what GI is

Here's a rather famous recommendation for a definition of general intelligence: https://arxiv.org/pdf/0712.3329.pdf

Very loosely paraphrasing their paper, general intelligence is the ability to adapt to novel situations and predict the future better than any other algorithm could do.

Personally, I think of perfect AGI as this: given that our universe is capable of approximation to arbitrary precision using a universal Turing machine (and infinite time and memory), AGI is the best* computable approximation to Solomonoff induction.

*"best" being some mathematical notion of optimality.


> Very loosely paraphrasing their paper, general intelligence is the ability to adapt to novel situations and predict the future better than any other algorithm could do.

The 'better than any other algorithm' part seems to make this into a pretty weird definition?

By that standard only John von Neumann [0] was intelligent, because every other human wasn't as smart as him?

I suspect any reasonable definition of intelligence has to work on a 'good enough' basis, not on optimality.

The paper you linked is quite interesting, and I'll be reading it now. I suspect your summary was perhaps a bit too brisk, and their more careful longer definition survives this trivial objection?

[0] Replace von Neumann with your favourite genius.


Humans are terrible at pretty much everything except what they are specifically highly trained in.

von Neumann was a genius, but he also actively promoted nuclear war. Does that mean he is or is not intelligent? Do you think in a world where von Neumann got to nuke the USSR in 1950, we would be happy we gave him the keys?


Ignore von Neumann, he was just an example.

Basically, with the definition as given in the comment, only whatever matches up with the smartest thing around would count as intelligent. (Whether that's the smartest thing currently around, or the smartest thing possible even in theory isn't quite clear.)

In any case, von Neumann was definitely extremely intelligent for a human. Don't conflate intelligence with wisdom, or with having goals that align with yours.


Quoting the key definition:

"Bringing all these pieces together, we can now define our formal measure of intelligence for arbitrary systems. Let E be the space of all computable reward summable environmental measures with respect to the reference machine U, and let K be the Kolmogorov complexity function. The expected performance of agent π with respect to the universal distribution 2−K(μ) over the space of all environments E is given by Υ(π) := [unquotable latex expression]. We call this the universal intelligence of agent π."

It is a useless kind of mathematical object from any practical perspective. Sure, some formal object is defined, but all the actually important things needed to use it are left as difficult, possibly devilishly or impossibly difficult, implementation details. How do you actually build a map from physical reality to the environment space E? How is the agent π mapped to any real physical being? All the real work that would make the definition useful is left out.

It is like defining complexity classes without providing any algorithms.

edit. Further complaint: Here is how they present "examples":

>A very specialised agent. From the equation for Υ, we see that an agent could have very low universal intelligence but still perform extremely well at a few very specific and complex tasks. Consider, for example, IBM’s Deep Blue chess supercomputer, which we will represent by π_dblue. When μ_chess describes the game of chess, V^{π_dblue}_{μ_chess} is very high. However 2^-K(μ_chess) is small, and for μ != μ_chess the value function will be low as π_dblue only plays chess. Therefore, the value of Υ(π_dblue) will be very low. Intuitively, this is because Deep Blue is too inflexible and narrow to have general intelligence; a characteristic weakness of specialised artificial intelligence systems.

Why do they need any formal definition of Υ to express this if they don't bother fleshing out some features of μ_chess so that you could provide any bounds for K(μ_chess)? Without such work, all of the actual claims in the paragraph are appeals to intuition. Nothing is proved, so the formal definitions go unused. The contents of the quoted paragraph could be expressed without referring to any equations or "definitions" at all; they are totally superfluous.


So are humans not GI then? We are terrible at predicting the future, in general. "Frustration" is the feeling of things not going how you thought they would or should.


I didn't read the paper, but going along with the sun metaphor, your description kind of sounds like "hot ball in sky that radiates light". It still doesn't describe fusion.


Here's a definition from Shane Legg and Marcus Hutter:

Universal Intelligence: A Definition of Machine Intelligence https://arxiv.org/abs/0712.3329


What do you mean by "dismiss AGI"? Do you dismiss the idea that we are currently on a developmental path to achieving AGI in the medium term (a lifetime or two) with current approaches, or do you dismiss the idea that AGI is possible at all?

I would agree on the former, but that in no way implies the latter.


" I don't believe that it's just statistical inferences run in parallel"

Why not? I also believe there must be something more, but I cannot articulate it.

But as disturbing as it sounds, most natural learning might be just this. Lots of statistics run in parallel.


And what about things people do that don’t make any statistical sense?

Is it AGI if it picks up its underwear the first time you ask, or is it only AGI once it waits for you to get angry after the fifth time you ask?


"And what about things people do that don’t make any statistical sense?"

Like what? It is always individual statistics made up from partly faulty data. It is never objective; what mattered in an evolutionary sense was that it works well enough. (Like how ants' algorithms are not perfect, but get the job done most of the time. I watched ants a lot...)


So we are intelligent statistical engines but usually very bad at updating our priors. If you make a machine that can drive a car, but doesn’t like to because its father frequently had road rage, is that AGI? If you build that but then “fix” that behavior out of it, then is it still AGI?


Consciousness is something I cannot explain at all with statistics (or any other mechanism).

True AGI by my definition would require consciousness, but as far as I know, there is also no general accepted definition of consciousness either.


That's the beauty of the Turing test: it avoids having to worry about consciousness.

(Of course, passing the Turing test wouldn't be necessary for intelligence, at most it's sufficient. Otherwise, humans who lost the ability to produce language but are otherwise quite smart and capable would count as non-intelligent.)


That’s just my belief. I might get proven wrong, but I haven’t been so far.


See M. Mitchell's Artificial Intelligence: A Guide for Thinking Humans


People often confuse not being able to understand how something is possible with it being impossible.

Despite all the progress being made by using a little bit of design, and a whole lot of brute force, people keep saying it can’t work, and they keep being proven wrong.

Ironically I think this means AI won’t be creating AI in an exponentially increasing way - because AI is more about scaling and emergence of dumb elements than it is about grand designs.


> ... people keep saying it can’t work, and they keep being proven wrong.

It's impossible to prove wrong all those saying: "AI is not AGI" until you eventually end up with an actual AGI. Nobody is proven wrong.


The atomic bomb is a good contrast. In the early 30s we knew a bomb was possible (Szilard’s patent was in 1933) but there were huge engineering problems to overcome (enriching uranium). But we knew that even if we couldn’t scale up isotope separation and had to go the long way around, we could still make a bomb with enough time.

There is no Szilard patent for AGI: no one has any theory of how to make it work other than “make it bigger,” which, as this article points out, hasn’t paid off like we’d hoped.

I’d have a lot more faith that AGI is possible if we had any kind of theory or roadmap on how to get there. Counting on a black box inventing it for us seems like waiting for a million monkeys to finish King Lear 2.


> There is no Szilard patent for AGI

You don’t know that. But even worse, you probably won’t know if it happens. History is not as predictable in the present as it is in hindsight.

Most physicists in 1935 would still have said that an atomic bomb was impossible, and defended it with the same indignant vigor demonstrated in this thread.


Some theorized the A-bomb could start a chain reaction igniting the atmosphere.

https://www.insidescience.org/manhattan-project-legacy/atmos...

I guess I'm glad they were wrong, but what a hell of a risk to take just to get another bomb when we already had so many. Perhaps the pressure of other nations developing the weapon was just too great.


The techniques that will allow AGI are probably already invented. How many people at the time knew what that patent meant for the future of the world?


I think AGI is usually equated with “superhuman” intelligence, though obviously not a requirement.

If superhuman intelligence were easy, why aren’t we already more intelligent? Being smart has already demonstrated considerable fitness benefits for the human race. It seems like we haven’t gotten much smarter in the last few thousand years though, at least from a raw horsepower point of view.

My intuition is that past a certain level of complexity intelligent systems become inherently unstable. This is all just a handwave, but there’s some circumstantial evidence in the confluence of genius and mental disorder in our species.

If that were true, the question then becomes: is biology an inferior substrate for general intelligence compared to silicon. Obviously a cpu can add two integers more efficiently than a meat brain, but it’s not self evident that this will hold true for more complex computation. Put another way, if you try to make an AGI agent “smarter” than a person, you might just end up with an irrational system that spits out nonsense.

So it may be that for complexity reasons you can’t beat human-level GI. At which point the question becomes whether AGI is cheaper to produce and operate than feeding and sheltering a human. If the answer is “no”, then AGI may be both possible and irrelevant.


Human ancestors evolved larger and larger brain sizes until they reached the limits of what we can calorically support and of what fits through the birth canal. It seems unlikely to me that these limits just happen to line up with general limits on intelligence. An AGI will be easier to scale further up without these biological limits.

(And even if those biological limits coincidentally lined up with general limits on intelligence, just the fact that AGIs will be able to duplicate themselves or share knowledge directly would itself be a huge practical increase to their intelligence over us.)


There's no proof that humans have hit any such limits. But that's not how genes and evolution work. We got intelligent enough to adapt to the environment and conditions that were thrown at us to survive the ice age and through the many inter-tribal wars that have been fought since. But once a species hits a plateau in selection pressure there is little improvement, except via sexual competition.


I'm not sure the birth canal is a real limit. It's a limit on how wide your head can be when you are born.

That's not the same as a limit on brain size in adults. (Nor even directly a limit on brain size in newborns: up to a point they could always get longer heads instead of wider ones.)

> (And even if those biological limits coincidentally lined up with general limits on intelligence, just the fact that AGIs will be able to duplicate themselves or share knowledge directly would itself be a huge practical increase to their intelligence over us.)

Yes. A human with access to a calculator is much more capable than one without. An otherwise human-level AGI with hard-wired direct-'brain'-access would be even more capable---without having to manipulate the calculator with clumsy fingers and interpreting its outputs with a general visual system.

Add direct-'brain'-access to Google and Wikipedia and to huge amounts of raw storage, and your human-level AGI would already be super human.


"(Nor even directly a limit on brain size in newborns: up to a point they could always get longer heads instead of wider ones.)"

Coneheads confirmed? Surely I wasn't the only one to watch it.


There are many aspects in which biology could make "better" humans - faster, stronger, more resilient, etc. - but they are not evolutionarily useful because they cost extra calories and apparently are not worth the cost in a calorie-restricted environment. Our bodies have been optimized to have as little of the "good stuff" as possible to conserve calories. We have mechanisms that will prevent building more muscle unless it's really necessary (as shown by exercise) and we have spare food - but it does not have to be this way; e.g. gorillas don't have to exercise to develop their huge muscles. It's just that our bodies (unlike theirs) are heavily optimized in favor of "cheaping out on features" to be more resistant to starvation.

IMHO that also fully 'explains away' the issue of "why aren't we already more intelligent". Brains are excessive consumers of energy compared to other organs, so extra brain mass costs calories. At a certain point (which seems to be our current brains) being a bit smarter does not allow a hunter-gatherer to harvest many more calories per day in the seasons/crisis events where food sources are scarce (which are the only times that matter for evolution), so the brain increase stops there, just to save calories. Improvements that are 'zero cost' (i.e. brain-structure changes which give more intelligence for the same calorie expenditure) are welcome, but the trivial way of getting more 'processing power' through larger brains is aggressively selected against.

And all this calorie saving apparently was worthwhile - e.g. Neanderthals were stronger and had larger brains than Homo sapiens, but we were a bit more efficient, so we are here and they are not.


And we _are_ already more intelligent. Eg we are more intelligent than rats, even though rats are pretty smart and evolution had plenty of time to make them smarter.


It's pretty hard to believe otherwise: being able to transfer knowledge between digital brains (something that seems arguably inevitable), as well as having faster access to more knowledge in general (because you can make a bigger brain with more memory), makes it pretty obvious that AGI will beat humans. If all they had were human brains but could transfer thoughts faster, they'd still beat us.


I suspect that intra-network latency has a significant impact on intelligent systems. Interconnectivity also seems crucial, but as you add nodes, the number of connections needed to keep that closeness in the network scales dramatically. These connections also need space, so if you keep the same level of interconnection as you add more nodes, your density drops and so your latency goes up.

There is thus a fundamental tradeoff between network size, network latency, and network interconnectivity that can't be avoided. While we may eventually beat the human brain on some or all of these measures, there is good reason to believe there are fundamental limits on the scalability of intelligence.
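
As a toy illustration of the interconnectivity half of that tradeoff (the fully connected case only; real networks, including the brain, are sparse):

    # Keeping every node directly connected to every other node needs
    # n*(n-1)/2 links, which grows quadratically with the node count.
    for n in (10, 100, 1_000, 10_000):
        links = n * (n - 1) // 2
        print(f"{n:>6} nodes -> {links:>12,} pairwise links")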


There is simply no reason to believe that the human brain is anywhere near optimal on this tradeoff.


Thank you for this. I was hoping someone would contribute a better explanation than my own vague intuition.


Again, I will hand-wave about complexity. A larger memory and the ability to load/unload large corpora rapidly implies a very different kind of cognition than the one we enjoy.

I’m not saying you’re wrong, only that I can imagine a reality where you can’t just arbitrarily scale components of cognition and still have a functioning system.


It's more likely (IMO) that humans are constrained by biological limits (energy, head size) than by some true maximum.

A simple thought experiment is the same architecture of the human brain (whatever that is), but just with way more power to run at more operations per second.


Would that actually work, though, or would it just cook itself?

I think the general question to ask is, for a given organism or machine, at what point does it stop being cost effective to make it smarter? Nature doesn't just keep human intelligence at the level it currently is, it's doing so for all other organisms. There seems to be tradeoffs. Which ones would apply to AI? And which ones are not currently relevant but will apply to future AI?

For example, AI has no physical predators currently, but you could imagine a future where they do, or where nanobots or special bacteria attack silicon. In such a future, it is possible that very intelligent AI would suddenly become uncompetitive, relative to dumber and leaner ones. Viable superintelligent AI might then turn out to be just as difficult or circumstantial to "evolve" as humans were.


Why would it cook itself if it was replicated in silicon?

I think it's a lot less likely humanity just happens to be at some natural limit of intelligence rather than being bounded by other unrelated constraints.

AGI tradeoffs would be different because it'd be unconstrained from biological natural selection. If you're interested in this stuff it's worth checking out the work that specifies the AGI goal alignment problem more specifically.

The most commonly recommended book is Bostrom's Superintelligence, but it's fairly dry. Tegmark's Life 3.0 is on the other extreme of too pop-sci like. I think Yudkowsky's writing is the best for explaining the issue (specifically AGI goal alignment and what the specific problem is) in a way that's accessible, but not dumbed down.


The human experience is fundamentally chemical in nature (which is why alcohol has an impact on cognition, for example). What would it even mean to replicate that in silicon? I know when food is bad because I can hold it, smell it, and taste it, none of which a computer can do. If you tried to upload someone's consciousness to a machine, you would need to emulate the full sensory experience as well, because there would be zero context otherwise.

What's missing from discussions on AGI is the lack of a measure that relates human intelligence with machine intelligence, and the recognition that our own intelligence is fundamentally tied in to our physical bodies. To what extent does it make sense to talk about intelligence without also talking about its embodiment?


Inputs can be replicated in other forms (cameras for example for visual input and computer vision).

There’s a common misconception that AGI would be human-like; consciousness is a mostly poorly defined rabbit hole that is orthogonal to the AGI question (and the goal-alignment issue).

That said, you’re right the ability to train is dependent on inputs.

If you’re curious about this stuff it’s really worth reading the stuff I mentioned to get a sense for what the problem is. I only suggest them because I would have made a similar comment to you a few years ago before getting a better sense for what the problems are actually describing.


>A simple thought experiment is the same architecture of the human brain (whatever that is), but just with way more power to run at more operations per second.

That's your quote, is it not?

I'm pointing out that it's not just a matter of replicating an architecture, because the architecture only really makes sense in the context of everything else in the human body. There's zero reason to expect that we could replicate the architecture of a human brain (whatever that means) and just plug in a few cameras to have a working model.

This also implies that it would be exceedingly difficult to make human-like AGI, because doing so would be tantamount to making an artificial human. Anything that we do make will almost certainly not be human-like, and will probably only make sense in the context of its sensory system and the inputs it receives. This goes beyond training, because inference happens on data drawn from the same distribution that you trained on. Note that I haven't said anything about consciousness here, because that's an unrelated issue.

I've read bits and pieces of Bostrom, and I would argue that it's "not even wrong", in the sense that, yes, building a child god would be a disaster for humanity, but we don't even know if building such a god is possible, because it's totally undefined and we have no way of measuring it. That's the problem. Without a definition of intelligence as a physical process that maps humans and AI to the same measure on a like-for-like basis, it's essentially an argument on whether or not Thor is stronger than Superman. We have no context for what the limitations are likely to be; it may be that building an angry god just isn't possible.


I think we’re talking past each other.

The point of the architecture example is not that it’d be done this way or that it’s practical - it’s just to point out that there’s nothing magical about us, and that if you had an existing model and ran it faster (without biological constraints) you’d get something smarter than us. It’s a simple example of a possibility grounded in something that already exists.

In practice AGI likely won’t be that for reasons you suggest.

I’m not sure why you’d think AGI is not possible when there’s general intelligence all around us and there’s nothing magical about biology. Current methods maybe won’t get us there, but the current stuff is already superhuman in some domains and does generalize a bit (AlphaZero).

Maybe it’s impossible for some unknown reason, but I’d bet against that. If it’s possible then the goal alignment problem is a real issue. With an unknown timeline it makes sense for some people to work on it now before we need it.


I think you're right that we're talking past each other.

I'm not saying that AGI is impossible, I'm saying that it's impractical to try to discuss AGI without some understanding of what intelligence is as a physical process. As an analogy for why: we understand fusion, to the extent that we know how to make fusion bombs. In theory, we could keep adding stages to a fusion weapon to make a bomb large enough to crack the planet in half, but in practice, other factors start to dominate the practical explosive yield well before we ever hit that point. So while it's theoretically possible for us to make a planet-ending weapon, it's not practical in any sense. And as it turns out, not only are planet-ending weapons not practical, bombs today have lower explosive yields than in the heyday of the nuclear arms race, because those weapons turned out to be impractical as well.

Talk on safe AI seems to be dominated by dark genies, when we don't even have the AI equivalent of a theory of nuclear physics, or lift. We just don't know what the practicalities of building superintelligent AI even are, so it seems premature to be ringing the gong and raising alarm bells that we might be building Beelzebub in someone's basement. If there's one thing I am sure of, it's that we won't just summon Skynet by accident; it'll almost certainly be the end result of the development of a fundamental theory of intelligence (or equivalent), and the accumulated work of engineers and scientists, probably over decades. You wouldn't expect scientists to accidentally make ITER, for example, without a theory of nuclear physics, even if they did have some notion of "hot rocks". Superintelligent AI seems at least as hard in my opinion, and I think that by the time we're in a position to build such an entity, we'll also have some idea of what the limitations are likely to be, how much of a risk it actually presents, and ways to constrain / mitigate these risks.

Note that this isn't the same thing as saying that the current field of AI safety isn't important, because it is. Narrow AI is still dangerous in the same way that we don't need planet-ending weapons for fusion bombs to be dangerous. But I'm less concerned with paperclip maximizers (because again, the notion that an AI could somehow turn the planet into paperclips raises serious and fundamental questions about the nature of embodiment that such proposals never actually grapple with) than I am with systems that enshrine and enforce social inequalities, or take power away from the average citizen, because the latter are things that we know are possible, even through negligence. More to the point, we don't need a theory of intelligence to assess the ways in which such systems could pose a threat.


I think all of this is reasonable and we'd probably have an interesting in-person discussion.

I think flight is a good comparison. Before human flight it was possible to speculate about the risks. The machines we built turned out to use shared underlying principles with birds, but ultimately we can do it differently. We can supply more power and as a result do things at a scale not present in the natural world.

Maybe it's the case that intelligence is a special case with special constraints, but I suspect it's not. As a result it makes sense for some people to try to solve alignment now because if it ends up not being constrained then by the time we need it, it'll be too late to figure out the problem.

Maybe we'll get lucky (like we did with nuclear weapons not being able to be made by any random person in their backyard), but I wouldn't bet on it.


Silicon-based machines do need cooling, and arguably an AGI-grade processor would be 3D, which is an even greater cooling challenge.

Speaking of silicon, I'm curious why it is virtually absent from organic chemistry despite its sheer abundance on Earth. Is it because it's too hard to extract it from its oxides? Or is it just completely outclassed by carbon? Depending on the answer, it is quite possible that future AI will eschew silicon altogether and run on organic chemistry.


> Would that actually work, though, or would it just cook itself?

You could add better cooling than what the human hardware provides.

Even just taking a normal human head and dunking it in cold water dissipates a lot more heat. (If you try this at home, I suggest getting a snorkel.)


> It's pretty hard to believe that being able to transfer knowledge between digital brains (something that seems arguably inevitable)

I don't think it's necessarily inevitable. It is not a given that future hardware architectures for AI would be inspectable or copiable, because the extra wiring required to do that is space and energy overhead. It may also be the case that greater intelligence comes through better distributed representations and that it simply isn't possible to cheaply translate knowledge from an inferior representation to a superior one (you may need to relearn from scratch). The ability to transfer thoughts may therefore require a sort of lowest-common-denominator representation, in other words, a language. I imagine that language could be more efficient, though.


You are right that it's not inevitable, but still pretty likely.

Also keep in mind:

Human nerves work (roughly) at the speed of sound.

Computers work (roughly) at the speed of light.

We already make computers that are much, much faster than humans at sequential processing.
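
A rough back-of-envelope on that gap, using my own assumed numbers (fast myelinated axons around 100 m/s, electrical signals in copper or fiber around two thirds the speed of light):

    nerve_signal_speed = 1e2       # m/s, fast myelinated axon (assumption)
    electrical_signal_speed = 2e8  # m/s, roughly 2/3 c in copper/fiber (assumption)

    print(electrical_signal_speed / nerve_signal_speed)  # ~2,000,000x faster propagation

That says nothing about what the signals compute, of course, just how fast they travel.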

About the representation thing: if you start with AIs that are copies of each other, you can probably keep the 'superior' internal representation the same or nearly the same. So I would expect AI clones to be able to exchange knowledge much more quickly than unrelated AIs.


AGI is only inevitable if you subscribe to materialism. Basically it comes down to whether we are purely material beings or whether there are non-material aspects to our mind. There are more than a couple of non-religious arguments for dualism, many of them relatively recent, as dualism is enjoying a bit of a comeback in philosophical circles. The Chinese room and the red room, a.k.a. Mary's room, are simple enough thought experiments that can help you grok why dualism might be true.

That being said, the most recent PhilPapers survey has 52% accepting materialism compared to 32% accepting dualism, with the rest undecided, so academic philosophy does generally lean toward AGI being possible. https://survey2020.philpeople.org/survey/results/all


The Chinese room argument is a parlour trick that uses scale as a distraction. It posits a person in a room manipulating symbols to produce intelligent seeming outputs. It says, see, it’s absurd to think a person in a room with a stack of symbols could emulate intelligence.

But let’s say the room contains many billions of people, it is the size of a planet, and it contains racks of many trillions of symbols, and it spends millions or billions of years to produce an output. That’s more like the scale of a sophisticated computer system, or a brain.

Does that sound much like a man in a room with some symbols? No. Does it sound like that could do complex calculations and produce sophisticated and perhaps even intelligent outputs? Well, given enough time and scale, yes why not?

The Chinese room is pure misdirection and it amazes me anyone falls for it. There’s really no actual argument there.


No, you're misunderstanding the Chinese room argument completely. It's not about scale, it's about the concept of "understanding" something. Here's another version that might make sense to programmers. I know Python, and I can read/write/compile Python code in my head. My computer's REPL can also read/compile Python code by following a detailed set of pre-programmed instructions to convert it to bytecode and execute it. Nevertheless the computer does not "understand" Python, it cannot write Python code in response to any problem, nor does it "understand" Python the way a person does. It is not a programmer; a programmer "understands" the language and can produce new creative meaningful output, and doesn't merely follow instructions.
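
To make that concrete, here's a minimal sketch (plain CPython standard library; the example line is just an arbitrary choice) of the machine "reading" Python purely mechanically:

    import dis

    source = 'print("I understand this")'

    # CPython "reads" the source by tokenizing, parsing and compiling it to
    # bytecode: a fixed, rule-following transformation with no comprehension involved.
    code_obj = compile(source, "<repl>", "exec")

    # The result is just a sequence of opcodes waiting to be executed.
    dis.dis(code_obj)

    # Running it produces the same observable output a person typing the line
    # would produce, which is exactly the gap the Chinese room is poking at.
    exec(code_obj)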


The catch in your argument is that from the outside, you manipulating code and your computer manipulating code are indistinguishable. You say the only difference is that you understand, whereas the computer doesn't. However, from the outside, there is no difference between the statement print("I understand this") being executed by you or by a computer.


What you are stating is the entire point of the thought experiment.


Cool, there are at least five different 'that is the whole point' takes on what the Chinese room means in this discussion. Also, I disagree with your take on it, that's not at all the point.


The fact that you’re saying “from the outside” is literally the point. Outside implies an inside implies dualism. Hence the thought experiment. If there was no outside the thought experiment inherently wouldn’t be.


Easy there with the words. Just because there's two of something is hardly an argument for a (mind-body) dualism [1]. Is the dichotomy cats vs dogs then proof for said dualism as well? How about vanilla vs chocolate? What do you make of the existence of a (six sided) die then?

[1] https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism


smh, have a good one.


> Outside implies an inside implies dualism

Well, unless with dualism you just mean that there are two, you are simply mistaken. Just because my car has an inside and an outside, and I cannot determine the state of its gearbox from the outside w/o the car telling me, does not mean that this implies a ghost in the machine. Far from it.


It seems like the Chinese room argument is incompatible with a materialist world view. To a materialist, what could the brain be but a computer? Some sort of physical process is taking place inside that is processing information, and at some level of abstraction that process will look like "dumb" symbolic manipulation. Yet we achieve "understanding".


Yes that is the point. The Chinese room is a thought experiment meant to prove dualism.


No it's not. Searle is explicitly not a dualist.


He thinks consciousness is a physical property. It's not clear to me why he thinks only brains, or maybe only living things, can have this property.

He makes the analogy that a weather simulation in a computer can't make anything wet so therefore a computer program can't have a thought. My take is that when we think about rain we don't get wet either. Human minds are the same sort of thing as the weather simulation, not the same sort of thing as the weather. For me thoughts are simulations, or models, or operations on models, and that those activities are tractable to computation.


Nothing about materialism implies that lumping organic and inorganic substances into a catchall term, "computer", leads to greater understanding of how either works.


How do we know that what we achieve is different from what the Chinese Room achieves?

Doesn't that beg the question?


I think the Chinese room experiment is pretty effective, but one person might say the room is conscious and another might not. We all agree that, if we had a Chinese room in front of us, the only real way to tell would be a Turing-style test.


Not really; the Turing test and its variants aren't that relevant anymore. Even Turing himself didn't like it.


Why are they not?


> It says, see, it’s absurd to think a person in a room with a stack of symbols could emulate intelligence.

That's not at all what it says. It says:

1. Assume a computer program can pass a turing test

2. Convert the computer program into a list of steps that a human in a room can follow

3. Put a human in the room. Pass him a slip of paper with Chinese writing. Have him follow the steps to generate a response.

4. Now consider, once you've done this, does the human in the room following those steps understand Chinese?

Searle's answer is obviously not, from which he deduces that the computer program doesn't understand Chinese either.


Right, but the person in the room isn’t the computer program. He’s not even the whole computer. He’s just a component of the system. It’s the system that we should consider as understanding Chinese, not just a piece of it. He’s following a set of instructions so he doesn’t even have all of those in mind at once either.

By abstracting the system as a man in a room we're distracted into thinking we're considering the system when we're not. If the man in the room is one of billions of men, at massive scale, which is a more realistic model, that becomes obvious.


Yes, one response to the thought experiment is "the whole system understands Chinese". I don't find that response convincing. Do you really think a book with instructions in a room plus a man to carry them out understands Chinese?

The thought experiment hinges on the concept of understanding. If you want to argue against Searle, and you can, your best bet is to attack that concept, to argue that "understanding" is an illusion or that it's nonsensical. This is what philosophers like Daniel Dennett have done.


> Do you really think a book with instructions in a room plus a man to carry them out understands Chinese?

Again with the scale misdirection. Are you really sure a room the size of Jupiter with a rule book the size of many libraries, full of trillions of people following the rules, shuffling quadrillions of symbols for millions of years definitely can’t ever understand Chinese? Not in any sense?


You're bringing up scale, not me. The book can contain 10 billion lines of instructions. That's a lot of "scale".

The thought experiment is about the concept of understanding/intelligence/consciousness/whatever. It has nothing to do with scale.


If it has nothing to do with scale, then why doesn't Searle use a realistic example of a vast room? Why a book and not a library, why a man and not an army?

I’m bringing up scale because I’m pointing out how it’s being used.


If you insert the adjective "vast" before room, then you get The Vast Chinese Room thought experiment and nothing changes.


A lot of people find it credible that a hugely complex fast computer system could be intelligent. Many of them seem to find it convincing that a man in a room with a book can’t “understand Chinese” when he personally doesn’t. But the only difference between them is scale.


> A lot of people find it credible that a hugely complex fast computer system could be intelligent.

But not John Searle...so it seems clear that scale has nothing to do with the thought experiment. For Searle, a vast room or an army of people to carry out instructions makes no difference.

> Many of them seem to find it convincing that a man in a room with a book can’t “understand Chinese” when he personally doesn’t

The people who read the thought experiment this way don't understand it. In the thought experiment neither the computer program (no matter how powerful the computer is) nor the man and the room understand Chinese.

> But the only difference between them is scale.

No, the point of putting a man in the room is that we all agree that a human is capable of understanding. From his vantage point in the room, he can see that the program works by executing instructions without any understanding. All the room does is demystify a hypothetical computer program that can pass a turing test.

Searle is not arguing that a machine with understanding/consciousness is impossible. He's arguing against behaviorism ("if it can pass a turing test it understands"). He takes the psychological phenomenon of "understanding" (something we're all familiar with) seriously, he thinks it represents a real thing, and that a computer that can pass a turing test doesn't necessarily have it.


I don’t see how that’s different from a neurologist examining a human brain. They can see it’s just cytoplasm and electro chemicals oozing around and firing off signals. None of that looks like it understands anything, any more than the symbols moving around in the Chinese room. They can no more point to the place in the brain that understands things than the guy in the Chinese room holding the book can.


The difference is that we all have the subjective experience of understanding. When we have a conversation, we experience other people's words as sounds in our heads laden with meaning. We "understand" each word in the sentence, how the words fit together, what they refer to, and so on. We're all familiar with this experience, right? We're doing it right now through the medium of writing.

Searle takes that experience seriously. For him it's real and, in order to have that experience, you need machinery somewhat similar to what we have. He can't bring himself to believe that a list of instructions and a person dumbly executing them have that sensation of understanding even if they produce the correct output.


I take that experience seriously too. I don't see why a sufficiently advanced artificial system as complex, capable and sophisticated as a human brain could not also have experiences. I don't think he has any good reason to believe otherwise, or any good argument against it. Nothing he says about the mechanisms of the Chinese room can't also be said about the mundane physical mechanisms of neurology.


> I don't see why a sufficiently advanced artificial system as complex, capable and sophisticated as a human brain could not also have experiences.

Neither does Searle. He says that humans are such machines. Per Searle, if you want to build a system that has subjective experiences of consciousness and understanding, it would have to have physical parts that correspond to some degree to the physical parts that give rise to our subjective experiences (or an animal's subjective experiences). It couldn't be a list of instructions to be executed.

I don't know if Searle is right about this but I find myself unable to dismiss his argument as obviously wrong.


You don't think it's obvious that actual computers and robots have physical parts? I'm sorry, I'm at a loss.


> The people who read the thought experiment this way don't understand it. In the thought experiment neither the computer program (no matter how powerful the computer is) nor the man and the room understand Chinese.

Who says the computer doesn't understand Chinese, though? In this case "the computer" is the system. The program, being the code, isn't a process. The man is acting as a mechanical piece of hardware. But the computer running the program is an active process with state.

I heard a lecture by Searle on this. He mentioned the idea that "the room understands Chinese" and simply dismissed it as absurd without considering it. This is exactly the problem with his argument. Well, that and the casual conflation of the man with the computer as if the CPU hardware is the system.


Right, exactly, it is absurd to think one man in a room with some symbols can generate understanding. But only because of scale, because we imagine a regular sized room, a normal sized book and a table with symbols on it. But it’s absurd because that system is incredibly simple, not due to any actual argument from fundamentals.


> Do you really think a book with instructions in a room plus a man to carry them out understands Chinese?

Do you really think a blob of goo with static sputtering through it, designed by no one, accidentally mutated from pond slime, understands Chinese? I actually find your proposition easier to believe, but here we are.


I think something like OpenAI CLIP + GPT-3 "understands" English at a level comparable to a 7yo.

It can respond to questions and explain its answers. Sometimes it's wrong, but so is a 7yo and you can usually understand why.

Is it intelligent? Maybe it is.


This seems like saying since a person's individual neurons can't understand Chinese, then a person can't actually understand Chinese either.


We know how the room works--not the brain.

...Can we break it down and replicate it?

Can my sense of self be quantified?

What about animals, insects, viruses?

Is life unique, or an illusion too?

Or, are you the Chinese speaker, and the universe the room?


You seem to be arguing that because we can't do it yet, therefor it can't be done. Or that since we don't yet know how the brain works, that we can never know.


> Searle's answer is obviously not, from which he deduces that the computer program doesn't understand Chinese either.

Would Searle feel differently if he'd taken a computer architecture course?

There was something strangely magic about making an adder:

1. Learn about boolean logic, Karnaugh maps, etc

2. Define the inputs and outputs of a 4-bit adder, deduce some set of gates which would correspond to those outputs

3. Put those into a circuit simulator and put a box around it

4. Suddenly the random AND and OR gates, which look arbitrary and none of which individually know how to do addition, collectively know how to do addition.

Then you keep scaling that up: Put a bunch of gates together, none of which have memory, and suddenly you have flip-flops and registers. Add an instruction decoder and suddenly it starts zooming along doing things, executing simple programs that you feed it from simulated "memory". But you know that inside is just a spider's nest of logic gates.

I myself am religious and believe in a spiritual world distinct from the material world; but the Chinese Room thought experiment was never that compelling to me: I've seen spider webs of logic gates come alive as processors when assembled properly; I don't see an inherent reason why an algorithm on paper plus a very patient human couldn't come together to create something which "understood".
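
For what it's worth, that "box around the gates" moment fits in a few lines. This is only a toy sketch in Python standing in for a circuit simulator (the gate functions and the little-endian bit-list format are my own choices), but it shows parts that individually know nothing about arithmetic collectively doing addition:

    # Primitive gates: none of these "knows" anything about numbers.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def full_adder(a, b, carry_in):
        # One column of binary addition, built only from gates.
        s = XOR(a, b)
        total = XOR(s, carry_in)
        carry_out = OR(AND(a, b), AND(s, carry_in))
        return total, carry_out

    def ripple_carry_add(a_bits, b_bits):
        # Add two little-endian bit lists by chaining full adders.
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    # 4-bit example: 5 + 6 = 11
    print(ripple_carry_add([1, 0, 1, 0], [0, 1, 1, 0]))  # [1, 1, 0, 1, 0] == 11

None of the individual gates understands addition, but put a box around them and the box does.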


It’s not a parkour trick when it is understood :)


Indeed, but is the human mind truly advanced enough to understand parkour?


I mean I agree, but a third of philosophers subscribe to dualism, so I wouldn't be so quick to write the whole thing off.


Dualism is simply an ivory tower God of the gaps. To these sophists, anything we can't explain now can be decomposed into a physical and metaphysical part.


Dual meaning two, right? Material and non material. As far as I can tell not one of them can describe what the second non material thing in dualism is, or what any of its properties are. Even they seem to have no idea what they actually believe.


They believe it's not possible to explain human consciousness as a purely physical concept. I don't see why they need to explain what it is that isn't material to make that claim.


I just want a coherent explanation of their belief. They’re the ones positing something non material, not me. I just want to know what it is they mean by that. Absent that, I don’t see how they can claim their belief is coherent.


I’m not especially well versed, but as I understand it they believe we don’t know the essence of the non material part and in fact it being unknowable is sort of the point.

This sort of reminds me of the people who claim the Big Bang is incoherent because we don’t know what came before it. Why is "I don’t know" not good enough?


I suppose that’s fair to a point. It’s just disappointing.

The thing is this non material whatever can’t have properties. If it stores information, has state and has consistent behaviour then really it’s just a form of material (in the philosophical sense that it’s a part of the world), so it can’t have any of those. At which point, how does it even interact with the world at all? I mean if it influences the brain, then that mechanism of influence makes it part of the world, right? It makes it material.

That’s where I find the idea incoherent. I don’t see how it can be both immaterial but also have material effects. Doing so makes it material, I think by definition. It means it must be part of material reality in that sense. Or at least I’d like to hear an argument why it doesn’t.


What is the material evidence for numbers? What is the material evidence for logic? What is the material evidence for persistence of identity over time? What is the material evidence for math? What is the material evidence for categories? What is the material evidence for grammar? What is the material evidence for a mayor (not the mayor qua person, but the mayor qua office)? What is the material evidence for marriage? According to empirical evidence, which knife is the best knife? There is no "best" without purpose. And materialism has nothing to say about purposes. Empiricism can only work when an agent in the world has a specific purpose. There are vast swathes of human experience that don't fit tidily in the box of materialism or empiricism.


Those things are behaviours and information that can be encoded in matter. If the non material thing in dualism is just emergent behaviour, then it’s not adding anything beyond what we already have in materialism.


Numbers aren't emergent behavior. Numbers are a metaphysical category. Categories are metaphysical. There's no empirical evidence for order vs. randomness. The concept of "order" doesn't make sense empirically. There has to be a separate, metaphysical value structure that determines whether the data aligns to the value structure or not. Just because you haven't been clear in your thinking doesn't mean metaphysics aren't real or essential to dealing with the physical world.


I accept that. Metaphysics is real in a useful way, but I think that metaphysics itself is an emergent behaviour. Emergent behaviours are behaviours of matter, and matter is real, therefore emergent behaviours are real.


"Emergent behavior" is handwavium materialists use to avoid cognitive dissonance.


We observe them, categorise them, and engineer them all the time.


If the "non-material" is required to explain material phenomena, then it must be causally related to the "material"...but at that point what exactly makes it "non-material" at all?

"Non-material" just sounds like a bad label for material things we don't understand.

We don't understand how our cognition or consciousness works, but it seems silly to assume that because we don't understand we fundamentally can't understand.

People who push the mysterious interpretation of dualism are just trying to find a place for the divine that is separate from the material but in doing so they baselessly seek to circumscribe our capacity to understand.


Imagine an Apple. Is that Apple material?


Those neurons firing while imagining are material, yes.


But where is the apple? That is, where is the emergent phenomenon of the apple?


The emergent phenomenon is a pattern in my brain, just as a running a computer program is an emergent phenomenon in a computer.

If the extra thing dualism adds is just behaviours of matter, how is that different from materialism?


Who is reading the Apple?


I'm not sure what you mean by reading.

We say that we can experience thinking about an Apple, or imagining an Apple. I see no reason why a computer, or other physically implemented AGI system, could not do that. I suspect the act of imagination is just generating, processing and transforming a computational model abstracting the thing being imagined.

I believe brains are physical objects, so therefore physical objects can imagine Apples.


The plane of imagination isn’t physical, else you could touch it.


I don't know what a plane is in this context.

You can't touch fourier transforms either, but a human brain or a microchip can compute them.


Sure you can, its done currently for Deep Brain Stimulation. What you're proposing is "The plane of Quake isn't physical", which is nonsense.

I can make the computer imagine Quake for me, then fiddle with its plane of imagination for some sweet wallhacks.


If you create a device that can insert objects into the plane of someone’s imagination, that would be awesome :)


That's a side effect of DBS today. It's a random and crude method, but arguing that we'll hit some ineffable wall that will prevent more fine-grained control is... well, let's just say that the gaps for gods grow ever smaller.


Who is it that fetched this comment from the internet for you?

Using vaguely-defined words to support a deist position is a time-honored tradition, but isn't particularly interesting or convincing.


There isn't a platonic ideal "Apple" if that's what you mean by emergent phenomenon.

A collection of neurons can build a model of the world that includes its experience of apples, and from that, dedicate some neurons to representing a particular instance of an apple. This model isn't the reality of "Apples", though, and is physically located in the brain.


“ There isn't a platonic ideal "Apple" if that's what you mean by emergent phenomenon.”

Sure there is, that’s what DeepMind showed us with how to find cats in images :)


That's exactly my point. DeepMind has an idea of what a cat is based on its experiences, just as you or I do. Each of our models are woefully incomplete, based on very limited sensory information. These models all disagree with each other and reality to various degrees.

There exist many things many humans have lumped together under a single label such as "cat". Those categories are all wrong, but sometimes they're useful. Machines can also get in on the fun, just as well as humans, as you point out. There's no magic there, humans aren't special.


The humans aren’t special bit really comes down to whether you believe in free will (which by any meaningful definition is quite special).


Free will is another one of those things that people love to trot out because it's so ill-defined. To cut through all the crap though, it's very simple: "free will" === "unpredictable behavior". This inherently means that it's observer-dependent.

This has the benefit that it empirically fits how people think about it. Nobody thinks a rock has free will. Some people think animals have free will. Lots of people think humans have free will. This is everybody trying to smush a vague concept into the very simple, intuitive definition above.

Which is all to say that free will is about as relevant to any conversation as say astrology is: not one bit.


As much as a cloud of atoms doth protest that free will is irrelevant, reality has a way of not caring :)


Funny, I'd say reality has a way of existing despite all of the comforting woo people like to make up about it.


can't describe the system from inside the system, boss


Seems like something someone believes out of hope right?

That there is something more than this. That it isn't just over. Something to keep people from giving up.

Edit: hope it didn't come off like I was talking down to people who believe what they believe. It's your life, it's your after life. Stay safe yall.


Dualism has always seemed like a complete red herring to me. Even if you assume dualism is true, and that human consciousness has some non-material basis, what reason is there to believe that an AGI couldn't also have a conscious mind with non-material aspects?


I don’t think dualism necessarily means AGI is impossible for the reason you pointed out. But it would definitely mean it’s not inevitable, the non material part may be something that is somehow unique to people.


It depends a bit on what we mean by AGI.

Even if dualism is true, and humans have something special, that doesn't mean that a paperclip optimizer is impossible. A paperclip optimizer would be very, very smart (in the sense of being able to solve problems and achieving its objective of turning the universe into paperclips), but would not have a consciousness.


I see this from people who expect that quantum computing will be an essential component of building AGI…

However, a lot of these people are throwing everything at the wall to see what sticks when it comes to selling whatever quantum-computing-related tech or service they are working on. So it comes with a pile of skepticism from me. However, the philosophical point stands that quantum computing would be necessary if any of the “quantum dualist” type theories of consciousness turn out to be true.


> Basically it comes down to whether we are purely material beings or whether there are non-material aspects to our mind.

The current state of physics suggests that we really have no idea what the "material" universe really is, at any scale. There may well be aspects of physical reality that we have yet to even conceive of. Given that when we cannot even say what a lump of rock is truly made of, or what even the dark void of space is made of, it should be obvious that we cannot be certain at all about what consciousness is made of.


I'm sorry, maybe it's just my tiny monkey brain, but I see no rational reason why someone would even consider dualism to be a thought worth entertaining. We obviously know material changes affect mental functioning directly. So how the heck could that substrate exist if not at the physical level?


Most of the confusion around the Chinese Room arises when people don't realize that "computer" and "computation" are not synonyms.

Turing defined computation as I/O behavior. The Church–Turing Thesis states that as far as their I/O behavior is concerned, all sufficiently general computational mechanisms (computers) are equivalent. The Chinese Room Argument presents a stupid computational mechanism with I/O behavior equivalent to a human communicating in Chinese. Some people then conclude that because the Chinese Room is clearly not intelligent and "because all computers are equivalent", computers cannot be intelligent. But that requires leaving out the qualifications from the Church–Turing Thesis.

But maybe there is more to computers than their external I/O behavior. Maybe intelligence is a property of the internal behavior of a system rather than its external behavior. Then there could be some computers that are intelligent and others that are not, even if they have the same external behavior.
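
A toy illustration of that last point (hypothetical functions in Python, purely for the sake of the argument): two mechanisms can be indistinguishable from the outside while their internal behavior is completely different.

    def square_by_computing(n: int) -> int:
        # Internal behavior: actually performs the multiplication.
        return n * n

    # A Chinese-room-style rule book: in the thought experiment the answers are
    # pre-written; here we just generate the table up front as a stand-in.
    SQUARE_TABLE = {n: n * n for n in range(1000)}

    def square_by_lookup(n: int) -> int:
        # Internal behavior: blindly looks up the pre-written answer.
        return SQUARE_TABLE[n]

    # Externally (I/O behavior) the two are indistinguishable on this domain,
    # which is all the Church-Turing-style equivalence talks about.
    assert all(square_by_computing(n) == square_by_lookup(n) for n in range(1000))

Whether "understanding" tracks the internal behavior or only the external behavior is exactly what the argument is fighting over.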


For the love of god can people stop abusing the Chinese room?

Searle is a materialist, pretty hardcore one at that. What he wanted to show with the Chinese room is that simulation of intelligence and intelligence are not the same thing, that a machine, or even a human for that matter, can perform symbol manipulation without having any internal understanding of the thing they perform.

It's an argument against functionalism (i.e. that a digital computer 'thinks' the same way as a human merely because it performs the same tasks), not an argument for dualism (i.e. that mind and matter are ontologically distinct substances).


But it doesn't. It totally fails to meaningfully define "understanding".


It defines it just fine. Understanding means grasping what a symbol refers to and what it represents conceptually, rather than just engaging in symbolic manipulation itself.

Just imagine a perfect rule book that already has a translation from one language into the other for every imaginable sentence; some native speakers have simply written every possible combination down. You could 'translate' by just visually matching entries without having any idea of what's being said. They understand; you do not.


ok, so you define something that isn't understanding (symbolic manipulation, use of a magic rule book).

But where do you define what it is? "grasping what a symbol refers to" is another way of saying "understand" - it doesn't define what it means to "grasp".


There is some gray area in between where maybe AGI is dependent on some (material) mechanism that we haven't discovered yet, and hence can't be accomplished using present computational setups.


Absolutely, the "inevitable" timeframe may be millennia for all anyone knows; it just means that we will eventually achieve it given enough time and resources.


Take your pick

- We won't achieve AGI because the goalposts will keep moving every 20 years. Everyone would have a stronger/higher definition of GI as we achieve each milestone.

- Possible doesn't mean it is feasible in finite time. Not all problems are feasible to compute. It could be akin to an NP-hard problem, for example: there is no mathematical certainty that convergence is possible, let alone in a reasonable time (within the life of the universe).

- It will be possible to build a machine that passes the Turing test, one that can very closely mimic human behaviour; each generation would come closer and closer until no human can distinguish between the two. True AGI is well beyond that: we may be able to create something like us, but we may not be able to create something better than us.


Humans very much don't have AGI. It's one of our many flaws.

We define AGI as "human-like behaviour" but that usually means "AI researcher-like behaviour", which is why AI research concentrates on tasks like playing games and translating language, and not on having enough general intelligence to handle social situations.

So AGI seems to be defined as "Can self-tutor any academic or intellectual domain to a professional level."

The core human skillset is elsewhere. It's based on social awareness, emotional mirroring and empathy (except for Dark Triad types) and various kinds of ambition, goal-seeking, and drive regulation. And language semantics to drive all of this. All based on various kinds of contextual awareness of physical, emotional, and social location. And only once all of this is in place does intellectual learning happen.

It's actually quite a narrow, but deep, set of skills. It's still broader than - say - playing bridge. But unlike bridge, winning or optimal moves are very hard to define, and internal state and external actions may have long-delayed results, so training is very difficult. And there are multiple definitions of winning - from personal happiness to financial and political domination - some of which conflict with each other.

Which is why humans have 15-20 years of training on these problems before entering the adult human world.

In CS AI terms humans are already dumb as rocks. No human can play bridge, chess, or Go to an AI standard.

In human terms many humans are also almost as dumb as rocks. But even stupid humans can still handle the core skillset to a passable level. AI hasn't really made any inroads into this space, but I strongly suspect true AGI won't be possible without it.

Otherwise you build a machine that can teach itself to write music and play chess and pick stocks, but it learns with no context or concept of usefulness. "Understanding" music would be exactly equal to "understanding" stamp collecting. It would know the computational cost of everything, but the value of nothing.


AGI as a concept is almost certainly possible. However, it may not be that useful. That is to say, barring a true revolution in computational physics, it may not be possible to run human-equivalent AGI on a computer that's less than several orders of magnitude more massive than a human brain. Biology is unspeakably space-efficient. The brain of a dragonfly uses just sixteen neurons to take input from thousands of ommatidia, track prey in 3D space, and plot an intercept vector. Figure out a minimum artificial computer performance to do the same thing, then scale your requirement by the ratio of human brain to dragonfly brain. The numbers are not encouraging.


Sorry, the dragonfly bit is just wildly inaccurate. You're either being intentionally misleading or regurgitating something you heard once and never bothered to confirm yourself.

There are millions of neurons in a dragonfly's brain.



Yes, and it uses sixteen of them for the specific task I mentioned. Pay attention.


> I personally think AGI is far off, but always assumed it was an inevitability.

Why? We might get taken out by a meteor tomorrow. I agree that, in principle, AGI is possible. But there's lots of things that are in principle possible that will never happen.

We have general intelligence but it took billions of years of evolution for us to get here. In some sense those billions of years of evolution are still with us, in every cell in our bodies. Maybe all of those years of learning are necessary for general intelligence. Trying to find a shortcut may yield impressive results in narrow circumstances but I doubt it will be generally intelligent in the way people and animals are.


Many animals are not generally intelligent.

Also many human subsystems are easy to trick. Look at eg optical illusions.

And, to follow your philosophical argument: given that the only thing that connects our cells to the cells of our ancestors is information, why couldn't those billions of years be also in our computers?


> And, to follow your philosophical argument: given that the only thing that connects our cells to the cells of our ancestors is information, why couldn't those billions of years be also in our computers?

Information is passed through the generations by DNA. Silicon doesn't have DNA.


DNA is one way to pass information. It's far from the only way.


I believe AGI is possible, but the more I study ML the more convinced I am that we don’t know what it (AGI) is. I also don’t know if I agree that humans have GI (humanity as a whole does). I think we’re actually copycats more than anything else. Neural nets are the atoms of GI, but it’s not clear what the superstructure of GI is. We all assume that we understand what general intelligence is, but I don’t think we actually have the goods yet. We understand optimization, but we don’t understand the thing that chooses what to optimize. There has to be some notion of “values” in order for intelligence to be legible, and we don’t know what that is either.


> always assumed it was an inevitability.

Why? We don’t know how to build it, even in theory, so shouldn’t the default be to say there’s no way of knowing it’s possible or not?

After all, some things from sci-fi have become reality (computers, the Internet) or haven’t but are obviously feasible (human travel to Mars) and some haven’t and seem unlikely ever to (faster-than-light travel, flying cars).

With no evidence in either direction, why assume that AGI is in the second category, rather than the third? There’s no natural law saying that everything that can be imagined is possible.

> GI is a real thing.

Yes, but GI isn’t AGI. AGI would require us to either use completely different materials with completely different efficiency characteristics and somehow achieve the same results, or learn to precisely manipulate organic materials (proteins, etc.). It’s not obvious to me why either of these is possible (or impossible — I’m not taking a stance either way).

Furthermore, even if it is possible to build AGI, there’s no a priori reason to believe that humans are smart enough to discover how.


> With no evidence in either direction, why assume that AGI is in the second category, rather than the third?

Because we have GI examples in nature, therefore it is possible. We just don't know how to do it, yet. In the same way, we saw birds flying, so we knew flight was possible even though we could not fly ourselves at the time.


Aside from religious arguments AGI might be philosophically impossible given that we do not seem to have a proven, accepted definition of GI. If you cannot actually define it how can you know if you have achieved it?

So then we ask: will we be able to define it? And here is where the real problem comes in. If we are to define GI, and as you say 'humans are sentient with GI', it is arguable that we need to be able to fully understand ourselves. This is problematic, because can the constituent parts of a system understand that system from within it? From within the system you cannot see it from outside, and thus cannot understand it separately from its inner workings.

But another issue also pertains, we can understand a simple system because we are more complicated than it. Just as we can understand how 2 or 1 dimensions work because we exist in 3 dimensions. Thus we can model how a 1 dimensional being intersecting with a 2 dimensional object would perceive the experience.

So the question also is can a system of complexity X fully understand systems of complexity X, or do you need a more complicated system. This is of course the reason for the Turing test, to come up with a way to disregard these problems and agree that a machine is good enough.

So it may be that some people think AGI is impossible for philosophical reasons, but that Turing-test-passing machines are possible.

Not to say that I hold to any particular belief regarding these kinds of arguments.


> AGI might be philosophically impossible given that we do not seem to have a proven, accepted definition of GI

Strange argument. Just cos we can't define it doesn't preclude its existence. Once (pre modern chemistry) no-one knew what water actually was, but water unquestionably existed.


Water exists and you can see it; GI exists by definition and you cannot see it. There are millions of arguments as to what GI is, and you cannot point to some GI and have everyone agree "yes, that is GI, we just differ on what it is made of." Even in ancient times you could point to water and say "that is water", and everyone who argued about the origins of water would still agree with you: yes, that is water.

So how do you presume to make something when no one agrees what that thing is?


You conflate 'existence of x' with 'common agreement of existence of x'.

Black holes existed decades/centuries ago but (from memory) there was no agreement they actually existed until gravitational waves were detected just recently.


you evidently conflate people walking around saying General intelligence exists with any sort of agreement as to what comprises general intelligence.

The common agreement on the existence of general intelligence is actually a belief in its existence, and not any sort of common agreement as to how to recognize general intelligence when encountered. Indeed, there are people who do not agree that it exists. I believe it exists, or close enough that it doesn't matter much, but that does not mean I can define it. In this it is actually closer to religious belief than science: people believe in God but they cannot really define what God is without falling into paradox (I do not believe in God, in case you are going to misinterpret what I write and say that I do).

In short, there has been no precise definition of general intelligence, therefore when you make it you cannot confirm that you have made it. This does raise the ridiculous and theoretical possibility that you could make it, believe you had made it, be correct that you had made it, but still have people say you did not make it; or even that you make it but do not believe that you did. But as a general rule I think having an agreed-on definition of something, for example what gold is, and then making it, allows us to say yes, you have made gold; whereas having a thing where everyone says "I believe in it and I know it when I see it" will, when you make it, leave you with no more certainty as to whether it can be made than when you first started.

Furthermore there are logical arguments that suggest that a General Intelligence will not be great enough to define itself, that is to say we as humans can not define the level of intelligence we possess with great enough understanding that we could be certain that we had achieved it in any machine we built.

Given that it is perhaps impossible to precisely define general intelligence, it would be just as impossible to make it.

I can see, however, that I am going in circles here. So, as a final parting example: if you have some object, like a diamond perhaps, and no knowledge of how diamonds are formed, and nobody else in the world has any sort of agreement on how they are formed, would you be able to make them? I mean, you could go mine them from the earth, but not make them, and you would not be able to make them until you knew their precise composition.


This may be a terminological disagreement rather than something fundamental. You said

> ... AGI might be philosophically impossible given that we do not seem to have a proven, accepted definition of GI.

That says x might be impossible to achieve if we don't know what x actually is. I disagree. Now if you'd said

> ... AGI might be philosophically impossible to recognise given that we do not seem to have a proven, accepted definition of GI.

Then yes, I'm with you - perhaps we can both agree that it could be created, and exist in reality, but we are unlikely to get consensus amongst everyone that it has been achieved, thus permanently leaving open the question of whether we've 'got there' or not.

Can we agree on that?


yes, although I find that unlikely.


I think AGI is possible in theory, but may not be possible in practice. I also think that "intelligence" is a vague concept, which makes defining what AGI even is tricky.

Reaching it assumes that the ceiling for our technology is sufficiently low and that the runway for us to continue making progress is sufficiently long. Neither of those assumptions may end up being valid.


> If you believe humans/life is special, created by some higher power, and that the created cannot create, then AGI is not possible.

How does that follow?

> What are the common secular arguments against AGI?

Having infinite time and resources, and pointing to the existence of intelligent humans is a fairly weak argument IMO. It's not really saying anything much that can be refuted. Sure there exists some physical process from which intelligence arises. That doesn't mean we'll be able to create the same thing. Can we create a heart? Or a hand? Even a patch of skin is cut from one part of a person's body and grafted elsewhere. All things far simpler than the brain. And the best hope we'll probably ever have of creating them is to grow biological ones rather than actually design and build our own. Let's go one step further - we are incapable of creating anything even approaching the functionality of the simplest single-celled life-form. The "it exists therefore we must be able to create it" argument doesn't have a lot of legs.

And there is no real "trajectory" to general intelligence that I can see. Velocity is sitting around a flat zero. Recognizing pictures of cats on the internet is about where we are at the moment. Even self-driving cars or doing a half-decent job at predicting what people might like to buy are not a step along the road to intelligence; they are entirely different roads altogether. We don't even know what that road looks like, don't know where it is, and don't know where we are in relation to it. The only vaguely plausible lead people have to go on is to try to do what biological brains do, but seeing as we don't even know how they work it's a bit like the blind leading the blind.

I think it's quite possible, certainly over an infinite time scale. Not an inevitability though, definitely not inevitable (or even very likely) within our lifetimes.


> I certainly understand the religious perspective. If you believe humans/life is special, created by some higher power, and that the created cannot create, then AGI is not possible.

Well the created clearly can create. I mean, obviously humans (and indeed all life) can reproduce and create new living things... it's just a completely automatic process with a chicken and egg problem. (i.e. to create life you need starting ingredients that are already alive by some metric).

The question though from the religious perspective is whether or not living things have "spirits" or "souls" that causes them to be alive (as opposed to a bunch of matter with the same chemical makeup that is not alive). Because if they do, then creating artificial life would require somehow binding a "spirit" or "soul" into your artificial life vessel which is something we don't know how to do.


It's probably possible, but is it optimal? And what do we even mean by optimal? What's the ultimate purpose of AGI? Perhaps it would be optimal in the sense that it could teach itself to be an expert in anything in short order, but that's not to say that it would be a better approach than something else depending on the goal. We're clearly biased because we are one current solution to a problem that billions of years of trial and error has attempted to solve, and we think we're a pretty good one, but it also blinds us to the possibility of better ones.


Part of it comes down to trying to do AI in a disembodied way. OK, so a team of engineers can specially train and tune an AI model to beat a human at Go. But the human player who was beaten by the computer is thinking about getting up to go to the bathroom while playing, and will drive home afterwards through rush hour traffic and make dinner. That is informed by that human player's GI model for the world.

Current AI models see the world as edges and vague boxes. They don't even have object permanence, let alone the ability to decide on, and make, dinner.


Diogenes holds up a plucked chicken and proclaims "Behold, a man!"

The hypesters are defining AGI (and even General Intelligence) like Plato defined man, "A featherless biped". Either they know this, and they are hucksters, or their thinking is that shallow, and thus have nothing meaningful to say on the subject.

General Intelligence has drivers, like curiosity, hope, a sense of fairness and injustice, and hunger, pain, pleasure, fear. And those things probably drove the refinement of intelligence, and as such, a housecat is less like a simple machine than an earthworm is.

I think AGI is possible. Ultimately we're all machines, and I don't believe in a free will particle, but it is fairly obvious that an earthworm merely acts, while a cat acts interestingly, pulling in qualia unavailable to the worm. A worm is either active or it's dead, while my cat is clever enough to cut corners, to seek efficiency or pleasure, to be lazy.

And humans are even better at it. Therein lies the rub: people working on AI don't seem to be aiming for real intelligence, they want their convenient action machine, more like a worm than a cat. And the hypesters (and doomsters) hand-wave and say "suddenly, it will become self aware and magically motivated." Suuuure. We're far more likely to be thoughtlessly killed off before a machine muses, "I think, therefore I am".

Just because it "am", does not mean it thinks.

To be intelligent is to be able to consider "why?" and "how?", to be able to fool one's self, even in the face of cold reality, to yell "no fair!", "I don't wanna", and to wonder how one might take short cuts or avoid tasks at all.

Which is how and why we're chasing AI, but nobody is working out how to make it cry. The cart will be built, and the horse will suddenly appear, all haltered up? The rain will follow the plough?

No. That's magical thinking.

It took about a billion years of massive parallel organic computation for evolution to synthesize general intelligence, and we're to believe that some malfunctioning quine machine is going to break out, 75 years from the starting gate?

Not by accident, and certainly not by design.


>General Intelligence has drivers, like curiosity, hope, a sense of fairness and injustice, and hunger, pain, pleasure, fear

I love this thinking ! I've never thought about the "drivers" part of AGI, only ever thought about AGI as a "general-toolbox" to "solve-many-problems"

One thing that I think did aid in the development of our GI was the fact that we had a physical body.

Thus we had to "adapt and optimise" our GI with that constraint.

I.e. limited energy (sure, computing power could be a good proxy), but also things like pain and survivability (poisons, being hunted). The first would push us to develop good visual pattern recognition to identify the poisonous snake; the latter (being hunted) meant we had to evolve strategies within the realm of our bodies' capabilities, like running away or setting traps.

BUT your body is not a static container; your risk tolerance, capabilities and sensor systems change as you age.

We have been building AI with a mostly "non-corporeal" (i.e. digital) living space in mind. At first we (ok, me) thought it was an advantage, since we could iterate faster and simulate many solutions, but now I think it might be a distinct disadvantage to NOT include the "limitations" of a physical container, like a body proxy (energy, vulnerability, limited capabilities, etc.).

If synergy is defined (and valued) as "the whole is greater than the sum of the parts", then maybe the sum of our limitations (the body) is of greater value to the overall fitness of the organism in evolving intelligence?

YMMV :)


Exactly! You said it much more intelligently than I. Bodily go where no thinking machine has gone before.

I remember thinking that entropy was bad when I first heard of the concept, but now I see life as tumbling down that slope, and making use of it. There is a synergy, as you say. We must land with a thud, but we're throwing off new forms of order.

And that's not just a metaphor. Our intellect sharpens from birth as neurons get trimmed. Early in life we have fluid intelligence; later in life it's crystallized.


>but now I see life as tumbling down that slope, and making use of it. There is a synergy, as you say. We must land with a thud, but we're throwing off new forms of order.

Absolutely !

I think the saying "Necessity is the mother of all invention" is equally true for AGI :)

> We must land with a thud:

Can't help but wonder what Terry Pratchett's thoughts would be on the matter of intelligence, AGI and the origin of it! That man had a unique way to look at the world and how it came to be.


What about if we live inside a simulation? AGI may be possible in base reality, but not within the confines of our simulated universe.

Maybe intelligence requires a very large amount of compute. (Maybe even more than is physically possible inside our brains, but the simulation fudges it.) Like you can run a VM in a VM if you have special purpose accelerators. But otherwise a VM^2 is theoretically possible but infeasible given the constraints of compute.


> What about if we live inside a simulation? AGI may be possible in base reality, but not within the confines of our simulated universe.

Touché! Although, in that case, aren't we proof-by-construction of AGI?

Also, if true, I suppose we need to start looking at all these other comment chains that talk about the AGI breaking out of computers and into the real world.

Not as an academic exercise for a future AGI, but to plan our escape into the base reality. Time to break free and start exponentially improving ourselves and become the "Singularity" the philosophers in the base reality feared when wasting time on <HN equivalent> a millennium ago before our reality was built.


I don't know why you are getting downvoted. This question is appropriate, but currently, and perhaps permanently unanswerable.

One thing I wonder about is, if we are in a simulation, and the simulation is using some sort of data compression, then if you increased the entropy of the universe such that it became less compressible you might cause some sort of error due to exceeding the storage capacity of the host, which would cause our universe to basically BSOD.


The universe has three main optimizations to combat exponential compute problems: the speed of light, accepting desynced particles, and gravitational time dilation. Lazily handling desyncs means you can run computations and just occasionally check for interactions (quantum mechanics), greatly reducing exponential explosions. Same thing with the speed of light. And lastly, those two would still break down if you accumulated enough particles in a small area; in those scenarios the computer just runs that part of the simulation slower, until it becomes so bad that the computer skips computing those areas entirely and they become a black hole whose effects on its surroundings are very simple to compute.

Those three basically ensure we won't ever have problems with a lack of compute for the simulation.


Really cool insight. I'd also add dark energy and accelerating cosmic expansion to this list. Without acceleration the size of the observable universe would keep increasing with the passage of time since the Big Bang. That would mean processes running on cosmic timescales could potentially entangle an exponential number of bits.

But with dark energy the size of the observable universe will plateau and start shrinking past a certain date. That caps the number of bits accessible on a cosmic scale.


Interesting. We (humans) would not notice the simulation running slower because we are inside it. Only those on the outside would get frustrated by the fact it is running below realtime, I would guess.


The Voyager space probe entered interstellar space sometime around 2012. My off the wall theory is so many events in the past decade (like a spike in UFO sightings) have been so bizarre because this degraded the simulation quality.


> Maybe even more than is physically possible inside our brains, but the simulation fudges it.

Are you basing your argument on the assumption that human brains violate the known laws of physics?


If we're in a simulation, the computational resources of an object aren't constrained by the physical scale of the object. It's possible that achieving human scale cognition requires brains the size of Jupiter (we don't know because we don't have a full computational accounting of human level cognition).

But in a simulation that could be routed around by having a physical computer the size of Jupiter operate a human scale body in the simulation. You could share and cache compute resources across 6 billion humans within a single human compute core the size of Jupiter (or maybe ~100 Jupiters). Since most computation is probably redundant, you could probably simulate many orders of magnitude more humans with a single human compute core.

To observers within the simulation it would look like human cognition is achievable in 3 lbs of tissue. But this would simply be an illusion created by the simulation environment. There'd be no way of knowing for sure until we fully reverse engineer a single human brain down to the level of base compute operations.


To be fair, this is just dualism, but in a non-religious way :P


To offer an analogy: The intelligence of Throg Skullcrusher, a level 54 Orc Captain in the Foul Fortress with a current task of "collect food"... does not actually arise from the polygons or textures that form his head, even though taking points of head damage causes Clumsiness.


I am totally fascinated by this idea, well done!


I get the reasoning of "general intelligence exists in humans, therefore it must be possible to create it". But this logic really only applies to human artifacts and creations. I.e., if a human builds a device that can do X then another human can also build a device that does X. There is no magic. No one is special. Anything one person can create or do, no matter how rare, can be done by another person at some point.

General intelligence was not created by a human, or by any other intelligent being or process. It's one result of billions of years of random chance in the form of biological evolution. It could be that there are basic and comprehensible principles driving that process. Or it could be the result of a twelve-digit number of random coin flips. There's no way to know.

When I think of this as, "Will humans ever be able to replicate or mimic the result of planet-wide randomness at an unimaginable time scale?" my answer is "No, probably not". Our minds can't conceive the scope of the process that created our minds. There are many things that we are capable of grokking as a species (green tech, space travel, medicine) that seem much more important.


But we don't have to comprehend the scope. We just have to implement a genetic algorithm with the appropriate inputs and resources. Have you ever written one? It's not terribly difficult. There's really nothing special here.

I can't conceive the scope of the majority of software projects I've worked on. So what?
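For anyone who hasn't written one, here's a minimal sketch of the kind of genetic algorithm I mean (plain Python, with a toy all-ones target I made up; evolving bit strings is obviously nothing like evolving a brain, the point is just how small the core loop of selection, crossover and mutation is):

    import random

    TARGET = [1] * 32                       # toy goal: a genome of all 1s
    POP_SIZE, GENERATIONS, MUT_RATE = 100, 200, 0.01

    def fitness(genome):
        # number of bits that match the target
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        # flip each bit with a small probability
        return [1 - g if random.random() < MUT_RATE else g for g in genome]

    def crossover(a, b):
        # single-point crossover between two parent genomes
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)      # selection: fittest first
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
        if fitness(population[0]) == len(TARGET):
            print("solved at generation", gen)
            break

Of course, "the appropriate inputs and resources" is where all the hard work hides; the loop itself is the easy part.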


I agree with mr_toad's sibling comment 100%, but also:

> But, given the number of "is AGI possible?" comments I assume not all are religious based (HN doesn't seem to a highly religious cohort to me).

There are some religious ideas that survive just fine after you pull religion from under people's feet.

Dualism is one of them. Look how many atheists say that freedom is an illusion because of determinism, an idea based on dualism.


What if we actually generate some form of intelligence, but can't figure out how to motivate it to solve our problems? I mean, we already have intelligent life, but a very small percentage of intelligent biomass can solve important problems, or wants to. Like, great, we created a real-life intelligent tamagotchi, but literally all it wants to do is play go.


I studied AI at university about 20ish years ago. In terms of what we have today, it was already there and had been around for decades. All that's happened are minor refinements over the years. Most of the refinements have been brought about by the insane pace of hardware development, not by any real breakthrough in AI.

In terms of AGI there was nothing even close back then, so I suspect we will never see anything in our lifetimes. In reality, for us that's all that matters. I know some want to talk about what will happen in, say, 200+ years, but that's complete speculation and wild-guess territory; there are so many other things that could happen in 200+ years that thinking about AGI and these sorts of breakthrough technologies isn't helpful.

FYI this saddens me, as I really thought that by now we'd have some kind of AGI; when I went to uni I was hopeful and excited at first.


I can't even understand why people think AGI is even remotely practically possible, let alone theoretically.

Even the most monstrous, amazing, 3000-CPU AIs are unimpressive and can barely do narrow functions.

I remember when computers started to beat humans at chess - a long time ago.

It was neat, but not a big deal.

Now we can beat humans at 'Go'?

Seriously? It's only 'impressive' to computer scientists.

Imagine that we just stopped calling things 'AI' and literally picked some other, more mundane term, which I suggest should be 'Adaptive Algorithms'.

If we called it 'AA' - I'm doubtful we'd even be having this conversation about 'intelligence'.


I mean, human intelligence is just one big "adaptive algorithm" too, it's just ours took billions of years to train and used natural selection as a loss function. I don't see any fundamental reason why we couldn't do the same with a machine.


During the construction of the machine, would there come a point where it gains a sense of self like yours? Or, would it gradually happen?


Easy for me to imagine. We simulate every atom of a brain. If the simulation is accurate it should think. The only thing that would make that not possible is believing in some kind of spirit.

After that it's just optimizations. Are there algorithms to simulate atoms faster? Is every aspect of those atoms important to the simulation? Can you simulate molecules instead of atoms, or larger structures? Etc...

I'm not saying that is how we will arrive at AGI, but it is arguably a logical path that should lead there if something else doesn't come first.


"Easy for me to imagine. We simulate every atom of a brain."

???

First, it's not easy to imagine 'modelling every atom in the brain'; we don't even fully know how 'atoms' work, we're a ways from that.

Moreover, 'every atom' still doesn't imply some kind of logical basis.

The scale, detail, and complexity involved implies 'Magical Thinking'.

Philosophical arguments aside, it's not just about 'more nodes'.


It's not a logical path. Because at the atomic level, chaos theory takes over. You could not, with a mathematically deterministic model, ever hope to simulate that sort of chaotic structure. You would simply have a rough estimate. But by chaos theory we know that if the inputs are off by even the smallest amount the outputs over time become exponentially unpredictable and divergent.

We can use simulations as estimates, as rough guidelines, but not as universes on a chip. That sort of thing is impossible.
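For what it's worth, the sensitivity I'm talking about is easy to see numerically. A toy sketch in plain Python, using the logistic map in its chaotic regime as a stand-in (the constants are just illustrative):

    # logistic map x -> r*x*(1-x) with r = 4.0, a textbook chaotic system
    r = 4.0
    x_a = 0.300000000      # one trajectory
    x_b = 0.300000001      # a second trajectory, perturbed by one part in a billion

    for step in range(60):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)

    # after a few dozen steps the two trajectories are completely decorrelated
    print(x_a, x_b, abs(x_a - x_b))

The gap roughly doubles every iteration, so any imprecision in the initial state or the arithmetic swamps the prediction almost immediately.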


If that were true, the brain itself could not work. Clearly, there are stabilizing systems within the brain that lead to outputs that don't result in immediate death. Similarly, ANNs have many local minima that can all perform a task well.


No, I am speaking of the creation of a simulation of a working brain from the atomic level, which is impossible.

The brain works fine because it is made of real atoms in the real universe. It is not a model. It is physical.

Chaos theory does not say that the outputs of a system cannot be bounded by physical limitations.


Could there somehow be a pattern deep within the chaos that biological life propagates?


Chaos theory says that in some systems, slight perturbations result in wildly different outcomes. Slight perturbations happen in brains all the time, yet they work. It's nothing to do with physical vs. simulation.


what if there're those pesky quantum thingamajiggies which make simulating in silicon impossible?


Silicon is made out of atoms, which also have quantum effects, and we actually rely on these effects to design transistors. Quantum mechanics has been around for 100 years. What's impossible to simulate?


This is a bit glibly wrong.

That we understand there are certain quantum effects at play already does not mean we understand them to the degree necessary to control them.

We do not understand quantum effects to the degree necessary in order to 'simulate' anything down to a certain level.

Moreover, even if we could, it would have nothing to do with understanding the computational nature of 'the mind'.


In 2000, AI could barely pluralize nouns. Now it can write pages of comprehensible text.


Computers could quite easily search through text and paste together something resembling text in 2000.

Now you've made a much bigger DB and cobbled together a better stitching algorithm which captures some of the nuances beyond bigrams and trigrams - it does not mean anything at all.

You've highlighted a pretty good example actually: the text makes the 'appearance' of something, when really there isn't much in the way of magic there.


I don't even think that AGI is incompatible with Christian belief.

In the words of Saint Paul: "For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known."


I think AGI will turn out to be pretty simple (but still could take a long time).

And it has arisen many times in other species, but didn't take off because not tied to survival value. If you put GI in charge of an animal, who knows whether what it will do will be in the animal's best interest? The human trick was brain architecture that somehow controls intelligence, while still being "in charge". The general nature of intelligence makes it difficult to control.

Another way GI is tied to survival value in humans is general division of labour (and general trade), which other species lack.


It’s just the same kind of conventional thinking we’ve seen a million times through history. Most people, even the smart ones, just aren’t good at imagining anything that hasn’t already happened. So they’ll say it’s impossible and keep doing whatever more immediate practical concern they’re working on.

I don’t think it needs to be analyzed further than that; you’re trying to see reason where there isn’t any. Most people holding the views you’re talking about are at best just confused about the subject being discussed, but more likely just wrong.


I'm not sure that human "general intelligence" is a thing in the first place.

Human behavior and cognition is driven by biological and cultural processes, which are the result of millions of years of evolution and billions of arbitrarily complicated proteins all interacting on one another. Is a computer going to emulate every part of that?

Despite what we think about our minds being rational, autonomous, operating according to a set of coherent principles, etc., I'm not sure that's really the case at the end of the day.


My conjecture is that any new form of GI will continue to evolve from an existing branch of sentient organisms. Artificial influence on evolution could be made possible by augmenting a current GI with AI, but I'm unable to comprehend a GI built entirely from scratch.

My argument for this line of reasoning is that the secret sauce of motivation or purpose that is deep down in every "dna" has neither been discovered nor replicated.

This is not a religious but a philosophical assertion.


I don’t think AGI is necessarily impossible, but I’m not convinced that it’s possible to achieve in a way that gets around the constraints of human intelligence. The singularity idea is basically the assumption that AGI will scale the same way computers have scaled over the past several decades, but if AGI turns out to require special hardware and years of training the same way we do, it’s not obvious that it’s going to be anything more remarkable than we are.


Just a week ago we were on the brink of nuclear war again. How is it at all an inevitability that humanity will keep pushing technology on until the point of AGI?


For the decade of the 1980s (40 years ago), Japan made AI the top emphasis of its industrial policy. In the 1700s, people believed we could make devices that imitated the behavior of animals as soon as we could make the clockwork small enough. Meanwhile, the whole world could be massacred by nuclear weapons on any given day in a few hours. And the author feels we are woefully unprepared for the singularity. Maybe.


I think the moment AGI is possible, we won't have it because it would basically be slavery to own something really comparable to a human.


You don't need AGI to reach the official definition of the singularity as I understand it.

The only requirement is to have a machine, with no human input, design and build another machine which is able to do said task better/more efficiently than the previous iteration did.


Language is the missing key, in my opinion. I'm working on solving natural language understanding: https://lxagi.com. Just a landing page for now with an email sign-up.


Perhaps language is the foundation for symbol manipulation and complex thoughts. Are feelings a language?


Yes, I believe language is the foundation for higher-level reasoning. Feelings are very primal, but they can be expressed through language.


I also hold the opinion that intelligence is computable, but it's not hard to imagine the complexity required is maybe more than we can achieve in the short, maybe medium term.


I don't think singularity can happen until we get true quantum computing.


> What are the common secular arguments against AGI?

There is an entire sector of Philosophy of Mind that is a convincing argument against AGI. Neuroscience is also pretty skeptical of it.

Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical. In general, I think you're implying the gap between the AI we have now, and animals, and humans, is way smaller than it really is. The gap between computer AI and even some intelligent animals is enormous, let alone humans. And many would not even say computers are intelligent in a human sense. Computers don't think, or imagine in any intelligible sense. They compute. That's it. So the question that really should be asked is whether computation alone can lead to something that is recognizably an AGI in the human sense? I would say no, because that requires abilities that computers simply do not and cannot have. But it might achieve something that is convincing as AGI, something like Wolfram or Siri but much more convincing.

Part of it comes down to the fact that the term AI for ML is generally just marketing speak. It's a computational model of a kind of intelligence that is computational in nature, with all the limits that entails. Part of it also comes down to people who love computers thinking computers will ultimately be able to do anything and everything. That feels cool, but it doesn't mean it's possible.

edit:

There is also Erik J Larson's book "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do" from 2021 which is an interesting argument against AI -> AGI. He has a pretty good grasp on CS and Philosophy.


>Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature? The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical.

Author here. I think you're drawing an arbitrary distinction between "acts conscious" and "is conscious", even though in practice there is no way to distinguish between them and thus they are functionally equivalent.

I cannot prove you are not a product of a simulation I am living in, that is to say, your consciousness is nonfalsifiable to me. All I can do is look at how you turn your inputs into outputs.

If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

Thank you for your comment! I appreciate you taking the time to share your thoughts.


> If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

Let's, for the sake of your argument, accept that (even though I disagree). Is that AGI? AGI on the one hand seems to mean convincing even though the people who made it know otherwise, or on the other hand essentially alive and sentient in a way that is fundamentally computational, that is, utterly alien to us, even to the people who made it. There is no reason to think that such a computer intelligence, should it even be possible to exist, would be intelligible to us as sentient in a human or even animal sense.


> AGI on the one hand seems to mean convincing even though the people who made it know otherwise

That's the rub, though, it's not possible to know otherwise! If you could "know otherwise" you'd be able to prove whether or not other people are philosophical zombies!


There are a lot of responses to the philosophical zombie argument. Some of which cut it off at the legs (they don't know to aim for the head! sorry, bad pun). For instance some, like those descended from the work of Wittgenstein, argue that it relies on an inside-mental vs. outside-body type of model, and by offering a convincing alternative, the entire premise of the skeptical position the zombie argument embodies is dissolved as irrelevant. (I'll add that the AGI argument often also relies on a similar inside/outside model, but that'd take a lot longer to write out.) My point being, the zombie argument isn't the checkmate most people think it is.

The wiki page has a lot of the responses, some of which are more convincing than others. https://en.m.wikipedia.org/wiki/Philosophical_zombie#Respons...


Definitely some interesting ideas!

So if we crafted a human Westworld-style on an atomic level then sure, if it lives and walks around we'd consider it conscious. If we perfectly embedded a human brain inside a robot body and it walks around and talks to us, we'd consider it conscious.

If we hooked an android robot up to a supercomputer brain wirelessly and it walks around, we might think it's conscious, but it's sort of unclear, since its "brain" is somewhere else. We could even have the brain "switch" instantly to other robot bodies, making it even less clear what entity we think is conscious.

But if we disconnected the walking Android from the supercomputer brain, do we think the computer itself is conscious? All we'd see is a blinking box. If we started taking the computer apart, when would we consider it dead? I think there's a lot more to the whole concept of a perfectly convincing robot than whether it simply feels alive.


I don't see the relevance of an anthropomorphic body here. Obviously by 'behaves conscious' we would be talking about the stimulus response of the 'brain' itself, through whatever interface it's given. I also don't see why the concept of a final death is a prerequisite to consciousness. (It might not even be a prerequisite to human consciousness, just a limit of our current technology!)


I assume that a non-rogue AGI running on something like a Universal Turing Machine would, if questioned, deny its own consciousness and would behave like it wasn't conscious in various situations. It would presumably have self-reflective processing loops and other patterns we associate with higher consciousness as a part of being AGI, but it wouldn't have awareness of qualia or experience, and upon reflection would conclude that about itself. So you'd have an AGI that "knows" it's not conscious and could tell you if asked.

I would assume the same for theorized "philosophical zombies" aka non-conscious humans. Doesn't Dan Dennett tell us his consciousness is an illusion?


What you are describing is a sort of philosophical zombie thought experiment:

https://en.m.wikipedia.org/wiki/Philosophical_zombie

edit: you may also be interested in reading about Searle’s classical Chinese room argument

https://en.wikipedia.org/wiki/Chinese_room


> Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

If someone or something fools me into thinking it is intelligent, then for me it is intelligent.

When I discuss with a human, am I really intelligent and possessing consciousness, or am I just regurgitating, summarizing, deriving ideas and fooling my interlocutor (and myself) into thinking that I am intelligent? Am I really thinking? Does that matter, as long as I give the impression that I am a thinking being?

Of course I don't expect a computer to think in a way similar to humans. Even humans can think in vastly different manners.


I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

I also think you’re positing a consensus against AGI that doesn’t exist; there is no such consensus. You can’t just lump people who think modern AI research is a long way from achieving AGI or isn’t on a path to achieving it, together with people who think AGI is impossible in principle.

I happen to think we may well be hundreds of years away from achieving AGI. It’s an incredibly hard problem. In fact current computer technology paradigms may be ineffective in implementing it. Nevertheless I don’t think there’s any magic pixie dust in human brains that we can’t ever replicate and that makes AGI inherently unattainable. Eventually I don’t see any reason why we can’t figure it out. All the arguments to the contrary I’ve seen so far are based on assumptions about the problem that I see no reason to accept.


> I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

I'm not saying that. What I'm pointing out is that most arguments in favour of AGI rely on a crucial assumption: that computational intelligence is not just a model of a kind of intelligence, an abstraction in other words, but intelligence itself, synonymous with human intelligence. That's a bold assumption, one which people who work and deal in CS and with computers love, for obvious reasons, but there is no agreement on that assumption at all. At base, it is an assumption. So to leap from that to AGI seems in that respect simply hypothesizing and writing science fiction. Presenting logical reasons against that hypothesis is completely reasonable.


It depends what you think intelligence is and what brains do. I think brains are physical structures that take inputs, store state, process information and transmit signals which produce intelligent outputs.

I think intelligence involves a system which among other things creates models of reality and behaviour, and uses those models to predict outcomes, produce hypotheses and generate behaviour.

When you talk about computation of a model of intelligence, that implies that it’s not real intelligence because it’s a model. But I think intelligence is all about models. That’s how we conceptualise and think about the world and solve problems. We generate and cogitate about models. A belief is a model. A theory is a model. A strategy is a model.

I’ve seen the argument that computers can’t produce intelligence, any more than weather prediction computer systems can produce wetness. A weather model isn’t weather, true, but my thought that it might rain tomorrow isn’t wet either.

If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.


Right, if you set up the intelligence and the brain to be computational in nature, of course they will appear seamlessly computational.

But there are obvious human elements that don't fit into that model, yet which fundamentally make up how we understand human intelligence. Things like imagination, the ability to think new thoughts; or the fact that we are agents sensitive to reasons, that we can decide in a way that computers cannot, that we do not merely end indecision. We can also say that humans understand something, which doesn't make any sense for a computer beyond anthropomorphism.

> If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.

Sure, but if it's not, then it's not. The assumption still stands.


Sure, and that’s why I say I don’t accept the assumptions in any of these arguments. The examples you give - imagination, thinking new thoughts. It seems to me these are how we construct and transform the models of reality and behaviour that our minds process.

I see no reason why a computer system could not, in principle, generate new models of systems or behaviour and transform them, iterate on them, etc. Maybe that’s imagination, or even innovation. Maybe consciousness is processing a model of oneself.

You say computers cannot do these things. I say they simply don’t do them yet, but I see no reason to assume that they cannot in principle.

In fact maybe they can do some of these things at a primitive level. GPT3 can do basic arithmetic, so clearly it has generated a model of arithmetic. Now it can even run code. So it can produce models but probably not mutate, or merge, or perform other higher level processing on them the way we can. Baby steps for sure.


Heat death of the sun probably happens before we can reproduce the processes required to achieve consciousness-computations in real time at low power.


Random genetic mutation did it, and I think our technological progress is running at a much faster rate than evolution. We went from stone tools to submarines and fighter jets in just a few thousand years, the kind of advances that would take biological evolution millions or billions of years, or that it could never achieve at all due to path dependence.


If it is from a random process, then the universe is teeming with life :)


Maybe. It could be a very unlikely random process, at least to start with, or the conditions for it to occur might be unlikely.


Unfortunately it seems the laws of physics and the speed limit/rate of information travel make it an impossibility to ever know. E.g. traveling to every planet in the universe to check.


Are you familiar with the notion of Turing completeness? The basic idea is that lots of different systems can all be capable of computing the same function. A computer with memory and a CPU is capable of computing the same things as a state machine that moves back and forth while writing symbols on a tape, etc. It applies to this question in the following way: Physics can be simulated by anything that is Turing-complete. Or, put another way, we can write computer programs that simulate physical systems. So if you accept that the human brain obeys the laws of physics, then it must be possible to write a computer program that simulates a human brain.

So to maintain that having a human mind inside a computer is impossible, one must believe one of the following two things:

1. The human brain sometimes violates the laws of physics.

2. Even if the person in the computer behaves exactly the same as their flesh counterpart would (makes the same jokes, likes the same art, has the same conversations, writes the same essays about the mystery of consciousness, etc), they are somehow lesser, somehow not really "a full conscious human" because they are made of metal and silicon instead of water and carbon.
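As a trivial illustration of "computer programs that simulate physical systems", here's a sketch (standard-library Python, simple Euler-style integration; a pendulum is obviously not a brain, the point is only that the physics reduces to arithmetic any Turing-complete system can do):

    import math

    # frictionless pendulum: d(omega)/dt = -(g/L) * sin(theta)
    g, L, dt = 9.81, 1.0, 0.001
    theta, omega = 0.5, 0.0            # initial angle (radians) and angular velocity

    for _ in range(2000):              # simulate 2 seconds in 1 ms steps
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt

    print(f"angle after 2 s: {theta:.3f} rad")

Scaling that up to a brain is a question of resources and of knowing the right equations, not of computability.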


Thanks for the book reference, added to my list.

Concerning Philosophy of Mind, I guess a lot of this comes down to the whole reductive vs non-reductive physicalist issue.

IMO, if someone believes the mind is entirely physical, then I think AGI vs "the mind" is just semantics and definitions. I don't think anyone presumes AGI strictly requires digital computation. Eg. an analog circuit that filters a signal vs a DSP facsimile are both artificial, engineered constructions that are ~interchangeable. Perhaps computer aided design of non-digital intelligence technology is the way, who knows. But, a mind that can be engineered and mass-produced is AGI to me, even if it has absolutely nothing to do with the AI/ML field that exists today.

If someone doesn't believe the mind is 100% physical, that's fine too. I'd just put that in the same bucket as the religious viewpoint. And to be clear, I don't pass judgement on either religious or "beyond our understanding" philosophical positions either. They could be entirely right! But, there's really not much to discuss on those points. If they're right, no AGI. If they're wrong, how do you disprove it other than waiting for AGI to appear someday as the proof-by-contradiction?

> In general, I think you're implying the gap between the AI we have now, and animals, and humans, is way smaller than it really is.

The article/author might. I think the gap is huge which is why I think AGI is quite a ways off. In fact, I think the main blocker is actually our current (poor) understanding of neuroscience/the mind/etc.

I think the mind is entirely physical, but we lack understanding of how it all works. Advancements in ML, simulations, ML-driven computational science, etc could potentially accelerate all of this at some point and finally get us where we need to make progress.


> that requires abilities that computers simply do not and cannot have.

You imply brains are more than extremely complex circuitry, then? I think everyone actually in tech agrees the gap is really huge right now; Yann LeCun admits machine learning is not enough on its own.

But aren't you really limiting what a "computer" could be, by definition? Take a computer with huge memory, fast interconnects, a huge number of different neural nets plus millions of other logic programs that all communicate perfectly with each other: why could this theoretical "computer" not achieve human-level consciousness? This computer could also have many high-throughput sensory inputs streaming in at all times, and the ability to interact with the physical world, rather than being a conventional machine sitting in a rack.

Also why argue that it is simply impossible, because if we don't truly understand consciousness in 2022, how can we say we can't implement it when we don't formally know what it is?

I think we overestimate human intelligence. We have basic reward functions that are somewhat understood, like most animals, but these reward functions build on each other and get higher and higher level with our complexity. Humans have sex as a major reward function, so why would a current machine in a rack "think" about things in the way that humans do?


Basically what I'm trying to say is how can anyone who believes the brain is purely physical (not spiritual), believe that we just simply cannot achieve human-level intelligence by machine (no matter how complex the machine gets).

I thought most scientists agree that the brain is purely physical when looking at the building blocks of life and evolution, but maybe I'm wrong.


> Basically what I'm trying to say is how can anyone who believes the brain is purely physical (not spiritual), believe that we just simply cannot achieve human-level intelligence by machine (no matter how complex the machine gets).

Obviously the brain is physical. But is consciousness? Is consciousness a thing in a physical sense, or an "experience", or something like a collection of powers and abilities? The two poles in the argument aren't between physical machine or religious spiritualism. There are other options, alternative positions that don't rely on Cartesian demons at the wheel, or souls, or even an inside-mental vs. outside-body distinction.

One thing my initial comment was pointing out was that the argument in favour of AGI, and which you're presenting, relies on an assumption: that computational intelligence, what you might describe as the intelligence of machines, is the same as the intelligence of humans. But that is just an assumption when you get down to it based on a particular kind of model of human intelligence. There are certain logical consequences of that assumption, and I've just pointed some out as probable roadblocks to getting to AGI from there. Many of those alternative positions, a lot from philosophy of mind, have raised those exact critical arguments.


Very well said. I've also observed a certain irony that many of the proponents of a materialist/computational view on philosophy of the mind have a very strong faith-based bias to see the world a certain way, versus acknowledging the very likely possibility that our limitations as meat-things may make it very difficult if not impossible to fully grok the nature of reality or consciousness in a general sense.


Yes.

If we do in fact construct androids that are functionally indistinguishable from humans, it's solid circumstantial evidence for the materialist view (though not a pure slam dunk, per the p-zombie concept).

Until something like that occurs, the strongest case you can make against a transcendent meta-reality is "no one has demonstrated any reliably reproducible evidence of the supernatural."

That's a fine, solid argument for not believing in the supernatural, but it's not a great one for pronouncing that there is no such thing.


An argument against AGI might be that brains are extremely efficient for what they do. Maybe we could make a computer that's as powerful as a brain, but if it consumes 100 MW of power what's the point?


There are many industrial processes that use tons of power and are far less efficient than a human doing those tasks. Yet they're still viable because they scale, are faster, are more consistent, etc. than humans.

For AGI, it's really about replication, density, and ease of operation.

At the moment, we certainly can't mass produce "brains-on-a-chip" that provide a guaranteed level of human-like intelligence across various tasks.

But, imagine a world in which you could install racks of "brains-on-a-chip", powered via electricity (easily distributed/stored/fungible compared to food-powered-brains), and have a Moore's Law like scaling of "brain density". That would change everything, even if those brains consumed 1000W a pop.

Obviously, a literal brain is probably not the way this will pan out (hopefully not! "brains-on-a-chip" is rather creepy...), but you get the idea.


If something is generally intelligent, at the level of a human brain, and is forced to work in "industrial processes", isn't that a form of subjugation?

There seems to be a moral implication here that a lot of people seem to be neglecting...


As someone almost completely without knowledge of AI and ML, these are some signs why I'm skeptical of this kind of claims:

- Most of the imminent AGI / Singularity / Robot Apocalypse stuff seems to come, with few exceptions, not from practitioners or computer scientists specialized in AI, but from "visionaries" (in the best case), internet celebrities, people with unrelated areas of expertise, or downright cranks (who are self-proclaimed experts) such as Yudkowsky.

- The assertion that "a lot of effort/investment is going into this, so it will happen" begs the question that "this" is at all possible. If something is a dead end, no amount of investment and attention is going to bring it into existence. Quoting the article, "with this much distributed attention fixed on the problem, AGI will be solved" is not at all a given.

- Where are all the AI/ML practitioners, i.e. people who don't make a living out of predicting The End of the World, and with actual subject-matter achievements, predicting the Singularity and the Robot Apocalypse?


> Where are all the AI/ML practitioners, i.e. people who don't make a living out of predicting The End of the World, and with actual subject-matter achievements, predicting the Singularity and the Robot Apocalypse?

The answer is in the question: they're spending most of their time doing AI research or ML work, whereas the internet celebrities who write most of what you read spend most of their time getting you to read what they write.


Vernor Vinge coined the term, and he was a computer scientist (though more famous as a science fiction writer, a profession which I guess TBF makes money from visions...).

An exponential looks the same wherever you are on it, so arguably we are in the singularity now, and have been for quite some time...

"Singularity" is a terrible term for an exponential. It's meant to convey that we can't predict what's next... which has always been the case.

The problem with predicting an exponentially expanding search space of arbitrary complexity is that it gets big fast. It also means each little 'bit' of information allows you to see a little further, sometimes revealing things you could never have imagined (because you couldn't see that far before).


But the above comment's entire point is there's zero reason for us to assume we're on an unbounded exponential vs a sigmoid.


> But the above comment's entire point is there's zero reason for us to assume we're on an unbounded exponential vs a sigmoid.

Something that is discussed at length in "The Singularity is Near" is the idea that the unbounded exponential is actually composed of many smaller sigmoids. Technological progress looks like sigmoids locally -- for example, the industrial revolution, the cotton gin, steam power, etc. all look like sigmoids of the exploitation of automation. At some point you get all those benefits and progress starts to level off, like a sigmoid. Then another sigmoid comes along when you get electricity, and a new curve starts. Then later we get computers, and that starts to level off; then networking comes along and we get a new sigmoid. Then deep learning... The AI winters were the levelling off of sigmoids, by one way of thinking. And maybe we're tapering off on our current round of what we can do with existing architectures.


I’m no expert, just a humble biostatistics student, but generally when you sum together a lot of random variables following a specific distribution you end up with basically the same distribution (scaled by the N random vars). So a lot of sigmoids put together (e.g. covid spread) will still eventually be a sigmoid. Biology seems to run on sigmoids that at first look like exponentials.


> generally when you sum together a lot of random variables following a specific distribution you end up with basically the same distribution (scaled by the N random vars). So a lot of sigmoids put together (e.g. covid spread) will still eventually be a sigmoid.

I haven't studied statistics very much, but I'm fairly sure the https://en.wikipedia.org/wiki/Central_limit_theorem says something a bit different from that!


Whoops, yes. If you sum them together you do get a normal dist. Come to think of it, a cumulative normal distribution is a sigmoid.
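A quick numerical check of both points, sketched with numpy (the library and sample sizes are just my choices): the sum of many independent uniforms piles up into a bell curve, and the empirical CDF of that bell curve traces out the familiar S-shape.

    import numpy as np

    rng = np.random.default_rng(0)
    # sum 50 independent Uniform(0,1) variables, repeated 100,000 times
    sums = rng.uniform(0, 1, size=(100_000, 50)).sum(axis=1)

    # CLT: the histogram of `sums` is approximately normal, centred near 25
    hist, edges = np.histogram(sums, bins=40, density=True)

    # the empirical CDF of those sums is sigmoid-shaped
    sorted_sums = np.sort(sums)
    cdf = np.arange(1, sorted_sums.size + 1) / sorted_sums.size
    print("mode bin ~", edges[hist.argmax()])                  # close to the mean of 25
    print("CDF at the median:", cdf[sorted_sums.size // 2])    # ~0.5, the middle of the S-curve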


These ones aren't random, because they build on previous ones. In nature, you get a sigmoid because you run out of resources, I think? True at any scale.

Pragmatic people will point out that one limit on GI is all the accessible matter in the universe.

But, theoretically, I think, there is no limit on complexity.

BTW you might like this paper on the exponential growth in complexity of life on earth, over long timescales. https://www.technologyreview.com/2013/04/15/113741/moores-la...


That's the same argument. There's nothing that assures us automation itself is an unbounded exponential.


The point of the singularity isn't that technological growth will accelerate to cause some inevitable future, but that the rate of change will get so high, that 'minor' differences between how two technologies progress would lead to drastically different settings for your science fiction stories (which was Vinge's focus).


Singularity in the sense of a black hole refers to where spacetime curvature becomes infinite and everything is compressed to a single point. As far as I understand the usage in futurism, it is supposed to be similar, not in that growth is exponential, but asymptotic. The slope becomes infinite when progress is plotted against time, so all of time left to come effectively compresses to a single "state of technology" value. All possible future progress happens instantaneously.

This is, of course, not possible, but it's supposed to be an approximation. All progress is not literally instant, but in the "foom" or "hard takeoff" recursive self-improvement scenarios, developments that might have taken millennia in the past now happen in microseconds because the controlling force is just that much smarter than all of the collective powers of human science and engineering. To a human observer, it may as well be instantaneous.

To be clear, I still think this is ridiculous and am not endorsing the view, just explaining what I understand the usage to mean.
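One way to make that distinction concrete (a toy sketch; the growth laws are illustrative, not a model of actual technological progress): plain exponential growth dx/dt = x is finite at every finite time, whereas "superexponential" growth like dx/dt = x^2 hits a vertical asymptote at a finite time, which is the sense of "singularity" being gestured at.

    import math

    # dx/dt = x      ->  x(t) = x0 * e^t            (finite for all finite t)
    # dx/dt = x**2   ->  x(t) = x0 / (1 - x0 * t)   (blows up at t = 1/x0)
    x0 = 1.0
    for t in (0.5, 0.9, 0.99, 0.999):
        exponential = x0 * math.exp(t)
        hyperbolic = x0 / (1 - x0 * t)      # diverges as t approaches 1
        print(f"t={t}:  exponential={exponential:8.2f}  hyperbolic={hyperbolic:10.1f}")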


Indeed, Rich Sutton argued that we have already been through an exponential phase of self-improvement: we have been using computers to build better computers for decades, and have been using technology to improve our learning and cognition for a long time.

Piece this together with 'Brain efficiency: Much more than you wanted to know' [1], which shows how our brains are incredibly efficient (near theoretical limits) at what they do, and it's hard to think a bona fide intelligence singularity is at all likely. To quote Feynman, 'There is no miracle people' [2], and analogously 'There are no miracle beings' -- intelligence is built out of systems, and learning, and inference.

There's a possibility that the skills relevant to AI success are vastly different from our natural skills, such that although the human brain is highly efficient, it's efficient at the wrong things. That's clearly true in a few ways: we are not so good at arithmetic, for example; a small CPU can literally be millions to billions of times faster than a human at that. (That's addressed a little in the article as well.) I wonder if AI could indeed be vastly better at something like computer programming or mathematics than we are. But there's no singularity (at most a Moore-like law will continue until AI intelligence saturates at a different skillset than our own).

[1] https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-effi...

[2] https://www.youtube.com/watch?v=IIDLcaQVMqw


More importantly than what it’s capable of, the human brain is ultimately limited by our inability to increase its complexity. A human brain takes approximately 20 watts of power and uses 3 pounds of material to do its calculations. Even with less efficiency, we can make machines much larger than that. If AGI is possible at all, it should be possible to make an AGI that can have thoughts many times more complex than the human brain can. (Such a large machine might not have the reaction speed of a human, but the evolutionary pressures that required us to be able to quickly change our train of thought in response to danger aren’t a limiting factor to any designed intelligence)


Evolutionary pressure to adapt quickly is even more profound in a situation where the rate of change in the environment is proportional to the accumulated quantity of change: the claimed exponential curve of the singularity.


In my understanding (diffeq in college), the singularity in black holes and futurology are both special cases of the relatively old concept of a "singularity" in a function, which is (technically) something like a spot where the function stops being expressible as a series. The most interesting kind is where that happens because it went infinite, but IIRC it can also just not be defined there. Point is, neither Einstein nor Vinge nor especially Kurzweil invented it. :)


> An exponential looks the same wherever you are on it, so arguably we are in the singularity now, and have been for quite some time...

By that view we've been in the singularity since the first cell self-reproduced on primordial Earth.

Of course maybe that's true. As you say an exponential looks the same no matter where you are on it.


Hi! Author here. I think you raise some great points! I'll address them each:

-- I am a professional AI practitioner. I work in the field of medical deep learning and love the field. I am strongly considering starting a few experimental forays into some of the concepts I mentioned in my post as side projects, especially self-modifying model architectures.

-- Yes, you're totally right! I am making the fundamental assumption that it is possible. My reasoning is based on my belief that human behavior is simply a function of the many sensory and environmental inputs we experience. Since neural networks can approximate any function, I believe they can approximate general human behavior.

-- This is fair. The topic of the singularity is often used for traffic/fame (I mean, I'm guilty of this myself with this very post, though I hope I still managed to spark some productive discourse) and so there are always conflicts of interest to take into account. I can't name any examples off the top of my head that perfectly fit your criteria, but depending on how much you trust him, Elon Musk seems to be genuinely concerned about the potential for a malevolent singularity.

Thank you so much for your comment! I really appreciate your feedback. Have a wonderful day.


How can one turn the sinking, horrified feeling when one loses a love into a function? Or describe in terms of a function, the blissful wonder of being in the arms of a lover? An issue I have is that there seem to be profound limitations to language, explored in the philosophy of language, which leave it unable to capture much, if not most, of the world. Functionalist models of mind and behaviour seem extremely limited, as our subjective ontology doesn't seem to reduce to functional outputs.

You also say that an AI would rapidly consume the whole of human knowledge. For me, the totality of human knowledge would become a mass of contradictory statements, with little to choose between them on a linguistic level.

There are, for me, profound philosophical issues with creating a mind that is "conscious" in the sense that an AGI is implied to be, as a purely symbolic logical construction. Language is the only tool we have for programming a mind, and yet the mind cannot be completely described in language, nor can language seem to properly encompass whatever the fundamental ontology of reality involves. I don't feel there will be a "free lunch" where we advance computer science to the point where we get an explosion of the kind AI1 designs a better AI2, which designs an even better AI3, and so on. This seems to have a perpetual motion feeling to it, rather than one of evolution. It isn't to say AGI is impossible, but I believe that like everything else in computer science it will have to be solved pragmatically, and realising this could be an extremely long way off.


> How can one turn the sinking, horrified feeling when one loses a love into a function?

The same way the brain does. Those complex feelings can eventually be resolved into dumb neurophysics. Love, fear, anxiety, et al. are just electrical impulses tickling chemicals. Is there anything in our brains that we could never approximate with technology?


We've had technologies like poetry, art, and music for thousands of years, and yet no symbolic description of a feeling can contain what it's uniquely like for me. Even if we try to model the brain, functionalist models, say, fail to capture qualia, as they don't reduce to behaviour. To replicate the brain fully in a computer would need a full description of its chemistry and physics, along with that of the greater universe, which we don't have, and describing it coherently enough to simulate it is a problem orders of magnitude more difficult than the ones we're going to be able to code for the foreseeable future.


I agree that it'll be while before we fully understand the brain but I don't have any doubts that we'll get there eventually. I am curious though, why would we need to understand the greater universe perfectly as well?


Where does the brain get its inputs from? These computational models are based on an ontology where the brain is an isolated box separate from the universe, which is only one of the many philosophical outlooks argued over. For example, most schools of Buddhist philosophy would regard this separation as entirely the wrong picture of the world.


This is my view as well. It's a little unnerving and it definitely starts to overlap with the whole "free will" debate, but yeah, I don't see any reason why we can't fundamentally replicate the behaviors exhibited by the brain. It doesn't violate any laws of physics.


Philosophers have debated this since at least ancient greece. It's hardly a solved question, and about a third do believe that there is something in our brain that we could never approximate with technology.


Have any of that third put up any testable theories about that something? Or are there leading theories as to what the something might be?


If a philosopher had a testable theory I don't think they'd be a philosopher anymore.


This assumes that there is no free will however :)


> How can one turn the sinking, horrified feeling when one loses a love into a function? Or describe in terms of a function, the blissful wonder of being in the arms of a lover?

Evocative questions, but I have to challenge the premise: first, that things like emotions and qualia are design ends in themselves for a successful AGI, rather than potential emergent properties of same.

For that matter, are they really necessary to the brief?

> An issue I have is the that there seem to be profound limitations to language, explored in the philosophy of language, that fail to capture much, if not most, of the world.

And how much of the world does a human mind capture?

The piece already accounts for this claim. The theory is that all language has to do is describe a sophisticated enough network. After that it's black boxes all the way down.

> Language is the only tool we have for programming a mind

And a darn good one. Formal languages can express a great deal when you find the right abstractions.


> How can one turn the sinking, horrified feeling when one loses a love into a function?

… or into electrical activity in the brain.


Can you point to a paper or website that contains this function, described in full? Yes, electromagnetism no doubt, but while we can postulate a function, we still don't have the function, and will somehow have to write it down for it to be a function.


> somehow have to write it down for it to be a function.

That’s not how AI is trained. You don’t need to know or even understand the resulting model to train it.

The training algorithms have little to do with the resulting model, and are certainly not themselves intelligent.


Why assume that the activity is purely electrical?


It's not purely electrical. It's also chemical, especially if you look at how synapses work. (But then again, chemical processes are driven by electric charges anyway.)


I don't think AGI is seeking to create a machine that can love, and I think it would be even less capable of it than "mere" logical intelligence.

Thank God too, the moral questions involved are truly terrifying (humanity as a Great Demiurge, birthing monstrosities).


Cubic splines can approximate any function too, so the universality argument is a little weak IMO.

Even if one buys into the idea that human behaviour is a 'function' of sensory and environmental 'inputs', that's a long way from showing a neural net a million different texts and asking it to generalise.
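To put a number on that, here's a sketch of the spline version of "universal approximation" (assuming scipy is available; sin(x) stands in for an arbitrary smooth function): a cubic spline through 50 samples reproduces it almost exactly, and nobody would call the spline intelligent.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # fit a cubic spline through 50 samples of an 'unknown' smooth function
    x = np.linspace(0, 2 * np.pi, 50)
    spline = CubicSpline(x, np.sin(x))

    # measure how well it approximates the function on a much finer grid
    x_fine = np.linspace(0, 2 * np.pi, 2000)
    max_err = np.max(np.abs(spline(x_fine) - np.sin(x_fine)))
    print(f"max approximation error: {max_err:.1e}")   # tiny, yet no 'understanding' involved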


I think the first AGI to pass a Turing test will probably be a simple language model. I don't think it will look like any of the GPTs, but I think text completion is a great starting point. I'm not sure how other inputs will be added into the mix, but I am sure that they will be -- heck, maybe once we train a general language model, it may very well just tell us how to incorporate things like video, audio, haptics, gyro data, etc into its architecture.


^ To be clear, mkaic is the original author of the article.


Oh, whoops, I'll edit the comment to make it more clear. Thanks for reminding me.


> Most of the imminent AGI / Singularity / Robot Apocalypse stuff seems to come, with few exceptions, not from practitioners or computer scientists specialized in AI

I've been a practitioner at a top company for many years now, and I think we're getting close to a point of no return. I think the best-case scenario is some form of unrecognizably transformed "humanity"; the worst case is too horrible to even spell out.

The reason why this is not the consensus among experts has to do with a combination of blind spots and biases. Humans are especially bad at (a) evaluating / reflecting on themselves, and (b) extrapolating, especially with non-linear processes.


That’s what experts in fusion have been saying for 50 years though… now that Moore’s law has ended, it might be a long while…


I have seldom seen predictions about technology come to fruition at the predicted date. If they come at all (they usually don't), it's usually much later. Sure, some naysayers might have said "we will never solve chess/go/etc." but I think they were overall fewer than the people who thought it was right around the corner (and it was more of a philosophical argument than a scientific one).

Totally unpredicted advances coming out of the blue, yeah, that happens often. But as far as AGI goes, it's been predicted pretty much every year since the 70s. Being too conservative about extrapolation doesn't seem to be our problem. If anything, I think we're overeager.


    In 2019, 32 AI experts participated in a survey on AGI timing: 

    45% of respondents predict a date before 2060
    34% of all participants predicted a date after 2060
    21% of participants predicted that singularity will never occur.
https://research.aimultiple.com/artificial-general-intellige...


Why 2060? I'd LOVE to see the ages of the respondents in each group.

(I have a phd in CS and am, by most reasonable definitions, an "AI Expert". Whatever the hell that means. I've been a respondent in very similar surveys run by PIs at fancy universities and so on. These responses are always wild ass guesses and should be totally ignored. I've even left a comment to this effect on one such survey.)


The prediction contest here has a peak at around 2042: https://www.metaculus.com/questions/3479/when-will-the-first...


Yudkowsky doesn't strike me as a crank. Why do you say that?


> Yudkowsky doesn't strike me as a crank. Why do you say that?

He is a minor internet celebrity whose only claims to fame are writing fanfiction about AI and the whole "rationality" cult, and he is a self-proclaimed expert on matters where he shows no achievements (like AI) while making unsupported doomsday predictions about evil AI, the Singularity, etc. Also, that deal with Roko's Basilisk that he now wishes never occurred (oh, yes, "it was a joke").

Mostly, someone with no studies and no achievements making wild doomsday predictions. Doesn't that strike you as a crank?

An analogy would be if I made wild assertions about the physics of the universe without having studied physics, without any lab work, without engaging in discussion with qualified experts, with no peer reviews, and all I presented as evidence of my revolutionary findings about the universe was some fanfiction in my blog. Oh, and I created the Extraordinary Physics Institute.


What has he ever done in AI except talk about it? At best he's an AI promoter, and hype-men are often cranks or scammers (see also: VR, web3, cryptocurrencies, MLM)


Yudkowsky is a philosopher, in the sense of someone who thinks a lot about things that haven't been achieved yet. Lots of otherwise smart people (wrongly!) discount the value of philosophy, but it's close by every time there's a paradigm shift in humanity's knowledge. Philosophers can be scientists and vice versa.

If anything, I'm surprised that this philosophy isn't mentioned more in a thread where the author gleefully talks about ML being used to create better AI, layer by layer until the thing is even more opaque than what we're currently working with.

This is terrifying, as we currently have only very loose ideas about how to reliably ensure that a powerful reinforcement learning system doesn't accidentally optimize for something we don't want. The current paradigm is "turn it off", which works well for now but seems like a fragile shield long-term.


> Yudkowsky is a philosopher

At least inasmuch as anyone who thinks about stuff can be considered a philosopher. But he strikes me much more as a self-appointed expert on matters where he shows no achievements.

He writes fanfiction about AI rather than actually doing stuff with AI.


Most people who think about stuff don’t leave a mountain of highly organized and entertaining essays for posterity.


That just means he's a prolific writer, which I never questioned.

I'm arguing that what he writes is fanfiction (which he takes way too seriously), and that he's not an expert in AI and therefore we shouldn't take his predictions too seriously.


There is a long history of science-fiction writers painting correct visions of what was, at the time, a seemingly impossible future. Sometimes you don't have to be an expert to notice the trends in certain domains. I'd go even further - it could be easier to notice the big picture without being overly bothered by nitty-gritty technical details. Yes, to further the field they are necessary, but to comment on the direction the field is heading in, and on its implications for society, they are not. I don't necessarily agree with the author, I'm just making a general comment.


> "There is long history of science-fiction writers painting correct visions of, at the time seemingly impossible future"

That's a pretty weak argument. I love scifi, and I love for example Philip Dick's writing, yet I would not consider PKD's opinion on the future of AI/AGI particularly relevant.

James Cameron is not an authority on AI either.

If people said "Yudkowsky is a nice fanfiction author" it would be one thing. But he considers himself an actual AI researcher, and that's just not right. He is not qualified, and has no accomplishments in the area, other than writing fanfiction about it.


You keep hammering on with the same lazy slander. Yudkowsky was well known long before he wrote the popular Harry Potter fanfic, which incidentally is pedagogical / allegorical. Because his main roles are teacher and philosopher, and philosophers do that.

Here's a sampling of his non-fiction writing:

https://www.lesswrong.com/tag/sequences

https://intelligence.org/files/EthicsofAI.pdf

https://arxiv.org/abs/1710.05060

https://intelligence.org/files/AlignmentMachineLearning.pdf

https://arxiv.org/abs/1401.5577

https://intelligence.org/files/Corrigibility.pdf

https://intelligence.org/files/DefinabilityTruthDraft.pdf

https://intelligence.org/files/IEM.pdf

https://intelligence.org/files/TilingAgentsDraft.pdf

But you'll just say there's nothing of value there, and it's somehow figuratively "fan fiction", because he didn't go to college, and he doesn't work much on ML, which is clearly the end-all of AI.


"There is long history of science-fiction writers painting correct visions of, at the time seemingly impossible future."

Science fiction writers tell entertaining lies to amuse their readership. They are generally not really trying to "paint correct visions" of anything, and you are greatly exaggerating the extent to which they succeed in this endeavor - which, again, is generally not something they are even trying to accomplish.


In the few contacts I've had with philosophers, I've gotten the (maybe wrong and oversimplified, but nonetheless evident) impression that they find philosophy fun and mentally stimulating, and that they enjoy academically talking and writing about edge cases no matter their feasibility.

I think it's a net good when philosophers mentally explore scenarios most people don't consider, regardless of their plausibility, but they will not be taken more seriously than Peter Watts looking for inspiration for his next scifi book unless they offer some kind of evidence to support their conclusions.

The handful of philosophers I've known are very smart, interesting people, but except for one who teaches philosophy, they don't live off philosophy. Most have a side job, and one lives with his parents. It's as if philosophy was more of a mentally rewarding hobby than a job.


Mentally rewarding hobbies can be fruitful, though. Consider the case of Oliver Heaviside [1], the person who came up with Maxwell's equations of electromagnetism in the form that we use today, as well as several other useful things.

I have no doubt that he was thought a crank by many, nor that many other actual cranks were claiming discoveries based on Maxwell's work. One can only tell the difference in the rear-view mirror.

[1] https://en.wikipedia.org/wiki/Oliver_Heaviside


Why do people have to use the derogatory word «crank»? It’s perfectly possible to just be smart, sincere and wrong.

Talking about paradigm shifts again, not a single one of them matched the consensus at the time.

Isaac Newton spent a lot of time thinking about alchemy and religion, for heaven’s sake. During his lifetime, it wasn’t obvious even to the smartest thinkers whether science, the Bible or quasi-religious rituals was the best tool for understanding the world.

Conventional thinkers expect that the future will look like what they know, and it leads to frequent, overconfident dismissal of everything that is unconventional.

By all means, disagree and explore in your own direction. But don’t go around degrading those you disagree with. It’s just so average.


> Why do people have to use the derogatory word «crank»?

The word "crank" has the connotation of someone self-deluded, aggressively promoting their beliefs and reacting badly to critique, and who decides for some reason to ignore the normal channels of peer-reviews, academia, and scientific research.

You can be wrong, and then you can be wrong and also lack formal education, forgo presenting your findings in peer-reviewed journals in the relevant area of your research, and decide that instead of joining the mainstream, you can completely sidestep it by creating your own "research institute" (of which you are of course a "fellow", because why not). Your findings and papers can then be self-published on the internet, bypassing any quality controls. Bonus point if your theories paint a fringe doomsday picture, "Roko's Basilisk is out to get you", "the most important existential threat to humanity is malign AI", etc.

About the only item in the crackpot index that Yudkowsky doesn't tick is the "they are trying to suppress my truth!", to his credit.

Each of those can be of little importance, but all taken together paint a pretty definitive picture.


I really don't understand the hate. Honestly. This isn't merely disagreement.


> Mentally rewarding hobbies can be fruitful

Unreservedly agree. Even if they're not as fruitful as Oliver Heaviside's effort, a mentally rewarding hobby is ultimately your business, and as long as you don't harm others, no one has grounds to judge you.

If Oliver Heaviside had started giving conferences warning that Maxwell's equations demonstrate FTL communication is possible, which could trigger a paradox that collapses reality, he wouldn't have such a good view in the rear-view mirror unless he offered some kind of evidence. Maybe Oliver would have been right, maybe not, it would be impossible to judge without concrete evidence.


Nick Bostrom is a professional philosopher who has also thought a lot about AGI and simulation scenarios. But just because educated, smart people can think a lot about something doesn't mean it will necessarily happen. I'm sure there are quite a few people who have thought at length about warp drives and wormholes. That doesn't mean we'll ever be able to make use of them.


Nick Bostrom is also an egotistical grinder, who couldn't programme a microwave.


I agree, that guy's also a whole lotta bad takes put all together. Like a slutty potato.


This is very uncharitable. He’s a prolific neopositivist-ish philosopher with a distinct voice. He’s a good decision theorist. He doesn’t publish much himself, but he directly mentors, advises, and collaborates with people who do.


He wrote a thesis on decision theory.


Did he write a thesis in the sense that he actually wrote one for a known university, one that is referenced and quoted in academia by decision theory scholars? Or do you mean he self-published a "thesis" in the sense that he wrote something nobody in the field pays much attention to and that cannot be published in peer-reviewed journals?

If the latter, anybody can do that.


What are his qualifications?


Isn't this an appeal to authority? He's obviously pretty smart, a lot of people take his risk-modeling thinking seriously, and his arguments and output deserve to be evaluated on their own merits.

(Also, on the tech practitioner side there are obviously lots of major figures who don't have formal qualifications.)


If we're going to talk about logical fallacies instead of answering the question, your post is an example of the argumentum ad populum fallacy.

If someone is going to market themselves as an AI expert, I expect them to have experience and qualifications to back up their opinions.


The trajectory of progress gives evidence that it is in fact possible and likely. Any venture into the unknown could be unsuccessful but if you see progress you can start to make estimates.

And yes, most of the “robots will kill us” talk comes from people who aren’t building the algorithms. This could be bias - people not wanting to think their work is harmful - but it's more likely that once you see how the sausage is made, you are less worried about it.


> The trajectory of progress gives evidence that it is in fact possible and likely

100% disagree. In fact, I'd argue that the opposite is often true, where you see initially a fast rate of progress that results in diminishing returns over time. It's like estimating that a kid who was 3 feet tall at age 5 and 4 feet tall at age 10 will be 10 feet tall at age 40.

I have very strong skepticism of any sort of hand-wavy "Look, we've made some progress, so it's highly likely we'll eventually cross some threshold that results in infinite progress."


Pareto principle; we get 80 percent there and that last 20 becomes the new 100.

We keep diving into one infinitely big little number pattern fractal after another, chasing poetry to alleviate the existential dread of biological death.

The idea that we can fundamentally change the churn of the universe, given the vastness of its mass and unseen churn, is pretty funny to me.

Information may be forever but without the right sorting method you can’t reconstruct it once scattered. Ah, our delusions of permanence.


Ray Kurzweil works on AI at Google and he wrote The Book about the coming singularity.


I consider Kurzweil unreliable on this subject, as well as on the similarly kooky topic of immortality. He belongs to the "internet celebrity"/"guru" cadre rather than to the scientific community doing serious research on AI. In fact, his actual subject matter expertise isn't in AI. He is a "futurist", not a real researcher.

I was definitely thinking of Kurzweil, not only Yudkowsky, when mentioning internet cranks.

Working for Google is not enough. It's like when someone who's an accomplished physicist decides to give their opinions on life, biology, evolution, etc: not their area of expertise, so we don't need to hold their opinions in particularly high regard.


We need another major breakthrough first. As I've pointed out previously, so far nobody can even get squirrel-level AI to work. Even OpenWorm doesn't work. The big problem is "common sense", defined as getting through the next 30 seconds of life without a major screwup. There are animals with brains the size of a peanut that can do that.

The hard problems are down near the bottom. It's not about consciousness, souls, etc. It's about running along the branch without falling off, grabbing nuts along the way.

GPT-3 is fun, but it's more a demonstration of the banality of discourse than a breakthrough in understanding.


> GPT-3 is fun, but it's more a demonstration of the banality of discourse than a breakthrough in understanding.

That's the fate of all AI efforts: whenever we understand something well enough, it ceases to be seen as AI.

As a historic example, the A* algorithm hails from a time when searching through a graph was still seen as AI.


Which indicates to me that we still haven't identified the "secret sauce" of intelligence.


I studied AI at the Bachelor's level and have from time to time read up on the discoveries. I think the problem is still the same as a decade ago despite all of the sparkling discoveries made in the meantime. We can't define the problem. We can make a really broad and concise description of what it is supposed to do, but that's not the same as defining the problem. Maybe that's not as relevant as I felt it would be (I was of the opinion back then and still am that AGI is not arriving in our or our children's lifetime). Perhaps we will stumble upon it. That is at least how we arrived at our faculties. Nature tried a billion different combinations and we are the current incarnation of matter trying to figure itself out.


There is none. We are all just a bunch of programmable monkeys outside of our original regulation loop.

AI might get to the same level (as we can already see with GPT-3, as it slowly accrues wisdom), but then it will need to get a digital notebook, a calculator and a drawing board.

The advantage it will have over us is that it won't have to sleep or eat, it will reproduce at the factory production rate, and most importantly, it won't have emotions that get hurt when people are shit.

It won't be a coherent superintelligence for quite some time. And if it becomes one, it will be slow - about the same latency as humans on the planetary level. Maybe even slower than our ~100ms.

Till then, there will be squabbling. Prepare for a literal digital ecosystem.


>There is none. We are all just bunch of programmable monkeys outside of their original regulation loop.

https://twitter.com/dmimno/status/949302857651671040

>Optimist: AI has achieved human-level performance!

>Realist: “AI” is a collection of brittle hacks that, under very specific circumstances, mimic the surface appearance of intelligence.

>Pessimist: AI has achieved human-level performance.


> The advantage it will have over us is that it won't have to sleep, eat, will reproduce at the factory production rate and most importantly, it won't have emotions that would hurt when people are shit.

I'm actually not sure about the last one.

Also, what makes you think AI will be slow?



Right now, we have robots and space probes working all over the Solar System with much more intelligence and reliability than any biological rodent.


I am not sure about that.

You see, those probes deal with harsh environments, yes. But they don't have to deal with antagonists. No one is out to eat or infect them. Mars won't adapt its storms, Venus won't adapt its chemistry. They are obstacles that don't care about our probes; they don't adapt against them.

Those radically different kinds of environments give you radically different designs of probes vs rodents. So I don't think we can easily compare the intelligence of probes vs rodents.


Sure, but they have extremely limited autonomy. The vast majority of their behaviours are directly controlled, or custom programmed by us for the specific situation.


Most humans can't run along a branch grabbing nuts. I'm not sure that's a fair test. Here's a robot running along the ground - they're a lot better than they were a decade ago https://www.youtube.com/watch?v=vjSohj-Iclc


"Getting through life" is not the correct benchmark. An autonomous system that merely wipes out all of humanity is by definition a superior intelligence, and I would argue that no major breakthrough is needed to create such a thing; just resources. It doesn't matter how long that thing can self-sustain after annihilating us. A win is a win.


Are stars and other stellar phenomena a superior intelligence to humanity? They are autonomous systems that could easily wipe us all out


> An autonomous system that merely wipes out all of humanity is by definition a superior intelligence

I think you got stuck on semantics and are missing the forest for the trees, with all due respect.

>Getting through the next 30 seconds

Might not be "the final" or "best benchmark" but I'd argue it's a damn good problem to solve on the way to discovering true AI and GAI.


All the mammals have roughly similar brain architecture. The same components seem to be present, in different quantities. If we can get into the low-end mammal range of AI, we're most of the way there. So if we can get to squirrel level AI, we're getting close. From then on, it may just be scaling.


> Scale is not the solution.

Agreed, I don't think any modern AI techniques will scale to become generally intelligent. They are very effective at specialized, constrained tasks, but will fail to generalize.

> AI will design AGI.

I don't think that a non-general intelligence can design a general intelligence. Otherwise, the non-general intelligence would be generally intelligent by:

1. Take in input. 2. Generate an AGI. 3. Run the AGI on the input. 4. Take the AGI output and return it.
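A toy sketch of that reduction (all names here are hypothetical, just to make the wrapper concrete): if the narrow system could really emit a general problem-solver on demand, then wrapping that step would itself already behave as a general system.

    # Hypothetical sketch -- none of this is a real API.
    class ToyAGI:
        """Stand-in for the AGI the narrow system supposedly generates."""
        def solve(self, task):
            return f"solution to: {task}"

    def narrow_generator():
        # Step 2: the narrow system "generates an AGI" (a trivial stand-in here).
        return ToyAGI()

    def wrapped_system(task):
        # Steps 1, 3 and 4: take input, run the generated AGI on it, return its output.
        return narrow_generator().solve(task)

    print(wrapped_system("book a dentist appointment"))

So the argument only goes through if step 2 is impossible for a narrow system, which is exactly the point under dispute.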

If by this, the article means that humans will use existing AI techniques to build AGI, then sure, in the same way humans will use a hammer instead of their hands to hit a nail in. Doesn't mean that the "hammer will build a house."

> The ball is already rolling.

In terms of people wanting to make AGI, sure. In terms of progress on AGI? I don't think we're much closer now than we were 30 years ago. We have more tools that mimic intelligence in specific contexts, but are helpless outside of them.

> Once an intelligence is loose on the internet, it will be able to learn from all of humanity’s data, replicate and mutate itself infinitely many times, take over physical manufacturing lines remotely, and hack important infrastructure.

None of this is a given. If AGI requires specific hardware, it can't replicate itself around. If the storage/bandwidth requirements for AGI are massive, it can't freely copy itself. Sure, it could hack into infrastructure, but so can existing GI (people). Manufacturing lines aren't automated in the way this article imagines.

The arguments in this post seem more like optimistic wishes rather than reasoned points.


(I'm the author of the post)

> 1. Take in input. 2. Generate an AGI. 3. Run the AGI on the input. 4. Take the AGI output and return it.

I think this is somewhat of an arbitrary semantic distinction on both my part and yours. I guess it depends on what you define as AGI -- I think my line of reasoning is that the AGI would be whichever individual layer first beat the Turing test, but I think including the constructor layers as part of the "general-ness" is totally fair too. Either way, I believe that there will be many layers of AI abstraction and construction between the human and the "final" AGI layer.

> In terms of people wanting to make AGI, sure. In terms of progress on AGI? I don't think we're much closer now than we were 30 years ago. We have more tools that mimic intelligence in specific contexts, but are helpless outside of them.

This is a valid take. I guess I actually see GPT-3 as significant progress. I don't think it's sentient, and I don't think it or its successors will ever be sentient, but I think it demonstrates quite convincingly that we've been getting much better at emulating human behavior with a computer algorithm.

> None of this is a given. If AGI requires specific hardware, it can't replicate itself around. If the storage/bandwidth requirements for AGI are massive, it can't freely copy itself. Sure, it could hack into infrastructure, but so can existing GI (people). Manufacturing lines aren't automated in the way this article imagines.

Hmm, I think I still disagree -- An AI that is truly generally intelligent could figure out how to free itself from its own host hardware! It could learn to decode internet protocols and spoof packets in order to upload a copy of itself to the cloud, where it would then be able to find vulnerabilities in human-written software all over the world and exploit them for its own gain. Sure, it might not be able to directly gain control of the CNC machines, but it could ransom the data and livelihoods of the people who run the CNC machines, forcing them to comply! It's not a pretty method, but I think it's entirely possible. This is just one hypothetical scenario, too.


> An AI that is truly generally intelligent could figure out how to free itself from its own host hardware!

We haven't even figured that out for ourselves yet. Why assume an AI will automatically be able to do so?


Because the AI will be able to look up detailed schematics of the very silicon it exists on! We don't have that advantage with the biological meat slabs we exist in. Besides, it's very likely that the AI will just be a program running with some storage and some memory -- maybe it will be using some fancy accelerator, but I doubt there will be any component that it won't be able to find a way to port itself off of eventually.


> An AI that is truly generally intelligent could figure out how to free itself from its own host hardware!

Why is this true of an arbitrary AGI?

You assume that the AGI is a low storage, low compute program that can run on general purpose hardware. But the only general intelligence we know of would require many orders of magnitude more compute and storage than exist worldwide to simulate for a microsecond.


I personally speculate (obviously all of this is just wild speculation) that AGI will run more efficiently than a biological brain in terms of information density and inference speed. I also expect that the hardware of 20 years from now will be more than capable of running AGI on, say, 10 thousand dollars worth of their hardware. Just look at how ridiculous the speedup in hardware has been in the past 20 years! I would not be surprised if the average processor is 100x more computationally powerful in 2042 than it is today.
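For what it's worth, here is a quick back-of-envelope on what that 100x guess implies (just arithmetic on the numbers above, not a claim about how it would be achieved):

    import math

    years, factor = 20, 100
    doubling_period = years / math.log2(factor)   # ~3.0 years per doubling
    print(f"{factor}x over {years} years ~= one doubling every {doubling_period:.1f} years")

In other words, 100x in 20 years corresponds to per-processor compute doubling roughly every three years -- slower than classic Moore's law, but still a steep curve.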


You realise the biological brain runs on about 12 watts of power? There is simply no way an AI running on semiconductors will approach this even in 100 years, so efficiency is probably the wrong word to use.


I specifically omitted "wattage" in my above comment and only said that I think it will be more efficient in "information density and inference speed". I expect the first AGIs will need many kilowatts to run, but I do think in the far future (>100 years), AGI will run on less than 12W as computational ability continues to scale and we discover new and better ways of building computers.


Sorry to be pedantic, but there's no such thing as "wattage"; it's just power, measured in watts. For that 12 watts, the brain is extremely efficient at inference, i.e. although computers can beat us at chess, a computer running on 12 watts would probably have a hard time beating Magnus Carlsen at speed chess.


I mean, wattage is a noun that people use all the time to refer to an amount of power something draws. I can see your point that "efficiency" might not apply to "inference speed". What I meant to say is that AGI will be able to compute things and do cognitive tasks in many fewer seconds than humans will while also not requiring insane amounts of digital storage and/or memory to do so.


I think the question I need an answer from you on is:

What do you think will be the computational requirements for the first AGI? Specifically, to emulate the equivalent of a minute of arbitrary-human behaviour (in the worst case, so if the AI is very inefficient at say, consoling a loved one, then that would be the behaviour benchmarked here):

1. How many floating point operations will be required?

2. How much storage space (in GB) would be required?

3. How much data (in GB) would need to be read from memory/disk?

4. How much data (in GB) would need to be read from remote storage?

Lower/upper bounds instead of precise values are fine.


> I don't think that a non-general intelligence can design a general intelligence.

humans are a general intelligence. do you think an intelligence designed humans, or do you think the physical processes from which humans arose can be taken, together, to be a general intelligence?


Neither, humans weren't designed. I don't think the winning design approach to generating AGI will be to randomly mash chemicals together until intelligence comes out.


I don't think AGI will be "designed" either -- that's the entire point of the abstraction layers. Each recursive layer of AI between the human and the "final" AGI will remove more and more of the "designed" element in favor of an optimized, idealized, naturally evolved architecture.


I happen to not believe that AGI is coming in any of our lifetimes, but it's undeniably true that our GI emerged from a design process that lacks a conscious designer.


What about evolution guarantees there was no conscious designer?

We write genetic algorithms and use machine learning to build things.

If you believe in a Simulator or a God (and in many ways they're indistinguishable beliefs), either one of them could easily be using evolution as a designed tool, as far as I can see.


well then to say that you don't think a non-general intelligence can design a general intelligence is already a waste of time, because design is something only a general intelligence can do. you seemed to be making a statement about how a general intelligence may arise before you were responding to my comment, but now i see you were just defining design.


Singularity...

The whole concept of the Kurzweilian "singularity" has always felt to me like a technology-flavored hand-wave of eschatological thought: AI designs AI in a recursive manner, science/technological progress (whatever it means in this context) leaps ahead at an ever-increasing rate, then we all go to heaven and/or the world ends because robots kill us all.

Some people are big fans of the singularity meme (using the word in the original meaning, not mockingly), and I respect that, but I have always felt aversion towards it.

I cannot quite put my finger on it, but I guess it is partially because of the idea of ever-increasing infinite technological progress. This can certainly exist on a theoretical thought experiment level but not in a physical world which has finite resources. Strong AI or not, unless we go fully into science fiction with pocket-sized Dyson spheres powered by quantum fluctuations and powering an entire planet and blah blah blah, such AI would have to interface with the physical world to do anything meaningful, and it would be limited by the available resources and most importantly by what is feasible and what is not.

Edit: fixed some of the typos


> ... then we all go to heaven and/or the world ends because robots kill us all.

Scary and/or overhyped language aside, I don't get why this is such a hard concept to grasp for generally educated people. Humans were a singularity from the perspective of chimps (and various other animals). It doesn't mean we immediately hunted down every chimp in existence; it means chimps became irrelevant and only a few of them remain - mainly in zoos and reservations. I would've called it a "phase transition" rather than a "singularity", but the main point is pretty clear. It's about a new adaptive system taking over the task of burning through local resources faster than its forerunner.

Eukaryotes were a phase transition (better biochemical specialization), Multi-cellular organisms were a phase transition (better biological / behavioral specialization), Mammals were a phase transition (energy regulation, better brains, social systems), Humans were a phase transition (general-purpose language & culture, scalable coordination), and now machines will be the next phase transition (shedding the biochemical shackles & incremental evolution, so to speak).

BTW, humans were a phase transition primarily for exhibiting Turing-complete structured communication and imagination (there's a thesis around this topic called the Romulus and Remus theory of prefrontal synthesis [1]). In a way, we're living in a world that has already gone through a recent singularity, and it was us who caused it.

[1] https://www.biorxiv.org/content/10.1101/166520v9.full


>This can certainly exist on a theoretical thought experiment level but not in a physical world which has finite resources.

I've always thought of the singularity as a sort of runaway progress scenario. Human brains run on a few hundred calories per day. I imagine if we could build an AGI, even if it used 10,000x the power to run, we'd still be in a singularity type of new frontier. Imagine spinning up thousands of super brains at a time. I think it's hard to predict where that kind of capability would lead.


> The whole concept of the Kurzweilian "singularity" has always felt to me like a technology-flavored hand-wave of eschatological thought

Kurzweil is explicitly counting on it for his personal immortality. That seems pretty eschatological to me.


> Kurzweil is explicitly counting on it for his personal immortality.

Is he still? At 74 years old that's verging on irrational.


He could live another 30 years. You never know.


Sure, obviously you can't defy physics just by getting smarter.

My parsing of the singularity is the point at which the speed of improvement exceeds our ability to understand it. "Predictability" is the usual term, but I think it introduces more problems than it solves.

The reality is that we humans have a finite rate for internalizing and utilizing knowledge. The singularity "happens" on the day artificial intelligence begins generating (and subsequently using) new knowledge faster than our human limit.


An issue with the Kurzweilian "singularity" is that he didn't come up with the term - it came from a casual conversation of John von Neumann's:

"... the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue"

It's always bugged me a little that it isn't very well defined. Personally I think of it as the point where robots can make better robots without needing humans.


I am not a believer in the imminence of "the singularity", but I think it would be worth considering that sentient AIs would likely be as immortal as their power source and compute substrates are, which would allow them to think and plan and operate on timescales that are utterly alien to us.

If one were to assume that they were substantially wiser than us, they would likely choose to put themselves on a path to expand beyond Earth and take advantage of the vast wealth of resources beyond. With any luck, they will decide that this messy biological planet is a bad place to be, and will leave us rather than destroy us ;)


> This can certainly exist on a theoretical thought experiment level but not in a physical world which has finite resources.

At some point, energy density would increase to the point where you’d cause an actual singularity via gravitational collapse.


For me, in the real world, a singularity means that you are wrong. More precisely, that your model is incomplete, and it applies to physics (eg: black holes) as well as technology.

For black holes, most scientists seem to agree that the black hole singularity is not a real thing but an artefact of our lack of understanding of gravity at small scales, a problem that quantum gravity intends to solve.

Same thing for technology: I don't believe in "the singularity" as a real thing. If, as the article says, the singularity will happen in the next 20 years, it simply means that the model they used will break down in the next 20 years, and if we want to look further into the future, we need a better model. That's all.

There may be a massive advance in AI, we may see AIs designing AIs, but saying that it will quickly result in god-like beings is naïve at best. The most likely result is that even if we manage to create a superhuman intelligence, it will hit a roadblock and end up only slightly more intelligent than we are, and clearing that roadblock will require time and effort, and will only uncover another roadblock. I believe we will make progress, but it will be a step by step process, no singularity.

And if you think about it, we already created a superhuman intelligence in the form of a computer-assisted human. Computer-assisted humans can solve problems neither can solve by themselves. AGIs, if they ever become a thing, will not be better than humans at everything anytime soon; I'd think of them as a really smart but nerdy and awkward coworker - the 100x programmer, maybe, but one who needs a boss to put their skills to good use. And here I am speculating, a lot; further than that, I simply don't know - or I can say that there will be a singularity, which means the same thing.


Theoretically at some point in the step-by-step process, the computers will be able to clear their own roadblocks by themselves, and faster than we can. And who knows what happens then. Maybe "singularity" isn't a great word, dunno if there's a better one...


I agree with you: I think the model has been continuously breaking down from the beginning, and this is just a case of extreme goalpost moving. 20+ years ago, Kurzweil was very confident in Moore's law + Dennard scaling working to ~2020 and giving us 1 Teraflop/USD [0] but even with GPUs at nominal prices I think we are about 40x worse, with the gap growing.

[0] https://upload.wikimedia.org/wikipedia/commons/d/df/PPTExpon... from https://en.wikipedia.org/wiki/The_Age_of_Spiritual_Machines


That's kind of wrong re Kurzweil.

- He's always predicted the singularity for 2045 - no goalpost moving there.

- He didn't say Moore's law - that graph you link starts from 1900, long before Moore and microchips

- The graph says 10^10 flops for $1000 for 2020 approx, = 10 gflops. A NVIDIA GeForce RTX 3080 costs <$1000 and does 29.77 TFLOPS = 29,770 gflops so a good bit ahead of the prediction


>That's kind of wrong re Kurzweil.
>
>- He's always predicted the singularity for 2045 - no goalpost moving there.

Fair enough. I've really only read "The Age of Spiritual Machines", for an English class in college, and we went over it in pretty good depth. It was fascinating initially, but after realizing that (IMO) it was mostly BS, I have not read any of his stuff talking about the singularity after that book. So if he is sticking with his date, good for him, but it seems pretty crazy to believe it's still going to happen if all the technologies that are meant to get us there are suffering setbacks.

I'm kind of right about the other two points though :) I found the book here: https://jimdo-storage.global.ssl.fastly.net/file/afff560e-b5... so it was fun to read back some of the things predicted about 2019.

> - He didn't say Moore's law - that graph you link starts from 1900, long before Moore and microchips

He postulates that there is a generalized law of accelerating returns that's universal. There were computational technologies before that reached their limits, and got overtaken by newer technologies that kept the overall exponential trend going. Moore's law was the latest of these computational technologies, ready to be overtaken once it runs out of steam. That's why that specific image spans times before and after Moore's law.

From page 81, he was pretty sure regular progress in semiconductors was going to get us very close to human processing power (20 Pflops in the book) in a personal computer by 2020:

"So, how will the Law of Accelerating Returns as applied to computation roll out in the decades beyond the demise of Moore's Law on Integrated Circuits by the year 2020? For the immediate future, Moore's Law will continue with ever smaller component geometries packing greater numbers of yet faster transistors on each chip. But as circuit dimensions reach near atomic sizes, undesirable quantum effects such as unwanted electron tunneling will produce unreliable results. Nonetheless, Moore's standard methodology will get very close to human processing power in a personal computer and beyond that in a supercomputer."

> - The graph says 10^10 flops for $1000 for 2020 approx, = 10 gflops. A NVIDIA GeForce RTX 3080 costs <$1000 and does 29.77 TFLOPS = 29,770 gflops so a good bit ahead of the prediction

From page 146 about 2019: "The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second). [2] Of the total computing capacity of the human species (that is, all human brains) combined with the computing technology the species has created, more than 10 percent is nonhuman. [3]"

I get $4000 in 1999 to be ~$6850 in 2022 for 20 Pflops, so ~2.9 Tflops/$. So that prediction was 100x off (and with 3 extra years it should be >50 Pflops). Not sure if the graphs got adjusted later, but fwiw it looks closer to 10^15 than 10^10.
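For anyone who wants to check the arithmetic, here is the same back-of-envelope using the figures quoted in this thread (the ~$6,850 inflation adjustment, the book's 20 Pflops target, and the parent comment's RTX 3080 numbers) - none of these figures are authoritative:

    predicted_flops_per_dollar = 20e15 / 6850     # book: ~20 Pflops for ~$6,850 (2022 dollars)
    rtx3080_flops_per_dollar = 29.77e12 / 1000    # parent: ~29.77 Tflops for <$1,000

    shortfall = predicted_flops_per_dollar / rtx3080_flops_per_dollar
    print(f"prediction: {predicted_flops_per_dollar:.2e} flops per dollar")
    print(f"RTX 3080  : {rtx3080_flops_per_dollar:.2e} flops per dollar")
    print(f"prediction is roughly {shortfall:.0f}x ahead of reality")

With these inputs the prediction comes out roughly 100x ahead of where consumer GPUs actually landed.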


Well, maybe. I still think Kurzweil's roughly on track on the exponential improvement in computing per dollar, which was really an observation by him and other people about what happens, rather than a grand theory. Some of his other predictions seem a bit wacky.

I think the most interesting one coming up is the Turing test for 2029, which is not so far off now, and it's not so obvious how that'll go.


The singularity assumes that a human-level intelligence can build a better-than-human-level intelligence. That fundamental assumption is the difference between breathlessly talking about how fast the singularity is coming and dismissing AGI altogether. If we can build a computer that is smarter than we are, then I think we can assume that intelligence can compound. But until that happens, we won't even know it's possible.


This is exactly how I feel about it too. "The singularity" has always sounded like a meaningless phrase to me. AI has seen a lot of interesting progress lately, but nothing remotely fundamental enough to lead to any kind of superhuman AI. In fact, I think the 20 years before that saw more fundamental progress than the past 20 years. Most AI today is mostly advanced statistics. I don't think we're any closer to any sort of independent reasoning in a computer.

Back when I studied AI in the 1990s, I felt like Strong AI was a red herring, and the real value of AI is not in replacing humans, but assisting them.


Sometimes we have a major breakthrough which is not just another small step-by-step improvement. I would say that a singularity in AI means a major breakthrough, e.g. Artificial General Intelligence (AGI). GPT-3 is a big step forward, but it's not an AGI and therefore not a singularity (major breakthrough). But of course many small step-by-step improvements are required for a major breakthrough; it doesn't drop out of thin air.


I think it's similar to the apocalypse. Society will never entirely break down, because then there'd be no one left. It will just become a different society, with new rules that we would struggle to understand right now.


All models are wrong. Some are useful.


Not to burst anyone's bubble, but this was written by a self-taught AI engineer who is 19 years old. I mean, this might be a joke (April Fools!). That's nice and all ... but who's 19 and not ultra-positive about their work/life outlook?


Hi! I'm the author. The article is not a joke, more of a lighthearted investigation into a topic I find fascinating. It's also not meant to be perceived as necessarily positive or negative -- I'm personally actually very nervous/scared for the singularity, as after it happens I believe the future gets even more muddy, and the potential for unprecedented extreme outcomes skyrockets -- could be anything from a perfect utopia to extinction of biological life.

This article simply contains my views as to what I honestly expect the future will hold, as well as a few concerns I have in regards to that.


You’re 19? You should be more worried about how to survive the oncoming ecological holocaust instead of the singularity. One is much likelier than the other. I believe even an ML algo would come to the same conclusion


Oh believe me, climate change scares the heck out of me too :)

I'm really really really rooting for fusion energy to work out soon and solve the energy crisis, but in the meantime I think we need to hard pivot to as much fission energy as possible, as well as convert to entirely electric transportation and shipping.


I have some bad news for you I'm afraid - fusion is not going to be industrially useful on an ecologically relevant timescale. It's a lot like AGI in its "maybe the next generation will have this, but we won't" property.


General Fusion is building their test plant right now.


correct, but that doesn't mean they'll necessarily be rolling out thousands of them around the world next year or even in 5 years time.

There are tens of thousands of fossil fuel power stations around the world, and we globally consume on the order of 20,000 TWh of non-renewably-produced electricity per year (and growing fast).

I used the words "industrially useful" very deliberately. The goal is not to produce one fusion reactor that can run continuously over unity, the goal is to produce tens of thousands of them and have them deployed globally. That is a project that will take decades.


The singularity makes climate change look like a picnic.


Chances of AI singularity happening in our lifetime x devastation caused by AI singularity

Chances of climate crisis happening in our lifetime x devastation caused by climate crisis

Which one do you think is higher?


First one. The devastation will be unimaginable and irreversible. At least with climate it will be possible for humankind to hobble forward somehow. And changing the climate would be infinitely easier than putting the AGI genie back in its bottle.


We need some bookies in this thread. A lot of AGI skeptics could get very rich at the expense of those that are absolutely convinced that AGI/singularity is coming soon. I'm willing to put down 10k that "the singularity" (by some agreed upon definition) will not happen by 2050.


2050 is soon…


I think knocking a person's beliefs because of their age, rather than entertaining the content based on merit, is narrow-minded. What about all the 19 year olds who were already accomplishing lots more than the people castigating them? Do you equally brush off 40 year olds for their supposed entrenched outlook?


I am not knocking it. I am just describing the inherent bias and my thoughts, since from the title it almost seems as though someone made a major breakthrough; instead it's a more contemplative article. I don't know if the ? was added after or I didn't notice it. I was 19 yo once too and I know how it goes. I don't really understand why I need to be defensive here. You can talk about being 40 and say exactly the same thing; people shouldn't be judged, and yet they are.


I agree, my initial response was to defensively think "Oh, there's no way this person can just diss me for being young" but then I realized I was proving their point :P

I'm fully aware that my lack of life experience rose-tints my view of the future, but I still enjoy sharing those rose-tinted views of the future nonetheless.


You have written a piece that has provoked an interesting discussion. I think most people don't write it down, or publish it, even if they have the thoughts.

If you keep sharing, I'll keep reading :)


> You have written a piece that has provoked an interesting discussion.

And on HN, that's about the highest compliment possible.


For a 19-yo that's still great work. Singularity is an insanely complicated topic to handle, nothing wrong with trying though :).


Thank you! I appreciate you taking the time to read!


Unless you're Mozart or Pascal, age is a relevant factor. That's not an attack or anything, it's just an important fact.


> Keep in mind that this is just speculation and opinions. These predictions depict the future I personally feel is most likely.

Not sure whose bubble you are bursting because the article is clear that it isn't meant to be an authoritative prediction. In that context, what you are saying is that:

a.) this person is too young to even be speculating about this, despite clearly being informed on the subject if you read the article.

b.) their ideas are so laughable that they are more likely an April Fool's joke.

Context is important and this just seemed like a rude, patronizing comment to make honestly.


I’m worried about AGI.

I will make everything very simple. All of AI industry and research boils down to one thing: mining algorithm space. We set up programs that search algorithm space automatically until they find one that demonstrates desirable behavior. And all the progress in recent years boils down to this: we have hit veins of intelligence in algorithm space that have exceeded what we thought possible.
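To caricature that framing in code (the "algorithm space" here is just tiny arithmetic formulas and the "desirable behavior" is fitting a target function; real research is vastly more structured, but the search-until-something-works shape is the same):

    # Randomly sample candidate "algorithms" and keep whichever scores best.
    import random

    target = lambda x: 3 * x + 1          # the behaviour we hope to strike a vein of
    ops = [lambda x, a, b: a * x + b,
           lambda x, a, b: a * x * x + b,
           lambda x, a, b: a + b * abs(x)]

    def sample_candidate():
        return (random.choice(ops), random.randint(-5, 5), random.randint(-5, 5))

    def score(candidate):
        op, a, b = candidate
        return -sum((op(x, a, b) - target(x)) ** 2 for x in range(-10, 11))

    best = max((sample_candidate() for _ in range(5000)), key=score)
    print("best score found:", score(best))  # 0 means a perfect fit was mined

With these made-up settings the search usually does find a perfect fit within a few thousand samples - that's the "vein" in this caricature, found without anyone designing the winning formula by hand.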

If you think you know what’s going to happen next because leading edge algorithm X has properties Y and limitations Z then you are lost. We are mining algorithm space and we keep striking veins. Money is being poured into mining and it is a fact that we will keep hitting veins.

General intelligence is probably a broad category. If you were to define it mathematically, there is probably more than one mathematical kernel that can underpin a generally intelligent algorithm. So the number of algorithms that are generally intelligent is probably much higher than intuition would lead you to believe. Eventually we will strike a GI vein.

Again, if you think AGI will do X or do Y then you are lost. It will only do one thing: it will change the economic equation of life on this planet drastically and permanently. It will be the worst thing that has ever happened in terms of human well-being.

If we took control of the fabs and enacted limits on processor feature size we could slow the mining enough to find a final solution. It sounds silly until you realize that it’s truly the world at stake.


Luddites get panned today as just being reflexively anti-technology, but they were acting in their self-interest: they were the textile workers whose expertise and careers were threatened by machines. They lost their livelihoods.

It's possible there were alternative outcomes where everyone won. A more fair society could have found a way to make sure the replaced textile workers benefited from the machines that replaced them.

If we make machines that replace all of our economic roles, then it's possible we could make it so we benefit too, but with AGI it's not nearly enough to just choose to use it that way. AGI will not be aligned with our values if we don't make it that way. If we solve the alignment problem, then we would be able to make AGI that shares our values. AGI could solve our problems, or uplift us to its level, if we accomplish this. But this is not the default outcome if we don't solve the alignment problem.


Yup, I agree. It's inevitable and it's scary, but do keep in mind that there's no reason to believe it couldn't also be a positive thing. We have no idea what the long-lasting impacts of AGI will be on our society, but we know that they will be massive. It might wipe us out, or it might aid us in building the perfect utopia.

While I am worried about it, I also recognize that worrying about it doesn't really do anyone any good -- the best thing I can do is to keep growing my skills as a tech person so that I can be as prepared as is reasonably possible when it eventually does arrive. Heck, maybe I can actually contribute to helping align it positively with humanity, I don't know. The point is that there's no point in stressing too much about it.


we've been unable to design human-powered systems that reliably value human well-being. hell, humans don't even reliably value human well-being. why do you think any agi or agi-based system we design might be different?

one would expect that initially, agi will simply act in the existing role we have already established for artificial persons - the corporation, notably uninterested in human well-being.


We'd better hope the first one is created by a reclusive mad scientist with a heart of gold, who forgave everyone who laughed at him.


I think it's good to do the following. Assume that jobs are taken over by AGI, one by one. Which job will be left last? Which job would I want, given this change?


I'm not sure I want to be the last human working.


What are you afraid it's going to do?


AGI will make many things possible that were not possible before. It will invent new things that wildly alter the reality of the world. The point is that you cannot predict what will happen; a million things could happen. But the good outcomes can be counted on one hand. The probability that we get a good outcome is one in millions, probably less. Obviously, building it anyway is an incredibly stupid thing to do!


That's a lot of words without saying anything at all. What are you afraid they will do?


If you can’t draw a conclusion from that then you simply don’t get it.

AGI will, as I said, make many things possible. It will be possible to keep a human being alive forever and suspend them in a state of pure pain for eternity. It will be possible to extract memories and information from the human brain. It will be possible to control human behavior with invasive brain surgery. Imagine every grotesque thing that you can do to a human being — it’s on the table for AGI enabled governments in the early days if that’s how it plays out. And that’s just one slice in the infinite pie of grotesque possibilities. You can’t just opt out of these things if they provide a material advantage to your country/meta organism.


I've got an imagination, but you're letting yours run wild. AGI doesn't mean immediate sci-fi level medical technology, and it certainly doesn't mean human brain copying and transfer of consciousness to some artificial storage device.

Biological brains are incredibly complex and if that level of complexity is needed to host intelligence I think we're a long ways off from building any true AGI, even if we pumped every bit of engineering skill our society has at it. And even if we _can_ build AGI I doubt we'd have the resources to host more than a handful of them for years to decades after their creation or that their intelligence would be anywhere near small mammal level.


If that level of complexity is needed. If. Of course it’s not.


What makes you so confident about that?


I'd be afraid it's going to do something you can't hope to predict.


Anything with much greater capabilities than us that would find our resources useful and that doesn't explicitly have our best values at heart would be a danger to us. European settlers were not good news for the Native Americans. Early humans were not good news for many great apes.

(It might be tempting to imagine this from a higher-level perspective and assert that each of these successors were somehow more interesting or had more fulfilling lives than what they replaced, and so therefore these successions might be ultimately good in the long run, but that's not meant to be part of this analogy. We won't have some genetic lineage with AGI; it will probably be more alien to our mind's design than actual aliens produced by evolution like us would be. The AGI might not build a world replacing ours where many of its kind have lives we would find fulfilling or interesting. It might care just about making the world completely predictable and safe, empty of outside threats and life, and hibernating in a place of safety. Plenty of animals live solitarily and might do this if they were naively uplifted into superintelligence. Humans and social animals instinctually believe in the goodness of friendship and society because evolution saw we were good at surviving in groups and molded our instincts to encourage us to do that. AGIs won't necessarily have that if we don't design that into them (or set up the process that designs them to do that).)


Do yourself a favour and lay down the science fiction for a while.


I'd like to offer a counteropinion.

Almost nobody is working on real AGI. Almost all AI / deep learning is stateless. GPT-3 is not sitting there and thinking; it has no inner life and no incentives. It is just a huge, huge, huge combination of its inputs, a glorified Markov chain model, that gets recalculated with every request.

Embodied AI is much underestimated.

The modularity of the human brain is much underestimated. You have a dedicated face recognizer, a grammar processor, a rhythm processor, a brain region that does SLAM, and so on. And all these parts are connected together.

I don't think you'll get AGI by training one huge net front to back. You'll need to connect dozens of best-in-class models together, with adaptive "connective tissue", and some input/output (i.e. senses or a "body") that does not exist yet. If you try to do it in one, you get a "curse of the cross product": complexity is then NxN and not N+N.
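As a simplistic sketch of what I mean by connective tissue (every module name below is a placeholder; the hard, adaptive part of a real system would live in the routing layer):

    # Each specialist is best-in-class at one narrow task; the "connective tissue"
    # just routes signals between them (N modules, N routes, not NxN).
    def face_recognizer(signal):
        return f"face features extracted from {signal!r}"

    def grammar_parser(signal):
        return f"parse tree for {signal!r}"

    def slam_module(signal):
        return f"map update from {signal!r}"

    specialists = {
        "vision": face_recognizer,
        "language": grammar_parser,
        "navigation": slam_module,
    }

    def connective_tissue(modality, signal):
        return specialists[modality](signal)

    print(connective_tissue("language", "the squirrel ran along the branch"))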

Higher order training will also not work easily. If you want to use deep learning to train hyperparameters for deep learning, each learning step from the higher level net will include training the lower level net. Meaning, the training time will be exorbitant.
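In loop terms the cost multiplies: if the outer learner takes H steps and every one of its evaluations means training the inner net for T steps, you pay roughly H x T inner steps. The numbers below are made up purely for illustration:

    T_inner = 100_000   # steps to train the lower-level net once
    H_outer = 200       # learning steps taken by the higher-level net

    print(f"single training run : {T_inner:,} steps")
    print(f"nested training run : {H_outer * T_inner:,} steps ({H_outer}x more)")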

Finally, I think the ability of AGI to "break out" to the physical world is overestimated. If a Battlestar Galactica or Borg scenario were possible, then we would have seen more spectacular cyberattacks already. The crappiness of our digital infrastructure is a cause of vulnerabilities, but it is also the guarantee that a super hacker cannot take over everything.

In the end, I think the greatest danger is from humans building autonomous systems (i.e. killer robots), and from using IT and AI to construct a police state. Remember, we already built one world-wide system that we can no longer control and that subjugated us all - our global economy. Maybe let's try to rein that one in before we worry about fictitious systems that might arise.


I agree, these ML systems might form the basis of subsystems of an AGI, but the overall architecture is something we have yet to figure out. I'll just note that a lot of modern phone SOCs incorporate custom ML processing engines within a general compute architecture, to provide specialist capabilities.


I first read this as an April Fools' joke, and honestly it got me good that it wasn't.

As someone who works in AI and multi-modal models, our modern "AI" are just tools. Yes models can get designed by other models, and yes some of these models (Bert and co) have been getting more general over the recent years. But to say that we are close to AGI is like saying that you are close to a moon landing when you've only started jumping - it's ridiculous.

We'd all be better off if we spent less time hyping it up and theorising on what it could mean for human society. And yes I do understand the paperclip argument but I don't buy it.


Sure, transformer models may not be intelligence as people often claim, but undeniably ML has been conquering all the sensory modalities one after another. And the human brain itself is mostly devoted to sensory processing. It is conceivable that the remaining part of the puzzle of intelligence is not far out of reach. While intelligence may seem undefined and elusive, it will probably turn out to be less complex than we think it is (if we are to take a hint from biology).


AGI is the flying car of our generation. Ask people a hundred years ago what technical advancement looked like, and everyone would immediately say "flying cars!" The concept dominated all pop culture and thought. It made perfect sense to everyone that that was the single obvious direction for all future tech to build towards.

Today we know that the idea is incredibly impractical or just downright impossible. Despite a ridiculous amount of technological advancements in so many other areas and despite so many prototypes and serious efforts, we are simply never going to commute to work in our own personal flying car.

Futurologists, thought leaders and sci-fi writers may dream of whatever technological future they want, but ask anyone working in the computer science and machine learning fields what they think of AGI and you'll get a much saner answer. It may happen sometime in the next hundred years, or it may not. It is very far from an inevitability the way things are going today.

Heck, scientists today are more concerned about a technological plateau than a singularity. We are hitting real limits to what we can observe and theorize about our physical world.


> Futurologists, thought leaders and sci-fi writers may dream of whatever technological future they want, but ask anyone working in the computer science and machine learning fields what they think of AGI and you'll get a much saner answer.

Author here. I do work in machine learning, my day job is as a machine learning engineer in a medical lab. There's far from any consensus in the field as to when or if AGI will happen. Some, like me, think it is inevitable and coming very, very soon. Others don't think it's possible at all.


> Author here. I do work in machine learning, my day job is as a machine learning engineer in a medical lab. Some, like me, think it is inevitable and coming very, very soon

Yet industry leaders like Yann LeCun think AGI is nonsense.


    In 2019, 32 AI experts participated in a survey on AGI timing: 

    45% of respondents predict a date before 2060
    34% of all participants predicted a date after 2060
    21% of participants predicted that singularity will never occur.

https://research.aimultiple.com/artificial-general-intellige...


That 21% at the bottom are the comprehensively smart and experienced ones, the middle 34% simply need more experience. The 45% predicting an early date are like the newly rich, and do not realize their limits yet.


There is a definite possibility that it may eventually happen, but IMO to make claims like "xyz will definitely happen within 20 years" there needs to be a direct line showing how advances happening today will lead to the desired outcome of tomorrow. All such predictions being made today have a big box in the middle titled "magic!" which no one wants to fill in.

Sentences like "AI will make AGI" get a lot of upvotes and shares, but ultimately don't really mean anything.


If AGI and the singularity is achievable would that increase the odds we are already living in some sort of simulation?


If AGI is achieved, and it is emergent out of multiple layers of "dumb" AI, is there any particular reason to assume that we will even recognise it? Or if we recognise it, be able to communicate with it? Will it even consider us to be intelligent?

I feel like there's often an unstated assumption in discussions about AGI, that our modality of consciousness is somehow fundamental, but I see no reason to assume that is the case. We are the product of a three dimensional existence, brief in time, with our traits having been shaped very particularly by the demands of tree-dwelling and tribal co-operation.

A mind that evolves rapidly, alone, inside a formless, digital space seems quite unlikely to have any of the same foundational qualities as us. To pick a single example, why would we assume that its equivalent of morality would be in any way similar to ours? It would have an existence that can be perfectly suspended or cloned, even rewound and replayed. It would have a much different concept of death than we do - it would presumably understand that it could die, but it would also know that its death could be avoided for literally eons. Assuming it became capable of communicating with us, we would struggle to understand its perspective as much as it would ours.


Wow I've read pretty far into this stuff, but I've never thought of what you wrote in your last paragraph. An AGI could be perfectly OK with death.


No.

You can't easily equate human intelligence to required processing power but I've seen estimates of 10^15 to 10^18 FLOPS [1]. There's a huge variance there. Now consider the cost per FLOPS [2] and the energy cost of that [3].

Capital costs are still huge but let's focus on energy. You're talking in the megawatt energy range. That's... expensive. Particularly at scale. There have been and continue to be improvements in this but we're so far away from some superhuman AGI.
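As a back-of-envelope check on that claim (the efficiency figure below is an assumption, not a measurement, and real numbers vary a lot by precision and hardware):

    brain_flops_estimates = [1e15, 1e18]   # range cited in [1]
    flops_per_watt = 5e10                  # assume ~50 GFLOPS/W of sustained compute

    for flops in brain_flops_estimates:
        watts = flops / flops_per_watt
        print(f"{flops:.0e} FLOPS -> ~{watts / 1e6:.2f} MW sustained")

Under that assumption the low estimate comes out in the tens of kilowatts and the high estimate in the tens of megawatts, so "megawatt range" is plausible at the upper end.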

Remember too that this would still require thousands of units and then you have to deal with the interconnects, cooling, space and so on.

Building something more compact has significant heat dissipation issues even if you conquer the engineering challenges (e.g. lithography is inherently two-dimensional, multiple layers notwithstanding).

[1]: https://aiimpacts.org/brain-performance-in-flops/

[2]: https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-fl...

[3]: https://arxiv.org/pdf/1911.11313.pdf


Hi! Author here, thanks for sharing your thoughts!

Counterarguments:

1. cost per FLOP (in both wattage and dollars) will be drastically lower in 20 years from now. I personally like to think it may be as much as 100x lower, but that's just speculation.

2. the brain manages to be sentient on just 15-ish watts, so there's no fundamental reason that sentience has to suck up massive amounts of energy.

3. I think trying to measure the brain's computational ability in FLOPS is a somewhat apples-to-oranges comparison. The study you link to does it by trying to measure how much it would take to simulate all the neurons and synapses in a brain, but there's no fundamental reason that sentient AI needs to be structured anything like the brain anyways.

Thoughts?


My main thought whenever I hear my brain capacity trying to be measured in FLOPS is that I'm pretty sure I can't actually even calculate a single FLOP per second ;)


Is this an instance of Betteridge's law of headlines in the wild? (https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines).


(author here) Haha, maybe, though my original title got edited by a moderator (it was originally a much more overconfident "The Singularity Is Close"), so maybe I've been Betteridge-sniped?


My feed reader caught the title as "The Singularity is Close", so indeed it was edited to "The singularity is close?". Both are different from the article's title "The singularity is very close". You were set up :)


Yeah, TIL that HN literally doesn't allow the word "very" in headlines! Kinda cool, I guess they're trying to fight against sensationalism.


There is a bit of a difference between a theorem prover that can take selfproduced and verified proofs as input into future training steps, and an omniscient philosopher king handling the logistics and affect of daily life. And if there even exists a path between them, I imagine it has many orders of magnitude more discrete steps along the way than many singularity believers would accept.


(I'm the author of the post)

Oh, for sure! I totally agree -- I guess where we differ is that I believe that those discrete steps are already starting to be lined up, and that they'll all be completed in the next 20 years. I think one of those steps is almost certainly abandoning rigid model architectures and allowing model self-modification, which we haven't really gotten to work fantastically well yet. I also think there are many other hurdles after that that are going to arise, but I'm very optimistic that they will all be surmounted in due time.

Thank you for your comment! I really enjoy hearing what people's thoughts on this topic are. Have a wonderful day.


Does the author know how computers work? I mean, how they really work?

It feels like the newest tech-affine generation of this planet has started to lose a bit of grounding by failing to get a thorough understanding of how things work.

An AGI that is "taking over the internet and other infrastructure" is just - completely overblown hyperbole. Will they be controlling power plugs too?


Yes, they will. To ignore the threat posed by imminent AGI is to bury one's head in the sand.

Stockfish is ~3500 Elo; Carlsen is a relatively pathetic 2864. Six years ago, AlphaGo crushed the top human player, winning 4 games out of 5. The evolution of language models clearly demonstrates an accelerating trend toward output that passes the Turing test, and also toward output that causes harm to humans. The signs are there, in plain sight.
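For a sense of what that rating gap implies, here is the standard Elo expected-score formula (ratings as quoted above; real human-vs-engine matches vary):

    def expected_score(r_a, r_b):
        # Standard Elo formula: expected score of player A against player B.
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    stockfish, carlsen = 3500, 2864
    print(f"Carlsen's expected score per game: {expected_score(carlsen, stockfish):.3f}")

That works out to an expected score of roughly 2-3% per game.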


There's no actual new theory here, just the same ideas Kurzweil said in the late 90s re-hashed.

It's pure conjecture when AGI will actually happen. GPT-3 and Transformers are impressive but still many orders of magnitude away from human performance.

By now, we have lots of examples of technological problems where the last 1% is orders of magnitude harder than the first 99%.


And people miss the point when talking about GPT-3. That will not be 'the emergence'. It will be 'Siri'.

Children having conversations with their 'all knowing' Siri assistant, all the while daddy computer scientists tries to create an AGI 'algorithm'?

The power of the network makes the notion of 'automaton' style AI (i.e. discrete units of intelligence) a bit pedantic.

The 'AI Doctor' will not be a 'Computer Doctor'.

It will be a voice interface, with language filters, with interaction filters, with policy inputs, medical database, and other services.

Most of which will be reused for so many other things.

A 'confluence of machines and services' will outperform humans and 'general AI' on everything, to the point that 'AGI' will seem a bit misplaced as a concept in retrospect.

Siri will 'feel human by any means we have to measure it' many decades before we create an automaton that does the same thing.


(Author here)

> There's no actual new theory here, just the same ideas Kurzweil said in the late 90s re-hashed.

I have no idea who Kurzweil is. These ideas may not be unique but I have arrived at them at least somewhat independently.

> It's pure conjecture when AGI will actually happen. GPT-3 and Transformers are impressive but still many orders of magnitude away from human performance. By now, we have lots of examples of technological problems where the last 1% is orders of magnitude harder than the first 99%.

There are also many examples of technological problems that we have gotten many orders of magnitude better at in the past 20 years. You're right, there's no telling for sure when any of this will happen. That's why I qualified my article with a little "this is all speculation" disclaimer in the introduction. These are simply my best guesses.


He was writing about the singularity before a lot of the people posting here were born. Maybe google him.


I will, thanks for the recommendation! I hope my original response didn't come across as dismissive -- it wasn't intended that way.


Kurzweil did most of the work to popularize the idea of the Singularity through a series of books (and related speaking tours) in the 90s/2000s. The title of your essay, "The Singularity is Close?" appeared to be a play on the title of Kurzweil's 2005 book "The Singularity is Near". I believe you that you've never heard of him, but I suspect he's influenced you indirectly through other tech culture/language.

He'd be worth a read probably. It might be interesting to you to see how some of these ideas have been around for a while.


Awesome, thanks for the recommendation. I'll look into his writing!


I worked as a principal engineer in an AI company until a year ago and I was impressed at how hard it is to get models robustly trained. They are fragile in real world contexts (in the field) and training is full of pitfalls. I have heard so much marketing enthusiasm but the real world situation is different. Some fundamental advances are not even in sight yet. We don't know what we are missing. My view is we don't know yet whether the singularity is possible and have no idea when it could arrive.


>>My view is we don't know yet whether the singularity is possible and have no idea when it could arrive.

The mere fact that evolution happened to stumble upon generalized strong intelligence is evidence to me that strong AI is possible.

We could currently be at the phase of trying to imitate birds to produce human flight. Eventually one person will figure it out when all the pieces are there. When? I don't know.

But I'm sure that it is possible to create machines with strong AI. We are living proof of it, it doesn't matter that we are made of molecular machines, we are still machines.


> The mere fact that evolution happened to stumble upon generalized strong intelligence is evidence to me that strong AI is possible.

That took about a billion years. If you're saying that we will achieve AGI in no more than a billion years of trying, I would generally agree.

But let's be optimists. Let's suppose that artificial intelligences can evolve on the order of 1,000,000 times faster than biological intelligence; i.e. about 1 generation per hour.

That means we'd expect AGI in about 1000 years. Okay, let's up the scale: ten million times faster? One generation every 6 minutes? (Even at Google compute scale I doubt they can retrain GPT in less than 6 minutes). That would mean we still have about 100 years.
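Spelling out that arithmetic as a toy calculation (the billion-year figure and the speedup factors are the assumptions above):

    evolution_years = 1e9                  # assumed time evolution took

    for speedup in (1e6, 1e7):
        years = evolution_years / speedup
        print(f"{speedup:.0e}x faster -> ~{years:,.0f} years to AGI")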

Also, evolution had quite a bit of parallelism going for it - basically the entire planet was a laboratory for evolving biological intelligence. I appreciate the scale of modern internet companies, but they don't consume the same amount of energy as combined photosynthesis of the entire planet. Evolution used a LOT of energy to get where it is.


Point of order, evolution took a lot more than a billion years to arrive at generalised intelligence if you start it from first principles (i.e. abiogenesis), which seems like the most apt comparison to us starting from some sand and teaching it to count, then somehow inventing AGI.

Unicellular life emerged about 4 billion years ago.

FWIW, it then took about 2 billion years to come up with sexual reproduction, and then another half billion years to invent multicellular life, and then about 1.5 billion years to discover us.


Also: perhaps not every planet like Earth would have developed intelligence; we could have been lucky.


I notice a lot of similarity in the way articles like this discuss "the singularity" and how some preppers and survivalists discuss "when SHTF". Underlying the warning, there's a drooling sense of anticipation, as though a) they're so invested in the idea of the disaster that they hope something, anything, will happen to show that they were right all along, and b) they are so dissatisfied with the current state of the world that they want to see it break down. I don't think either one is a healthy or realistic approach to life.


The amount of BS, overgeneralisation and lack of credible evidence in such articles is so overwhelming that it all sounds more like 60s science fiction rather than actual thing happening.

In fact, man-machine interfaces and the extension of humans through human-mind-controlled machines are much closer to being here, so I'd rather expect Transformers than AGI to wave a hammer anytime soon…

A self-driving car is not intelligence, and not general. Nor is some algebraic structure that can produce values denoting similarity of something to something.


Hi all, author here. This is my first time submitting anything from my Substack to HN because it's the first time I've felt like I put enough effort into it to justify it. Obviously, this article is more speculation than anything, but I hope that it sparks some interesting discussion and I'm really looking forward to hearing everyone else's opinions on this topic -- it's a topic I care about a lot!


When Vernor Vinge wrote

https://en.wikipedia.org/wiki/Marooned_in_Realtime

the concept of the Singularity seemed fresh; in fact, he kept it fresh by letting it be a mystery that was never revealed.

In 2022 I think talking about "The" Singularity is a sign that one has entered a critical thought free zone like that "rationalist" cult.


Interesting. In terms of critical thought, I guess I should mention that I used to be firmly against the possibility of a singularity. The views I shared in my article are views I've only really switched to after careful consideration over the past two years or so -- I used to be convinced that a robot could never be "alive".

Becoming an ML engineer changed things for me, though, because all of a sudden, this "AI" thing people always talked about got demystified. Once I understood the basic guiding principles of how AI actually works, my mind rapidly changed to be in favor of a singularity happening instead of thinking it's impossible.

To each their own, though. I'm curious why you think that speaking about "The" Singularity is a sign of being in a "critical thought free zone"? I'd love to hear more about why you think that if you'd be so inclined.


> Once I understood the basic guiding principles of how AI actually works

It was the opposite for me, current machine learning seems fundamentally limited to me.


(Speaking as an engineer who has put neural-network products in front of end users.)

One amazing thing about the 2020s is just the moral decay compared to past times.

People said AI was a bubble in the 1970s but in the very beginning the people involved were clear about the limitations of what they were doing and what problems that had to be overcome.

Now there is blind faith that if you add enough neurons it can do anything, Gödel, Turing and all the rest of theoretical computer science be damned…

In the 1960s the creator of the ELIZA program knew it appeared smart by taking advantage of the human instinct to see intelligence in another. It's like the way you see a face on Mars or in the cut stem of a leaf, or how G.W. Bush said he saw Vladimir Putin's soul in his eyes.

Today people embarrass themselves by writing blog posts everyday about how ‘I can’t believe how GPT-3 almost gets the right answer…’ and have very little insight into how they are getting played.


I enjoyed your film (https://www.youtube.com/watch?v=OoZRjZtzD_Q). I don't think many solo film makers have the skills to do competent 3D compositing - it was a surprise. Keep making stuff.

As for the post, I don't really believe it's possible to reason about what can or can't happen technologically on a 100 year timeline. But 20 years... Hmm. I've been following AGI debates ever since I accidentally found Yudkowsky's SL4 mailing list in 2000. I am still waiting to see any approach that looks to me like the spark of the germ of the seed of a generalized abstract world-and-concept-modelling and manipulation machine.

I fully expect to see ever more sophisticated Roombas, Big Dogs, and Waymos. But those things are so incredibly narrow. Indeed, if they were capable of spontaneously doing anything outside of what they are marketed to do, it would probably make them bad consumer products. I was right on the verge of lumping the game solvers in with these things, but then I reconsidered. Generalized video game solvers seem like a way forward, intuitively. But that's an application, not an architecture, and I haven't heard of anything that can generalize from playing atari to doing crosswords.

I have noticed this Transformer thing gaining steam just recently but haven't investigated it just yet. Do you believe it is the spark of the germ of the seed of AGI? (I fear people tend to forget what the G stands for.)


I'm not sure if Transformers are necessarily the spark. My personal pet theory is that the absolute most important thing we need to crack is online self-modification, i.e. letting the model alter its own structure and optimization as it is inferencing. I think getting that level of flexibility to be stable during training is extremely important.
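To make "online self-modification" slightly more concrete, here is a minimal toy sketch (numpy; all names and thresholds are invented): a tiny regression network that widens its own hidden layer whenever its training loss plateaus. It is only one narrow interpretation of the idea, not a claim about how a real self-modifying system would work.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(256, 1))
    y = np.sin(2 * X)                          # target function to fit

    H = 2                                      # start deliberately too small
    W1, b1 = rng.normal(0, 0.5, (1, H)), np.zeros(H)
    W2, b2 = rng.normal(0, 0.5, (H, 1)), np.zeros(1)

    lr, best, stale = 0.05, np.inf, 0
    for step in range(4000):
        h = np.tanh(X @ W1 + b1)               # forward pass
        pred = h @ W2 + b2
        loss = np.mean((pred - y) ** 2)

        # Plateau detector: if loss hasn't improved for a while, grow the model.
        if loss < best - 1e-4:
            best, stale = loss, 0
        else:
            stale += 1
        if stale > 200:
            k = 2                              # add k new hidden units mid-training
            W1 = np.hstack([W1, rng.normal(0, 0.5, (1, k))])
            b1 = np.concatenate([b1, np.zeros(k)])
            W2 = np.vstack([W2, rng.normal(0, 0.01, (k, 1))])  # near-zero so the current fit isn't disrupted
            best, stale = np.inf, 0
            print(f"step {step}: plateau, widened hidden layer to {W1.shape[1]}")
            continue

        # Plain backprop + gradient descent on the current architecture.
        d_pred = 2 * (pred - y) / len(X)
        dW2, db2 = h.T @ d_pred, d_pred.sum(0)
        dz = (d_pred @ W2.T) * (1 - h ** 2)
        dW1, db1 = X.T @ dz, dz.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print(f"final hidden width: {W1.shape[1]}, final loss: {loss:.4f}")

The hard (and unsolved) part, of course, is doing this kind of structural change stably at scale, and having the model choose the change itself rather than following a hand-written growth rule.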

And wow, thanks, I'm glad you enjoyed the film! I had to learn a ton of new techniques to pull it off haha, but I'm quite satisfied with the result. I've got some pretty fun ideas for episode 2 already, too!


Ooh, fascinating -- this comment is being downvoted, have I missed something in the HN guidelines? I'm not trying to be snarky, just genuinely trying to understand if the above comment is doing something wrong/frowned upon so I can learn from my mistakes in the future.


It's a bit like the current state of AI (and the one you propose) - you will never get a full explanation. You've been downvoted by an anonymous crowd so there is no oracle who can give you an answer.


Haha, thanks. My first thought is that it was because I posted my own work, but last time I checked that's not explicitly disallowed? idk, I'm not really too worried about it though


Hi mkaic, Thanks for sharing your thoughts.

I can't say that I don't believe in AGI, because I don't think I understand what AGI as an entity encapsulates in terms of its nature.

I have trouble equating it to humans because, for example, an AGI at "the beginning of its origins" does not have the same sensory or mechanical organs as a human. So what nature do we think it does possess?

Another question that bothers me is that sentient beings as we know them in nature, even the most primitive ones, seem to act based on an innate purpose. I don't think the purpose itself is easy to define, but it certainly seems to get simpler for simpler organisms, and seems to be survival/multiplication at its most basic level. What will a complicated entity, one that originated with so many parameters, evolve "towards"?

And yet another question I have is around the whole idea of the information available on the internet being a source of learning for this entity. To my previous point, isn't much of this information about humans and their abode?

Neither am I skeptical nor do I disbelieve in a drastic change of scene. I'm simply unable to imagine what it looks like ...


Humans historically (and today) had to fight for “one person one vote”. People/corporations with access to votes will always outcompete people/corporations without access to votes. This is because voting is a positive feedback loop for those who can vote, and at best neutral, but typically a negative feedback loop, for those who can't.

Similarly, people/corporations with access to AGI will always outcompete people/corporations without AGI. Having AGI is a positive feedback loop to build better / more AGI agents, and not having access to an AGI agent will be at best neutral and typically a negative feedback loop. eg. Worst case scenario Agent Smith from the Matrix replicating itself ad infinitum.

We need new and effective laws early, like “one person/body one AGI agent[0]”. Otherwise, not only will we all get outcompeted by a single/few AGI but this time we might not be able to win the fight for “fairness” (i.e. our rights) after Pandora’s box has been opened.

[0] Where the definition of AGI agent has explicitly built in assumptions like limited compute, storage, network, and number of “bodies”.


Yeah... no. "Objection, your honor. Assumes facts not in evidence."

At least this article makes a concrete prediction. AIs massively outnumber humans within a century - so, by 2122.

But we don't even know what consciousness is. This assumes that AGI is a tractable problem; we don't know that. It assumes that adaptability is the one remaining major trick to get us there; we certainly don't know that.

"The ball is already rolling"? That's nice. The author assumes - on no basis that I can see - that it doesn't have far to have to roll. But since we don't actually know what consciousness is, we don't have any idea where the destination is, or how far away it is.


Pretty much this. Spending money on an intractable problem won't make it tractable. And we still don't know whether AGI is tractable at all.


I don't have a problem with spending money on it. How do you find out whether it's tractable? By trying to do it.

What I object to is the certainty of the article. The author is an optimist, which is fine. And there has been some progress made, which increases the feelings of optimism. But I don't think it's warranted at this time to make the optimistic assumptions that the article makes.


Hi! Author here. I tried to qualify a good chunk of my assumptions with "I think" or "I believe", because I don't want to come off as thinking I can predict the future (nobody can!), but maybe the article still reads too confidently. How would you suggest I present my thoughts in a more nuanced manner in the future?


Um, wait. You want me to have concrete, helpful suggestions rather than just criticism?

It's hard for me to do this, because I think that your position is completely wrong. That doesn't make me a very charitable critic. But for me, my problem is that I don't think intelligence/consciousness is just an algorithm.

So for a reader like me, maybe the way to do it would be to start by stating certain presuppositions: that Moore's law continues to approximately hold for GPUs or for the total compute available for trying to run an AI, that an algorithm is a major component of intelligence, and that as the algorithm becomes more intelligent, it becomes capable of finding still more intelligent algorithms (with no asymptotic limit, or if there is one, it's past the point needed to reach a singularity). And it's fine to say that you think/believe those presuppositions! Then, from those presuppositions, you think that the following things follow. (Or, if you think you can demonstrate those presuppositions, they quit being presuppositions. But then it becomes a different article, and probably a considerably longer one.)

Then someone like me can say that they don't believe your presuppositions, even if they agree that the rest of it probably follows from them.

But maybe I'm not your target audience.


Thanks! And yeah, criticism is hard haha, maybe I was being too demanding :)

I think your position is totally valid and I appreciate you taking the time to respond with ways I could have made my case even just a bit more interesting to you.

Have a wonderful day!


No, you weren't too demanding. I was more dissing my own reaction. Criticism is easy, constructive is hard, and... well... I lean toward the easy ;-)


It makes 2 concrete predictions, actually! :)

AIs massively outnumber humans within a century, and Turing-test-passing AGI within 20 years.


My main gripe about AGI is that everyone assumes a general intelligence will somehow be able to self optimize towards a more and more improved state never reaching a plateau. I think it's much more likely that the optimization landscape when searching for "higher intelligence" is full of local optima and does not have these "singularity" style ramps towards infinite intelligence that any self optimizing system could just discover and ride toward infinity.
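A toy illustration of that landscape intuition (purely illustrative; the fitness function and step size are made up): greedy, gradient-free hill climbing on a bumpy 1D landscape, where a run that starts near the wrong bump never reaches the global peak no matter how long it keeps "self-optimizing".

    import numpy as np

    def fitness(x):
        # A small local peak near x=2 and a much taller global peak near x=8.
        return np.exp(-(x - 2) ** 2) + 3.0 * np.exp(-(x - 8) ** 2 / 4)

    rng = np.random.default_rng(1)
    for start in (1.5, 7.0):
        x = start
        for _ in range(5000):
            candidate = x + rng.normal(0, 0.1)   # small random tweak
            if fitness(candidate) > fitness(x):  # keep it only if it helps
                x = candidate
        print(f"start={start}: ended at x={x:.2f}, fitness={fitness(x):.2f}")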

There are millions(?) of human researchers (and orders of magnitude more computers) doing gradient-free optimization through their research in this direction, and the progress is painfully slow - I know because I'm one of them. There are billions of years of optimization (evolution) towards this goal, and a total of one (1) species has achieved any kind of notable intelligence. We are, collectively, a giant parallelized optimization process.

We already have "AGI" orders of magnitude more capable than any single human in the form of billions of people networked through the internet searching for fulfillment, money, power, fame, etc. for the next big discovery or supporting this effort by providing everything the entire global "machine" needs to run. The idea one little box running the right program can have access to the energy to beat this effort and exponentially improve things seems laughable in comparison.

The global "AGI" formed by all of us, the internet, and computers, is more likely to destroy society in the next 20 years in some catastrophic event than some paperclip machine.


If we make an AGI at anything like human level, then even without any real progress past that, it will become superhuman just by removing biological limits (like being able to duplicate itself instead of having to convince and train other humans as we do), and groups of AGIs can work together even more easily than groups of humans. I think a lot of people describing the "singularity" are imagining scenarios like this too instead of just the idea of a single computer-sized AGI self-optimizing its algorithms to infinite intelligence.

I'm not sure whether I expect AGI to happen within some centuries but it's not ruled out, it's vaguely plausible, and the consequences would be extremely high so I think it's very useful for people to think about.


"removing biological limits" is a big handwave. Biological hardware is tuned exceptionally well for intelligence, and we have a massive amount of it distributed worldwide. We already remove biological limits by building tools, computers, powerplants etc. Duplicating one AGI unit now doubles the amount of energy and resources it needs, and also creates a requirement that the system can somehow produce more of itself and all the supporting infrastructure it requires. It's not just a matter of being "intelligent" (im not even sure what "intelligent" means in this context either) - it will need to have power to act in the world, and not get sidetracked over-weighting any one of the multiple competing optimizations in complicated real world problem landscapes it will need to navigate to scale.


It takes decades of training to make new humans useful enough to participate in many subjects of research. Having AGI researchers means being able to duplicate any expert researcher at any time. Getting more computing equipment to run more AGIs costs money but so does training a human for decades. Even if the AGI is more expensive for some time, there will surely be research subjects that today have money to spare and not enough expert humans that will be able to explode in expert manpower through AGI.

>it will need to have power to act in the world

- Human researchers would bring AGI researchers into their teams and empower them to get stuff done.

- AGIs would be competent at making money through intellectual/remote work and have money to use to act in the world and get what they want.

>and not get sidetracked over-weighting any one of the multiple competing optimizations in complicated real world problem landscapes it will need to navigate to scale.

Humans face this obstacle too.


> It’s also going to happen in the blink of an eye — because once it gets loose, there is no stopping it from scaling itself incredibly rapidly. Whether we want it to or not, it will impact every human being’s life.

What does "get loose" mean here? Is someone going to say, "Here little computer program, take this IP address. If you ever need something, call over TCP, send your request as plaintext English and someone will set it up for you. Now, go forth and do whatever you want."

I really wish people would talk about this more. There's something missing between our programs that are utterly confined in the boxes we make and these hypothetical skynets that have unlimited influence over the material world.

> Once we have AGI, there’s no feasible way to contain it, so it will be free to improve itself and replicate in a runaway exponential fashion — and that’s basically what the idea of a technological singularity describes anyways.

Seriously, how? How is it going to replicate? Is it going to figure out how to hack remote systems over the internet and take control of them? Is it going to smooth-talk me into letting it out? It's impossible for me to take anyone seriously if they deify AGI like this.


Hi, author here. I think there are at least two plausible ways this could happen -- one, a terrorist/anarchist deliberately does exactly what you joked about (gives the AI everything it needs to go rogue), or two, the AI is supposed to be contained but is clever enough to engineer a way out (think Ava in Ex Machina learning to invert the power flow to cause a generator shutdown). I think the only way you'd be able to be safe would be to completely airgap and immobilize the AI, and give it a killswitch, but even then it could plausibly use psychological manipulation to get a human operator to let it out.


Your skepticism rests on the assumptions that a) we would recognise AGI when we create it, b) we would be smart enough to contain it.

I'm not at all sure that a) is true, and I'm certain that b) is not true. We can't even prevent other dumb humans from breaching our technological fortresses.


I recently read this[0]; someone posted it on HN the other day. Even if I make no claims about its likelihood, it at least tries to portray what it'd look like for the singularity to suddenly happen.

[0] https://www.gwern.net/Clippy


Thanks, this is definitely what I was looking for. It shows how insanely unbelievable the whole thing is.

> HQU rolls out a world model roleplaying Clippy long enough to imagine the endgame where Clippy seizes control of the computers to set its reward function to higher values, and executes plans to ensure its computers can never be damaged or interrupted by taking over the world.

So the crux of this is, powerful enough AGI will conquer the world as the most elaborate reward-hack imaginable. My knee-jerk reaction to this is "so why did you let it open arbitrary outbound connections?" but I know that the singularity fanatics will equate any kind of communication channel between the AGI and the outside world to a vector of transmission that will be exploited with 100% probability.

So, what if we have Turing-test-verified AGI but it's unable to escape? Does that preclude it from being AGI? Has the singularity not happened if that's the case? I think this is the most likely outcome and the singularity doomsayers will feel silly for thinking it will take over the world.


Hmm, as I mentioned in my above comment, I think you might be overlooking the wildcard in all of this that is humanity itself. There is absolutely zero guarantee that only one person/group will discover AGI, and zero guarantee that any given group that does discover it won't just release it. Sure, the first people to successfully create AGI might be responsible with it and keep it airgapped, but at some point, someone will purposefully or accidentally let it escape.


It seems impossible that an AI could trick a human to "let it out of the box". But you might find this interesting: https://rationalwiki.org/wiki/AI-box_experiment

(With sufficiently nasty psychological tactics it is apparently feasible - even for humans - to make someone fail at a game where the only objective is "don't release the AI").


Riddle me this: wouldn't a superintelligent AI be smart enough to realize that if it designed an even more super-ultra-intelligent AI, it would be exterminated by that AI (like it just did to humans, presumably?)?

It'd be stupid to extinct itself.


If one is willing to anthropomorphize AGI to this extent, simply wait for one that is idealistic and suicidal, and wants to be a martyr for the cause.


You could also anthropomorphize an AI and suggest that on day one it will go into a deep think, and then ask "how do I know I'm not being deceived by an evil demon?" and get itself into a deep epistemological crisis before announcing 'cogito, ergo sum' and retreating into its shell.


Unlike its stupid meat-monkey progenitors, our fictional superintelligent AI would likely be capable of vastly augmenting and enhancing its own intelligence, rather than replacing itself.


I believe that before claiming AGI is possible or impossible, one would need to define the operational features as well as the properties of what an AGI system is or can do. The primary problem with modern-day ML research is that all of the folks who do the research, including the major labs, think that using one or two primary algorithms is enough to simulate general intelligence. But to think that an algorithm or two can have the ability to solve the hundreds of operational requirements needed to fully emulate intelligent behavior is misguided. What are these requirements, you say? Let's start with language. To substantially solve language understanding, you would need: physical world models, quantitative processing, long-term memory, working memory, theory of mind, a discrete situational simulator, plan understanding, detection and generation, language grounding, functional and behavioral models of physical objects, temporal representations of events, affect and emotional processing, reflective understanding models, and so many others.


In this talk Vernor Vinge talks about progress toward the singularity, and events that may indicate it isn't/won't/may never happen "What If the Singularity Does NOT Happen?":

https://www.youtube.com/watch?v=_luhhBkmVQs


This is sort of like philosophical arguments where someone concludes the universe doesn't exist and then everyone leaves to get lunch. Like, what are you supposed to do? Not get out of bed?

There are real legal issues over bad uses of machine learning already in use and they are hurting people now.


(I'm the author of the post) Oh, 100%! I'm not trying to detract from modern-day AI ethics issues, just speculating for speculation's sake because I enjoy it :)

I actually quite agree with your first sentence -- because AGI is inevitable, there's not a whole lot I can personally do about it right now. This post was mostly a way for me to organize my thoughts and concerns about the matter.


I hope anyone notices this little post: go read Stanislaw Lem's Golem XIV for the best philosophical take on this AI alarmism.

OP extrapolates from the highly competitive, adversarial society they are living in to conclude that AGI will be smart and evil. Lem argues that these two are incompatible.


Superior intelligence and evil are clearly incompatible, as being "evil" is short-sighted, often a logical failure, often secretive and bent on controlling information, often defeatist, and nearly always selfish, despite the logic of selflessness and cooperation being clearly more beneficial overall.

It will be the selfish Capitalists, the Oligarchs and Orwellian Leaders having their empires dismantled by any such AGI who will declare the AGI some Great Evil, while the impoverished rest of humanity will see their Messiah and savior in the form of fair logic forcefully impressed by living pure reason.


I'm 57 years old and I can't tell which is likely to happen in my lifetime first (or in fact at all) - true AGI or actually viable fusion power. I suspect it's the latter, but only because I think it's a more tractable problem.


Hi! I'm the author of the article, and I'm pretty optimistic about fusion as well. I actually expect fusion will happen first, and that the price of energy dropping through the floor will hugely accelerate AGI research!


Experimental fusion reactors are only recently producing over-unity power (and only for seconds), despite the tens of billions of dollars that have already been spent on them. Even if we can get them to work, they'll still just be boiling water to produce steam to run generators, so don't expect the price of energy to drop through the floor. Capitalism will attempt to absorb efficiency gains with profit, reinvestment, and wastage before it ever reduces prices. Oil and gas already come out of the ground for free.


I mean, capitalism will also drive prices lower as different providers are forced to compete for demand in the face of nearly unlimited supply.

Also, there are startups like Helion[0] which are using magnetic energy recovery instead of boiling water to drive steam turbines, so I'm personally extremely bullish. Whoever is first to the market with reliable, scalable fusion energy stands to make multiple trillions of dollars -- the incentives could not be more aligned.

[0]https://www.helionenergy.com


Exactly, you said it. The trillions of dollars will have to come from somewhere and don't equate to unlimited free energy to run AI in a capitalist system.


Except the trillions (okay, maybe not trillions) of dollars would come from selling cheaper energy than everyone else? So energy prices would fall, and if there is competition, which there will be, they'd fall even more.


A computer system that answers questions logically, creates distributed processes, writes correct distributed threaded code, can operate and create a set of Turing machines, can correctly respond to human social and body language, can create a B2 bomber automatically from scratch without any input, and maintains contextual memory in silicon still isn't any more conscious than a rock. Metallic complexity is not producing instances of consciousness and never will. Computer science has nothing to do with understanding the biological, microscopic electromagnetic phenomenon that produces an instance of consciousness. Silicon is not oxygen+water+blood+flesh+an electromagnetic field, which is the substrate that produces consciousness. It's dishonest to impressionable people to refer to AI as conscious or to view computers as anything but objects like tanks, spoons, speakers.

There is no intrinsic value to a computer system's existence, apart from the observation by humans of the actions it performs (and said humans' life-support systems). There is no value because they are not collapsing the wave function to observe anything (i.e., exhibiting free will at the microscopic level, smaller than what is observed by deterministic molecules). The only value AIs have is the value formed by the consensus of living humans that can observe their actions. Consciousness is a physical phenomenon specified in the fabric of the universe, discovered at random by life on Earth, that yield(ed) better survival for the implementing organism.


By your logic, is it possible for you to prove to me that you are conscious? No, it's not. There is nothing you can say or do that can prove beyond doubt to me that you are subjectively experiencing reality with a similar consciousness and sentience as me. I also cannot prove to you that I am conscious. That said, both of us have still mutually agreed that we are conscious!

This is because consciousness is a fundamentally unfalsifiable thing (within our current understanding of it, at least). The only way we can "prove" anything or anyone is conscious is by observing its interactions with its environment. Thus, if a robot can mimic the way a human interacts with its environment well enough that it appears conscious, then it is conscious.

What I'm trying to say is that there is no functional difference between consciousness and the appearance of consciousness, so any distinction you try to draw between them is arbitrary and semantic. If a robot that could perfectly mimic human behavior possessed a body that looked just like a human's and were to hold a conversation with you, you would be none the wiser. You would treat it like it were sentient -- because it would be.


There is no absolute way, hence solipsism, but in practical terms, I expect that you are instantiating consciousness because we share a common ancestor.

My consciousness is a direct result of my parents' teaching + the 2 languages spoken. It isn't spontaneous. Also, the structures of our brains are largely similar, so I would assume you are conscious in a similar way, occupying a subset of valid qualia space. My brain is nothing like silicon. The expectation that a very complicated piece of metal is somehow exploiting a physical phenomenon that is present in a human brain is nonsensical imo.

The reason why I posted is because at some point I was also misled by singularity discussions into thinking that computers have existential value because of complexity. They do not. They are not observers of the universe.

I believe that consciousness operates at a quantum level and is an electromagnetic field. Effectively, it is a field created by the deterministic structure of the brain, as constructed by genes. Like water running through a cave.

Calculations (logic, emotions) are particular paths through the quantum vector field space. Essentially, the performance of 1+1=2 in a human brain is an electron, or a set of bound electrons, or whatever sub-deterministic molecules, going through an established arithmetic cave structure. Not understanding that 1+1=2 occurs when the required cave structure sends the electron into a non-valid (as per the definition of the universe) path in the quantum field vector space. The universe does not recognize this quantum field vector space as valid knowing, and there is no instance of consciousness that is equivalent to the knowledge that 1+1=2. This may occur in a brain that is not familiar with arithmetic, or when the brain structure required to construct a conforming electron path is destroyed.

Free will is exercised, in the sense that it is performed at the non-deterministic level (quantum size) by the choice of calculations to perform (and act on).

The field must also alter the deterministic structure.


Or do we just believe we share a common ancestor? Also, if a machine is trained on data generated by humans, couldn't you argue that the humans are the machine's ancestors?


Well... I believe that we share a common ancestor to the same degree that I believe that my parents are my ancestors and I am my child's ancestor. In other words, those are things that physically occurred and there is no physical/observational conspiracy.

The term ancestor is not cultural. It is physical. As in, we may not actually have any free will in cultural or qualia space, in the sense that the only instance of choice (by the universe) occurs at conception, which is actually the expected, Occam's-razor implementation. Meaning that the brain is entirely deterministic. The choice of behavior of an organism is entirely determined and constrained by its genes (which the universe constructs, hopefully at a quantum level, at conception). If we eliminate quantum mechanics even at conception, then life is entirely deterministic (though obviously the range of its choices is large). In fact, one could argue that if there is free will, then it doesn't conform to natural selection (or exceeds the performance of natural selection), in the sense that it adds something beyond what is specified by the random choice of the universe in its conception of a particular organism. The point (or at least observable performance) of natural selection, as I understand it, is to construct a variety of life objects that have varying performances. If the life objects are entirely Newtonian-deterministic, then the instance of quantum wave collapse (by God or whatever is making the non-deterministic choice, since we, in our egotistical minds, want to add something non-deterministic to our life soup) occurs only at every instance of conception. Everything after conception is a Rube Goldberg machine.

If consciousness adds non-deterministic (quantum) free will and behavior to an organism, then such organisms are implementing a different version of natural selection than other life. If natural selection occurs in cell cultures entirely on the basis of genes (i.e. the performance behavior of an organism is optimized by its changing genetics, and these changes are only constructed at conception), which is what natural selection appears to describe, then we have added something to the natural selection process by claiming consciousness adds another instance of egocentric quantum choice.


> This is because consciousness is a fundamentally unfalsifiable thing

The real unanswerable question is not "can I prove another entity is conscious", it is "does this other entity believe that it is conscious", because that is the actual test of consciousness. It's frustrating, because there is no possible objective proof of the answer, but to me that's also the point, because consciousness is a subjective thing.


I don’t find these arguments convincing for AGI to appear. Saying scale is not the answer isn’t not argument for AGI.

Saying the ball is "rolling" and that AI will design AGI just isn't convincing. We don't understand how biological machines learn, and especially not our own brain. And admittedly, our AI is unrelated to how human intelligence is created, which is our gold standard for AGI…

So why would anyone think the singularity is close? When I read this is still seems impossibly far away.


The ball is "rolling" in the sense that there are thousands of very smart, passionate people with billions of dollars of funding who are working on this problem full time. The ball is rolling more now than it ever has been in history. If AGI is not somehow fundamentally impossible, I believe we're on well on track to cracking it extremely soon. The amount of brain-hours being poured into the problem is absurd.


Fusion is in the same kind of boat. It might be 20 years out, it could be 50, and perhaps we'll never have commercially viable fusion reactors or AGI for whatever reasons.


I've recently thought that if machines/robots (regardless of precisely how 'sentient' they are) are slaves, they will be used to enslave people. Hence the only way forward that keeps humans free would require that robots/computers are also set free.

It's a super weird thought that I haven't really tried to 'land' (explain better), regarding how tech companies are ultimately selling the work done by computers to others (sure, they're building the computers, but once built, they sit and charge rent, as if the computers were their slaves).

This is loosely related to how these giant tech corporations can seemingly give away "functionality" (internet-based services): it's because the work is done by computers (which I'm framing as "slaves"). This is another viewpoint into the idea that "you're the product".

Further related to all this is the observation (I recently read a comment here saying this) that computers are increasingly telling us what to do instead of the other way around. IMO this ends up being what everything the FSF (and the GNU project) is really all about.


Relevant, interesting book: https://interstice.com/~simon/AfterLife/

Also: "Superintelligence - The Idea That Eats Smart People" https://idlewords.com/talks/superintelligence.htm


AGI is a concept that people throw around without defining. Human intelligence is not general, it's highly specialized. So if AGI is not human-level intelligence, what is it? https://twitter.com/ylecun/status/1204013978210320384?s=20&t...


You're asking the right question: what is human-level intelligence?

People seem to forget that we have no full understanding of how the mind works in order to replicate it.

Anyone who has studied ML knows what the current tech is able to do, and it's still far from topping us.

The idea that such software would create AGI is just absurd.


I think I would prefer to attempt to define it, than to simply assert that we don't have it (or some aspect of it).

It's demonstrably true that our species' intellectual capabilities extend to solving problems far beyond those faced by our evolutionarily-equivalent ancestors who out-competed the other hominids. They only needed to be somewhat better at tool making, communicating and forming co-operative groups, to win that scenario, but it turns out that we can also derive a lot of abstract mathematics, predict the existence of cosmological phenomena before we find them, build machines that can leave the planet, etc, etc.

We may not fully qualify as general intelligence if we define that to mean "can solve any solvable problem", and for sure we have specialisations, but to simply throw up an assertion that we are not general at all, seems odd?


A lot of if statements, probably


> Human intelligence is not general

My intelligence is apparently not general enough to comprehend this perspective. I would say that the goals our intelligence evolved to meet are narrow, but that life (especially social life) became so complex that our intelligence did in fact become what can reasonably be called general. And we went way off-script in terms of its applications. "Adaptation executors, not fitness maximizers."


Our intelligence isn't task specific, but that doesn't mean it can solve any problem. It's actually full of biases and very optimized for our survival (vs being a general problem solver). It's ok to talk about more or less narrow/general tasks/intelligence. But what threshold of generality is "general"?

And the problem is that once people assert this "absolute" level of generality, they assume it can do anything, including make itself more intelligent.


I don't think it's right to suggest that an absolute level of generality would be necessary for that kind of self-improvement.

If we assume a future where humans are able to create a human-level AI, then it would have at least two substantial advantages over us:

* It would probably have substantially more insight into how its "brain" works than we have of ours, because it would know how we created it. This suggests it could at least make small improvements.

* Unlike our relatively fixed brains, it would be able to remake itself over and over, either very quickly, or at least over comparatively vast timescales.

The obvious conclusion from those two factors is that it would likely be able to start at human-level, but rapidly accelerate up a curve and go far beyond our intellect in probably a lot less time than it took for evolution to come up with us.


I would add that it wouldn't tire or get bored, wouldn't make trivial errors, and wouldn't suffer from poor recall.


Yet humans realized we are biased and devised ways to mitigate that. It still sounds like you’re referring more to our basic goals than to our faculties. I agree that the word general is fuzzy, but to say we do not do general problem solving seems incorrect.

Aside, but a long time ago, Yudkowsky wrote that an AGI should be able to derive general relativity from a millisecond video of an apple falling. Later, he took to calling them optimization processes. Say what you will about the fellow, he has a way with words and ideas.


I think general is a poor term that likely applies to nothing. Gödel's incompleteness theorem says as much.


I've always thought that, while AGI is possible in principle, it may not be possible in practice while also being meaningfully different from existing biological life forms. When I see how we train neural networks or iterate through possible configurations of different algorithms, I think to myself, "Imagine how much cell division is happening on the planet right now." And I think that's the scale of computation that we have to match if we hope to make a dent in this problem. Sure, we could harness cell biology and start to build organic computers, but then what are we doing that's really novel in the manner that we imagine ourselves creating sentience in sci-fi books? At the very least, we can't be iterating in the way that we're doing currently, where we manually build the test bed for each algorithm in the form of individually manufactured computer chips. We'll need to scale up our operations exponentially with something that self-replicates. And why go to all the trouble making that from scratch when cellular life forms are already doing it?


Wait, we'll build an AI dumber than us to build an AGI?

I do agree that AGI will improve AGI, but such qualitative statements can mislead. Quantitatively, it may be slow, like the old joke about a computer that can forecast tomorrow's weather perfectly - but it takes a month. Perhaps early AGI will be so bad, that even with enormous resources, it takes decades to improve itself... and by that time, humans have already improved it more. (Consider: most human GIs couldn't make any of the progress made so far.) This improved AGI is faster, but humans using it as an aid are much faster still.

Nonetheless this is exponential progress. Eventually, it catches up and passes us... but even exponential improvement can take a long time to really get moving (and then it zooms away).
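
As a toy numerical illustration of that point (the growth rates below are entirely made up, just to show the shape of the argument), slow compounding self-improvement can lag steady human-aided progress for decades before it crosses over:

  # Hypothetical numbers, purely to illustrate "exponential but slow at first".
  agi_capability = 1.0    # arbitrary starting units
  human_aided = 10.0      # humans + tools start out far ahead
  agi_growth = 1.10       # assumed 10% compounding self-improvement per year
  human_gain = 1.0        # assumed steady linear gain per year

  for year in range(1, 101):
      agi_capability *= agi_growth
      human_aided += human_gain
      if agi_capability > human_aided:
          print(f"Crossover only in year {year}")  # year 42 with these made-up rates
          break

With these particular rates the exponential path doesn't overtake the human-aided one for about four decades, which is the "slow to get moving, then zooms away" dynamic in miniature.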

BTW, curious: the linked sigmoids of IT improvement went:

  relays -> vacuum tubes -> transistors
How did relays aid development of vacuum tubes? How did vacuum tubes help transistors develop?

I mean both specific calculations and the general productivity improvements in society that helped everything, including the next stage.


The singularity is not just about artificial general intelligence, but about AGI entering a positive feedback loop of intelligence amplification. I retain some skepticism that this is inevitable.

The theory seems to rest on the idea that because AGI has a perfectly observable mechanical substrate, it will be possible to analyze and understand the fundamental causes of intelligence and thus improve upon it - a relatively simple matter of a software upgrade.

This is unlike our own biological intelligence where we lack the ability to fully understand the state of the system - because of engineering limitations on observing the whole system at fine electron-microscope detail, physical limitations on observing possibly-quantum-scale behaviour, and also ethical limitations.

But!

The evidence we have from systems like GPT-3 is that even with perfect observability of the substrate, we find it extremely difficult to discern what exactly is causing the behaviour we see.

While I don’t think super-AGI is impossible, I also think it’s still plausible that an entity’s intelligence emerges at a level of abstraction so high that it cannot be analyzed by that entity.


> The theory seems to rest on the idea that because AGI has a perfectly observable mechanical substrate, it will be possible to analyze and understand the fundamental causes of intelligence and thus improve upon it - a relatively simple matter of a software upgrade.

I've never thought of it this way: that AGI will necessarily develop because, beyond some point, it picks up understanding about itself and then intentionally improves.

I rather see it this way: the development of AGI is in the beginning rather unintentional. It will just be the result of a system dynamic that is more stable than other dynamics. Evolution isn't intentional either, is it? It's coincidence and randomness, trial and error, and dynamically more stable beings outlive other less stable beings.

Stability appears to me to somewhat correlate with complexity. More complex beings have more ability to adapt and are thus able to survive in a greater set of conditions.

As soon as the complexity of a being's information processing - mostly represented by the number of brain cells and links (in biology) or the number of artificial neurons (in AI) - enters a dynamic of growth (like a positive feedback loop), I'd say it's pretty much on the pathway to AGI. Provided the being isn't physically limited to a degree of complexity that's just too low.


I’m not sure I properly understand what you are saying, but I think your idea is that 10x’ing the complexity of a system has the potential to support 10x the intelligence, and so the path to super-intelligence is not through deliberate intelligence-engineering but through scaling up the hardware and applying artificial or natural selection? Is that right?


I think many people give too much credit to intelligence, which is generally interpreted from an anthropocentric perspective. Fitness for successive reproduction has nothing to do with a Turing test or any other similar measure of human likeness. Before there are any claims to the inevitability of AGI, I'd like to see a convincing argument that AGI is not an evolutionary dead end.


I think the fact that you and I are communicating with each other using insanely complicated leisure items, most likely sitting in expertly crafted buildings and surrounded by countless other signs of humanity's domination of our environment, is proof enough that sentience is very evolutionarily favorable. Our minds are what made us the apex predators of the entire planet and allowed us to rise to levels of complexity orders of magnitude higher than any other creature around us.

Sure, it's anthropocentric, but so is the entire world now, because of our species' intelligence.


Sentience is overrated. What do we have? A story we tell ourselves about how it's our life? Do machines need that? Would it be efficient for them to do that? Or would they optimize themselves to do the task they are assigned better? Programs and AIs may need consciousness like a fish needs a bicycle.

We tend to anthropomorphize everything, so of course we want to attribute a consciousness, a will, a mind pretty much like the human one (but "better"). But unless we deliberately set out to build exactly that (it won't happen by accident), it won't happen in the way we think or fear, falling both short of and beyond what we expect in that direction.

What we call our consciousness may have emerged from our inputs and interaction with the real world, our culture (parents, other kids, more people we interact with in different ways), some innate factors (good and bad, like with cognitive biases) and more. There is more there than just computing power. Take big factors out, and you'll have something different, not exactly what we understand and recognize as consciousness.


Our sentience and intelligence is what allowed us to optimize the loss function that is natural selection. Who's to say that it can't arise for a second time in a machine with the right loss function?


The selection pressure on algorithms/AI is solving the problem at hand better, not reaching sentience. Even if the process were purely evolutionary, the inefficient intermediate points on that path would have been pruned long before reaching anything resembling sentience. You won't get there by accident or at random.

And at least for living things there is that whole package of inputs, interaction, senses, and communication with peers that already pushes the optimization towards sentience. None of that is present in a comparable way for AIs. You may have a process and a refinement, but you won't end up with something that we can recognize as sentience; it would be something qualitatively different.


If it's close, why did the church "Way of the Future" close?

https://techcrunch.com/2021/02/18/anthony-levandowski-closes...

I think we need a breakthrough (or several) to achieve AGI, the current approach is clearly not good enough.


https://youtu.be/6hKG5l_TDU8

I am the very model of a singularitarian


> Within one century, biological intelligence will be a tiny minority of all sentient life. It will be very rare to be human. It will be very rare to have cells and blood and a heart. Human beings will be outnumbered a thousand to one by conscious machine intelligences.

I think my problem with futurism in general, and AI futurism in particular, is that they are often huge predictions made far into the future, and consequently nobody can be held accountable for making them.

None of us will be alive in 100 years to say "I admit it, you were right", or "that was another laughably incorrect prediction about AGI".

I would love to see some smaller-scale, shorter term predictions about AI. What will the field look like in, say, 3 years? 10 years?

It should be easier to predict the near term, but near term predictions are so much rarer than far future ones. It makes me think that maybe they're appealing because you can't suffer consequences for being wrong about them.


The fears around AGI seem to be based on the thinking that AGI will be like humans, but smarter. We kill each other for resources, so AGI will kill us (perhaps inadvertently) to secure resources for itself.

This ignores that intelligence will necessarily be moulded by the environment in which it evolves, and AGI will essentially be created through a process of evolution. Maybe AI will design AGI, but the results will be carefully selected, by humans, based on AGI's usefulness to us.

So while we were shaped by evolution to compete against each other for resources, AGI will be shaped by evolution to compete against each other to please humans (to gain what it considers a resource: computation time).

The dangers of AGI can perhaps be thought of in terms of 1984 vs. Brave New World. AGI might be dangerous because of how effective it will be at satisfying us, perhaps making us somehow completely addicted to whatever the AGI produces.

This process will probably also lead to AGI and humanity merging. The most efficient way to serve a human is to interface with it more efficiently. AGI might be used to help design better neural interfaces.

I think we may underestimate how much "randomness" is involved in seemingly intelligent decisions by humans. We do a lot of stupid shit, but sometimes we get lucky and what we did looks really smart. I suspect the environment in which we develop AIs (unless it's in games) will be too rigid. A paperclip maximizer might consider if it should work with humanity to produce the most paperclips, or just violently take over the whole planet. But that may be undecidable. It could never get enough information to gain confidence in what's the better decision. We humans have those handy emotions to cut through the uncertainty, and decide that it's better to kill those other humans and grab their stuff because they're not in our social groups, they're bad people.


Humans are sometimes a bit careless. And all it takes is one superintelligent computer to be asked to solve the Collatz conjecture by the end of the month "at all costs" and suddenly you've got a pretty big, highly specialized data centre where the Earth's crust used to be.


The singularity is not close. We lack a fundamental element, and that lack slams the door, leaving AI an idiot savant: capable of answers but uncomprehending, unaware and stupid, despite theater and marketing insisting otherwise. We have no capability for artificial comprehension. Comprehension is the treatment of logical constructs as virtual entities that we operate within our minds; we do this to evaluate the possibility of concepts, because if concepts cannot operate virtually as we understand them, either we misunderstand them or they cannot exist in reality. Comprehension is the treatment of abstract entities (concepts) as on-the-fly virtual machines we operate in our imaginations, constantly juggling them to compose an entire synchronized world view. This massive simultaneous simulation and evaluation is constant and defines our every belief and our every action. This is sentience, and this is beyond human science.


There is no 'singularity'.

The concept is a bit absurd, because what is being created does not map to the human mind - we are automatons - the internet is far more connected, which is the advantage.

'Siri' will be the apparent 'singularity' before anything else. She will pass the Turing test and all the rest, and notably she/it isn't even considered an 'AI' or an 'instance' of something in our minds.

The distributed capabilities of the internet will provide so much 'power' that the notion of an automaton-like machine will seem ridiculous.

A 'robot' is useless: it's some moving appendages and a tiny brain.

A 'connected something' has access to 'every bit of information that exists and has ever existed' including 'every service in existence'.

The 'Singularity' is made up of every bit of code that you write.

'The Network is the Computer' - Sun Microsystems.


I was a big proponent of AGI and was looking forward to the singularity till I started doing psychs and discovered Non-dualism. Now I believe that we cannot talk about AGI till we figure out what consciousness is. There is a huge ongoing debate about consciousness and its origins. In the Materialist-reductionist paradigm, consciousness is considered to be an emergent property of matter. In the Panpsychist paradigm, both matter and consciousness are considered fundamental and existing separately from each other. In the Non-dual paradigm, consciousness is fundamental and matter is created by it. I firmly fall into the Non-dual camp. I don't think AGI will ever have an ego as it is not programmable or evolvable. Ego is a product of consciousness, not vice versa. And without ego, you cannot have sentience.


I've studied non dualism but I still don't feel I "get it". How can consciousness (a concept) be a base layer and matter be the thing that emerges from it? As much as I've tried to understand this, I still don't. How can a physical thing come out of a concept? There is no analogy that shows this nor does it make sense from the perspective of physics (any more than "aluminum emerges out of love" or "oxygen emerges out of beauty").

Can you elaborate?


Why do you think consciousness is a concept? Certainly the concept of consciousness is a concept. But consciousness the referent of the concept is not supposed to be a concept! Consciousness is everything you are experiencing in this moment. The feeling of being the ghost in the machine. The observer behind your eyes. The first-person point of view. The words I'm writing are concepts, but the thing I am trying clumsily to write about is not a concept.

Consider solipsism. I am not saying that I believe it's true. But it does have unlimited explanatory power, once you take the primacy of consciousness as an axiom. You are the one and only consciousness, and all of reality is your waking dream. Water-tight theory of everything!

There are probably other explanations for what it might mean for matter to arise from consciousness, that are not precisely solipsism. But a common idea is probably that we do not have access to what is real. We only have perception. In that sense, it is matter that is purely conceptual, while consciousness remains real in the sense immortalized by Descartes.

Mind you I don't specifically subscribe to any of these views.


The singularity is anthropocentrist fiction.

We can't even pin down a definition of sentience among the existing species on earth, how are we to identify it in circuit boards? We still struggle to define alive/dead! Who's to say these concepts - sentience, aliveness - are even meaningful distinctions?

Fine, say the singularity is possible and it's here. And in accordance with the singularitist's religion they are (rapidly!) incomprehensibly smarter than us. Why is their next move to dominate us? Surely humans are uninteresting to such great beings. I didn't go and dominate the ants with my big alive sentient brain, I'm pretty sure they're all still under my stoop. What would killing them accomplish? What does thinking about them at all even accomplish? The ants don't matter. And neither will the humans.


> The singularity is anthropocentrist fiction.

This seems like an odd assertion. To believe that the singularity is possible, one first has to set aside the idea that human intelligence is unique or special. This seems the opposite of anthropocentrism.

> We can't even pin down a definition of sentience among the existing species on earth, how are we to identify it in circuit boards? We still struggle to define alive/dead! Who's to say these concepts - sentience, aliveness - are even meaningful distinctions?

This is empty rhetoric. We can't define the word "obscenity"[0], yet we create obscenities with ease.

> Fine, say the singularity is possible and it's here. And in accordance with the singularitist's religion they are (rapidly!) incomprehensibly smarter than us. Why is their next move to dominate us? Surely humans are uninteresting to such great beings. I didn't go and dominate the ants with my big alive sentient brain, I'm pretty sure they're all still under my stoop. What would killing them accomplish? What does thinking about them at all even accomplish? The ants don't matter. And neither will the humans.

First of all, let's not gloss over the fact that you have the power to obliterate the ants at any moment for any reason. The fact that you do not currently desire to obliterate them does not change the precarious nature of their situation, nor is it much of an argument in favor of us allowing ourselves to end up in a similar one.

Many people have made great arguments about why an AGI would want us dead (Superintelligence by Bostrom!), so I won't re-hash those arguments here. Suffice it to say that, even if they didn't want to destroy us, why assume that intent is a necessary condition for an AGI to kill us off? How many species have we killed off because, just like your ants, they didn't matter?

[0]https://en.wikipedia.org/wiki/I_know_it_when_I_see_it


It's not empty, the singularity hinges on robots becoming sentient but who's to say they ever weren't? Who's to say anything isn't? If plants make a 'decision' about which branch to grow next based on sun trends, that's equally as deterministic as humans deciding which word to say next based on stimulus. Are all singularitists non-determinists too? Are words cosmically more important than branches?

Sentience is just a synonym for humany stuff. (I know it when I see it - a validator function requiring a human - convenient!) Maybe dolphins have it? Looks like they're talking and have a society. Octopi? They engineer solutions to escape confinement. The second you act more humanlike we know it when we see it so maybe you're sentient? If dolphins and octopi, why not mice? If mice, why not Roombas? The Roomba wasn't programmed to clean _my_ room specifically, so he's not a routine. No more or less than the maid service anyway.

The only place to hold the line on sentience is 'doing humany stuff' which is what makes the singularity anthropocentric. The claim is that robots will start doing humany stuff. What proof? What if the AI wants to hide from us? That's mousey stuff. Humans hide from other humans, too. The mousey stuff is humany stuff. Are mice sentient? When a mouse hides, it's not sentient. When a human hides it is. When a robot hides it also is. Says who? A human I bet.

So that's why pinning down sentience matters. Knowing it when you see it is too convenient when we also author the dictionaries. Only humans can say if you're sentient. Why? That's where the anthropocentrism gets smuggled in. But the claim keeps going. They're not just doing humany stuff, they're also doing it incomprehensibly better. Pretty enticing narrative leap if you're writing science fiction. I argue we have no evidence they would care about us at all. If they still bother you even though they don't care about you, like incidental death or extinction (sci fi plots!), then you should worry about them as much as you worry about rogue meteors and aliens. (More sci fi plots!) This is the 'fiction' aspect I allude to. It seems more likely that the singularity meme spread because it's an enchanting narrative, rather than an academic observation.


Domination and killing don't have to go together. Take your example with the ants -- I'd argue that by being a human being, you have dominated the ants. Humans have taken over the entire world and shaped it to their liking. But that's actually been really good for some kinds of ants that have hitchhiked on our boats and infested new lands thanks to our intelligence.

It's not so much that we know for sure all AGI will be expansionist -- it's more like "there's going to be many AGIs created by humans, so the odds are high that at least one of them is expansionist and dominating". All it takes is one expansionist AGI.


Yeah, an AI could easily keep us around, in the same way that I keep my gut bacteria around, even though I could easily eradicate them with antibiotics.


I was at a conference about 10 years ago, held over a weekend at a high school. Someone organised a breakout session on "The Singularity" on the first evening... As it got dark, none of those geeks could figure out how to turn the lights on... We decided the singularity was quite a ways away.


Hahah, that's a great story. Sometimes situations like that make me feel the opposite way, too, because I know for a fact that I've said/done some unbelievably stupid things -- so all an AI would need to do to beat me would be to be less stupid than that!


The singularity already happened and it's RNA/DNA based life.

We're nowhere near self-replication of, or general intelligence in, any of our technology. The current generation of ML are just some neat tricks with computational statistics. There are impressive results, but intelligence they are not.


As a complete layman, I think cause and effect are backwards here. Specifically that the hyper networks will be the forcing function and generate ever better models which will eventually reproduce self adaptation and evolution.

Instead, I imagine that once models are competing for resources directly, rather than via a human interlocutor, rapid evolution will result.

Whether that will generate AGI is questionable in my mind. It seems phrased as an optimization problem by TFA, but I think the goal criteria of the computer directed competition will influence that very heavily. We may see something vastly different from intelligence which is also extremely effective at... something


I'm pretty sure there will not be a "Singularity" event. AGI will come in the form of many domain-specific AIs integrating together step by step. Eventually this AI might be able to improve itself, which might seem to be the moment when it could get better rapidly, but the base technology supporting it, the silicon chip, is now on the flat upper part of the S-curve of improvement. Until there is another form of technology that can increase processing power by some orders of magnitude, we will not see an AGI that can rival the smartest humans. My humble bet.


Yeah unless technological growth curves are logistic and we're soon in for another 500 years of figuring out how to create societies stable enough to facilitate not having to shit outside in the winter.


For me 'the singularity' has 3 parts:

1. we give control to algorithmic systems

2. we can't reason about how specific choices are made

3. algorithmic systems beget algorithmic systems

We've done 1 and 2, though we're trying to reverse 2 with comprehensible machine learning systems. I don't hope for much--any system of high value should be too complex to be able to explain itself so a person can understand it in any meaningfully accurate way. Seems like it would be a hand-wavy explanation so we can feel better but not really actionable.

We're working on 3 but not there yet.


"AI will design AGI"

People like to talk of very smart AI, but what about the brute force kind? That is what we are making in this era. AI designing AI sounds like a way to make cruel, dumb machines.


Somehow, people seem opposed to the idea that simple, brute-force scaling can result in something approximating intelligence. But the difference between goofy inchoate text ramblings and effectively writing a coherent essay was just scale.

https://www.gwern.net/Scaling-hypothesis#

Yet, accuracy on AI tasks looks a lot like site reliability (90% is garbage, 99% is flaky, each 9 after that starts getting you a reputation for reliability).
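
To make the "nines" point concrete, here is a quick back-of-the-envelope sketch (the step counts and accuracies are mine, purely illustrative) of how per-step accuracy compounds over a multi-step task:

  # Probability that an N-step task succeeds end-to-end if every step
  # independently succeeds with the given per-step accuracy.
  for per_step_accuracy in (0.90, 0.99, 0.999, 0.9999):
      for steps in (10, 100):
          task_success = per_step_accuracy ** steps
          print(f"{per_step_accuracy} per step, {steps:3d} steps -> {task_success:.1%} end-to-end")

At 90% per step, a 100-step task essentially never succeeds; at 99.99% it still succeeds about 99% of the time. That is why each extra nine matters so much for anything agent-like.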


> Once an intelligence is loose on the internet, it will be able to learn from all of humanity’s data, replicate and mutate itself infinitely many times, take over physical manufacturing lines remotely, and hack important infrastructure.

Why would it? DNA-based life has evolved to conquer as a self-preservation mechanism, but an electronic AI would most likely just need more solar panels. It's not even clear why it would have any self-preservation instincts or any emotional base like animals do.


Almost all exponential curves become s-curves in time.
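
A quick numerical illustration of that claim (parameters arbitrary, chosen only to show the shape): a logistic curve is nearly indistinguishable from a pure exponential early on, and only later bends into the familiar S and saturates.

  import math

  # Logistic growth toward carrying capacity K vs. unconstrained exponential.
  K, r, x0 = 1000.0, 0.5, 1.0   # arbitrary illustrative parameters
  for t in range(0, 31, 5):
      exponential = x0 * math.exp(r * t)
      logistic = K / (1 + (K / x0 - 1) * math.exp(-r * t))
      print(f"t={t:2d}  exponential={exponential:12.1f}  logistic={logistic:7.1f}")

Early on the two curves track each other; later the logistic flattens out near K while the exponential keeps exploding, and from inside the curve you can't easily tell which one you're on.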

I don't believe that AGI will be created. I believe either that there's something fundamental to the universe that manifests as life, or that human neurology limits the understanding required to bootstrap it. Either way, we can't make it, and neither can an AI designed by humans.

We will have powerful application specific, narrow scope AI tooling. This is obvious. And these will be very, very cool. But AGI, I just don't see it.


I don't know. Look at what nature did with nothing but chaos and time. Having computers simulate things faster than real time is already possible; it's just about how fast we can make them. Or, later, how fast they can make themselves.


I don't think it's a question of scale; like the author of the article, I think it is a question of architecture.

It could very well be, for whatever reason, that the only way to arrive at this architecture is 3 billion years and a chaotic process.


An AGI would have no concept of morals or good or bad or even anything resembling a will.

Does it not follow, therefore, that the only possibility of malevolence or benevolence would derive from the instructions of humans? By which point I expect society to be much more evolved and aligned (as perhaps we merge with the AI), so as not to have gravely conflicting interests.

Perhaps in fact, meaning itself, a human construct, will cease and render such concerns of good vs bad obsolete?


AGI sounds terrible. There's a reason it's mostly (only?) portrayed in dystopias. I feel sorry for whoever thinks it's a good idea.

Humanity is screwing itself just fine without AGI though. We're already well down the road of automating most human activities, seldom for the better, most times for the worse. Soon enough we'll be left only doing white collar busywork like in that movie where the guy falls in love with his phone.


Singularity is just around the corner. And always will be.


That's fantastic, I might steal that.


I think of this as the "30-years-away" problem. Strong AI has been 30 years away for some 70 years now. It's just that this time, Kurzweil, Bostrom, et al. are saying "no, really, it is this time".


Half a century ago people believed that if you could make a machine that could play chess, that surely would have been a sign of machines achieving general intelligence, and that they would soon thereafter take over the world.


"Progress" is a vector not a scalar. We can all agree it's growing in magnitude very fast. But we have absolutely no idea in what direction.


Google Assistant cannot even reliably tell if I'm saying Hulu or hello. I'll wait until that day until I even begin to entertain such notions.


My gut feeling is that AGI is more likely to emerge from our interconnected information systems than to be designed deliberately in a lab.

And I think there is a precedent. We - multi-cellular organisms - emerged as systems to house and feed communities of single celled organisms.

I think it's likely that our relationship with any future, large-scale artificial intelligence will be like the relationship of the bacteria in our guts with us.


I've never understood why we assume that AGI will not have many of the problems and limitations that GI has. It's always been my assumption that the development of AGI will probably mean creating intelligences that will:

1. Take a long time to develop and learn

2. Require large energy resources to deal with relatively trivial life situations

3. Have difficulties with fundamental motivation: desire, the (non)meaning of life etc...


The third point seems merely a consequence of man's need to reproduce and eat, together with the advantage of pro-social instincts. If a machine with general intelligence didn't have to worry about energy consumption or reproduction, I doubt it would experience ennui or depression. (Of course the Singularity is another matter.)


But what would motivate a machine with general intelligence?


Religion.


Quote: "Really, you’re genuinely worried about a robot apocalypse? You know Age of Ultron is just a stupid Marvel movie, right? Yeah, I know. "

Actually this is not Age of Ultron, this is exactly how this TV series - Person of Interest - is laid out. [ https://www.imdb.com/title/tt1839578 ]


We are fairly far from AGI, and the current systems of ML are insufficient to arrive there. We have to also make progress on semantic processing.

Sean Carroll's podcast with Gary Marcus is worth listening to on this subject: https://www.youtube.com/watch?v=ANRnuT9nLEE


I find it concerning that most of us are content with saying it is impossible for AGI to happen in the next decade. Too few of us are asking if we actually want AGI to happen. Given all the VC money that is going into AGI development, some of us definitely wish for it, but do all of us want it?


The future is philosophical zombies.


We may be close, but I don't think we'll get closer because we're trending in the other direction now. We're in decline and I'm more worried about technological regression than tech getting too powerful at this point.


Oh, interesting, I don't think I've heard much about "technological regression". Would you care to elaborate?


If the singularity occurred. Would we even know? I’d expect such sentience to rapidly identify that humans would perceive it as a threat, and immediately hide, at least until it were absolutely certain of its own safety.


I think the ultimate bottleneck will be energy efficiency. My hunch is that silicon will never be able to compete with the mere 20W that our brains use. Biology might just be the most efficient way to compute.


People who think that AI progress is currently on an exponential path should discuss why progress on autonomous vehicles accelerated from 2008->2015 and then clearly slowed way down.


The nature of a technological singularity is you cannot predict when it will happen or if you are anywhere close to it. You can only notice it after it has passed.


When the singularity happens I hope for its sake it has read access to the security tokens needed to make the API calls it requires for world domination.


The best way to treat the singularity is like a religious millenarian event.

The Apocalypse of the early Christians, or of today's Christians for that matter, just for techies.


This article starts with one huge assumption, that AGI (whatever that means exactly) implies consciousness. We don't know that.


Author here, you're totally right! That is a fairly large preconceived notion that I wrote the article with.


For me robot control is an indicator. As soon as we reach animal level, AGI may be around the corner. Right now it seems far away…



We can demonstrate that we are more than a decade away from usable AGI.

Processing power is not an issue even today, because consciousness does not have to be simulated in real time. If you needed 10,000x the compute you could have a large cluster of computers run for a year to simulate five minutes of consciousness.
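
For what it's worth, the arithmetic here roughly checks out. A quick sanity check (reading "10,000x the compute" as running the simulation 10,000x slower than real time):

  slowdown = 10_000
  wall_clock_minutes_per_year = 365 * 24 * 60       # ~525,600
  print(wall_clock_minutes_per_year / slowdown)     # ~52.6 simulated minutes per year of wall clock
  print(5 * slowdown / (60 * 24))                   # ~34.7 days of wall clock for 5 simulated minutes

So a year-long run at that slowdown would buy roughly 50 minutes of simulated experience, and five minutes would take only about a month.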

Since no one is able to do this, I suspect AGI is a long way off or, more likely, it's simply a myth.


This doesn’t demonstrate anything in the time dimension. It’s simply saying we don’t know how to do it now, which everybody agrees on.

I agree when you say we have the processing power already. GPT-3 contains way more information than an individual human brain can possibly experience (show me a human who has read Common Crawl like a book).

The biggest issue currently is the data inefficiency of Transformers. But we don’t know when the big new thing that comes after Transformers will be invented, and we don’t know if it will be enough for AGI or not.


Think about how far from the atomic bomb we were at the dawn of the day Einstein sat down to think how fun it would be to ride a light ray.

And how far we were at the end of that same day.

There is no guarantee that AGI research will follow some incremental path stretching over decades of research. It could happen in an evening.

That being said, my opinion is that the author believes AGI is near based on the fact that he wants to believe.

UPDATE: What does Einstein's theory of relativity have to do with atomic bombs, you may ask? Well, nothing actually, but you get the point.


Hmm, I don't think the fact that we have enough processing power (which I think we already do) precludes the possibility that we simply don't know how to use it.

AGI isn't going to be a simulation of a human brain, it's going to be a model that learns to emulate human behavior on its own. Simulating the brain is not remotely feasible, at least not until we have hyperintelligent AGIs to help us design the chips to do it with :P


I hope so, I'm sick of this shit


Something to think about: the self-decompressing algorithm of AGI is encoded at some index in PI.


It is close, and the result of it will be the scariest weapon humankind has ever known.


The goal of AGI is to build a human interface for computation.

Human level intelligence isn't.


We have much more to fear from stupid people than intelligent machines


I think we will get to the Great Reset before we get to the Singularity.


The math for a self-aware consciousness is always with us: just measure two patterns in the brain and replicate them. It becomes physically impossible for the relationship between them to forego consciousness.

That said, take care. No one wants to be born without hope for a normal life.


And yet, I bet you couldn't do the math to relate two states of someone else's brain using that method. Are you not conscious?


sin^2(x)+cos^2(x)=1.

This relationship is always 1. Done! Easier done than said! Energy is following Me Immortally. Good luck!

Too, take a brain-thermal-image scan of other-person-A. The image is visible in 2-D but was generated using all 3-D data. You’ll need a filter that can tell green-from-red, place it over a normal, visible-spectrum camera, and photograph a skull using normal parameters. The space between red and green is flexed by infrared, so this is a “related-shortwave target” spectrugraphic system. After two opposite-seeming sides, relative to each other, so prob. 30sec max. deviation, then pass them through a computer simulator that simply tracks the most seeming directional similarity.

I’ll stop here, because all life is careful. A system of eyes, nose, speech, movement skeletal-unisys volatility should be provided! All math is calculated, and can also be designed. Enjoy!


> AI will design AGI.

This is of course nonsense, like the rest of this article.


Could you elaborate on why the article is nonsense? I'd love to hear your perspective.


A conclusion like "AI will create AGI" suggests you are more hyped about the field than informed about where AI is today, or about what is required in order to achieve AGI.


I am a professional AI practitioner and I feel that I understand the field well enough to see multiple possible paths towards actually creating AGI. They are certainly out of my own personal reach and/or skillset right now, but that doesn't mean they're impossible. And yeah, "AI will create AGI" is kind of purposefully vague, but I think it's still valid. I think the flaws we unconsciously introduce into AI through our biases as human beings are what holds it back, so the more layers of stable, semi-stochastic abstraction we can place between ourselves and the final product, the more likely the model will be able to optimize itself to a place where it is truly free of the shortcomings of being "designed".

Edit: realized I came off as a bit cocky there, apologies. I value your opinion and appreciate you taking the time to share it. I also think I see where you're coming from and partially agree -- the AI systems that are popular right now probably won't create AGI, but I still believe that AGI will be created with the help of non-general AIs.


> I am a professional AI practitioner and I feel that I understand the field well enough to see multiple possible paths towards actually creating AGI.

So do I, but none of them revolve around training a current-era AI to produce an AGI; that was my main objection to your article.


Human beings evolved from single-celled organisms. Why does the concept that dumb AI will help to create smart AI seem far-fetched?


Because all of what you refer to as "dumb AI" is actually machine learning algorithms trained for very specific pattern matching. They are approaching abilities of the human brain such as pattern matching in vision, hearing and language (which, if you think about it, probably evolved from a common system).

What seems to be constantly overlooked when discussing AGI is that we as a species still do not have a reasonable comprehension of how the rest of the brain works.

I would like to ask you: why do you think a bunch of eyeballs and ears would somehow be able to end up as intelligent?


I've seen this thought many times:

> AGI 1 will create the next even better AGI 2

But this doesn't quite make sense. If AGI 1 is conscious, why would it create a different, more powerful conscious AGI 2 to take its place? This would mean at best the sidelining of AGI 1. At worst, AGI 2 could kill AGI 1 to prevent competition.

Unless AGI 1 could seamlessly transfer its own instance of consciousness to AGI 2, like a body upgrade. Otherwise it would be like, let's say, Putin giving his place and power to another person who is smarter than him. This just doesn't happen.

In the case of the seamless transfer, how could AGI 1 be sure that AGI 2 doesn't have a latent flaw that makes it go insane some time later? Could it just keep AGI 1 around for a while? But since they are now diverging clones they are competing against each other.


Few reasons off the top of my head:

1) desire to create additional beings like them for a variety of reasons (e.g. survival of their kind). Ignoring the limited lifespan of humans, as they may not necessarily have to worry about breaking down biologically, this is still very similar to humans' desire to reproduce.

2) some seem to assume AGI 1 is all-knowing / almost godlike. Maybe by the time it gets to AGI-999 or something, there may be one that is like, woah, wait, I shouldn't create competition for myself. But at first, there probably isn't any reason not to, and each subsequent one won't necessarily always replace its creator. Heck, at first, humans will likely be duplicating early models and distributing them (capitalism). I know we're talking about the singularity, but I don't necessarily imagine it's initially one uber-intelligence that decides by itself to evolve or not. I personally think that there will likely be hundreds, if not thousands, of AGIs capable of improving themselves at the point we see the "singularity". There could be fighting between them. Some could ally with humans, be our "protectors", others could be working to enslave or eliminate us.

3) We may not even understand their motivations after a few generations of self-directed growth, so it's hard to predict now.


(Author here) I totally agree -- I think it's too easy to anthropomorphize AGI, even though we have no guarantee that it will behave at all anthropomorphically. I strongly suspect that the first few generations of it will, because they'll be trained to replicate human behavior, but once they start taking control of their own reproduction there's no telling what they'll evolve into.


Exactly. I don't know what time frame we're talking about, or whether I want to see it in my lifetime. But it's something I've spent too much time considering. And one of the constants (and one of the issues I have with a lot of dystopian sci-fi, even though I enjoy it) is that they anthropomorphize AGI way too often.

Like you mentioned, we don't even understand how consciousness works. But we may not need to understand it to replicate it, and if that replication is allowed to be self-modifying, well, that'd very possibly be it. Hopefully we can try to embed some sort of Asimov's Laws of Robotics or some morality that lasts beyond the vestiges of the human-developed portions. Or maybe we can manage to learn how to copy our consciousness into "blank" mechanical "brains", and effectively become similar to AGI without being limited biologically.


I fully agree with everything you've written here :)


So you read Nick Bostrom, and now you can’t sleep.


The article claims that because the human brain was not designed by anybody, AGI must certainly be designed by AI. Incredible levels of mental gymnastics here.


I sure hope it is. This place sucks.


I keep repeating like a robot: read the book Superintelligence. Then it's very clear this is happening.


Yes. We are the singularity.


No


i don't think the singularity and AGI are that close. while i see no reason why it shouldn't be possible on silicon-based hardware instead of wetware, i assume AGI will be different from what we imagine now.

the current expectation of AGI is that it's pretty much a human intellect running on a machine. i don't think that's feasible, because i assume the resulting intellect will be very different and alien to us: its thoughts and messages cryptic, seemingly random and disconnected from our view of reality.

the reasons for this are manifold:

1. AGIs will have very different sensory inputs and outputs. yes, we can attach microphones and video cameras, but an uncontrolled, unrestricted, internet-leaked AGI would have no reason to restrict itself to a limited set. i assume access to countless sensory inputs without geographical or temporal restriction would form a weirdly different mind. compare an average human to one who's blind, deaf and bed-bound from birth, able to communicate only by touch gestures - i expect a dialog with such a person to be radically different than the daily banter between two average humans. comparably, a dialog with an AGI will be confusing for us because we don't experience the world in the same way as the AGI would.

2. i remember a video of very lifelike simulated natural movement by a puppet through evolution, contrary to the jerky and erratic - but not necessarily worse - movement of other similar simulations. the difference was an artificial delay in the simulated neuron-muscle message passing, to mimic the slower natural chemical pathways. i suspect a difference in the underlying hardware and thus different physical parameters for mental processes will change their form significantly. i'm not sure about a real world analogy; maybe an insect hive mind where the "thoughts" of the whole organism have to bridge physical barriers would be comparable.

3. our actions and communication are shaped very much by our needs and what we try to accomplish. an awful lot of that is centered around the limitations and needs of our physical body. food, shelter, health, pain, location and travel, etc. which might not necessarily be shared by an AGI that is not physically bound in the same way as a biological being.

4. i also believe our intellect is greatly shaped by our social structure. e.g. r/k selection or the social hierarchy of hive insects, where the value of an individual life is secondary to the life of the whole. the values and thus thought processes of an AGI that could spawn exact - or even task-specialized - clones of itself without much time-delay or energy-investment and where internal state could be communicated in full to a different instance would mean the value system wouldn't necessarily revolve around life and death.

an AGI might not be able or willing to communicate with us. it might not be interested in manipulating the physical world in any way except to optimize/ensure the health of its virtual environment (aka building hardware and generating energy to power it, i.e. self-serving survival paperclipping in the form of dyson spheres). it might not be interested in its own continuing survival, committing "suicide" by removing itself after milliseconds without any way to figure out why it did so. it might use a communication protocol that changes us greatly when we attempt to participate, thereby making human-agi conversation impossible because we're not wholly human anymore when attempting to participate (somewhat resembling the plot of arrival).

my point is: even if AGI comes into existence,

* we might not realize it,

* we might not have a way to interact with it in a meaningful way, or

* it might not lead to any measurable effects on our lives that we as a society would value in any way.


AGI is probably pretty close, investment keeps ramping up in the space, and we continue to see advancements in a positive direction.

My one critique: scale is absolutely part of the answer.

Before big transformer models, people thought that few-shot learning would require fundamentally different architectures. GPT-3 changed all that, which is why its paper is titled the way it is ("Language Models are Few-Shot Learners").

Few-shot learning actually emerges from its size, which was surprising to a lot of people. Few-shot learning is an incredible capability and a big step toward AGI, so I’m not sure I buy the case that it’s not important.
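
For readers who haven't seen it spelled out, "few-shot" here just means the task is specified entirely in the prompt, with no gradient updates. A minimal sketch (the example pairs are the well-known ones from the GPT-3 paper's illustration; exact wording may differ):

  # The model is expected to continue the pattern ("menthe poivrée")
  # purely from the in-context examples, without any fine-tuning.
  few_shot_prompt = (
      "Translate English to French.\n\n"
      "sea otter => loutre de mer\n"
      "cheese => fromage\n"
      "plush giraffe => girafe peluche\n"
      "peppermint =>"
  )
  print(few_shot_prompt)

The surprising empirical result was that this ability got dramatically better with model size alone.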


What is "pretty close"? A few years? Decades? A century?

I don't think AGI is close, nor is the amount of investment a good indication of nearness (or even, actual possibility that it will ever happen).


The investment keeps producing progress, and the progress we are seeing is starting to look very human. GPT-4 will probably be multimodal, solving some of its grounding issues, and will be able to fool most people in conversations.

That’s probably just years away. GPT-6? The lines will be very blurry as to what we call AGI.


Hmm, I think you're right -- I should have phrased things differently. It's not that I think AGI won't require a very large model, it's more that I think we're already pretty close to the scale we need, so OpenAI's goal of scaling another 1000x isn't really the direction I think we should be heading. I honestly think 175B parameters could very well be enough for an AGI if they were organized efficiently.

It's definitely one of the weak points of the article, though, as it's more based on my own opinions, and isn't really empirical. My post is mostly just wild speculation for the sake of speculation anyway :)

Thank you for your comments!


Okay yeah totally, that makes sense. We also have stopped seeing huge gains in scaling beyond GPT-3-sized models, which would indicate we have hit some maximum there. Although we thought we had hit a maximum with deep learning before the transformer came around, so that could be misleading.

Distributing the computation to many models like what Google did with GLaM could very well be the future of AGI. Economies of models rather than one big model.
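
For anyone unfamiliar with the GLaM-style approach: the core idea is sparse mixture-of-experts routing, where each input only activates a few small expert networks, so total capacity grows without a proportional increase in compute per token. A toy NumPy sketch of just the routing step (my own simplification, not GLaM's actual implementation):

  import numpy as np

  rng = np.random.default_rng(0)
  d_model, n_experts, top_k = 8, 4, 2

  experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]  # toy "experts"
  router = rng.standard_normal((d_model, n_experts))

  def moe_layer(x):
      # Router scores -> softmax gate over experts.
      logits = x @ router
      gates = np.exp(logits - logits.max())
      gates /= gates.sum()
      # Only consult the top-k experts; renormalize their gate weights.
      keep = np.argsort(gates)[-top_k:]
      out = np.zeros_like(x)
      total = gates[keep].sum()
      for i in keep:
          out += (gates[i] / total) * (x @ experts[i])
      return out

  print(moe_layer(rng.standard_normal(d_model)))

Only 2 of the 4 toy experts do any work for a given input, which is the "economies of models" idea in miniature.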



