
The article stands in direct contradiction to your supposed summary of it. To wit:

> Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation.

> [Turing] concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written. This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken.

etc etc etc. He literally spends more than 60% of the (very long) essay arguing against your "summary".

Anyway, a real tl;dr is right at the top of the page: "Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough." and then the last sentence: "it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever."

In other words, AGI is provably possible, but the author believes that we're going about it all wrong (behaviorist-inspired neural nets running on training sets, etc.) and need a philosophical (specifically: epistemological) breakthrough to move forward.




The last thing that AGI needs is more philosophical thinking. Philosophy and the Turing Test have done enough damage to the field.

Philosophers cannot seem to comprehend how a mass of meat can create consciousness, and so take leaps to try to rule out the mass of meat, regardless of the fact that they all know that when bad things happen to said mass of meat, bad things also happen to said consciousness.

The Turing Test could not have been a more wrong direction to define AGI. Looking like something does not make you that thing, and it did not at all address capability, just the appearance of capability.

Insight into intelligence will come slowly, and it will come from examining, categorizing, and eventually understanding the internal workings of the human brain.


This is such a short-sighted and closed-minded view of philosophy, it boggles the mind.

> Philosophers cannot seem to comprehend how a mass of meat can create consciousness, and so take leaps to try to rule out the mass of meat, regardless of the fact that they all know that when bad things happen to said mass of meat, bad things also happen to said consciousness.

You seem to be referring to dualism, a philosophical idea that has been largely discredited among philosophers since the early 1900s. Functionalism, emergentism, materialism, and all the other leading serious philosophies of mind that implicate the brain were first formalized and studied by philosophers, not neuroscientists.

> The Turing Test could not have been a more wrong direction to define AGI. Looking like something does not make you that thing, and it did not at all address capability, just the appearance of capability.

And philosophers are among the most vocal critics of the Turing Test. The Turing Test was devised by Turing, who was first and foremost a mathematician and logician, not a philosopher of mind.

> Insight into intelligence will come slowly, and it will come from examining, categorizing, and eventually understanding the internal workings of the human brain.

Which, of course, is what modern philosophers of mind do.


As I already stated, the primary rebuttal to the Turing Test was the Chinese Room. It was a proper response to a horrible idea.

Then what did philosophers do? Well, they applied the Chinese Room argument to all of consciousness, a blunder even worse than the Turing Test.


> Philosophers cannot seem to comprehend how a mass of meat can create consciousness, and so take leaps to try to rule out the mass of meat, regardless of the fact that they all know that when bad things happen to said mass of meat, bad things also happen to said consciousness.

Neither can anyone else, so far.

> Insight into intelligence will come slowly, and it will come from examining, categorizing, and eventually understanding the internal workings of the human brain.

Insight -- yes. AI -- not so certain. Here's why: it's plausible that intelligence in the human mind emerges as a consequence of an intractable process. We might not be able to understand that intractable process. But suppose that instead of understanding it, we mimic it: we build a replica of the human brain, and it works! That would not quite be artificial intelligence, though, but a replica of natural intelligence. This is not just a semantic difference: even if this mimicry works, we may not be able to direct it in any way. For example, we won't necessarily be able to give this brain replica super-human intelligence without, say, giving it a mental illness at the same time.


> Neither can anyone else, so far.

But most AGI researchers don't throw their hands up and say, "Oh damn, it's impossible (intractable), it's magic because we don't understand it." Forgive me if I feel this is a less-than-useful sentiment.

As far as the possibility of intelligence being intractable, what are the chances of this? Is it likely? If we don't even know what AGI entails, how would we possibly put odds on this? If we don't have the odds, what use is there in thinking about it? Should we stop researching AGI because of the possibility that it might be too much for us to handle?


> But most AGI researchers don't throw their hands up and say, "Oh damn, it's impossible...

Neither does the author of the article.

> As far as the possibility of intelligence being intractable, what are the chances of this?

I think the odds are very good. Every known complex system -- the weather, for example, or an ant colony -- is intractable, let alone the human brain.

> Should we stop researching AGI because of the possibility that it might be too much for us to handle?

Intractability doesn't mean it's too much for us to handle. We can simulate intractable processes, and we do it all the time (that's how the weatherman knows if it's going to rain tomorrow), so of course it can be studied. The problem with intractable processes is not that they can't be artificially created, but that they can't be artificially controlled (or predicted) beyond a very short time frame. So it's very possible we could build an AGI, but won't know how to make it any smarter than us.
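
A toy sketch of that simulate-versus-predict point, with the logistic map standing in for any chaotic process like the weather (an illustrative example of my own, not from the article): we can step the system forward exactly, yet two runs whose starting states differ by one part in ten billion disagree completely within a few dozen steps, so prediction and control beyond a short horizon are hopeless even though simulation is easy.

    # Simulating a chaotic process is easy; predicting it far ahead is not.
    def logistic(x, r=4.0):
        # One step of the logistic map; r = 4 is the chaotic regime.
        return r * x * (1.0 - x)

    a, b = 0.4, 0.4 + 1e-10  # two starting states, differing by 1 part in 10^10
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: A={a:.6f}  B={b:.6f}  gap={abs(a - b):.2e}")

    # By roughly step 40 the two runs bear no relation to each other: the process
    # can be simulated step by step, but not predicted or steered beyond a short horizon.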


> So it's very possible we could build an AGI, but won't know how to make it any smarter than us.

Make it as smart as the smartest of us and then it'll figure out how to become smarter on its own. Then ask it to explain to us how we work.

Tongue in cheek, but once an almost-as-intelligent-as-a-human level is reached, the next step is not that far-fetched - quantity, learning multiple times faster than humans, and we have progression on a different scale. Or we can always try mutating the thing to see what comes out - crude, but it worked at least once.


> once an almost-as-intelligent-as-a-human level is reached, the next step is not that far-fetched - quantity, learning multiple times faster than humans, and we have progression on a different scale.

But this doesn't necessarily follow. The experience of a mind working at a higher speed will be a slowing down of time -- imagine what you'd do if time slowed down. The machine will not necessarily learn more (we might have a limit on how much we can learn), and will probably experience boredom and frustration (as everything around it will be slow) that might drive it crazy.


I don't expect professional philosophers will contribute much either. But it's plausible that a philosophical advance (a new way of thinking) within an existing field such as computer science will lead to the breakthrough. Deutsch has claimed elsewhere that all great advances in science are like this. For example, Darwin's Theory of Evolution represented a fundamentally new mode of explanation.



