Depending on your belief system, the answer ranges from "of course you can" to "no, just no" and everything in between.

From a scientific/engineering point of view the answer is yes in principle, but we don't quite know how yet. The slightly longer answer is that whatever consciousness is, it appears to be an emergent property of a bit of wetware. We can currently simulate that wetware only partially, very imperfectly, and at a very modest scale, and those simulations show no signs of being conscious. We know the brain is responsible for this stuff because damaging it or manipulating it chemically changes people's personality, sense of self, mood, etc. We can even read out parts of the brain and interface with it at a primitive level.

Scaling all that up is an engineering challenge with a mildly predictable roadmap over the next couple of decades, and a far more speculative one beyond that. We keep getting better at hardware, and at some point the complexity of the hardware exceeds that of the wetware.

However, fixing our algorithms and simulation detail (i.e. how this hardware is wired together) is a different matter. As my neural networks teacher used to joke: "this is a linear algebra class; if you came here for a biology lecture you are in the wrong place". Full disclosure: I dropped out of that course because I was a bit out of my depth on the math front. But simply put, the math behind this stuff is a vast simplification, based on a naive model of what a brain cell might do, that happens to produce interesting results for specific use cases, and those use cases so far have very little to do with emulating consciousness.
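
To make that simplification concrete, here's a minimal sketch (my own illustration, not from any particular course or paper) of the standard artificial neuron: a weighted sum pushed through a nonlinearity. Everything a real neuron does with ion channels, dendritic trees, spike timing, and neuromodulators gets collapsed into a couple of lines of linear algebra:

    import math

    def artificial_neuron(inputs, weights, bias):
        # The textbook "neuron": dot product plus bias, squashed by a sigmoid.
        # A biological neuron has on the order of 10^4 synapses plus dendritic
        # computation and spike timing; all of that is abstracted into this.
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # Illustrative values only:
    print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))

That this crude abstraction works so well for image recognition and the like says more about the power of optimization at scale than about how faithfully it models biology.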

There seem to be lots of researchers assuming other researchers are actively working on that, but mostly what is going on is people trying to get practical short-term results. Deep learning is a good example. It might have emergent properties, if you scale it up, that resemble something like consciousness. But doing that, or validating that assumption, is not something a lot of people actually work on, nor is it a goal for most AI researchers. Their goal is simply to figure out how to get this stuff to do things for us (image recognition, playing Go, etc.).

But do we actually need to be exact with our modeling here? Mostly our brains seem to self-organize from information built into our DNA. Those blueprints are a few orders of magnitude less complex than the end result. And we know that personalities can differ widely for the same DNA (e.g. identical twins).
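
A quick back-of-envelope calculation (my own numbers, using commonly quoted order-of-magnitude estimates) makes that gap concrete:

    # Rough estimates only; all figures are commonly quoted orders of magnitude.
    genome_base_pairs = 3.1e9                  # human genome length
    genome_bytes = genome_base_pairs * 2 / 8   # 2 bits per base -> ~775 MB
    synapses = 1.0e14                          # ~100 trillion synapses

    # Even at a single byte per synapse, the wiring dwarfs the blueprint:
    print(f"genome: ~{genome_bytes / 1e6:.0f} MB")
    print(f"synapses at 1 byte each: ~{synapses / 1e12:.0f} TB")
    print(f"ratio: roughly {synapses / genome_bytes:.0e}")

That's roughly five orders of magnitude, before counting any per-synapse state, and most of the genome isn't brain-specific anyway. So the brain clearly isn't spelled out in the blueprint; it grows from a much smaller recipe.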

The way brains work is biochemically convenient given the constraints under which life emerged. But if you remove some of those constraints, there are probably other ways to get similar enough results.

IMHO a clean-room replication of a brain-like AI is unlikely to happen before we manage to drastically enhance the capabilities of an existing brain, which is a much easier engineering challenge. Take that to the extreme: at what point is the wetware no longer essential, and what happens when it is disconnected? Once you enhance or replace most of a brain, where does the resulting conscious hybrid entity begin and end? That seems a more likely path to producing a conscious AI. Experiments on that front are likely to be extremely unpopular for a while given the risks. But at the same time, a lot of this is already happening on a small scale.
