Because we integrate, learn, plan, abstract, create, have way more senses, complex memory, societal and cultural embeddings, and a whole host of things that make us human. A text box on Hacker News is not the place for such a reply. Go take some cognitive science, neuroscience, anthropology, sociology, psychology, and philosophy classes and you’ll see why. “AI” as we know it is a joke, particularly as the media has hyped it.
The original guys (McCarthy, Minsky, etc.) at least understood the gigantic complexity of the undertaking, insofar as we understand how the mind works at all, and they never reduced things purely to classifiers. That's like 0.1% of the job. The real researchers take philosophy of mind seriously; technologists want a quick and cheap solution that cannot exist.
This is way more than what's required for sentience though. Sentience is having a notion of self; many animals are sentient.
Maybe you meant sapience or intelligence, but I wonder whether people think computers won't be sentient (as in self-aware) in our lifetime, even if they still can't match human cognition.
> Because we integrate, learn, plan, abstract, create, have way more senses, complex memory, societal and cultural embeddings, and a whole host of things that make us human.
And yet all this complexity and purpose may simply be the product of putting a large collection of matter in a heat bath for a few billion years.
Who says that this process can't be repeated via brute-force simulation? That the emergence of complexity and life under thermal disequilibrium, given a sufficient number of degrees of freedom, isn't actually a basic fact of the universe?
I don't see any rule saying why, with sufficient brute-force computational power, we won't be able to set up better and better simulations that allow true artificial intelligences to arise.
I'm sure people smarter than me have taken stabs at estimating what's required to do this, so I can't say for certain that this will or will not ever happen (especially with quantum computing). But do consider:
We think the brain has about 100 billion neurons. Making those work is a very complex network: lots of chemical gradients, bio-electrical systems, cellular-level machinery, and probably a host of other things we don't know about yet. So it wouldn't surprise me if, for a single neuron, you'd need to model another billion things at minimum.
Now factor in all of the other input senses, and the work for those 100B neurons grows dramatically just for I/O, plus a multiplier to model everything going on inside each neuron. Then there's also modeling the communication layer between them: the effects of myelin sheaths and all that.
So we've established that just to run a single “step in time” (whatever that means for a brain; it's a computing concept), we're probably into billions of trillions of calculations.
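For what it's worth, here's that arithmetic written out as a tiny Python sketch. The per-neuron multiplier and the I/O factor are my own assumptions from the paragraphs above, not established numbers:

    # Back-of-envelope: per-step cost of a naive whole-brain simulation.
    # All inputs are assumed figures from the discussion above.
    NEURONS = 100e9          # ~100 billion neurons (commonly cited estimate)
    PER_NEURON_STATE = 1e9   # assumed: ~1 billion quantities modeled per neuron
    IO_MULTIPLIER = 10       # assumed: extra factor for senses and I/O wiring

    ops_per_step = NEURONS * PER_NEURON_STATE * IO_MULTIPLIER
    print(f"operations per simulated step: ~{ops_per_step:.0e}")  # ~1e+21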
Now all of those models have to make sense and do something coherent, at every future step. So you need some kind of meta-model that can push the whole thing forward and keep it on the right trajectory. Now the complexity is dramatically higher still.
Now that we’ve somehow figured out how to model everything required for intelligence, we need to search that space through brute force, as you put it.
To me it seems like the time required will be astronomical; the Sun might have blown up by then.
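To put a rough number on “astronomical”: carrying the same estimate one step further, with an assumed 1 ms timestep and an assumed dedicated exascale machine (both illustrative figures, not established ones):

    # How long does one second of simulated brain time take?
    # All inputs are assumptions for illustration only.
    OPS_PER_STEP = 1e21    # from the estimate above
    STEP_SIZE_S = 1e-3     # assumed: 1 ms of brain time per simulation step
    MACHINE_FLOPS = 1e18   # assumed: an exascale machine running flat out

    steps = 1 / STEP_SIZE_S                       # 1,000 steps per brain-second
    wall_clock = steps * OPS_PER_STEP / MACHINE_FLOPS
    print(f"wall-clock per simulated second: ~{wall_clock:.0e} s")  # ~1e+6 s
    # Roughly 12 days of compute per second of simulated brain time,
    # before any brute-force search over candidate configurations.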
So why go down this route? Nature solved this in parallel, relatively cheaply, through billions of iterations and billions of permutations over billions of years.
Seems easier to simplify the problem down to more basic stuff that works well enough and iterate / combine systems. Either that, or find a way to grow a “brain in a test tube” that you can hook up to a computer. There are also always humans available at mass scale pretty cheaply across the globe, with sophisticated intelligence built in.
> And yet all this complexity and purpose may simply be the product of putting a large collection of matter in a heat bath for a few billion years.
It doesn't have to be. There are alternatives to that.