
I think we need more advances in neuroscience and, I know this will be controversial, psychology before we really know what the cake even is.

Edit:

I actually think the major AI breakthrough will come from either of those two fields, not computer science.




I disagree. In engineering we've made great advances precisely by not reproducing biology exactly, in cases like transportation (wheel vs. legs) or flight (engine and fixed wing vs. flapping feathers). We still can't mass-produce synthetic muscles, so we use geared motors instead.

The growth in building increasingly sophisticated AI is faster than our efforts to reverse engineer biology. I could see that changing with improved observational techniques like optogenetics or bacteria and viruses we can "program" to explore.

Researchers are already focusing on concrete insights from cognitive science, neuroscience, etc., such as one-shot learning or memory, which we haven't yet figured out how to fit into a cohesive machine learning framework. For the time being I'd bet on more advances coming without significant changes in biological understanding.


I think you're right, and AlphaGo is an example. The big advancements in flight give us supersonic jets far faster than birds, but we've only recently developed microdrones that can land on a tree branch. We have wheeled vehicles far faster than horses, but haven't got a single vehicle that can complete an Eventing course. So in AI we have computers far better at Chess than any human, far better at indexing and searching databases, and now (or certainly very soon) far better at playing Go.

But we are still nowhere near developing a computer that can learn to play Monopoly, Risk or Axis & Allies just from reading the rule book and looking at the components. If you aim your machine at a very narrow subset of a problem, you can optimise it to amazing degrees, far exceeding humans or nature. But developing a machine that has the whole package is staggeringly hard in comparison.

But you know what? That's fine. Special purpose, single domain tools are fantastically useful and are easier to make highly reliable, with well understood limitations.


I feel that your analogy does not disprove OP's claim. You would first have to prove that the analogy is actually applicable. Theoretically, one could argue that motion is substantially different from intelligence such that the analogy does not hold.

But I am with you in that engineering / CS / ... should not wait for neuroscience to make further discoveries but continue the journey.


Interesting point, but which do you think is a more reasonable prior: that intelligence is a deterministic process similar to others we've encountered, or that there is something uniquely different about intelligence compared with other physical phenomena? I think the latter requires more assumptions, so I would argue philosophically it is the one that requires the proof or more evidence :)

Regardless, my feeling is that there is a healthy dose of human hubris around intelligence. If I train a dog to go fetch me a beer from the fridge, that seems pretty smart. It learned how to understand a request, execute a complex sequence of actions for motion and planning, reason around occluded objects, understand depth and three-dimensional space, differentiate between objects, and more, without me writing a sequence of rules to follow. I'd be happy to have a robot that intelligent. Plants don't have brains or neurons, yet they react to sensory stimuli such as light, touch, or sound, communicate, and even have memory and learn. It's not at a scale to do something interesting within one plant, but communities of plants are arguably the most successful organisms on the planet.

Andrew Ng likes to point to a ferret experiment [0] where experimental neuroscientists rewired the visual inputs to the auditory cortex, and the auditory cortex learned to "see" the visual signals! This suggests that there may be some amount of unified "learning" rules in the brain. Biology is never so clean, but if humans have whatever this intelligence thing is that lesser organisms lack, there is another angle to look at things. We have a lot of neurons, which suggests less per-neuron specialization than in, say, C. elegans; basically, large populations of neurons perform tasks that in lesser creatures single or few neurons may perform. While the trees are complex and important to understand for biology and medicine, the forest may have some high-level rules.

Looking at something that appeared intelligent 50-100 years ago but seems mechanical now, we have text-to-speech. NETtalk was a simplified computational neuroscience model from the 80s that could synthesize human speech. Today we have far better quality techniques that came out of R&D focused on things like large high-quality labeled datasets for training, better soundcards, more processing power, and algorithmic improvements. Researchers didn't continue trying to model the brain and instead threw an HMM and a couple of other tricks at it. Now we're going full circle back to neural networks, but they aren't using any advances from biology and certainly aren't produced by computational neuroscientists like Terry Sejnowski.

It's funny because at the time of NETtalk they thought that learning to read would be an extremely hard problem because it incorporates so many components of the human brain [1]. While it certainly wasn't a trivial problem, state-of-the-art OCR and object recognition came from similar artificial neural networks a decade later with LeNet and MNIST*. And no, ANNs != biological neuronal networks. The models of computational neuroscientists are different; for example, look at [2, 3] for high-level models or [4] for a tool.

Now I'm even more convinced than before that understanding the brain is great for humanity but wont be necessary for building intelligent systems that can perform tasks similar to biological ones.

[0] http://www.nature.com/nature/journal/v404/n6780/full/404871a...

[1] https://en.wikipedia.org/wiki/NETtalk_(artificial_neural_net...

[2] http://science.sciencemag.org/content/338/6111/1202

[3] http://ganguli-gang.stanford.edu/pdf/InvModelTheory.pdf

[4] http://neuralensemble.org/docs/PyNN/index.html

* Perhaps you could argue that convolutions are loosely inspired by neuron structure, but that sort of knowledge had existed for quite some time, with inspiration arguably available in Camillo Golgi's amazing drawings of neurons from the 1870s, let alone the 1960s-80s. It's telling that papers on CNNs have little to no neuroscience and a lot of applied math :)
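
For what it's worth, here's a minimal sketch (my own illustration, not taken from any of the cited papers) of what a convolution is once the biology is stripped away: just a sliding dot product of an image patch with a kernel. The Sobel-style kernel below is an arbitrary example.

    import numpy as np

    def conv2d_valid(image, kernel):
        # each output pixel is the dot product of an image patch with the kernel
        kh, kw = kernel.shape
        out_h = image.shape[0] - kh + 1
        out_w = image.shape[1] - kw + 1
        out = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.random.rand(28, 28)            # an MNIST-sized input
    kernel = np.array([[1., 0., -1.],
                       [2., 0., -2.],
                       [1., 0., -1.]])        # Sobel-style edge filter
    print(conv2d_valid(image, kernel).shape)  # -> (26, 26)

A CNN just learns the kernel values by gradient descent instead of hand-picking them.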


I did not mean to have implied that I think there is anything magical about intelligence. Of course, it is based on physical phenomena. I am doing my PhD right now and I try to incorporate as much AI/ML into it as possible.

What I meant to say is that our ANNs are such ridiculously simplified versions of real neural networks that there might still be something to be learnt from the real brain. That is not to imply that to achieve intelligence, the solution necessarily has to mimic a biological brain.

(Thank you for your detailed response. I love to read about this stuff!)

edit: missing word


That's equivalent to saying that advancements in airplanes will come from biology rather than from engineering fields. Biology at best can give hints to improve aerodynamics, and even that can be solved better mathematically. The same will be true of neuroscience.

I believe it's more likely that engineering of AI will bring new ideas to neuroscience instead, just like after building helicopters we gained some intuition and understanding on why certain features of dragonflies exist.


The Wright brothers studied bird flight extensively, and drew important ideas from birds. An aeronautical engineer today has the luxury of a more mature field, and can probably afford to put less thought into bird flight.

Despite the history and significant progress in AI we still don't know that much about what approach will result in the first strong AI, or even if it's possible to make one. In an important sense AI is more like aeronautical engineering in 1902 than aeronautical engineering today, so it's possible that better understanding of biology will result in an important innovation.


> I know this will be controversial, psychology

Why would that be controversial? It seems to make extremely good sense and even though it may be doubted in some circles I think that most people involved in AI research are painfully aware of our limited understanding of our own psyche.


I disagree with hacknat (and this, I suspect, is the reason it's controversial): psychology is generally far too high-level (e.g. focused on things like traits, high-level behavior and vague "perceptions") and has far too little empirical rigor for engineers to be able to build any sort of actual system from insights gleaned from it. The recent replication crisis in psychology does little to help this reputation.

Neuroscience does concern itself a great deal with low level biological mechanisms and architectures, and is more amenable to cross-pollination with machine learning.

Though I would like to point out that thus far deep learning has taken few ideas from neuroscience.


Neural networks are the big driver and arguably were taken from neuroscience.

The reason why psychology is 'too high-level' is exactly what is meant by 'we don't understand it'. We're approaching the psyche at the macro level of observable traits, but there is a very large gap between the 'wiring' and the 'traits'. Some of that gap belongs to neuroscience, but quite possibly the larger part belongs to psychology. The two will meet somewhere in the middle.

A similar thing happens in biology, with DNA and genetics on the one side and embryology on the other.


They were taken from neuroscience... 50 years ago. Since then, very few ideas have been explicitly taken from neuroscience.


Exactly, and even then, half of the details of how real neurons work were ignored or discarded. NNs are now often taught by sticking closely to the math, avoiding the term and any reference to biology completely.


Does it matter? As long as the original source for the idea is remembered I don't particularly care which field originated it. There are so many instances and examples of cross pollination between IT and other sciences that in the end whether or not we stick to the original in a literal fashion or not should not matter (and if we did we'd probably lose out on a good bit of progress).


That's like saying there has been no development in automobiles since 1890. Sure, at first glance the cars from then are still like the cars from now. ICE (or electrical) power source, maybe a gearbox, some seats, a steering wheel and something to keep you out of the weather. But that's at the same time ignoring many years of work on those concepts in order to further refine them.

The neural networks that were taken 'explicitly from neuroscience' have gone through a vast transformation and that + a whole lot of work on training and other stuff besides is what powers the current crop of AI software. All the way to computers that learn about games, that label images and that drive cars with impressive accuracy to date.

The problem is - and I think that is what the original question was about - that neuroscience is very low-level. We need something at the intermediate level, a 'useful building block' approach if you will: something that is not quite a fully formed intelligence but also not as basic as plumbing and wiring.


This analogy doesn't make any sense to me. Automobiles were never patterned on anything in biology (they were an improvement on horse-drawn carriages), so I'm not sure what point you're trying to make with it.

I never said there weren't any developments in neural nets; I'm just saying that few ideas have been taken from neuroscience (there certainly have been some, like convolutions). In fact, most of what a neural net does (including most of the tricks in convolutional neural nets in their current state) we know the brain does not do.


All it took for us to make aircraft was to look at birds and to realize something could be done (flying even though you weigh more than the air). The first would-be aircraft looked like birds, and some even attempted to work like them. Most of those designs went nowhere. Just like legs got replaced by the wheel in ground transportation, aircraft engines and eventually jets replaced the muscles of birds. It doesn't really matter what sets you on a particular path; what matters is the end result.

Right now, we have a state of AI that is very much limited by what we know about how our own machinery works. Better understanding of that machinery (just like better understanding of things like aerodynamics, rolling resistance and explosions led to better transportation) will help us in some way or other. And it may take another 50 years before we hit that next milestone, but the current progress all leads more or less directly back to one poor analogy with biological systems. Quite probably there are more waiting in the wings; whether through literal re-implementation or merely as inspiration, it doesn't really matter.


> It doesn't really matter what sets you on a particular path; what matters is the end result.

To put it in other words: Birds are limited by evolution. They are not an optimal design - they are a successful reproductive design in a wide ecosystem where flying is a tool.

Our intelligence is no different.

This is something Feynman addressed in this beautiful talk (in the Q&A iirc): https://www.youtube.com/watch?v=EKWGGDXe5MA


I like the comparison to developing wings. What's interesting about the development of plane wings is that, while we used the underlying physics of how wings work to make them, a plane gets its thrust differently from a bird, and looks and flies differently. Flapping wasn't very useful to us for planes; what things about the way minds work will not be useful for AI? I think once we understand the algorithms behind AI / intelligence / learning, what we choose to build may be very different from what a lot of people currently imagine AI or robots will be like.


> and has far too little empirical rigor for engineers

I think this is one of the key points of the problem: we, as engineers, are expecting this problem (imitating the human psyche / getting to true AI) to be rigorously definable. What if it can't be?

I'm not a religious person and I don't generally believe in something inside us that is "undefinable", but looking at our cultural and intellectual history of about 2,500 years I can see that there are lots of things that we haven't been able to define rigorously, but which are quintessential to the human psyche: poetry, telling jokes, word puns, the sentiment of nostalgia, and I could go on and on.


If anything, the replication crisis in psychology is proof of just how little we really know about intelligence. I think psychology is necessary because it needs to come up with an adequate definition of what intelligence is first; then neuroscience can say, "this is how that definition actually works", and then engineers can build it.


Even now, deep learning is basically 1980s-era backprop with a few tweaks and better hardware - it turns out that despite decades of backlash against it, plain old backprop pretty much is the shit, as long as you have a lot of compute power and a few tricks up your sleeve!
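
To make "1980s-era backprop" concrete, here's a minimal sketch (my own toy example, not anyone's production setup): a single hidden layer trained with plain gradient descent to fit XOR. The layer sizes, seed and learning rate are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: chain rule, layer by layer (squared-error loss)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # plain gradient descent updates
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]

Everything since then - ReLUs, better initialization, GPUs, far bigger datasets - is the "tweaks and better hardware" part.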


Well, it just seems unlikely that psychology will produce significant insights here, because it mostly just looks at how the brain behaves. While this can be insightful, I doubt that it will explain intelligence in the end, because that happens a layer below. That's precisely what neuroscience covers. The other approach (just thinking about the problem and trying to build an AI from first principles) is CS.

So I would expect strong AI from CS, possibly with a paradigm shift caused by an advance in neuroscience.


I think someone needs to come up with a good theory of what intelligence even is, then we can try to discover its mechanism(s).


Here are four potential definitions:

1) Acting like a human (this is the Turing test approach)

2) Thinking like a human (this is cognitive science, and for now is focused on figuring out how humans think)

3) Thinking rationally, i.e., following formal logic (the difficulty here is encoding all the information in the world as formal logic)

4) Acting rationally. That is, entities that react rationally to their goals (this one is notable because it allows fairly stupid entities).

These are all explained in more detail in Artificial Intelligence: A Modern Approach by Russell and Norvig.


Those aren't broad definitions at all! Quick question though, what is "acting", what is "thinking", what is "rational"? Your definition of intelligence practically includes the term in its definition(s).


I like the acting human approach (Turing) because it can be stated more precisely. One test is the classic Turing test -- fool a human judge.

Another line for acting human would be the ability to self-direct learning in a variety of situations in which a reasonably intelligent human can learn. That means a single algorithmic framework that can learn Go, navigate a maze, solve Sudoku, carry on a conversation, and decide which of those things to do at any given time. The key is that the Go-playing skill would need to be acquired without explicitly programming for Go.

I believe a lot of our intelligence is the ability to perform solved AI problems given the situation. The key is combining those skills (whether as a single algorithm or a variety of algorithms with an arbiter) and the ability to intelligently direct focus. That's why most researchers aren't confusing AlphaGo with general intelligence. It can play Go - period.


You can go to the source I cited to find further explanation.


AI has some good definitions as far as intelligence is concerned. Perhaps you are worried about consciousness or something, but this is not needed for a definition of what intelligence means.


Shane Legg (a DeepMind founder) and Marcus Hutter collected and categorized 70 different definitions of intelligence: http://arxiv.org/abs/0706.3639

Their attempt at a definition that synthesizes all the others is:

> Intelligence measures an agent's ability to achieve goals in a wide range of environments. - S. Legg and M. Hutter
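
In their related paper "Universal Intelligence: A Definition of Machine Intelligence" they also formalize this, roughly (my paraphrase from memory, so check the paper for the exact statement) as

    Upsilon(pi) = sum over mu in E of 2^(-K(mu)) * V(pi, mu)

where pi is the agent, E is a space of computable reward-giving environments, K(mu) is the Kolmogorov complexity of environment mu, and V(pi, mu) is the agent's expected cumulative reward in mu. Simpler environments get more weight, and what counts is performance across many of them.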


Just like ornithologists invented the first airplane, and horse vets built the first automobile, etc.


While they weren't card-carrying ornithologists, the Wright brothers studied bird flight extensively and explicitly used it for their designs.

But they concluded that the flapping motion wasn't essential for propulsion, and could more easily be achieved by a propeller.


Though airplane designers probably looked at bird wings for inspiration, similar to how DeepMind is looking at the brain.


Early airplane inventors tried very hard to imitate flapping bird wings for propulsion.

https://en.wikipedia.org/wiki/Ornithopter

Some even worked.


I disagree. I think we need advances in philosophy to truly answer these questions in depth.


I'm not so sure about this. If I had to guess, I suspect things will go like this:

1. Scientists get a really good understanding of how learning works.

2. One third of the philosophers claim that their philosophical ideas are vindicated, another third claim that their models aren't really contradicted by this new scientific model, and the final third claim that the scientists have somehow missed the point, and that special philosophical learning is still unexplained.


Read David Deutsch's book eh?



