
I disagree with hacknat (and I suspect this is why it's controversial) because psychology is generally far too high-level (focused on things like traits, high-level behavior and vague "perceptions") and has far too little empirical rigor for engineers to be able to build any sort of actual system from insights gleaned from it. The recent replication crisis in psychology does little to help this reputation.

Neuroscience does concern itself a great deal with low level biological mechanisms and architectures, and is more amenable to cross-pollination with machine learning.

Though I would like to point out that thus far deep learning has taken few ideas from neuroscience.




Neural networks are the big driver and arguably were taken from neuroscience.

The reason why psychology is 'too high-level' is exactly what is meant by 'we don't understand it': we're approaching the psyche at the macro level of observable traits, but there is a very large gap between the 'wiring' and the 'traits'. Some of that gap belongs to neuroscience, but quite possibly the larger part belongs to psychology. The two will meet somewhere in the middle.

A similar thing happens in biology, with DNA and genetics on one side and embryology on the other.


They were taken from neuroscience... 50 years ago. Since then, very few ideas have been explicitly taken from neuroscience.


Exactly, and even then, half of the details of how real neurons work were ignored or discarded. NNs are now often taught by sticking closely to the math, avoiding the term 'neuron' and any reference to biology completely.
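
(To make the 'just math' point concrete - a toy sketch of my own, with made-up names, not from any particular course: a 'neural network' layer is nothing but an affine map followed by a simple nonlinearity.)

    import numpy as np

    def relu(x):
        # Elementwise nonlinearity; nothing biological about max(0, x).
        return np.maximum(0.0, x)

    def forward(x, weights, biases):
        # One pass through a plain feedforward net: alternating affine
        # maps and nonlinearities. That's the whole "neuron" story.
        for W, b in zip(weights[:-1], biases[:-1]):
            x = relu(W @ x + b)
        return weights[-1] @ x + biases[-1]  # last layer left linear

    # Tiny 3 -> 4 -> 2 network with random parameters.
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
    biases = [np.zeros(4), np.zeros(2)]
    print(forward(rng.normal(size=3), weights, biases))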


Does it matter? As long as the original source for the idea is remembered, I don't particularly care which field originated it. There are so many instances of cross-pollination between IT and other sciences that, in the end, whether or not we stick to the original in a literal fashion should not matter (and if we did, we'd probably lose out on a good bit of progress).


That's like saying there has been no development in automobiles since 1890. Sure, at first glance the cars from then are still like the cars from now: an ICE (or electric) power source, maybe a gearbox, some seats, a steering wheel and something to keep you out of the weather. But that ignores the many years of work spent refining those concepts.

The neural networks that were taken 'explicitly from neuroscience' have gone through a vast transformation, and that, plus a whole lot of work on training and other things besides, is what powers the current crop of AI software: computers that learn games, label images and drive cars with impressive accuracy to date.

The problem is - and I think that is what the original question was about - that neuroscience is very low-level. We need something at the intermediate level, a 'useful building block' approach if you will: something that is not quite a fully formed intelligence but also not as basic as plumbing and wiring.


This analogy doesn't make any sense. Since automobiles were never patterned on anything in biology (they were an improvement on horse-drawn carriages), I'm not sure what point you're trying to make with it.

I never said there weren't any developments in neural nets; I'm just saying that few ideas have been taken from neuroscience (there certainly have been some, like convolutions). In fact, most of what a neural net does (including most of the tricks in convolutional neural nets in their current state) we know a brain does not do.
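
(For what it's worth, the convolution idea itself - a small filter slid over an image, loosely echoing Hubel and Wiesel's 'local receptive fields' - fits in a few lines. A toy sketch of mine, names made up:)

    import numpy as np

    def conv2d(image, kernel):
        # Valid 2D cross-correlation: slide the kernel over the image
        # and take dot products at each position.
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    # A vertical-edge detector on a toy image: dark left, bright right.
    img = np.zeros((5, 5))
    img[:, 2:] = 1.0
    edge = np.array([[1.0, -1.0]])
    print(conv2d(img, edge))  # fires (nonzero) only at the edge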


All it took for us to make aircraft was to look at birds and realize something could be done (flying even though you weigh more than air). The first would-be aircraft looked like birds, and some even attempted to work like them. Most of those designs went nowhere. Just like legs got replaced by the wheel in ground transportation, aircraft engines and eventually jets replaced the muscles of birds. It doesn't really matter what sets you on a particular path; what matters is the end result.

Right now, we have a state of AI that is very much limited by what we know about how our own machinery works. A better understanding of that machinery (just like better understanding of things like aerodynamics, rolling resistance and explosions led to better transportation) will help us in some way or other. It may take another 50 years before we hit the next milestone, but the current progress all leads more or less directly back to one poor analogy with biological systems. Quite probably there are more waiting in the wings; whether they arrive through literal re-implementation or merely as inspiration doesn't really matter.


> It doesn't really matter what sets you on a particular path; what matters is the end result.

To put it another way: birds are limited by evolution. They are not an optimal design - they are a successful reproductive design in a wide ecosystem where flying is a tool.

Our intelligence is no different.

This is something Feynman addressed in this beautiful talk (in the Q&A iirc): https://www.youtube.com/watch?v=EKWGGDXe5MA


I like the comparison to developing wings. What's interesting about the development of plane wings is that, while we used the underlying physics of how wings work to make them, a plane gets its thrust differently than a bird, and looks and flies differently. Flapping wasn't very useful to us for planes; what things about the way minds work will not be useful for AI? I think once we understand the algorithms behind AI / intelligence / learning, what we choose to make may be very different from what a lot of people currently imagine AI or robots will be like.


> and has far too little empirical rigor for engineers

I think this is one of the key points of the problem: we, as engineers, are expecting this problem (imitating the human psyche / getting to true AI) to be definable rigorously. What if it can't be?

I'm not a religious person and I don't generally believe in something inside us that is "undefinable", but looking at our cultural and intellectual history of about 2,500 years, I can see that there are lots of things that we haven't been able to define rigorously but which are quintessential to the human psyche: poetry, telling jokes, word puns, the sentiment of nostalgia, and I could go on and on.


If anything, the replication crisis in psychology is proof of just how little we really know about intelligence. I think psychology is necessary because it needs to come up with an adequate definition of what intelligence is first; then neuroscience can say, "this is how that definition actually works", and then engineers can build it.


Even now, deep learning is basically 1980s-era backprop with a few tweaks and better hardware - it turns out that despite decades of backlash against it, plain old backprop pretty much is the shit, as long as you have a lot of compute power and a few tricks up your sleeve!
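
(To make that concrete: plain backprop really is just the chain rule applied layer by layer, plus gradient descent. A minimal toy sketch of my own - the data, sizes and names are all made up:)

    import numpy as np

    rng = np.random.default_rng(1)
    # Toy data: learn y = x1 + x2 with a one-hidden-layer net.
    X = rng.normal(size=(100, 2))
    y = X.sum(axis=1, keepdims=True)

    W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)

    lr = 0.05
    for step in range(500):
        # Forward pass.
        h_pre = X @ W1 + b1
        h = np.maximum(0.0, h_pre)       # ReLU
        pred = h @ W2 + b2
        err = pred - y                   # dLoss/dpred for MSE (up to a constant)

        # Backward pass: chain rule, layer by layer.
        grad_W2 = h.T @ err / len(X)
        grad_b2 = err.mean(axis=0)
        grad_h = err @ W2.T
        grad_pre = grad_h * (h_pre > 0)  # ReLU derivative
        grad_W1 = X.T @ grad_pre / len(X)
        grad_b1 = grad_pre.mean(axis=0)

        # Gradient descent update.
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1

    print("MSE after training:", float((err ** 2).mean()))

Everything since - momentum, Adam, batch norm, and so on - is engineering layered on top of this same loop.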



