Jeff Hawkins talk on modeling neocortex and its impact on machine intelligence (numenta.com)
53 points by psawaya on Nov 27, 2010 | 27 comments



I watched the whole thing and like the philosophy of the approach. Unfortunately there isn't a single demonstration of the software, nor a direct comparison between the software and other machine learning algorithms.

At one point Hawkins even says he's not going to waste time by talking about specific data sets they've applied the software to because there are too many! Then pick one! Pick the best!


Vitamin D's software is built atop Numenta's platform: http://vitamindinc.com/demo.php


AFAIK this is not based on the new architecture introduced in this talk.


In the presentation it was mentioned that they will release some software in 2011. He also mentioned there are papers and pseudo-code on the website to play with for research purposes. So for people really interested in it (and it does sound very interesting and promising), it should be possible to make those kinds of demos/comparisons. Though I have to agree with you that one demo would have been nice.


Hawkins has also given some great talks on starting a company. He has an interesting take on entrepreneurship in that he views it as a tool of last resort to be used when you can't get something made within an existing organization:

http://ecorner.stanford.edu/author/jeff_hawkins


This was really fun to watch. I read On Intelligence by Jeff Hawkins several years ago and wondered when or if Numenta was ever going to get off the ground. It looks like they're getting close to their goal. Yeah, there were no real demonstrations, but this looks a lot more promising than previous AI/neural network stuff.


Have they compared their algorithm against standard well-known algorithms for the tasks they claim to solve? Last time I checked they still hadn't done that, or at least hadn't reported any such results. Without that, they're not worth anyone's time.


Very interesting video; these concepts seem to have progressed a lot since I last checked in on them.

That being said, I'm finding it very difficult to find any objective comparisons of these algorithms to other, more mainstream machine learning techniques. In the talk, he gave the impression that there was a tremendous amount of data to back up these claims, and that he just didn't have time to present it all. I went through many of the white papers available on the Numenta website. Many were just overall outlines of the approach. A few of them demonstrate tasks for which some form of learning is occurring; however, in the absence of objective comparisons to other techniques, it was hard for me to know just how good the results really are.

So far, the only objective comparison I could find involved handwritten character recognition, and that was against what appeared to be a standard feed-forward neural network with only a single hidden layer. Not exactly state of the art. Why not compare to SVMs, convolutional NNs, deep belief nets, etc.?

So I am at this point hopeful, but fairly skeptical. If nothing else these are some inspiring ideas.


Coincidentally, I just finished On Intelligence, and I found it pretty mind-blowing. If you're interested in learning and the brain, read it! You can also read about the HTM algorithm they are working on here: http://www.numenta.com/htm-overview/education.php According to the first paper, there's enough detail there for you to implement the algorithm yourself. Cool!
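For a flavor of what implementing it involves, here is a minimal toy sketch of the spatial-pooler idea described in the HTM documentation (this is not Numenta's code; the sizes and constants are invented): each column scores its overlap with a binary input through its "connected" synapses, the top-k columns win via inhibition, and the winners' synapse permanences are nudged toward the input.

    import numpy as np

    # Toy sketch of the HTM spatial-pooler idea; sizes/constants are made up.
    rng = np.random.default_rng(0)
    n_inputs, n_columns, k_winners = 100, 50, 5
    permanences = rng.random((n_columns, n_inputs))   # synapse permanence values
    CONNECTED, INC, DEC = 0.5, 0.03, 0.01

    def spatial_pool(input_bits):
        connected = permanences >= CONNECTED                        # connected synapses
        overlaps = connected.astype(int) @ input_bits.astype(int)   # overlap per column
        winners = np.argsort(overlaps)[-k_winners:]                 # inhibition: keep top-k columns
        # Learning: move the winners' permanences toward the current input pattern.
        permanences[winners] += np.where(input_bits, INC, -DEC)
        np.clip(permanences, 0.0, 1.0, out=permanences)
        active = np.zeros(n_columns, dtype=bool)
        active[winners] = True
        return active

    print(spatial_pool(rng.random(n_inputs) < 0.2).nonzero()[0])

The real algorithm adds temporal pooling, boosting, and per-column potential pools, so treat this purely as an illustration of the style of computation in the papers.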


Don't be fooled by approaches like this. HTM is an oversimplification that doesn't bring us any closer to real machine intelligence. I read the book a few years back when I was taking courses in machine learning and metaheuristics, and I recall being impressed. After picking up a molecular biology background, however, I've become skeptical of any claims to model "algorithms" after the brain or neuron or neocortex. Regardless of the level of abstraction chosen by the investigator, it isn't enough.

To put it simply, I strongly feel that achieving any kind of biologically-inspired intelligent agent will require a systems biologic approach where we model every minute molecular detail in silico. This isn't an undertaking that we even have the technology for at present. We don't have the raw speed, level of parallelism, or even the molecular/cell physiologic details necessary to model even parts of the brain. (Even of Drosophila!)

The Blue Brain project is nice and is worth following, if for nothing other than learning best engineering practices for developing the architecture behind something of this scale and complexity--but every simplification we make introduces error. (Imagine patching it! Imagine the "oops" moment when some molecular mechanism doesn't work as we expected--and that's a regularly occurring event.) I'm not even sure how much simplification we can make before the emergent properties of the brain no longer function. Some of my colleagues say membrane potentials and the cytoskeletal system have key quantum interactions that encode state information--something we don't even understand yet. (I can't comment much on that, since I haven't studied quantum physics.)

I'm actually learning to develop algorithms that focus on the interplay of the genomic machinery (promoters/enhancers, transcription, translation, chaperones/folding, etc.), biochemical pathways and kinetics, concentration levels, receptors, etc., in the hopes that one day we will be able to model systems like the brain. But from my limited knowledge, a project on the brain's scale will only succeed after we solve the "much less complicated" problems: cancer, Alzheimer's, and aging, all of which are cell-level problems. That's where we have to focus at present--and you can see how much more remains to be done.


I completely agree with you. I asked one of my professors in my machine learning master's course about this, and he said "if I've never heard of it, how good can it be?"

Algorithms that replicate biological processes are popular because people can easily grasp them. Anyone can understand why evolutionary systems work, or why neural networks work (because it's in nature, dummy!), but they don't work as well as other, purely mathematical methods.

In the end, these sorts of things tend to be toys or marginally useful, while other, more mathematically sound algorithms dominate the landscape. I got very excited about HTMs too, when I didn't know as much about ML as I do now, but I've realised that they haven't made even a dent in academic circles (you know, the ones with the thousands of people who study these things for a living).


I also agree, but while neural networks are a sort of black box, they are also entirely mathematical. Also, some of the most impressive current research is in the area of deep learners, examples of which are types of neural network: RBMs and deep belief nets. In particular, deep belief nets have the added advantage of being able to display their internal abstractions and state, and so are not as boxed up.


I will concede that, but they're a very roundabout way of building a classifier. An SVM, for example, is really much simpler and works much, much better than a neural network. I'm not familiar with RBMs and DBNs, so I can't comment on that, sadly...


SVMs may make a lot of intuitive sense due to the intuitive nature of the max-margin method of building a classifier, but that doesn't make them "much, much better" than a neural network.

Neural networks are "universal" function approximators in the sense that, given a sufficient number of hidden-layer units and enough training data, they can approximate any Borel measurable function. This makes them theoretically powerful. (There are issues with this "universality" claim, though, when doing cognitive modelling. See "Cognition and the computational power of connectionist networks", Hadley, at http://www.informaworld.com/smpp/content~db=all?content=10.1...)
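For reference, the classical single-hidden-layer form of that result (Cybenko/Hornik) says roughly: for any continuous f on a compact set K and any ε > 0 there exist N, weights w_i, biases b_i, and output coefficients α_i such that

    \left| f(x) - \sum_{i=1}^{N} \alpha_i \,\sigma(w_i^\top x + b_i) \right| < \varepsilon \quad \text{for all } x \in K,

where σ is a sigmoidal activation; the Borel-measurable version mentioned above is the Hornik-Stinchcombe-White generalization.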

In practice, the best handwritten-digit classifiers are still neural networks. LeCun keeps score at http://yann.lecun.com/exdb/mnist/.

Are there challenges with training neural networks? Certainly. But for many applications, there are good reasons to use a neural network. For instance, inference in a neural network is fast, and the learnt model is small in size (in feedforward networks, just a few large matrices).
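To make that concrete, here is a toy sketch of feedforward inference (layer sizes invented, random weights standing in for a trained model): prediction is just a couple of matrix multiplies and an elementwise nonlinearity.

    import numpy as np

    # Toy feedforward "model": the entire thing is the matrices/vectors below.
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((784, 300)), np.zeros(300)  # input -> hidden
    W2, b2 = rng.standard_normal((300, 10)), np.zeros(10)    # hidden -> output

    def predict(x):
        h = np.tanh(x @ W1 + b1)   # hidden activations
        scores = h @ W2 + b2       # class scores
        return scores.argmax()     # predicted class

    print(predict(rng.standard_normal(784)))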

SVMs, on the other hand, are "easier" to train, but care has to be taken to use the "right" kernel, and in many cases the learnt model has to carry around up to half or more of the training data set (i.e., the support vectors), making the model bigger in size and inference slower (although there is work on producing a sparser set of support vectors. See "An Effective Method of Pruning Support Vector Machine Classifiers", Liang, at http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5342...).
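The reason the support vectors travel with the model is visible in the kernel-SVM decision function, which sums over every stored support vector x_i:

    f(x) = \operatorname{sign}\!\left( \sum_{i \in \mathrm{SV}} \alpha_i y_i \, K(x_i, x) + b \right)

so both model size and prediction cost grow with the number of support vectors retained after training.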

I know neural network research is kinda old school these days, has been overhyped in the past, and SVM research is kinda "in" nowadays, but that's no reason to think SVM is "much, much better" than ANN.


Hmm, I didn't know that about handwritten digits, thanks. I agree that you have to carry around the support vectors, but I've gotten better results with an SVM. That said, it might just have been a suitable problem.


I would remove one, maybe both, of the "much"es. I agree that in general SVMs are better, but they are also every bit as black-box. Deep belief networks, though, are on another level. Literally: SVMs are depth 2, DBNs are of unbounded depth, I think.

http://www.youtube.com/watch?v=VdIURAu1-aU


That was just an example of my original point, though. These sorts of things perform better than "natural" processes, even though they aren't as easy to understand.


On very large real world data sets my experience has been the opposite.


Typical academic hubris.

Neural networks aren't toys. I've seen billion dollar systems based on them. And I believe an RBM was one of the strongest components of the winning Netflix algorithm.

The ML academics are biased against neural nets because they're difficult to analyze mathematically, making it hard to fill blackboards with impressive-looking equations.


I don't think you can make the statement that "everything matters so we need to model everything" until after you've figured out how the brain actually works.

Until you've discovered how it works you won't know which features of the system are actually essential to its functionality and which are mere evolutionary artifacts which can be drastically simplified.

I think the problem will be solved, like other scientific problems have in the past. People will have ideas about what is essential and they will construct hypotheses and build and test models based on those ideas.

I think that arguing the problem is too complex, and that therefore there's no point in people having ideas until we can model everything, is not a productive approach.


I don't entirely disagree, but which components are the ones we can simplify? Send a computer back in time and ask the people there what causes a particular bug: without a very detailed model (which imparts the ability to know what things you can abstract away from your present investigation), you'd be ill-equipped.

The systems we think are simple are only so because we already possess the knowledge necessary to think abstractly. (Or somebody else does.) Now consider this too: our knowledge of cellular biology is infantile. Can we put the entire brain together without understanding its parts? I bet you we can't.

We have to have a model to start with. In this case I don't mean a computational model that simulates everything--just a mental model. A physiologic model. Something that can tell us xyz happens in response to foo or the absence of bar. We've been building that for a while now, but we're nowhere near complete. Not even enough to "wing it".

Complex problems can be solved with adequate information. We, unfortunately, are at a loss for that. (At least as far as applications at this scale are concerned.)

I'll make an informed prediction though: you'll see simulated brains working after we've cured cancer and aging.


*The systems we think are simple are only so because we already possess the knowledge necessary to think abstractly.*

If by that you mean that science proceeds by first understanding systems in their entirety and then abstracting the essential elements out of that complexity, then I can't agree. (Science education may do that, but that's an entirely different matter.)

Historically we've gained understanding of systems by creating simple models which capture their essential behavior and only later adding more complex effects. One good example is the dynamics of the solar system. The basic phenomenology of central-force motion (i.e., elliptical orbits) was understood long before N-body interactions were accounted for. I think you would find similar examples in biology. For instance, the "rule" that DNA is transcribed to RNA was established long before exceptions like retroviruses were understood.

The challenge is and always has been to somehow "see" or intuit the fundamental behavior in a mass of confusing data then build testable models based on that intuition.


Or a YouTube video of a presentation from earlier this year: http://www.youtube.com/watch?v=TDzr0_fbnVk#.

Flash video is sadly unfriendly to my iPad (where I am now).


So, "Flash video is sadly unfriendly" to your welcoming little iPad? And the Mac needed its own proprietary browser, because others were so unfriendly. Talk about a reality distortion field....


Jeff Hawkins began a few decades ago by criticizing all those neural-net researchers who were all promises and no results, and (surprise!) he's become one of them now.


Apparently false. See the above comment: http://news.ycombinator.com/item?id=1945885


Oh, come on. By the same logic, neural nets were applied in a thousand and one domains. The problem of object tracking has been solved a thousand times in the past. Even if Numenta's algorithm is better at it than every other solution, you still can't call this a major result!

Even Hawkins himself has admitted that his hierarchical network design hasn't caused the huge bang that he expected it to cause.



