Friday, May 7, 2010

May 2010 Numenta newsletter

In case you missed it, this week Numenta issued a newsletter with an update on the status of its new algorithms. It's an interesting read, with a write-up by Jeff Hawkins himself. He talks about how last fall they decided to take a fresh look at their node learning algorithms, having realized that the current version had shortcomings that could not be overcome. They went back to the brain for inspiration on how to improve the learning capabilities. Here are a few key points I pulled out of the article:

1. The new algorithms can learn and infer at the same time. The old algorithms had separate learning and inference stages, so this will be a significant improvement for applications with real-time data, where the system needs to learn, infer, and predict on the fly (like a real brain).

2. The sparse distributed nature of the representations makes the system scale much better to large problems and makes it very robust to noise. In other words, it should work well with messy, incomplete data (a toy sketch after this list illustrates the idea).

3. Variable-order sequence learning. A real brain can start listening to a song midway through and almost immediately identify it. Likewise, we can predict the future from learned sequences of various lengths, whether they occurred a short time ago or years ago. The new algorithms are designed to support this kind of recognition and prediction.

4. Much more biologically realistic. This is the first version of the software that essentially emulates the cortex at the level of neurons and synapses. Of course, the downside is the higher system requirements; Hawkins notes that Numenta is having to spend a great deal more time optimizing the software so that it will run on something that isn't a supercomputer.
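
To give a flavor of point 2, here is a small Python sketch I put together myself. It is not Numenta's code, and the numbers are just illustrative assumptions (2048-bit binary patterns with 40 active bits, 1000 stored patterns). It shows why a very sparse pattern can still be matched correctly even after a quarter of its active bits are corrupted:

    import random

    N = 2048        # bits per pattern
    ACTIVE = 40     # active bits per pattern (about 2% sparsity)

    def random_sdr():
        # a stand-in for a learned pattern: ACTIVE bits chosen at random out of N
        return set(random.sample(range(N), ACTIVE))

    def add_noise(sdr, flips):
        # corrupt a pattern: drop `flips` of its active bits and add `flips` random ones
        kept = set(random.sample(sorted(sdr), ACTIVE - flips))
        extra = set()
        while len(extra) < flips:
            b = random.randrange(N)
            if b not in sdr:
                extra.add(b)
        return kept | extra

    def overlap(a, b):
        # how many active bits two patterns share
        return len(a & b)

    stored = [random_sdr() for _ in range(1000)]   # 1000 "learned" patterns
    target = stored[0]
    noisy = add_noise(target, flips=10)            # corrupt a quarter of its active bits

    print("overlap with the correct pattern:", overlap(noisy, target))
    print("best overlap with any other pattern:", max(overlap(noisy, s) for s in stored[1:]))

When activity is this sparse, unrelated patterns share almost no active bits, so even a badly corrupted input overlaps its true match far more than it overlaps anything else. Numenta's actual representations and matching rules are of course far more sophisticated than this toy example.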

As an aside, I am surprised at the level of skepticism some mainstream AI people show toward Numenta. Ben Goertzel, for one, seems determined to believe that Numenta is on the wrong track. He went out of his way recently to claim that Itamar Arel's DeSTIN system is a better hierarchical pattern recognition system. I have looked into DeSTIN, and it actually seems very similar to Numenta's work: it learns temporal sequences of spatial patterns in a hierarchy and performs Bayesian inference. I have not been able to find any evidence that DeSTIN has, so far, done more in the computer vision arena than HTM; if I am wrong, someone can correct me. For instance, in a December 2009 paper on DeSTIN, Arel reported an experiment in character recognition, but it was recognition of letters in a binary (black or white) environment, and Numenta was demonstrating that level of work at least three years ago. My sense is that DeSTIN is on the right track, and perhaps Arel and Hawkins will collaborate at some point (maybe they already are), but I have no idea how Goertzel reaches his conclusion.

I was happy to see that Shane Legg (another AI researcher who has been critical of Numenta) seemed to change his mind after seeing Hawkins' March talk on the new algorithms. If Numenta can come through in a big way with its next software release, I think there will be many more converts to the HTM theory of AI.

3 comments:

  1. Hey Sean,

    It's great to hear somebody talk about HTM and DESTIN. There seems to be too much secrecy around DESTIN! I've read the paper that you mention and it seems to me that there are quite a few contradictions in it. Like in that letter recognition application, they set the top node to have only three centroids... however, elsewhere in the paper they suggest nodes should have a fixed and predetermined, but preferably large, number of centroids!

    If DESTIN is so great, why don't they publish more about it?

    Cheers,
    Noelia

  2. Noelia:

    Thanks for the comment. I found an article by Goertzel and Arel entitled "An Integrative Cognitive Architecture Aimed at Emulating Early Childhood Intelligence in a Humanoid Robot." In it, they spend a paragraph discussing the advantages of DeSTIN over HTM. Some of the points are quite technical (and a bit beyond me), but here are a couple of criticisms that I take issue with:

    1. They assert that HTM, to date, has not been able to capture temporal data. That is simply false. Version 1.6 of Numenta's NuPIC (released in June 2008, nearly two years ago) shipped with higher-order temporal learning and temporal inference at the first level of the hierarchy. For the authors to pretend that HTMs still exist only in their March 2007 form is flat wrong. Even the original algorithms captured first-order temporal data during learning.

    2. The authors note that HTM learning proceeds layer by layer, which is not how biology does it. This may have been a fair criticism at the time the paper was written, although the new algorithms do not have this limitation (as noted above).

    3. Finally, the authors claim that HTMs have had limited success with "high dimensional" images. My response is that the authors should take a look at Numenta's Vision Toolkit, which recognizes classes of objects in grayscale images, or at Vitamin D's person and action recognition software for webcams. That software recognizes not just static images but video. In other words, there are actual commercial products based on HTM doing exactly what Arel and Goertzel say HTM cannot do.

  3. Hi

    DeSTIN is not a very mature software system at the moment, but Itamar's "HDRN" system -- which is proprietary within Binatix Corp. -- is more fully developed.

    Based on demos I've seen, I believe Binatix's software is dramatically more functional for vision processing than Numenta's. DeSTIN follows similar principles to Binatix software, but is less mature code and the demonstrated functionality is less to date...

    -- Ben Goertzel
