In case you missed it, this week Numenta issued a newsletter with an update on the status of its new algorithms. It's an interesting read, with a write-up by Jeff Hawkins himself. He explains how, last fall, they decided to take a fresh look at their node learning algorithms after realizing that the current version had shortcomings that could not be overcome. They went back to the brain for inspiration on how to improve the learning capabilities. Here are a few key points I pulled out of the article:
1. The new algorithms can learn and infer at the same time. The old algorithms had separate learning and inference stages, so this will be a significant improvement for applications with real-time data, where the system needs to learn, infer, and predict on the fly (like a real brain).
2. The sparse distributed nature of the system makes it scale much better to large problems and makes it very robust to noise. In other words, the system will work well with messy, incomplete data.
3. Variable-order sequence learning- A real brain can start listening to a song midway through and almost immediately identify it. Likewise, we can predict the future based on learned sequences of various lengths, whether they occurred a short time ago or years ago. The new software will be able to do these kinds of things.
4. Much more biologically realistic- This is the first version of the software that will essentially emulate the cortex at the level of neurons and synapses. Of course, the downside is higher system requirements. Hawkins notes that Numenta is having to spend a great deal more time optimizing the software so that it can run on something that isn't a supercomputer.
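To get an intuition for the noise robustness mentioned in point 2, here is a minimal sketch of why sparse distributed representations degrade gracefully. This is an illustration I put together, not Numenta's actual code; the vector size and sparsity level are just assumptions in the spirit of the HTM literature.

```python
import random

random.seed(42)

N = 2048      # total bits in the representation (illustrative size)
W = 40        # number of active bits (~2% sparsity, an assumed figure)

def make_sdr():
    """A random sparse distributed representation: a set of active bit indices."""
    return set(random.sample(range(N), W))

def corrupt(sdr, n_flips):
    """Simulate noise by moving n_flips active bits to random inactive positions."""
    dropped = set(random.sample(list(sdr), n_flips))
    inactive = [i for i in range(N) if i not in sdr]
    return (sdr - dropped) | set(random.sample(inactive, n_flips))

original = make_sdr()
noisy = corrupt(original, 10)   # a quarter of the active bits corrupted
unrelated = make_sdr()

# Overlap (count of shared active bits) still cleanly separates the noisy
# copy of a pattern from an unrelated random pattern.
print(len(original & noisy))       # 30 of 40 bits still match
print(len(original & unrelated))   # a random pair shares almost no bits
```

Because only a tiny fraction of the 2048 bits are ever active, two unrelated patterns almost never collide, so even a heavily corrupted input remains far closer to its original than to anything else.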
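The variable-order idea in point 3 can also be sketched in a few lines. The toy predictor below is my own illustration, not Numenta's algorithm: it memorizes contexts of every length up to a maximum and, when predicting, uses the longest learned context that matches recent history, so an ambiguous short context gets resolved by a longer one.

```python
from collections import defaultdict

class VariableOrderPredictor:
    """Toy variable-order sequence memory (illustrative only)."""

    def __init__(self, max_order=4):
        self.max_order = max_order
        self.table = defaultdict(set)  # context tuple -> possible next symbols

    def learn(self, sequence):
        # Record, for each position, every context of length 1..max_order.
        for i in range(1, len(sequence)):
            for order in range(1, self.max_order + 1):
                if i - order < 0:
                    break
                self.table[tuple(sequence[i - order:i])].add(sequence[i])

    def predict(self, history):
        # Try the longest matching context first, then fall back to shorter ones.
        for order in range(min(self.max_order, len(history)), 0, -1):
            context = tuple(history[-order:])
            if context in self.table:
                return self.table[context]
        return set()

p = VariableOrderPredictor()
p.learn(list("ABCDX"))
p.learn(list("EBCDY"))

# Seeing only "D" is ambiguous, but a longer history disambiguates,
# much like recognizing a song after hearing a few notes midway through.
print(p.predict(list("D")))      # {'X', 'Y'}
print(p.predict(list("ABCD")))   # {'X'}
```

The same mechanism covers "sequences of various lengths": whatever amount of history is available, the longest stored context that matches it drives the prediction.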
As an aside, I am surprised at the level of skepticism some of the mainstream AI people show toward Numenta. Ben Goertzel, for one, seems determined to believe that Numenta is on the wrong track. He went out of his way recently to claim that Itamar Arel's DESTIN system is a better hierarchical pattern recognition system. I have looked into DESTIN, and it actually seems very similar to Numenta's work: it learns temporal sequences of spatial patterns in a hierarchy and performs Bayesian inference. I have not been able to find any evidence that DESTIN has, so far, done more in the computer vision arena than HTM; if I am wrong, someone can correct me. For instance, in a December 2009 paper on DESTIN, Arel reported an experiment demonstrating character recognition, but it was recognition of letters in a binary (black or white) setting, and Numenta was demonstrating that level of work at least three years ago. My sense is that DESTIN is on the right track, and perhaps Arel and Hawkins will collaborate at some point (maybe they already are), but I have no idea how Goertzel reaches his conclusion.
I was happy to see that Shane Legg (another AI critic of Numenta) seemed to change his mind about Numenta after seeing Hawkins' recent talk in March on the new algorithms. If Numenta can come through in a big way with its next software release, I think that there will be many more converts to the HTM theory of AI.