Big Numenta news today. Numenta's website indicates that Dileep George is on an extended "personal" leave of absence. He co-founded Numenta with Jeff Hawkins back in 2005, and it is hard to overstate his importance to the company over the years. He was the one who read "On Intelligence" and figured out how to turn Hawkins' neuroscience theories into a mathematical framework that could be implemented in software. I went to George's website, and he says there that he left Numenta to form a new company focused more on applications of the HTM technology.
I am not sure what to think about this. On the one hand, it could simply mean that George thinks the technology is finally in a state where serious commercial applications can be built with HTM. Numenta has always been more about the basics of the theory than about applications, so it may be that Dileep just wants to hurry along the commercialization process. If so, this could be a sign that the new algorithms really are going to be that big a step forward for AI.
Hopefully this move doesn't mean some kind of rift has opened between George and the company; Numenta could really use his talents down the road. Given George's stated reason for leaving, and given that it is being called a "leave of absence" rather than an outright resignation, I am inclined to go with the more optimistic interpretation.
Wednesday, May 19, 2010
Friday, May 7, 2010
May 2010 Numenta newsletter
In case you missed it, this week Numenta issued a newsletter with an update on the status of the new algorithms. It's an interesting read, with a write-up by Jeff Hawkins himself. He describes how, last fall, they decided to take a fresh look at their node learning algorithms after realizing that the current version had shortcomings that could not be overcome, and they went back to the brain for inspiration on how to improve the learning capabilities. Here are a few key points I pulled out of the article:
1. The new algorithms can learn and infer at the same time. The old algorithms had separate learning and inference stages, so this will be a significant improvement for applications with real-time data, where the system needs to learn, infer, and predict on the fly (like a real brain).
2. The sparse distributed nature of the system makes it scale much better to large problems, and makes it very robust to noise. In other words, the system will work very well with messy, incomplete data.
3. Variable-order sequence learning: a real brain can start listening to a song midway through and almost immediately identify it. Likewise, we can predict the future based on learned sequences of various lengths, whether they occurred a short time ago or years ago. The new software will be useful for doing these types of things (a rough toy sketch of points 1 through 3 follows this list).
4. Much more biologically realistic: this is the first version of the software that will essentially emulate the cortex at the level of neurons and synapses. Of course, the downside is the higher system requirements. Hawkins notes that Numenta is having to spend a great deal more time optimizing the software so that it can run on something that isn't a supercomputer.
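To make points 1 through 3 a bit more concrete, here is a minimal toy sketch in Python. This is purely my own illustration, not Numenta's actual algorithms, and it is only first-order prediction rather than the variable-order memory Hawkins describes. Still, it shows the basic flavor: each symbol is a sparse pattern of active bits, noisy input is recognized by overlap, and learning and prediction happen together in a single pass over the data stream.

```python
import random

# Toy illustration only (my own sketch, not Numenta's algorithms):
# sparse distributed patterns plus simple online sequence learning.

N_BITS = 256      # total number of bits in each pattern
N_ACTIVE = 10     # only a few bits are active at once ("sparse")

def random_sdr():
    """Make a random sparse pattern: a small set of active bit indices."""
    return frozenset(random.sample(range(N_BITS), N_ACTIVE))

def add_noise(pattern, n_flips=3):
    """Corrupt a pattern by replacing a few active bits with random ones."""
    kept = set(random.sample(sorted(pattern), N_ACTIVE - n_flips))
    while len(kept) < N_ACTIVE:
        kept.add(random.randrange(N_BITS))
    return frozenset(kept)

def recognize(pattern, known):
    """Identify a (possibly noisy) pattern by its largest overlap with known ones."""
    return max(known, key=lambda name: len(pattern & known[name]))

# A small "alphabet" of sparse patterns standing in for sensory inputs.
symbols = {name: random_sdr() for name in "ABCDE"}

# Learning and inference happen in the same pass over the stream:
# each step updates the transition memory AND makes a prediction.
transitions = {}   # symbol -> counts of which symbol followed it
prev = None
for step, actual in enumerate("ABCABCABC"):
    noisy_input = add_noise(symbols[actual])      # messy, incomplete data
    current = recognize(noisy_input, symbols)     # inference despite the noise

    if prev is not None:
        followers = transitions.setdefault(prev, {})
        followers[current] = followers.get(current, 0) + 1   # learning

    seen_after = transitions.get(current)
    prediction = max(seen_after, key=seen_after.get) if seen_after else "?"
    print(f"step {step}: recognized {current}, predicting next: {prediction}")
    prev = current
```

Even with a few bits flipped on every input, the overlap matching still picks out the right symbol, which is the basic intuition behind why sparse distributed representations are so robust to noise.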
As an aside, I am surprised at the level of skepticism some mainstream AI people have regarding Numenta. Ben Goertzel, for one, seems determined to believe that Numenta is on the wrong track. He went out of his way recently to claim that Itamar Arel's DESTIN system is a better hierarchical pattern recognition system. I have looked into DESTIN, and it actually seems very similar to Numenta's work: it learns temporal sequences of spatial patterns in a hierarchy and performs Bayesian inference. I have not been able to find any evidence that DESTIN has, so far, done more in the computer vision arena than HTM; if I am wrong, someone can correct me. For instance, in a December 2009 paper on DESTIN, Arel described an experiment in character recognition, but it was recognition of letters in a binary (black or white) setting, and Numenta was demonstrating that level of work at least three years ago. My sense is that DESTIN is on the right track, and perhaps Arel and Hawkins will collaborate at some point (maybe they already are), but I have no idea how Goertzel reaches his conclusion.
I was happy to see that Shane Legg (another AI critic of Numenta) seemed to change his mind about Numenta after seeing Hawkins' recent talk in March on the new algorithms. If Numenta can come through in a big way with its next software release, I think that there will be many more converts to the HTM theory of AI.