Saturday, November 27, 2010
A few days ago Numenta sent out a newsletter with a quick update on their work. The newsletter notes that Numenta has now posted the Smith Group lecture on its website (it runs much more smoothly than the version on the University's website). It also announced the first additions and updates to the new learning algorithm documentation. The additions are helpful, particularly a new appendix that goes into some depth about the neuron model used by the HTM software and includes some of the graphics Numenta used in the online lecture.
Sadly, the newsletter noted that Numenta is temporarily deferring its work on computer vision in favor of applications that are more focused on temporal patterns, such as web click prediction and credit card fraud prediction. I can't say I am too surprised. In hindsight, based on the online video and the whitepaper, it is clear that Numenta ran into some problems in its vision experiments with the new algorithms. The current algorithms can model layer 3 or layer 4 of the cortex (layer 3 for variable-order, time-based learning, or layer 4 for learning that does not rely on temporal context); a toy sketch at the end of this post illustrates that distinction. The whitepaper hypothesizes that layer 4 lets the brain learn spatial invariance while layer 3 lets it learn temporal invariance, and that for vision problems the brain somehow combines the two layers to achieve spatial and temporal invariance at the same time. Until Numenta figures out how to model both layers working together the way the real brain does, computer vision probably isn't going to work terribly well.
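To make the context distinction concrete, here is a minimal sketch in Python. Everything in it (the dictionaries and the "bark" example) is invented for illustration; it is only an analogy for the layer 3 / layer 4 distinction, not Numenta's code.

```python
# Toy illustration only: a "layer-4-like" learner labels each input in
# isolation, while a "layer-3-like" learner lets the preceding input change
# how the same pattern is interpreted. A true variable-order learner would
# key on an arbitrarily long history, not just the single previous input.

context_free = {}   # pattern -> interpretation
contextual = {}     # (previous pattern, pattern) -> interpretation

# Made-up training data: "bark" is ambiguous on its own.
context_free["bark"] = "ambiguous"
contextual[("dog", "bark")] = "a sound"
contextual[("birch", "bark")] = "a tree covering"

print(context_free["bark"])            # ambiguous
print(contextual[("dog", "bark")])     # a sound
print(contextual[("birch", "bark")])   # a tree covering
```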
Friday, November 19, 2010
Smithgroup Hawkins November 12 lecture available online
Surprisingly, Jeff Hawkins' recent lecture is now available online:
http://www.beckman.illinois.edu/gallery/video.aspx?webSiteID=o2RWiAWUQEKPgd_8QBTYOA&videoID=fYhfoB6NFE2ytFPl7XLnTA
Saturday, November 13, 2010
Comparing the new algorithms to the first generation
It is interesting to compare the new algorithms to the original zeta1 algorithms released by Numenta back in 2007. In a May 2007 blog post, Numenta discussed the limitations of those algorithms. Here were the limitations noted at that time (see this link for the blog post):
1. The 2007 algorithms had no time-based inference, so inference was based on, for example, a single snapshot of a picture when recognizing an object. Now, of course, the algorithms fully employ time-based inference, which should make computer vision applications (and other applications) based on HTM much more powerful.
2. In 2007, time was used for learning, but only for "first-order" time-based learning. That meant that when the software was learning sequences of patterns, it accounted only for the current time step and one prior time step. Imagine trying to learn invariant representations for dogs, cars, people, and other complex objects from only two consecutive still pictures. Our brains learn by seeing many "movies" of the objects in the world around us, so this was a very significant limitation on the power of HTM. Now it appears that HTM can learn sequences of essentially unlimited length.
3. The 2007 algorithms used discrete "nodes" with abrupt, non-overlapping boundaries. According to the blog, this diminished the system's ability to create invariant representations. Now each level of the HTM hierarchy is one continuous region (no more nodes). This is a big change that I wasn't expecting, which is good, because continuous regions of neurons, rather than discrete nodes, are the way the brain works.
4. The 2007 algorithms did not use sparse distributed representations, which also severely limited their scalability because of memory requirements. Now it goes without saying that sparse distributed representations are the key to making the new algorithms work. Not only does this make the algorithms much, much more scalable, it also facilitates generalization (the toy sketch after this list gives a feel for why).
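To give a feel for point 4, here is a minimal sketch, in Python, of why sparse distributed representations help with both memory use and generalization. The sizes (2048 bits, about 2% active) and the helper names are my own illustrative choices, not anything taken from Numenta's implementation.

```python
import random

def make_sdr(size=2048, active=40, seed=None):
    """Toy SDR: a set of active bit indices out of `size` bits (~2% sparsity)."""
    rng = random.Random(seed)
    return set(rng.sample(range(size), active))

def overlap(sdr_a, sdr_b):
    """Similarity between two SDRs is simply the count of shared active bits."""
    return len(sdr_a & sdr_b)

cat = make_sdr(seed=1)
dog = make_sdr(seed=2)
# A corrupted copy of `cat` that keeps 35 of its 40 active bits.
noisy_cat = set(list(cat)[:35]) | make_sdr(active=5, seed=3)

print(overlap(cat, dog))        # near zero: unrelated codes barely collide
print(overlap(cat, noisy_cat))  # at least 35: a damaged code still matches strongly

# Memory note: each pattern is stored as ~40 active indices rather than 2048
# dense values, which is what keeps large numbers of stored patterns affordable.
```

Representations that share many active bits are treated as similar, which is where the generalization comes from.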
In short, every major shortcoming listed in 2007 has now been addressed. I expect to see many commercial applications start to come out of Numenta's work, and hopefully this blog will soon be able to focus as much on applications as on the core technology. It will be interesting to see the extent to which this technology takes off over the next few years. Personally, I am particularly interested in robotics, and I hope to see HTMs used to create robots that can intelligently perceive the world and perform useful tasks. Navigation, object recognition and manipulation, and language understanding are all things that could, in theory, be done with HTM.
Friday, November 12, 2010
Dileep George's departure from Numenta is now permanent
That's a big loss, as Dileep was the one who took the ideas in On Intelligence and created a mathematical model and the first working software implementation of HTM.
According to Dileep's blog, he has now started a venture-capital-funded company called Vicarious Systems, Inc. Its stated goal is to develop AI applications, starting with computer vision. If you go to vicariousinc.com, you can sign up for a corporate email newsletter.
It now makes sense that Numenta is trying to hire a Senior Software Engineer.
Wednesday, November 10, 2010
Paper on new learning algorithms now available on Numenta's website
This was a very interesting read that I hope some of the Numenta skeptics will take a close look at. A few points I pulled out of the paper:
1. The new learning algorithms are very closely tied to the biology of the brain. The new HTM software models the hierarchical levels, the columns of neurons, the neurons themselves, and even the dendrites and synapses. Numenta clearly believes that HTM now learns in much the same way the neocortex does.
2. The algorithms appear to be scalable to any size. It sounds like the user can set the number of columns, the number of neurons per column, the number of levels, and so on, and the only real limits on scalability are the power of your computer and the amount of memory you have available.
3. For the first time, prediction is now at the center of the HTM algorithms. On Intelligence, of course, postulated that prediction is at the heart of what the brain does and is what makes us intelligent, but until now HTM didn't really implement prediction. Now that a more brain-like method is used to learn sequences of patterns, HTM appears to have a powerful prediction mechanism. According to the paper, any time HTM recognizes that an input is part of a learned sequence, it automatically predicts future inputs; it can do this based on sequences that go far into the past, and it can predict not just one but a number of time steps into the future (the toy sketch after this list illustrates the idea). These capabilities will be important when someone decides to use HTM to control a robot, since, according to Numenta, prediction and directing motor behavior are very similar activities. For instance, when a robot has a goal to accomplish some task, it will use prediction, based on its memory of learned sequences of prior motor actions, to direct its future actions.
4. A number of theoretical HTM capabilities are not yet implemented in the software. Numenta specifically mentioned attention mechanisms, motor behavior for a robot or some other physical embodiment of an HTM, and specific timing for learning sequences that occur at particular speeds (such as music). Still, it will be very interesting to watch commercial applications accelerate given the significant advance these algorithms represent.
5. This paper is only a working draft. Several future chapters are planned for the book, including a chapter on the mapping to biology and a chapter on how the algorithms have been, and can be, applied.
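To make the prediction idea in point 3 more concrete, here is a minimal sketch, in Python, of a variable-order sequence memory that predicts several steps ahead. The class, its parameters, and its back-off rule are invented for illustration; the paper's actual mechanism is built from columns of cells with dendrites and synapses, not a lookup table.

```python
from collections import defaultdict

class ToySequenceMemory:
    """A toy variable-order sequence memory: predictions are keyed on the last
    `order` inputs rather than just the previous one, and the memory can roll
    its own predictions forward several time steps."""

    def __init__(self, order=3):
        self.order = order
        self.transitions = defaultdict(set)  # context tuple -> possible next inputs

    def learn(self, sequence):
        for i in range(len(sequence) - 1):
            context = tuple(sequence[max(0, i - self.order + 1):i + 1])
            self.transitions[context].add(sequence[i + 1])

    def predict(self, recent, steps=3):
        """Predict up to `steps` future inputs given the most recent inputs."""
        recent = list(recent)
        predictions = []
        for _ in range(steps):
            context = tuple(recent[-self.order:])
            # Back off to shorter contexts if this exact history was never seen.
            while context and context not in self.transitions:
                context = context[1:]
            if not context:
                break
            nxt = sorted(self.transitions[context])[0]  # pick one candidate
            predictions.append(nxt)
            recent.append(nxt)
        return predictions

memory = ToySequenceMemory(order=3)
memory.learn(list("ABCDE"))
memory.learn(list("XBCYZ"))

# Context disambiguates: "ABC" continues with D, "XBC" continues with Y,
# even though both share the first-order transition B -> C.
print(memory.predict("ABC"))  # ['D', 'E']
print(memory.predict("XBC"))  # ['Y', 'Z']
```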
Here is the link:
http://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf
Thursday, November 4, 2010
New Whitepaper, etc.
The news keeps coming from Numenta lately. Their most recent Twitter post says that the new whitepaper describing the new algorithms (which replaces the original one from March 2007) will be published before the end of the month.
Also, I ran across an interesting exchange between a Numenta defender and Ben Goertzel. I am sure you can guess which side of the argument I am on:
http://knol.google.com/k/angelo-c/opencog-numenta-and-artificial-general/1luetnln973wm/3#