It is interesting to compare the new algorithms to the original zeta1 algorithms released by Numenta back in 2007. In a May 2007 blog post (see this link), Numenta discussed the limitations of those algorithms. Here are the limitations noted at the time:
1. The 2007 algorithms had no time-based inference, so inference was based only on a single snapshot of data, such as one picture of an object to be recognized. Now the algorithms fully employ time-based inference, which should make computer vision applications (and other applications) based on HTM much more powerful.
2. In 2007, time was used for learning, but it was only "first-order" time-based learning. That meant that when the software was attempting to learn sequences of patterns, it would account for only the current time step and one prior time step. Imagine trying to learn invariant representations for dogs, cars, people, and other complex objects from only two consecutive still pictures of data. Our brains learn by seeing many "movies" of objects in the world around us, so this was a very significant limitation on the power of HTM. Now, it appears that HTM can learn sequences of essentially unlimited length.
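To see why first-order sequence learning is so limiting, consider a minimal sketch (my own illustration, not Numenta's code): a first-order model predicts the next element from the current element alone, so two sequences that share an element become entangled, while a model that keeps even a little more context can tell them apart.

```python
from collections import defaultdict

def train_first_order(sequences):
    """Learn transitions keyed on the current element only."""
    transitions = defaultdict(set)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            transitions[cur].add(nxt)
    return transitions

def train_higher_order(sequences, order=2):
    """Learn transitions keyed on the last `order` elements of context."""
    transitions = defaultdict(set)
    for seq in sequences:
        for i in range(order, len(seq)):
            context = tuple(seq[i - order:i])
            transitions[context].add(seq[i])
    return transitions

# Two sequences that share the element "B":
seqs = [["A", "B", "C"], ["X", "B", "Y"]]

first = train_first_order(seqs)
print(sorted(first["B"]))          # ['C', 'Y'] -- ambiguous after "B"

higher = train_higher_order(seqs, order=2)
print(sorted(higher[("A", "B")]))  # ['C'] -- context resolves the ambiguity
```

The new algorithms go far beyond a fixed-order lookup like this, but the toy example captures why "one prior time step" was never going to be enough for learning real-world sequences.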
3. The 2007 algorithms had abrupt, discrete "nodes" without overlapping boundaries. According to the blog, this diminished the ability of the system to create invariant representations. Now, each level of the HTM hierarchy is one continuous region (no more nodes). This is a big change that I wasn't expecting, which is good, because continuous regions of neurons rather than discrete nodes are the way the brain works.
4. The 2007 algorithms did not use sparse distributed representations, which also severely limited the scalability of the algorithms due to memory requirements. Now, it goes without saying that sparse distributed representations are the key to making the new algorithms work. Not only does this make the algorithms much, much more scalable, but it also facilitates generalization.
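A quick sketch of the idea (again my own illustration, with made-up parameters, not Numenta's implementation): in a sparse distributed representation only a small fraction of bits are active, so a pattern can be stored as just the set of active indices, and similarity is simply the number of shared active bits.

```python
import random

N_BITS = 2048    # total bits in the representation (assumed size)
N_ACTIVE = 40    # ~2% sparsity, in line with HTM descriptions

def random_sdr(rng):
    """A random SDR: a small set of active bit indices.

    Storing 40 indices instead of 2048 raw bits is where the
    memory savings (and hence scalability) come from.
    """
    return frozenset(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    """Shared active bits -- the SDR similarity measure."""
    return len(a & b)

rng = random.Random(42)
a = random_sdr(rng)
b = random_sdr(rng)

# Two unrelated random SDRs share almost no bits...
print(overlap(a, b))   # small (a handful of bits at most)

# ...while a noisy copy of `a` (30 of its bits kept, 10 random bits
# added) still overlaps strongly, which is what lets the system
# generalize across similar inputs.
noisy = frozenset(list(a)[:30]) | frozenset(rng.sample(range(N_BITS), 10))
print(overlap(a, noisy) >= 30)   # True
```

The striking property is that with 2048 bits and 40 active, the number of distinct patterns is astronomically large, yet semantically similar inputs can still be recognized by their overlap.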
In short, every single major listed shortcoming of the original HTM software has now been addressed. I expect to see many commercial applications start to come from Numenta's work. Hopefully my blog will soon be able to focus as much on applications as on the core technology. It will be interesting to see the extent to which this technology takes off over the next few years. Personally, I am particularly interested in robotics, and hope to see HTMs begin to be used to create robots that can intelligently perceive the world and perform useful tasks. Navigation, object recognition and manipulation, and language understanding are all things that could theoretically be done by HTMs.