A couple of weeks ago, Numenta sent out a newsletter revealing that it plans a major new release of its HTM software, due out in October 2010. According to the newsletter, the company has had some recent insights into the HTM learning algorithms based on a deeper understanding of the biology of the neocortex, and it says the new algorithms have the potential for a "large" increase in scalability and robustness.
One of the things that makes Numenta such a solid AI company is that when they run into problems with issues like scalability and robustness, they look to the brain itself for solutions. Even to a non-expert like me, it is obvious that the whole field of artificial intelligence has floundered for more than half a century precisely because it has ignored the only known example of real intelligence, the neocortex of the mammalian brain. Jeff Hawkins' book made this very point. In the mid-1980s he decided he wanted to enter a PhD program to create intelligent machines using the brain as his guide. He applied to MIT, the leading AI lab in the country, and they basically laughed him out of the building for believing that it was necessary to understand how the brain works in order to create real AI. Now, 25 years later, MIT has a research group doing exactly what Hawkins proposed as a prospective graduate student.
Hawkins' ideas may not all be correct, but AI research over the last five years or so seems much more biologically grounded than the work of 20 years ago, so Numenta is clearly on the right track in emulating the brain. If the new software really does scale far better than the current version (which is quite limited in many ways), we might actually see it begin to approach human-level ability at certain tasks, such as visual pattern recognition.