Obviously I haven't written here for more than a year. I am not as enchanted with Numenta's technology as I once was, for a combination of reasons. Mostly, the more I look at what is going on in the AI world as a whole, the less impressive Numenta looks in comparison. I remember watching IBM's Watson defeat the all-time Jeopardy! champions in February 2011. Such a marvel of AI simply isn't very compatible with Jeff Hawkins' contention that mainstream AI is stuck in a rut. You could say the same thing about Siri, Kinect, self-driving cars, and a host of other recent achievements of AI. I get the sense that Hawkins isn't even very familiar with the advances happening around him in the AI world.
Even down in the weeds of biologically inspired AI research, there are some very impressive efforts underway. Hawkins often denigrates the overly simplified neural networks of AI researchers compared to Numenta's more biologically realistic neuron models, but those simpler neurons are producing real-world results. Further, they are becoming increasingly realistic and capable. Perhaps Hawkins deserves some credit for this, given the buzz generated by "On Intelligence," but the last five or ten years have seen a huge increase in interest in neural networks for AI. Just to take one example, Jürgen Schmidhuber is building recurrent neural networks that operate both in time and as a hierarchy (sound familiar?) and that are beginning to produce results on computer vision benchmarks that rival human capability (on limited tasks). Numenta, meanwhile, has never (to my knowledge) published any benchmarks regarding the capabilities of its algorithms. Hawkins has said on more than one occasion that there are no suitable benchmarks for a hierarchical temporal memory, but that simply is not true. Many of the "deep learning" and "neural net" researchers are beginning to work with neural nets that operate in both space and time, and they are publishing research results on that work. Schmidhuber, Andrew Ng, and Geoff Hinton, some of the leaders in the field, have all done this type of work.
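Just to make the "recurrent in time, stacked in a hierarchy" idea concrete, here is a minimal Python sketch. To be clear, all the names and sizes here are my own toy illustration, not Schmidhuber's actual LSTM-based models: the lower recurrent layer processes the raw sequence, and the higher layer treats the lower layer's hidden states as its own input sequence.

```python
# Toy sketch (NOT Schmidhuber's actual architecture) of a network that is
# recurrent in time AND stacked into a hierarchy: layer 2 sees the hidden
# states of layer 1 as its inputs, one per time step.
import numpy as np

rng = np.random.default_rng(0)

def rnn_layer(inputs, w_in, w_rec):
    """Run a simple Elman-style recurrent layer over a sequence.

    inputs: array of shape (timesteps, input_dim)
    returns: hidden states of shape (timesteps, hidden_dim)
    """
    h = np.zeros(w_rec.shape[0])
    states = []
    for x in inputs:
        # The new state mixes the current input with the previous state
        # (this is the recurrence in time).
        h = np.tanh(x @ w_in + h @ w_rec)
        states.append(h)
    return np.array(states)

# Dimensions are arbitrary illustration values.
T, in_dim, h1_dim, h2_dim = 20, 8, 16, 4

x = rng.normal(size=(T, in_dim))  # an input sequence
w_in1 = 0.1 * rng.normal(size=(in_dim, h1_dim))
w_rec1 = 0.1 * rng.normal(size=(h1_dim, h1_dim))
w_in2 = 0.1 * rng.normal(size=(h1_dim, h2_dim))
w_rec2 = 0.1 * rng.normal(size=(h2_dim, h2_dim))

h1 = rnn_layer(x, w_in1, w_rec1)   # lower level: fast, fine-grained features
h2 = rnn_layer(h1, w_in2, w_rec2)  # higher level: integrates the level below

print(h1.shape, h2.shape)  # (20, 16) (20, 4)
```

The point is purely structural: each level runs its own recurrence, and higher levels integrate over the dynamics of the level below, which is the same "hierarchy plus time" organization Hawkins argues for.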
Maybe I will be proven wrong and we will shortly see something amazing from Numenta, but I doubt it. They are building a data prediction tool, but if I were them I would be worried, given that Google and other big players already have such products on the market. I still keep an eye on the company, but I am also watching the progress of the rest of the biologically inspired AI community, which is making much more demonstrable progress in AI than Numenta has shown. Here is a link to a good talk by Schmidhuber summarizing some of his group's impressive and fairly recent results with their neural nets:
http://www.youtube.com/watch?v=rkCNbi26Hds&feature=player_embedded
I admit that I am probably being a bit hard on Numenta, so let me throw this out there: it may not be an accident that the last five or ten years are precisely the period in which loosely bio-inspired, multi-level neural networks have begun to dominate mainstream AI (Schmidhuber says as much in the above talk). I remember reading that Andrew Ng of Stanford read "On Intelligence" and was very inspired by it, and around that same time he began to move away from traditional AI toward the more bio-inspired approach. It may well be that Hawkins' book played a role in jump-starting this new and apparently much more successful approach to AI, both for Ng and for others. It just seems that other AI researchers are doing more with that inspiration than Numenta has been able to do.
Watson is not impressive. It was simply faster on the buzzer, and in Jeopardy, buzzer speed is two-thirds of the battle. The two champions do not want to complain about it too much because that's exactly how they themselves became champions. I was not impressed at all with Watson.
There are a lot of A.I. efforts that are essentially the same. For example, economic agents are essentially the same as energy-minimization methods: the language is just different and the math details look different, but the effective logic is very similar. As another example, when a node has the ability to change its wiring with other nodes (as in Numenta's model), is it an agent interacting with other agents, or a node that can change its wiring? When there is feedback in a NN whose nodes can change their wiring in response to measured changes in energy minimization, it is not effectively different from a government looking at the GDP of an economic system of agents. Sparse representation is simply another way of determining the highest bidder in an economic system.
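To make that last analogy concrete, here is a toy sketch (my own illustration, not anyone's actual algorithm): forming a sparse representation by keeping only the top-k activations is structurally the same as an auction that awards k slots to the highest bidders.

```python
# Toy illustration: sparsification as an auction. Keeping the top-k
# activations is the same operation as awarding k slots to the highest bids.
import numpy as np

def sparsify(activations, k):
    """Zero out everything except the k largest values (the 'winning bidders')."""
    winners = np.argsort(activations)[-k:]  # indices of the top-k "bids"
    sparse = np.zeros_like(activations)
    sparse[winners] = activations[winners]
    return sparse

bids = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8])
print(sparsify(bids, k=2))  # only the two highest "bidders" stay active
```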
We need an A.I. system on smartphones that can organize all smartphone owners into one huge brain for making government and economic decisions. Paid advertising for products and politicians could be made obsolete. Money could be made reliable. Goals could be defined (how about happiness for all?) and achieved.
You are right that Numenta's technology leaves much to be desired, but not for the reasons you raised. In the long run, biologically inspired AI has nothing to fear from the Watsons of the world. IBM's Watson is imprisoned in its own text-based model; it is useless in a visual or tactile world where it would have to understand and interact with its environment in real time. Numenta's HTM, by contrast, is potentially universal in its application. That is, if only it worked as promised.
Numenta's model is not really new, something that Dileep George has admitted on his blog. HTM is just a hierarchical classifier based on Bayesian statistics, and that approach has been around for a while. But that is not the problem. The problem is that the model is not as biologically inspired as Jeff Hawkins would like us to believe. Sure, it is based on some aspects of brain organization, but there is no evidence to suggest that the brain is Bayesian. Indeed, there is strong evidence to suggest that the brain uses population modulation for recognition, not synaptic strength modulation. For example, stimulus intensity is not coded in signal strength, frequency, or pulse width, as one might expect. Intensity is really a function of the number of sensors that fire simultaneously in a small region of the sensory space.
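To make the population-coding point concrete, here is a toy sketch (the numbers and names are illustrative only, not a model of any real sensory system): every sensor fires an identical all-or-nothing spike, and intensity is read off as how many sensors in the region fire at once.

```python
# Toy illustration of population coding: stimulus intensity is decoded from
# the NUMBER of sensors firing at once, not from the strength of any one signal.
import numpy as np

rng = np.random.default_rng(1)

def sensor_population(intensity, n_sensors=100):
    """Each sensor fires (1) or stays silent (0); stronger stimuli recruit more sensors.

    Every sensor emits the same all-or-nothing spike; only the count varies.
    """
    thresholds = rng.uniform(0.0, 1.0, n_sensors)  # sensors differ in sensitivity
    return (thresholds < intensity).astype(int)

for intensity in (0.2, 0.5, 0.9):
    spikes = sensor_population(intensity)
    # Decoding: the fraction of active sensors recovers the intensity.
    print(f"intensity={intensity:.1f} -> {spikes.sum()} of {spikes.size} sensors firing")
```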
As bad as it is, the use of Bayesian statistics is not the worst problem plaguing the HTM approach. What really handicaps the model is that concurrent patterns are stored at every level of the hierarchy. This is a major flaw, as it is guaranteed to severely weaken the invariant recognition capability of the model. It would take too much space to explain why in a comment, so click on the links below if you're interested in more details.
A Fundamental Flaw in Numenta's HTM Design
How Jeff Hawkins Reneged on his Own Principles
Given the above, is it any wonder that it's taking so long for Numenta to come out with a product that will knock everybody's socks off?
Hi Sean, welcome back and thank you for an insightful post.
I agree that Numenta must show a product soon, or it will look as though it cannot implement its model of the brain.
I don't think the question is whether Numenta can create real AI technology tomorrow or in ten years. That depends not only on their own efforts and on whether their approach is right. You are right that there are lots of other companies and researchers, and everyone can use each other's ideas and approaches, if they make sense and seem useful, of course. In other words, the problem does not exist in a vacuum, and its future does not depend on a single company.