Friday, April 6, 2012

Update: comparing Numenta to mainstream AI

Obviously I haven't written on here for more than a year. I am not as enchanted with Numenta's technology as I once was, for a combination of reasons. Mostly, the more I look at what is going on in the AI world as a whole, the less impressive Numenta looks in comparison. I remember watching IBM's Watson defeat the best human champions on Jeopardy! in February 2011. Such a marvel of AI simply isn't very compatible with Jeff Hawkins' contention that mainstream AI is stuck in a rut. You could say the same thing about Siri, Kinect, self-driving cars, and a host of other recent achievements of AI. I get the sense that Hawkins isn't even very familiar with the advances happening around him in the AI world.

Even if you get down into the weeds of the biologically inspired AI research happening today, there are some very impressive efforts underway. Hawkins often denigrates the overly simplified neural networks of AI researchers compared to Numenta's more biologically realistic neuron models, but those simpler neurons are producing real-world results. Further, they are becoming increasingly realistic and capable. Perhaps Hawkins deserves some credit for this, given the buzz generated by "On Intelligence," but the last five or ten years have seen a huge increase in interest in neural networks for AI. Just to take one example, Jürgen Schmidhuber is building recurrent neural networks that operate both in time and as a hierarchy (sound familiar?) and that are beginning to produce results on computer vision benchmarks rivaling human performance (on limited tasks). Numenta, meanwhile, has never (to my knowledge) published any benchmarks regarding the capabilities of its algorithms. Hawkins has said on more than one occasion that there aren't suitable benchmarks for a hierarchical temporal memory, but that simply is not true. Many of the "deep learning" and "neural net" researchers are beginning to work with neural nets that operate in both space and time and are publishing research results on their work. Schmidhuber, Andrew Ng, and Geoff Hinton, some of the leaders in the field, have all done this type of work.
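
To make that architecture a bit more concrete, here is a minimal sketch (my own illustration in Python/NumPy, not Schmidhuber's actual LSTM-based networks) of a stack of simple recurrent layers in which each layer keeps its own hidden state over time and passes its output up to the layer above, so information flows both forward in time and up a hierarchy:

```python
import numpy as np

# Illustrative toy only: a stack of simple (Elman-style) recurrent layers.
# Each layer has its own hidden state that evolves over time, and the output
# of each layer is the input to the layer above it, so the network is both
# temporal and hierarchical.

rng = np.random.default_rng(0)

def init_layer(in_dim, hid_dim):
    """Random weights for one simple recurrent layer."""
    return {
        "W_in": rng.normal(scale=0.1, size=(hid_dim, in_dim)),    # input -> hidden
        "W_rec": rng.normal(scale=0.1, size=(hid_dim, hid_dim)),  # hidden -> hidden (recurrence)
        "b": np.zeros(hid_dim),
    }

def step(layer, x, h_prev):
    """One time step: combine the current input with the previous hidden state."""
    return np.tanh(layer["W_in"] @ x + layer["W_rec"] @ h_prev + layer["b"])

def run_hierarchy(layers, sequence):
    """Run a sequence through the stack; return the top layer's state per step."""
    hidden = [np.zeros(layer["b"].shape) for layer in layers]
    outputs = []
    for x in sequence:
        inp = x
        for i, layer in enumerate(layers):
            hidden[i] = step(layer, inp, hidden[i])
            inp = hidden[i]                # this layer's output feeds the next layer up
        outputs.append(hidden[-1].copy())  # top-of-hierarchy representation at this step
    return outputs

# Toy usage: a 10-step sequence of 4-dimensional inputs through a two-layer stack.
layers = [init_layer(4, 8), init_layer(8, 6)]
seq = [rng.normal(size=4) for _ in range(10)]
top_states = run_hierarchy(layers, seq)
print(len(top_states), top_states[0].shape)  # 10 (6,)
```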

Maybe I will be proven wrong and we will shortly see something amazing from Numenta, but I doubt it. They are building a data prediction tool, but if I were them I would be worried, given that Google and other big players already have such products on the market. I still keep an eye on the company, but I am also watching the progress of the rest of the biologically inspired AI community, which is making much more demonstrable progress in AI than Numenta has shown. Here is a link to a good talk by Schmidhuber summarizing some of his group's impressive and fairly recent results with their neural nets:

http://www.youtube.com/watch?v=rkCNbi26Hds&feature=player_embedded

I admit that I am probably being a bit hard on Numenta, so let me throw this out there. It may not be an accident that the last five or ten years are precisely the period in which loosely bio-inspired, multi-level neural networks have begun to dominate mainstream AI (Schmidhuber says as much in the talk above). I remember reading that Andrew Ng of Stanford read "On Intelligence" and was very inspired by it. Around that same time he seems to have begun moving away from traditional AI toward the more bio-inspired approach. It may well be that Hawkins' book played a role in jump-starting this new and apparently much more successful approach to AI, both for him and for others. It just seems that other AI researchers are doing more with that inspiration than Numenta has been able to do.