Friday, April 6, 2012

Update- comparing Numenta to mainstream AI

Obviously I haven't written on here for more than a year. I am not as enchanted with Numenta's technology as I once was, for a combination of reasons. Mostly, the more I look at what is going on in the AI world as a whole, the less impressive Numenta looks in comparison. I remember watching IBM's Watson defeat two of Jeopardy's all-time champions in February 2011. Such a marvel of AI simply isn't very compatible with Jeff Hawkins' contention that mainstream AI is stuck in a rut. You could say the same thing about Siri, Kinect, self-driving cars, and a host of other recent achievements of AI. I get the sense that Hawkins isn't even very familiar with the advances happening around him in the AI world.

Even down in the weeds of biologically inspired AI research, there are some very impressive efforts underway. Hawkins often denigrates the overly simplified neural networks of AI researchers compared to Numenta's more biologically realistic neuron models, but those simpler neurons are producing real-world results, and they are becoming increasingly realistic and capable. Perhaps Hawkins deserves some credit for this, given the buzz generated by "On Intelligence," but the last five or ten years have seen a huge increase in interest in neural networks for AI. To take one example, Jurgen Schmidhuber is building recurrent neural networks that operate both in time and as a hierarchy (sound familiar?) and that are beginning to produce results on computer vision benchmarks rivaling human performance (on limited tasks). Numenta, meanwhile, has never (to my knowledge) published any benchmarks regarding the capabilities of its algorithms. Hawkins has said on more than one occasion that there aren't suitable benchmarks for a hierarchical temporal memory, but that simply is not true. Many of the "deep learning" and "neural net" researchers are now working with networks that operate in both space and time and are publishing results on that work. Schmidhuber, Andrew Ng, and Geoff Hinton, some of the leaders in the field, have all done this type of work.
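For readers who have not looked at this line of work, here is a minimal, hypothetical sketch (plain NumPy, random untrained weights, sizes I made up) of what "recurrent and hierarchical" means structurally: each layer carries its own state forward through time, and the higher layer takes the lower layer's state as its input. It is only meant to show the shape of the idea, not any particular published network; real systems like Schmidhuber's are trained and far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_step(x, h, W_in, W_rec):
    # One time step of a vanilla recurrent layer: new state from current input and previous state.
    return np.tanh(W_in @ x + W_rec @ h)

# Sizes are made up purely for illustration.
n_input, n_low, n_high, n_steps = 8, 16, 16, 5

W_in_low   = rng.normal(0, 0.3, (n_low, n_input))
W_rec_low  = rng.normal(0, 0.3, (n_low, n_low))
W_in_high  = rng.normal(0, 0.3, (n_high, n_low))
W_rec_high = rng.normal(0, 0.3, (n_high, n_high))

h_low, h_high = np.zeros(n_low), np.zeros(n_high)
for t in range(n_steps):
    x = rng.normal(size=n_input)                                   # one frame of input
    h_low  = recurrent_step(x, h_low, W_in_low, W_rec_low)         # lower level: driven by raw input
    h_high = recurrent_step(h_low, h_high, W_in_high, W_rec_high)  # higher level: driven by the lower level
    print(t, np.round(h_high[:4], 3))
```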

Maybe I will be proven wrong and we will shortly see something amazing from Numenta, but I doubt it. They are building a data prediction tool, but if I were them I would be worried, given that Google and other big players already have such products on the market. I still keep an eye on the company, but I am also watching the progress of the rest of the biologically inspired AI community, which is making far more demonstrable progress than Numenta has shown. Here is a link to a good talk by Schmidhuber summarizing some of his group's impressive and fairly recent results with neural nets:

http://www.youtube.com/watch?v=rkCNbi26Hds&feature=player_embedded

I admit that I am probably being a bit hard on Numenta, so let me throw this out there. It may not be an accident that the last five or ten years are the period in which loosely bio-inspired, multi-level neural networks have begun to dominate mainstream AI (Schmidhuber says as much in the above talk). I remember reading that Andrew Ng of Stanford read "On Intelligence" and was very inspired by it. Around that same time he seems to have begun moving away from traditional AI toward the more bio-inspired approach. It may well be that Hawkins' book played a role in jump-starting this new and apparently much more successful approach to AI, both for Ng and for others. It just seems that other AI researchers are doing more with that inspiration than Numenta has been able to do.

Tuesday, December 21, 2010

Singularity Summit speech by Demis Hassabis

I just saw an interesting talk from the 2010 Singularity Summit by Demis Hassabis. He spoke about the failings of traditional AI, one of which is that it ignored for decades the only known example of high-level intelligence (the human brain). At the other end of the spectrum, he mentioned brain simulation projects such as Blue Brain and DARPA SyNAPSE, which try to capture the wiring of the brain but not its function. Hassabis argued for a middle-ground approach that combines the best of machine learning and neuroscience, which of course is the approach being taken by Numenta.

Hassabis mentioned that brain-inspired deep learning approaches such as HTM and deep belief nets have made significant progress. He made the interesting point that these systems are becoming good at sensory perception, but that it is not yet known how to build the brain's conceptual knowledge out of sensory knowledge. Hassabis clearly believes that something like HTM cannot, on its own, produce abstract knowledge. I personally am not convinced that sensory knowledge can't lead to abstract knowledge. The fact of the matter is that everything we know is derived from our sensory experiences, past and present. I am not naive enough to think that HTM theory is a comprehensive explanation of brain function; it just seems to me that sensory data could, over time, produce increasingly abstract knowledge. The whole idea of a hierarchy of space and time is that successively higher levels of the hierarchy contain increasingly invariant, abstract representations, so I don't even see a clear dividing line between perceptual and abstract knowledge. Our ideas about love, hate, and anger all arise from past and present sensory experiences: from seeing, hearing, touching, and otherwise experiencing the good and bad of humanity, and learning to represent these abstract ideas in our minds.
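To make that point a little more concrete, here is a deliberately toy sketch (my own, in Python, with hypothetical "concepts" I made up; it is not HTM) of the kind of slow-changing, more invariant representation a higher level of a temporal hierarchy is supposed to form: the lower level changes with every input, while the higher level only changes when a whole learned sequence has been seen.

```python
# Toy illustration (not HTM): a two-level temporal hierarchy in which the higher
# level pools over sequences recognized at the lower level, so its output is more
# stable ("invariant") than the raw input stream it sits above.
stream = list("catdogcatcatdog")  # raw sensory input: one letter per time step
known_sequences = {"cat": "CONCEPT_A", "dog": "CONCEPT_B"}  # hypothetical learned groupings

buffer, level1_out, level2_out = "", [], []
for letter in stream:
    level1_out.append(letter)          # level 1 changes at every time step
    buffer += letter
    if buffer in known_sequences:      # level 2 changes only when a whole sequence completes
        level2_out.append(known_sequences[buffer])
        buffer = ""

print("level 1:", level1_out)   # 15 rapidly changing outputs
print("level 2:", level2_out)   # 5 slowly changing, more abstract outputs
```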


http://vimeo.com/17513841

Monday, December 20, 2010

Working developer implementation of HTM

A developer at a company called Provisio has built his own working version of the new HTM algorithms and posted a link to it on Numenta's forums. It is a nice tool that lets you play with the system and watch visually how the columns process data across many time steps as sequences of letters are presented to the algorithm. It is cool to watch the system gradually begin to predict the next letter in the sequence. It is still an early version of the software, but it is fun to play around with:

http://research.provisio.com/HTM/TemporalMemoryLab01.html
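For anyone who has not tried the demo, the flavor of it is easy to capture in a few lines. The sketch below is my own toy stand-in, not the Provisio code and not HTM (HTM's temporal memory uses sparse distributed representations and variable-length context rather than simple counts): it just learns letter-to-letter transitions online, so its next-letter predictions start out wrong and become correct as the sequence repeats.

```python
from collections import defaultdict

# A much simpler stand-in for what the demo does (this is NOT the HTM algorithm):
# count letter-to-letter transitions online and predict the most frequent successor.
counts = defaultdict(lambda: defaultdict(int))

sequence = "abcabcabcabc"
prev = None
for letter in sequence:
    if prev is not None:
        successors = counts[prev]
        predicted = max(successors, key=successors.get) if successors else "?"
        print(f"after '{prev}' the model predicts '{predicted}', actual is '{letter}'")
        counts[prev][letter] += 1   # learn from what actually happened
    prev = letter
```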

Technology Review article on Numenta

Thanks to Martin for bringing this article to my attention. It discusses how the new algorithms are sufficiently powerful that commercial applications of the technology are now imminent. I thought it was interesting that Itamar Arel was quoted with positive things to say about Numenta's tech. I have mentioned him before on this blog and have wondered why he hasn't worked more closely with Numenta, given that they have similar goals. Arel has a competing deep learning system known as DESTIN, which Ben Goertzel wants to use as the sensory perception portion of a child-like robot (the idea of a child-like robot, if there can be such a thing, seems a bit creepy to me as the parent of living, breathing children). Here is the link to the article:

http://www.technologyreview.com/business/26811/

Sunday, December 5, 2010

Another new Hawkins talk

On December 2, Jeff Hawkins gave a talk at Berkeley. It is similar to the talk from three weeks ago, but with some added tidbits sprinkled throughout. One nugget was Hawkins' statement that no existing machine learning model comes even close to HTM in how deeply it maps onto real cortical anatomy. This is exactly the point I made in my debate with Michael Anissimov on his blog. Here's the link:

http://www.archive.org/details/Redwood_Center_2010_12_02_vs265_26_Jeff_Hawkins

Thursday, December 2, 2010

HTM hardware implementation

I came across an interesting PowerPoint document by Dan Hammerstrom, a professor in the Electrical and Computer Engineering Department at Portland State University. He has published papers in the past discussing potential hardware versions of HTM, and he works with the DARPA SyNAPSE team that is attempting to create brain-like hardware.

In any event, he is collaborating with Numenta to create a hardware implementation of the new learning algorithms (he calls them "HTM3," as opposed to the prior software, "HTM2"). Hammerstrom says that Numenta is running into serious scaling problems with the new algorithms due to the limitations of present-day CPUs, and that they are concerned this will hold back wide adoption of the algorithms. Interestingly, they have tried using GPUs, but it hasn't helped much, so they are looking at more custom hardware tailored specifically to their algorithms. Working with Hammerstrom, they are considering three possibilities:

1. More optimal use of CPUs and GPUs
2. FPGAs
3. Custom silicon created specifically for Numenta
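To give a feel for what such hardware would be speeding up, here is a rough sketch (plain NumPy, with sizes I made up) of one of the core operations in the cortical learning algorithms as described in Numenta's whitepaper: every column repeatedly computes its overlap with a large, sparse binary input. Whether this is exactly the bottleneck Hammerstrom is targeting is my guess, but it is the kind of massively parallel, low-precision work that FPGAs or custom silicon handle naturally.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes, chosen only for illustration.
n_columns, n_inputs, synapses_per_column = 2048, 4096, 64

# Each column is connected to a random subset of the input bits.
connected = np.zeros((n_columns, n_inputs), dtype=np.int8)
for c in range(n_columns):
    connected[c, rng.choice(n_inputs, synapses_per_column, replace=False)] = 1

# A sparse binary input vector (roughly 2% of bits active).
input_bits = (rng.random(n_inputs) < 0.02).astype(np.int8)

# Overlap step: for every column, count its connected synapses that land on active inputs.
# This matrix-vector operation is repeated for every input at every time step.
overlap = connected @ input_bits              # shape: (n_columns,)

# Keep only the most strongly activated columns (a crude stand-in for inhibition).
k = 40
active_columns = np.argsort(overlap)[-k:]
print("strongest overlaps:", np.sort(overlap)[-5:])
```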

Now it is even clearer that Numenta is not focusing on computer vision yet: today's computers simply don't have the horsepower to run the software. Here is the link:

http://web.cecs.pdx.edu/~strom/talks/hh_my_research_web.pdf

Saturday, November 27, 2010

Latest Numenta newsletter

A few days ago Numenta sent out a newsletter with a quick update on their work. The newsletter notes that Numenta has now posted the Smith Group lecture on its own website (where it runs much more smoothly than the version on the university's website). It also announced the first additions and updates to the new learning algorithm documentation. The updates were helpful, particularly a new appendix that goes into some depth about the neuron model used by the HTM software and includes some of the graphics used in the online lecture.

Sadly, the newsletter noted that Numenta is temporarily deferring its work on computer vision problems in favor of applications that are more focused on temporal patterns, such as web click prediction and credit card fraud prediction. I guess I can't say that I am too surprised by this. In hindsight, based on the online video and the whitepaper, it is clear that Numenta ran into some problems with its vision experiments with the new algorithms. The current algorithms can model layer 3 or layer 4 of the cortex (layer 3 for variable-order, time-based learning, or layer 4 for learning that does not rely on temporal context). The whitepaper hypothesizes that layer 4 allows the brain to learn spatial invariance while layer 3 allows it to learn temporal invariance, but that for vision problems the brain somehow combines layers 3 and 4 to create spatial and temporal invariance at the same time. Until Numenta figures out how to model both layers working together at the same time, as the real brain does, computer vision probably isn't going to work terribly well.
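To make the layer 3 versus layer 4 distinction a bit more concrete, here is a toy sketch (my own, not Numenta's cortical learning algorithms) contrasting context-free prediction with variable-order, context-dependent prediction. "B C" appears in two different contexts, so a context-free learner cannot tell what follows "C," while a learner that carries enough preceding context can.

```python
from collections import defaultdict

# Toy illustration (not Numenta's algorithms) of the distinction the whitepaper draws:
# context-free learning vs. variable-order learning, where the prediction after an
# element depends on the earlier elements of the sequence.
sequences = [["A", "B", "C", "D"], ["X", "B", "C", "Y"]]

context_free = defaultdict(set)    # keyed by the current element only
variable_order = defaultdict(set)  # keyed by up to two elements of preceding context

for seq in sequences:
    for i in range(len(seq) - 1):
        context_free[seq[i]].add(seq[i + 1])
        variable_order[tuple(seq[max(0, i - 2):i + 1])].add(seq[i + 1])

# Without context, "C" is ambiguous; with enough context, each prediction is unique.
print(context_free["C"])                   # {'D', 'Y'}
print(variable_order[("A", "B", "C")])     # {'D'}
print(variable_order[("X", "B", "C")])     # {'Y'}
```

In the real algorithms the context is not stored as explicit tuples like this; the whitepaper describes it as being carried by which cells within each column are active, which is what lets the amount of context vary from sequence to sequence.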