I just saw an interesting talk from the 2010 Singularity Summit by Demis Hassabis. He spoke about the failings of traditional AI, one of which is that it ignored for decades the only known example of high-level intelligence (the human brain). At the other end of the spectrum, he mentioned brain simulation projects such as Blue Brain and DARPA SyNAPSE, which seek to capture the wiring of the brain, but not its functions. Hassabis argued for a middle-ground approach that combines the best of machine learning and neuroscience, which of course is the approach being taken by Numenta.
Hassabis mentioned that brain-inspired deep learning approaches such as HTM and deep belief nets have made significant progress. He made the interesting point that these systems are becoming good at sensory perception, but that it is not yet known how to build the brain's conceptual knowledge on top of sensory knowledge. Hassabis clearly believes that something like HTM cannot, on its own, produce abstract knowledge. I personally am not convinced that sensory knowledge can't lead to abstract knowledge. Everything we know is derived from our sensory experiences, past and present. I am not naive enough to think that HTM theory is a comprehensive explanation of brain function; it just seems to me that sensory data could, over time, produce increasingly abstract knowledge. The whole idea of a hierarchy of space and time is that successively higher layers of the hierarchy contain increasingly invariant, abstract representations, so I don't even see a clear dividing line between perceptual and abstract knowledge. Our ideas about love, hate, and anger all arise from past and present sensory experiences, from seeing, hearing, touching, and otherwise experiencing the good and bad of humanity, until we learn to represent these abstract ideas in our minds.
http://vimeo.com/17513841
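To make the hierarchy point concrete, here is a toy Python sketch, which is entirely my own simplification and not Numenta's code: a lower level that changes on every sensory time step, and a higher level that names whole sequences and therefore changes only when the sequence itself changes. That growing stability as you go up the hierarchy is what I mean by increasingly invariant, abstract representations.

```python
import numpy as np

def sdr(bits, size=64):
    """Build a small binary vector with the given bits set (a toy stand-in for an SDR)."""
    v = np.zeros(size, dtype=int)
    v[list(bits)] = 1
    return v

# Level 1 ("sensory"): a different pattern on every time step as two familiar
# sequences play out: sequence X = A,B,C,D and sequence Y = E,F,G,H.
patterns = {name: sdr(range(i * 4, i * 4 + 4)) for i, name in enumerate("ABCDEFGH")}
stream = list("ABCDABCD" + "EFGHEFGH")   # X played twice, then Y played twice

# Level 2: represents whichever *sequence* is currently playing by the union of
# its elements -- a stable "name" for the sequence that stays constant while the
# sequence unfolds and changes only when a new sequence begins.
sequences = {"X": "ABCD", "Y": "EFGH"}
def level2_state(symbol):
    seq = next(s for s, members in sequences.items() if symbol in members)
    return np.clip(sum(patterns[m] for m in sequences[seq]), 0, 1)

level1 = [patterns[p] for p in stream]
level2 = [level2_state(p) for p in stream]

def change_rate(states):
    """Fraction of time steps on which the representation changes."""
    return np.mean([not np.array_equal(a, b) for a, b in zip(states, states[1:])])

print(f"level 1 changes on {change_rate(level1):.0%} of steps")   # 100%
print(f"level 2 changes on {change_rate(level2):.0%} of steps")   # ~7%
```

The lower level is different on nearly every step, while the upper level barely changes at all; carry that pooling up several more levels and the representations start to look less like percepts and more like concepts.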
Tuesday, December 21, 2010
Monday, December 20, 2010
Working developer implementation of HTM
A developer at a company called Provisio has built his own working version of the new HTM algorithms and posted a link to it on Numenta's forums. It is a nice tool that lets you watch visually how the columns process data across many time steps as sequences of letters are presented to the algorithm. It is cool to see the system begin, over time, to predict the next letter in the sequence. It is still an early version of the software, but it is fun to play around with:
http://research.provisio.com/HTM/TemporalMemoryLab01.html
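For readers who want a feel for what the demo is doing, here is a minimal Python sketch of next-letter prediction on a repeating sequence. To be clear, this is not Provisio's code or the real HTM temporal memory, which uses sparse distributed representations and per-cell context to handle high-order sequences; it is just a first-order stand-in that shows how prediction improves as transitions are learned.

```python
from collections import defaultdict, Counter

class ToySequenceMemory:
    """A drastically simplified, first-order stand-in for HTM temporal memory.

    It learns letter-to-letter transition counts and predicts the most
    frequently seen successor; unlike the real algorithm, it cannot use
    deeper context to disambiguate high-order sequences.
    """

    def __init__(self):
        self.transitions = defaultdict(Counter)  # previous letter -> next-letter counts
        self.prev = None

    def step(self, letter):
        """Learn from the current letter, then predict the next one."""
        if self.prev is not None:
            self.transitions[self.prev][letter] += 1   # record the transition just observed
        self.prev = letter
        counts = self.transitions[letter]
        return counts.most_common(1)[0][0] if counts else None

if __name__ == "__main__":
    memory = ToySequenceMemory()
    for t, letter in enumerate("ABCDABCDABCD"):
        print(f"t={t:2d}  saw {letter!r}  predicted next: {memory.step(letter)}")
```

On the first pass through "ABCD" the predictions are empty; from the second pass on they are correct, which is roughly the behavior you see emerge in the demo as the letter sequence repeats.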
Technology Review article on Numenta
Thanks to Martin for bringing this article to my attention. It discusses how the new algorithms are sufficiently powerful that commercial applications of the technology are now imminent. I thought it was interesting that Itamar Arel was quoted with positive things to say about Numenta's tech. I have mentioned him before on this blog, and have wondered why he hasn't worked more closely with Numenta, since they have similar goals. Arel has a competing deep learning system known as DeSTIN that Ben Goertzel wants to use as the sensory perception portion of a child-like robot (if there is such a thing; as the parent of living, breathing children, I find the idea of a child-like robot a bit creepy). Here is the link to the article:
http://www.technologyreview.com/business/26811/
Sunday, December 5, 2010
Another new Hawkins talk
On December 2, Jeff Hawkins gave a talk at Berkeley. It is similar to the talk from three weeks ago, but with some added tidbits sprinkled throughout. One nugget was Hawkins' statement that no existing machine learning model comes even close to HTM in the depth to which it maps onto real cortical anatomy. This is exactly the point I made in my debate with Michael Anissimov on his blog. Here's the link:
http://www.archive.org/details/Redwood_Center_2010_12_02_vs265_26_Jeff_Hawkins
Thursday, December 2, 2010
HTM hardware implementation
I came across an interesting PowerPoint document by Dan Hammerstrom, a professor in the Electrical and Computer Engineering Department at Portland State University. He has published papers in the past discussing potential hardware versions of HTM, and he works with the DARPA SyNAPSE team that is attempting to create brain-like hardware.
In any event, he is collaborating with Numenta to create a hardware implementation of the new learning algorithms (he calls them "HTM3," as opposed to the prior software, "HTM2"). Hammerstrom says that Numenta is running into serious scaling problems with the new algorithms due to the limitations of present-day CPUs, and they are concerned that this will impact the wide adoption of their algorithms. Interestingly, they have tried using GPUs, but it hasn't helped much, so they are looking at more custom hardware tailored specifically to their algorithms. Working with Hammerstrom, they are considering three possibilities (a rough sketch of the kind of computation that needs accelerating follows the list):
1. More optimal use of CPUs and GPUs
2. FPGAs
3. Custom silicon created specifically for Numenta
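To give a feel for where the cycles go, here is a rough Python sketch of the kind of per-column "overlap" computation at the heart of the new algorithms' spatial pooling step. The sizes, sparsity, and random connectivity below are my own illustrative choices, not Numenta's or Hammerstrom's numbers; the point is only that each column's work is independent of every other column's, which is what makes the GPU, FPGA, and custom-silicon options attractive.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy sizes, chosen only for illustration; realistic deployments would be
# far larger, which is exactly the scaling problem described above.
n_inputs, n_columns, synapses_per_column = 4096, 2048, 64

# Each column samples a random subset of the input; roughly half of its
# potential synapses are treated as "connected".
synapse_targets = rng.integers(0, n_inputs, size=(n_columns, synapses_per_column))
synapse_connected = rng.random((n_columns, synapses_per_column)) > 0.5

def column_overlaps(active_input_bits):
    """Overlap per column: how many of its connected synapses see an active input bit.

    Every column repeats this on every time step, independently of all the
    other columns, so the workload parallelizes naturally onto GPUs, FPGAs,
    or custom silicon, while a plain CPU loop becomes the bottleneck at scale.
    """
    input_vec = np.zeros(n_inputs, dtype=bool)
    input_vec[active_input_bits] = True
    return np.sum(input_vec[synapse_targets] & synapse_connected, axis=1)

active_bits = rng.choice(n_inputs, size=80, replace=False)  # ~2% sparse input
overlaps = column_overlaps(active_bits)
print("top 5 columns by overlap:", np.argsort(overlaps)[-5:][::-1])
```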
Now it is even clearer that Numenta is not focusing on computer vision yet, because today's computers don't have the horsepower to run the software. In any event, here is a link:
http://web.cecs.pdx.edu/~strom/talks/hh_my_research_web.pdf