Thursday, December 2, 2010

HTM hardware implementation

I came across an interesting PowerPoint document by Dan Hammerstrom, a Professor in the Electrical and Computer Engineering Department at Portland State University. He has in the past published papers discussing potential hardware versions of HTM, and he works with the DARPA SYNAPSE team that is attempting to create brain-like hardware.

In any event, he is collaborating with Numenta to create a hardware implementation of the new learning algorithms (he calls them "HTM3", as opposed to the prior software, "HTM2"). Hammerstrom says that Numenta is running into serious scaling problems with the new algorithms due to the limitations of present-day CPUs, and they are concerned that this will impact the wide adoption of their algorithms. Interestingly, they have tried using GPUs, but it hasn't helped much, so they are looking at custom hardware tailored specifically to their algorithms. Working with Hammerstrom, they are considering three possibilities:

1. More efficient use of CPUs and GPUs
2. FPGAs
3. Custom silicon created specifically for Numenta

Now it is even clearer that Numenta is not focusing on computer vision yet, because today's computers don't have the horsepower to run the software. In any event, here is a link:


  1. This is an interesting interview with a robotic-vision expert who is also using a hierarchical system. He says essentially the same thing:

  2. Here's a (long) article about memristors and building a brain. Quite an optimistic tone.

  3. This comment has been removed by the author.

  4. Thanks, guys. Interesting articles. With regard to that IEEE article, I note that Hammerstrom's presentation, to which I linked above, mentioned the IEEE piece as another example of exaggerated claims about how far the DARPA SYNAPSE project has progressed (since he is part of the project, I give his statement a great deal of weight). The fact of the matter is that a memristor that can simulate the functionality of a synapse will be very valuable in the future, but not until people figure out exactly how the brain wires synapses together to create the marvel of intelligence that is our brain. A good example of this is Numenta's own algorithms: they are only just now starting to figure out how to incorporate synapses into their models. When I hear that someone has figured out how to create an artificial brain with hardware, my first question is always, "OK, but what model of intelligence are you going to use to wire all that hardware together?"

  5. Sean: I read the IEEE article again, and to me this is something I expect to read in 10 or 15 years, not today. So, as you say, it's too optimistic. The authors start out by noting that some people expected AI as early as the 1980s, yet not much has happened in the last 50 years.
    And then they hype the memristor as something that will change it all. Maybe it will, but they should be more down-to-earth about it.

    We haven't yet figured out how the brain works in every detail, so there could still be surprises. For instance, some neurons have connections that stretch far away to other regions of the brain (I have read this somewhere). If those connections turn out to be important, that would complicate matters.

    The IEEE article says that next year they will test some thousands of "brains" and pick those with the best wiring. So they haven't even started yet. How can they be so sure it won't end in failure?

  6. Right. The fact that they are creating thousands of "brains" shows an almost random approach to the task, like evolution. Memristors are going to have immediate importance for some things, but their importance to AI advances specifically is going to depend on the success of the work of entities such as Numenta.

  7. Have you actually tried to do anything in your life that resembles what the IEEE article proposes? Are you experts, or what? If not, it is wiser to shut up.
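To make the synapse-modeling point above concrete: in Numenta's new learning algorithms, each synapse carries a scalar "permanence," and the synapse only counts as connected once that permanence crosses a threshold; learning nudges the permanence up or down. Here is a minimal sketch of that idea (all names and numeric values are my own illustration, not Numenta's actual code or parameters):

```python
# Illustrative sketch of a permanence-based synapse, the abstraction
# Numenta's new learning algorithms use. Names and values are hypothetical.
CONNECTED_THRESHOLD = 0.2  # permanence at or above this counts as "connected"

class Synapse:
    def __init__(self, permanence=0.1, increment=0.05, decrement=0.02):
        self.permanence = permanence
        self.increment = increment
        self.decrement = decrement

    def connected(self):
        # A synapse only participates in computation once it is connected.
        return self.permanence >= CONNECTED_THRESHOLD

    def learn(self, active):
        # Hebbian-style update: reinforce if the presynaptic cell was active,
        # weaken otherwise; permanence is clamped to [0, 1].
        if active:
            self.permanence = min(1.0, self.permanence + self.increment)
        else:
            self.permanence = max(0.0, self.permanence - self.decrement)

s = Synapse()
for _ in range(3):
    s.learn(active=True)
print(s.connected())  # True: permanence has grown from 0.1 to 0.25
```

Note that the synapse starts out potential-but-disconnected and is recruited only through repeated coincident activity; that thresholded, binary-weight behavior is exactly the kind of thing a memristor could implement directly in hardware, which is why the wiring model matters more than the device itself.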