Friday, October 29, 2010

Jeff Hawkins speech on November 12

For the first time in eight months, it looks like Jeff Hawkins will be speaking publicly about Numenta's work. The bad news is that, per the contact for the event, it will not be broadcast or recorded. I am hoping that that information turns out to be incorrect. Hawkins is giving the 2010 SmithGroup Lecture at the University of Illinois at Urbana-Champaign. Based on the abstract provided by Numenta, Hawkins will discuss Numenta's overall neocortical theory, the new learning algorithms, and how he believes hierarchical learning machine intelligence models will evolve in the future. Perhaps we will get lucky and someone will create an amateur video of the event. Definitely sounds interesting.

UPDATE: Interestingly, it looks like Hawkins is going to be giving the same talk at MIT's Center for Biological and Computational Learning on November 10. This is great to see, because MIT's CBCL is home to some of the leading research in biologically inspired machine learning. For instance, Tomaso Poggio, whom I have mentioned on this blog, is part of CBCL. This is the type of publicity that I was hoping to see for Numenta. Here is a link:

http://www.facebook.com/CBCL.MIT#!/event.php?eid=150521674991840

Thursday, October 14, 2010

DARPA and hierarchical temporal memory

In a recent comment, I linked to an article entitled "Deep Machine Learning - A New Frontier in Artificial Intelligence Research."

In it, the creators of the DESTIN architecture, whom I have mentioned before on this blog, attempt to summarize the field of deep machine learning, the idea of using hierarchies to learn in a more brain-like manner. What interested me about the article was its mention of a DARPA research effort involving deep machine learning architectures. In April 2009, DARPA put out a request for proposals on the topic. The military is increasingly worried that the vast amounts of data it collects go unanalyzed simply because humans do not have the time to sift through them, and DARPA is seeking an HTM-like algorithm that will find patterns in that data. The announcement closed in April 2010, and to my shock, I see no indication that Numenta put in a proposal (among others, it appears that the DESTIN folks did). In a briefing, DARPA set out a list of desirable properties for the algorithms resulting from the multi-year research effort. Here is the list:

1. Single learning algorithm and uniform architecture for all applications

2. Unsupervised and supervised learning, including with attention mechanisms

3. Increasingly complex representations as you ascend the hierarchy (sparse representations were mentioned here)

4. The ability to learn sequences and recall them auto-associatively

5. Recognize novelties at each level and escalate them in the hierarchy

6. Feedback for predictions to fill in missing input

7. Online learning

8. Parameters set themselves and need no tweaking

9. Neuroscience insights to inform the algorithms and architecture

Essentially, that list of desirable features in DARPA's envisioned software is a description of the HTM algorithms. It's difficult to imagine why Numenta didn't throw its hat in the ring, given the amount of money potentially involved if the technology catches the eye of the military. In any event, DARPA's document was very interesting reading.
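Property 4 on DARPA's list, learning sequences and recalling them auto-associatively, is also central to HTM. Here is a minimal toy sketch of what that means, entirely my own illustration (the class and method names are invented and have nothing to do with DARPA's program or Numenta's actual algorithms): a memory learns first-order transitions between symbols and then completes a learned sequence from a partial cue.

```python
# Illustrative sketch only: a toy first-order sequence memory showing the
# kind of auto-associative recall described by DARPA's property 4. All
# names here are my own invention, not from any real implementation.

class ToySequenceMemory:
    """Learns sequences of symbols and recalls them from a partial cue."""

    def __init__(self):
        # transition table: symbol -> symbol that followed it during learning
        self.transitions = {}

    def learn(self, sequence):
        """Store each first-order transition in the sequence."""
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current] = nxt

    def recall(self, cue, max_length=10):
        """Complete a learned sequence auto-associatively from a cue."""
        result = [cue]
        while cue in self.transitions and len(result) < max_length:
            cue = self.transitions[cue]
            result.append(cue)
        return result

memory = ToySequenceMemory()
memory.learn(["A", "B", "C", "D"])
print(memory.recall("B"))  # the cue "B" completes to ["B", "C", "D"]
```

A real HTM region does far more than this (it uses sparse distributed representations and handles higher-order sequences), but the sketch captures the basic idea of filling in the rest of a pattern from a fragment.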

Monday, October 4, 2010

The complexity of the problem faced by Numenta

I have been following Dileep George's new blog, and he recently responded to posts by me and Dave (perhaps the same Dave who occasionally comments here).

In my post, I asked Dileep how the traditional tree-shaped hierarchy can account for the vast number of qualities that come into play when we recognize, for instance, a shoe. When we see a shoe, we recognize that it has a certain color, a certain texture, a certain design, and many other features. In other words, recognizing one object requires the brain to draw on connections to a number of other invariant representations of different types of objects and concepts. I couldn't see how a simple tree-shaped hierarchy could represent this complexity, and Dileep confirmed that I was correct, stating that the brain likely has a number of different hierarchies that communicate with one another. Since then, I think I was mixing up how we recognize a particular instantiation of a shoe with how we recognize the invariant representation of "shoe" that is stored in the brain. A simple tree-shaped hierarchy might be sufficient to store the invariant "shoe" concept even if it cannot, by itself, recognize a particular shoe.

Dave's question to Dileep focused on whether a single HTM network could recognize both an object (such as a shoe) and an action (like running or walking). Surprisingly to me, Dileep answered that you would need two separate HTM networks to handle those two types of knowledge. My conclusion now is that the simple, traditional tree-shaped hierarchy is not sufficient even to represent all the invariant concepts known by the brain, much less the particular instantiations of those representations that we learn (i.e., particular faces as opposed to the general idea of "face").
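To make the limitation concrete, here is a bare-bones sketch of a tree-shaped hierarchy, again my own toy illustration rather than anything from Numenta or Dileep (all class names and feature labels are invented). Each node simply pools its children's outputs into one label, so a single tree yields a single classification per input; attributes like object identity and color each seem to need their own hierarchy over the same raw input, which is exactly the multiple-communicating-hierarchies picture Dileep described.

```python
# Illustrative sketch only: a trivial tree-shaped hierarchy in which each
# node pools its children's outputs into one label. Names and features
# are invented for the example.

class LeafNode:
    """Reads one raw feature from the input vector."""
    def __init__(self, index):
        self.index = index

    def classify(self, features):
        return features[self.index]

class PoolingNode:
    """Pools child outputs into a single invariant label."""
    def __init__(self, children, pool):
        self.children = children
        self.pool = pool  # tuple of child outputs -> label

    def classify(self, features):
        key = tuple(child.classify(features) for child in self.children)
        return self.pool.get(key, "unknown")

# Two separate hierarchies reading the same input: one for object
# identity, one for color. A single tree could only output one of these.
shape_tree = PoolingNode(
    [LeafNode(0), LeafNode(1)],
    {("laces", "sole"): "shoe", ("strap", "sole"): "sandal"},
)
color_tree = PoolingNode(
    [LeafNode(2)],
    {("red",): "red thing", ("blue",): "blue thing"},
)

features = ("laces", "sole", "red")
print(shape_tree.classify(features))  # -> "shoe"
print(color_tree.classify(features))  # -> "red thing"
```

The sketch obviously ignores learning, invariance, and feedback; the point is only structural, that one tree produces one label, so representing many attribute types simultaneously pushes you toward multiple hierarchies or cross-connections between them.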

This goes to show that even if Numenta's new algorithms have licked the problem of how the brain learns and performs inference within a region, figuring out how the brain as a whole learns many types of objects and concepts, both invariantly and specifically, and how it ties all of that knowledge together in the amazing way that our brain works, is something we are only beginning to understand.