Monday, October 4, 2010

The complexity of the problem faced by Numenta

I have been following Dileep George's new blog, where he recently responded to posts by me and by Dave (perhaps the same Dave who occasionally comments here).

In my post, I asked Dileep how the traditional tree-shaped hierarchy can account for the vast number of qualities that come into play when we recognize, for instance, a shoe. When we see a shoe, we recognize that it has a certain color, a certain texture, a certain design, and many other features. In other words, recognizing one object requires the brain to draw on connections to many other invariant representations of different types of objects and concepts. I couldn't see how a simple tree-structured hierarchy could represent this complexity, and Dileep confirmed that I was correct, stating that the brain likely has a number of different hierarchies that communicate with one another. Since then, I think I was conflating how we recognize a particular instantiation of a shoe with how we recognize the invariant representation of "shoe" stored in the brain. The simple tree-shaped hierarchy might be sufficient to store the invariant "shoe" concept even if it cannot, by itself, recognize a particular shoe.

Dave's question to Dileep focused on whether a single HTM network could recognize both an object (such as a shoe) and an action (such as running or walking). To my surprise, Dileep answered that you would need two separate HTM networks to handle those two types of knowledge. My conclusion now is that the simple, traditional tree-shaped hierarchy is not sufficient even to represent all of the invariant concepts known by the brain, much less the particular instantiations of those representations that we learn (e.g., particular people's faces as opposed to the general idea of "face").
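
To make the concern concrete, here is a toy sketch in Python (my own illustration for this post, with made-up names; it is not Numenta's code or Dileep's actual proposal) of a strict tree-shaped hierarchy. Each node pools its children's outputs into a single label, so by the time a signal reaches the root only one quality survives; reporting both "shoe" and "red" from the same input seems to require either a richer output or a second hierarchy over the same data.

    # Toy sketch of a strict tree-shaped hierarchy (hypothetical, for illustration only).
    # Each node has fixed children and collapses their outputs into a single label,
    # so the root can report "shoe" or "red", but not both at once.
    class TreeNode:
        def __init__(self, children=None, classify=None):
            self.children = children or []   # fixed tree wiring: every child feeds one parent
            self.classify = classify         # maps raw input or child labels to a single label

        def infer(self, stimulus):
            if not self.children:            # leaf node: looks at its own patch of the input
                return self.classify(stimulus)
            labels = [child.infer(stimulus) for child in self.children]
            return self.classify(labels)     # everything below this point collapses to one label

    # One way to keep several qualities, as Dileep suggests, is separate hierarchies
    # over the same input that communicate at a higher level, e.g.:
    #   shape_hierarchy.infer(image) -> "shoe"
    #   color_hierarchy.infer(image) -> "red"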

This goes to show that even if Numenta's new algorithms have licked the problem of how the brain learns and does inference within a single region, how the brain as a whole learns many types of objects and concepts, both invariantly and specifically, and how it ties all of that knowledge together, is still something we are only beginning to figure out.

10 comments:

  1. Yes, that was me.. :)

    Thinking about this problem for a while, I've come to think that language plays a much bigger role in our actual thinking process than we give it credit for. Language itself allows us to organize our own perceptions into meaningful categories. This is probably why they say you can't remember anything from before you learned language.

    That is definitely only part of it, however. After all, mammals can recognize different types of objects. The learning they do must simply be associative - intrinsically, "this is like that" - but they don't have the language to categorize all the things that share those characteristics.

    I think language effectively adds more levels to the hierarchy, allowing groups of observations to be reasoned about as a whole rather than just the observations themselves (and then groups of groups, and so on). In this way, language is fundamental to our ability to learn and to think.

    It's very interesting to me that our minds let us leverage our own learning like this. Perhaps it's because language is intrinsically just a manifestation of the same type of communication that happens within the brain itself. In any case, this self-organizing principle seems like it may be an emergent property of these types of systems, since it allows for a lot of compression.

  2. Many philosophy professors (including one of mine in college) believe, as you hypothesize, that language is the essence of high-level thought. One could wonder, though, whether we only think that because humans think by using language. It is simply difficult to imagine how one goes about thinking without language, yet at some level most mammals apparently do it. At the heart of Numenta's new algorithms is their ability not only to learn sequences, but also to pass the names of those sequences to the next higher level in the HTM hierarchy. If this theory is correct, I think that even without language, our brains are naming the sequences of patterns that are recognized as invariant representations. In an HTM node, the name is simply a bunch of numbers until a human tells the HTM what to call the object it is seeing.

    Whether this means that a monkey sees a familiar object and somehow names it for future reference, I am not sure.
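
    To illustrate what I mean by naming (just a toy sketch of the idea as I understand it, with made-up names, not Numenta's actual algorithm), picture a node that assigns an arbitrary number to each sequence it learns and passes only that number up the hierarchy; the number becomes a word only when a person attaches a label:

        # Toy sketch of the "sequence naming" idea (my own simplification, not Numenta's code).
        class SequenceNamingNode:
            def __init__(self):
                self.names = {}    # learned sequence -> arbitrary numeric name
                self.labels = {}   # numeric name -> human-supplied word, if any

            def learn(self, sequence):
                seq = tuple(sequence)
                if seq not in self.names:
                    self.names[seq] = len(self.names)  # the "name" is just a number
                return self.names[seq]                 # this is all the next level up ever sees

            def attach_label(self, name, word):
                self.labels[name] = word               # a human tells the node what to call it

        node = SequenceNamingNode()
        name = node.learn(["edge", "curve", "edge"])   # hypothetical low-level pattern sequence
        node.attach_label(name, "shoe")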

    I definitely agree that solving the big AI problem and giving computers the ability to understand language are intimately tied together. If a computer can't do language, it won't be recognizable as something smart enough to be anywhere near human level. On the flip side, if a computer doesn't have an HTM-like system approaching human intelligence, I don't think its language recognition will approach humanlike capabilities. People wonder why Dragon dictation 11 still can't do nearly as well as humans, and the answer is, and will continue to be, that the software has no humanlike world model that lets it understand the meaning of the words it is trying to recognize. Knowing that, it actually surprises me how well voice recognition software does these days. I am not holding my breath for human-level voice recognition by Numenta or anyone else until some of the broader AI problems begin to be solved.

  3. This article has an interesting discussion of the various cognitive architectures out there, and explains how HTMs (or DeSTIN, or whatever) can play a role.

    http://www.ece.utk.edu/~itamar/Papers/NeuroComputing2010.pdf

  4. Thanks, Dave, I just saw that article last night. I actually was going to post something about one of the cites in that article...

  5. I have to correct myself: that is not the article I found last night. I skimmed that one, though, and it looks interesting. I see that the DeSTIN guys once again give HTMs short shrift, citing only "On Intelligence" (and with an incorrect citation at that). When On Intelligence was written, HTM as a machine learning theory did not exist, so that doesn't seem like the best reference to use. You have to wonder if they have even read the much more recent "Towards a Mathematical Theory of Cortical Micro-circuits." Anyway, here's a link to the article I found:

    http://www.ece.utk.edu/~itamar/Papers/CIM2010.pdf

  6. I agree that they seem to disregard HTMs too easily. I honestly think it's because they want their idea to sound novel when it is basically the exact same thing as HTMs, except about 4 years behind.

    In any case, I think the ideas they present about using these types of memories as one component of an overall AGI are very interesting, and it certainly seems like something HTMs can fit into if/when they prove to be superior.

    By the way, this Neil Jacobstein guy (who was just appointed president of Singularity University) is pretty high on HTMs in his AI lecture:

    http://www.youtube.com/watch?v=lhzbIIffA64

  7. Dave:

    That was a good talk, thanks. Numenta has had little or no public involvement with the "singularity" movement as far as I can tell, so maybe Jacobstein's enthusiasm will help change that. Numenta has always seemed to keep its distance from the traditional AI community (which might explain some of the dismissiveness from people like B. Goertzel). I wish Numenta were out in that community more, making its case. As I have noted, particularly in the last year or so, Numenta has gone virtually silent in terms of speeches, papers, and the like. I wish I knew what the strategy behind that change is.

    I thought it was interesting that Jacobstein essentially said that strong AI is more likely to come first from the more biologically inspired approaches like Numenta's than from less biologically based ones such as Goertzel's Novamente/DeSTIN. I have to agree that Numenta's habit of looking back to biology whenever it hits a hurdle is one of the reasons it will likely be a leader in the AI field for the foreseeable future. Of course, Numenta's funding and brainpower are also important factors: Hawkins and Co. are not only very knowledgeable in both neuroscience and AI techniques, but they also made a lot of money in their Palm days.

  8. Yeah, I think they are trying really hard to get people to take them seriously, and they think subscribing to the singularity movement or whatever will hurt their credibility.

    I'm not sure they really care what that community thinks of them. They are much more concerned about companies like Google, I am sure, who can license their technology (or buy them out). The singularity community has too much baggage (e.g., transhumanism) that makes for difficult PR.

  9. True, I hear what you are saying about some in the singularity community. Hearing people say seriously that they look forward to downloading their brains into a computer chip to continue their existence is creepy, to say the least.

    Having said that, the broader idea of a technological singularity seems to be going mainstream. For instance, the partners listed on the Singularity Summit website include Google and Scientific American. I remain puzzled by the utter lack of any participation by Numenta in conferences, whether mainstream AI or otherwise.

  10. Yeah, that's true. It could also be that they don't want to "reveal themselves" until things are really extraordinary, so they don't run the risk of the overhype and letdown that has plagued AI.
