Wednesday, February 17, 2010

Narrow versus broad AI

One interesting question is whether Numenta's current focus on applying its HTM algorithms to narrow AI problems actually hurts its usefulness. I ask because conventional AI is already getting quite good at certain narrow problems (handwriting and speech recognition, certain types of computer vision, game playing, and so on). For instance, in a recent study, HTMs placed in the middle of the pack for recognition accuracy on an optical character recognition task. Of course, the counterargument is that HTMs don't just do character recognition; they can be applied to many, many other problems. Yet that gets to my point: one wonders whether, for any given narrow AI task, HTMs will be outflanked by competing techniques designed specifically for that one narrow task.

If this is truly a problem, the obvious solution would be for Numenta to ultimately focus HTMs on broad-competence AI (i.e., robots that can carry on a conversation with you, reason intelligently about novel problems, make independent decisions, and otherwise learn like a human). Yet Jeff Hawkins himself, in his book On Intelligence, envisioned little or no role of this type for his technology, at least where robots are concerned.

The difference between the human brain and any computer is not that the brain does every thinking or reasoning task better. In fact, computers are now much better than humans at certain tasks, such as number crunching and playing chess. The difference is that the human chess player can play chess and do a million other things, and can understand how those million things relate to one another within a complete model of the world.

Dileep George of Numenta has publicly touched on a related topic in recent months. He discussed the No Free Lunch theorem, which says, roughly, that no learning algorithm is inherently superior to any other when averaged across all possible problems. If an algorithm seems better at a certain task than its competitors, it is only because that algorithm is built on assumptions about the world that happen to hold for that task. The more assumptions an algorithm makes about the world, the better it can perform on tasks where those assumptions can be exploited. The flip side is that it will perform worse on problems where those assumptions do not hold.
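To make that trade-off concrete, here is a small toy sketch of my own in Python (it is not from Numenta, and it is not the theorem's formal statement): a learner built on the assumption that the world is linear is compared with an assumption-free nearest-neighbour learner in two invented "worlds," one linear and one cubic. The function names and the two worlds are my own inventions for illustration.

```python
# Toy illustration of the No Free Lunch trade-off (my own sketch).
# A learner with a strong built-in assumption (the world is a straight line)
# wins in the world that matches its assumption and loses in one that doesn't.

def fit_linear(train):
    """Learner with a strong assumption: the data lie on a line y = a*x + b."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def fit_nearest(train):
    """Learner with almost no assumptions: repeat the nearest example seen."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def mse(model, test):
    """Mean squared prediction error on held-out points."""
    return sum((model(x) - y) ** 2 for x, y in test) / len(test)

worlds = {
    "linear world (y = 2x + 1)": lambda x: 2 * x + 1,
    "cubic world (y = x^3 - 2x)": lambda x: x ** 3 - 2 * x,
}

train_xs = [i / 5 for i in range(-15, 16)]   # -3.0, -2.8, ..., 3.0
test_xs = [x + 0.1 for x in train_xs[:-1]]   # midpoints between training points

for name, f in worlds.items():
    train = [(x, f(x)) for x in train_xs]
    test = [(x, f(x)) for x in test_xs]
    print(f"{name}: linear-assumption error = {mse(fit_linear(train), test):.2f}, "
          f"assumption-free error = {mse(fit_nearest(train), test):.2f}")
```

In the linear world the linear learner's error is essentially zero while the nearest-neighbour learner pays a small price; in the cubic world the situation reverses. That is the flavour of the trade-off, even though the real theorem is a statement about averages over all possible problems.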

Numenta believes that HTMs take advantage of two properties of the world that the human brain also exploits: 1) the world is hierarchical in structure, and 2) all learning happens through time. In other words, HTMs work because we live in a world that is hierarchical in both space and time. In narrow domains such as chess, by contrast, the algorithms are designed specifically and only for that single task; the engineer bakes many, many assumptions about the world into the code. That is why a chess-playing computer can beat the world's best human at chess yet knows absolutely nothing else about the world.
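To illustrate those two assumptions in the simplest possible way, here is a deliberately tiny sketch of my own. It is not Numenta's HTM algorithm, just a toy: each level watches a stream of symbols over time, memorizes pairs that repeat often, and passes a shorter stream of chunk names up to the next level, so higher levels end up representing larger patterns in space and time.

```python
# Toy hierarchy-plus-time sketch (my own illustration, not Numenta's HTM code).
# Each level memorizes frequently repeated pairs in its input stream over time
# and hands a recoded, shorter stream of chunk names up to the level above.
from collections import Counter

def learn_level(stream, min_count=3):
    """Return pairs of adjacent symbols that repeat at least min_count times."""
    pairs = Counter(zip(stream, stream[1:]))
    return {p for p, c in pairs.items() if c >= min_count}

def recode(stream, chunks):
    """Replace learned pairs with single chunk symbols, greedily left to right."""
    out, i = [], 0
    while i < len(stream):
        if i + 1 < len(stream) and (stream[i], stream[i + 1]) in chunks:
            out.append(stream[i] + stream[i + 1])   # chunk name = joined symbols
            i += 2
        else:
            out.append(stream[i])
            i += 1
    return out

# A stream in which "ab" and "cd" recur, and so does the larger pattern "abcd".
stream = list("abcdabcdxabcdab")

level1_chunks = learn_level(stream)            # low-level chunks like ('a','b'), ('c','d')
level1_stream = recode(stream, level1_chunks)  # what the level above gets to see
level2_chunks = learn_level(level1_stream)     # higher-level chunk ('ab','cd')

print("level 1 learned:", level1_chunks)
print("stream passed up:", level1_stream)
print("level 2 learned:", level2_chunks)
```

The point is only that the upper level never sees the raw symbols; it learns temporal regularities over the chunks discovered below it, which is the flavour of "hierarchical in space and time" described above, with everything that makes the real algorithms interesting stripped away.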

The No Free Lunch theorem brings me back to my initial point: would Numenta be better off focusing on broad-competence AI? If HTMs really are built on the same assumptions about the world as the human brain, they should be the best means of emulating human-level intelligence. On the other hand, for the many, many narrow AI problems out there, algorithms developed specifically for those problems might win. Time will tell. So far, Numenta has mostly focused on computer vision, and vision may actually be a broad rather than narrow problem, given how much a computer needs to know about the world to understand what it is seeing as well as a human does. It will be interesting to see the direction Numenta takes in the coming years.

2 comments:

  1. Nice analysis - keep the posts up! I'm glad I found this site. Thanks!

  2. Indeed, excellent blog, very well written :-)
