Tuesday, December 21, 2010

Singularity Summit speech by Demis Hassabis

I just watched an interesting talk from the 2010 Singularity Summit by Demis Hassabis. He spoke about the failings of traditional AI, one of which is that it ignored for decades the only known example of high-level intelligence: the human brain. At the other end of the spectrum, he mentioned brain simulation projects such as Blue Brain and DARPA SyNAPSE, which tend to model the brain's wiring but not its function. Hassabis argued for a middle-ground approach that combines the best of machine learning and neuroscience, which of course is the approach being taken by Numenta.

Hassabis mentioned that brain-inspired deep learning approaches such as HTM and deep belief nets have made significant progress. He made the interesting point that these systems are becoming good at sensory perception, but that it is not yet known how to build the brain's conceptual knowledge on top of sensory knowledge. Hassabis clearly believes that something like HTM cannot, on its own, produce abstract knowledge. I personally am not convinced that sensory knowledge can't lead to abstract knowledge. The fact of the matter is that everything we know is derived from our sensory experiences, past and present. I am not naive enough to think that HTM theory is a comprehensive explanation of brain function. It just seems to me that sensory data could, over time, produce increasingly abstract knowledge. The whole idea of a hierarchy of space and time is that successively higher levels of the hierarchy contain increasingly invariant, abstract representations, so I don't even see a clear dividing line between perceptual and abstract knowledge. Our ideas about love, hate, and anger all arise from past and present sensory experiences, from seeing, hearing, touching, and otherwise experiencing the good and bad of humanity, until we learn to represent these abstract ideas in our minds.
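To make that intuition concrete, here is a toy sketch (my own illustration, not Numenta's code; the sequences and names are invented) of how a higher level of a hierarchy can be more invariant than the level below it simply by pooling over sequences of lower-level inputs:

```python
# Toy illustration of how a higher level of a hierarchy can be more
# invariant than the level below it: it outputs one stable name for a
# whole sequence of lower-level inputs. This is my own simplification,
# not Numenta's algorithm; the sequences are invented for illustration.

learned_sequences = {
    "dog": ["fur-patch", "wagging-tail", "bark-sound"],
    "car": ["headlight", "spinning-wheel", "engine-noise"],
}

def higher_level_output(inputs_so_far):
    """Return the stored sequence whose elements best match the inputs seen so far."""
    def overlap(name):
        return len(set(inputs_so_far) & set(learned_sequences[name]))
    return max(learned_sequences, key=overlap)

# The lower level changes on every time step; the higher level stays on "dog".
stream = ["fur-patch", "wagging-tail", "bark-sound"]
for t in range(1, len(stream) + 1):
    print(stream[:t], "->", higher_level_output(stream[:t]))
```

The point is only that a level which pools over time changes more slowly, and therefore more invariantly, than its inputs; whether that alone scales up to concepts like love or anger is exactly the open question.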


http://vimeo.com/17513841

Monday, December 20, 2010

Working developer implementation of HTM

A developer at a company called Provisio has built his own working implementation of the new HTM algorithms and posted a link to it on Numenta's forums. It is a nice tool that lets you play with the algorithm and visually watch the columns process data across many time steps as sequences of letters are presented. It is fun to watch the system gradually begin to predict the next letter in a sequence. It is still an early version of the software, but it is worth playing around with:

http://research.provisio.com/HTM/TemporalMemoryLab01.html
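For anyone curious about the underlying task, here is a minimal sketch of the kind of next-letter prediction the demo visualizes. To be clear, this is just a first-order transition table of my own, not the HTM temporal memory algorithm the tool actually implements:

```python
from collections import defaultdict, Counter

# Minimal sketch of the next-letter prediction task the demo visualizes.
# This is a plain first-order transition table, not Numenta's temporal
# memory algorithm, and is only meant to show the task.

transitions = defaultdict(Counter)

def train(sequence):
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1

def predict(letter):
    counts = transitions[letter]
    return counts.most_common(1)[0][0] if counts else None

train("ABCDABCDABCD")
print(predict("C"))  # -> 'D'
```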

Technology Review article on Numenta

Thanks to Martin for bringing this article to my attention. It discusses how the new algorithms are sufficiently powerful that commercial applications of the technology are now imminent. I thought it was interesting that Itamar Arel was quoted with positive things to say about Numenta's tech. I have mentioned him before on this blog and have wondered why he hasn't worked more closely with Numenta, since they have similar goals. Arel has a competing deep learning system known as DESTIN that Ben Goertzel wants to use as the sensory perception portion of a child-like robot (if there is such a thing; as the parent of living, breathing children, I find the idea of a child-like robot a bit creepy). Here is the link to the article:

http://www.technologyreview.com/business/26811/

Sunday, December 5, 2010

Another new Hawkins talk

On December 2, Jeff Hawkins gave a talk at Berkeley. It is similar to the talk from three weeks ago, but with some added tidbits sprinkled throughout. One nugget was Hawkins' statement that no existing machine learning model comes even close to HTM in the depth to which it maps onto real cortical anatomy. This is exactly the point I made in my debate with Michael Anissimov on his blog. Here's the link:

http://www.archive.org/details/Redwood_Center_2010_12_02_vs265_26_Jeff_Hawkins

Thursday, December 2, 2010

HTM hardware implementation

I came across an interesting PowerPoint document by Dan Hammerstrom, a Professor in the Electrical and Computer Engineering Department at Portland State University. He has published papers in the past discussing potential hardware versions of HTM, and he works with the DARPA SyNAPSE team that is attempting to create brain-like hardware.

In any event, he is collaborating with Numenta to create a hardware implementation of the new learning algorithms (he calls them "HTM3," as opposed to the prior software, "HTM2"). Hammerstrom says that Numenta is running into serious scaling problems with the new algorithms due to the limitations of present-day CPUs, and they are concerned that this will impact the wide adoption of their algorithms. Interestingly, they have tried using GPUs, but it hasn't helped much, so they are looking at more specialized hardware tailored specifically to their algorithms. Working with Hammerstrom, they are considering three possibilities:

1. More efficient use of CPUs and GPUs
2. FPGAs
3. Custom silicon created specifically for Numenta

Now it is even clearer why Numenta is not yet focusing on computer vision: today's computers don't have the horsepower to run the software. In any event, here is a link:

http://web.cecs.pdx.edu/~strom/talks/hh_my_research_web.pdf

Saturday, November 27, 2010

Latest Numenta newsletter

A few days ago Numenta sent out a newsletter with a quick update on their work. The newsletter notes that Numenta has now posted the Smith Group lecture on its website (it runs much more smoothly than the version on the University's website). It also announced the first additions and updates to the new learning algorithm documentation. The additions were helpful, particularly an appendix that goes into some depth about the neuron model used by the HTM software. It includes some of the graphics used by Numenta in the online lecture.

Sadly, the newsletter noted that Numenta is temporarily deferring its work on computer vision problems in favor of applications that are more focused on temporal patterns, such as web click prediction and credit card fraud prediction. I can't say that I am too surprised by this. In hindsight, based on the online video and the whitepaper, it is clear that Numenta ran into some problems with its vision experiments with the new algorithms. The current algorithms can model layer 3 or layer 4 of the cortex (layer 3 for variable-order, time-based learning, layer 4 for learning that does not rely on context). The whitepaper hypothesizes that layer 4 allows the brain to learn spatial invariance while layer 3 allows it to learn temporal invariance, and that for vision problems the brain somehow combines layers 3 and 4 to create spatial and temporal invariance at the same time. Until Numenta figures out how to model both layers working together like the real brain, computer vision probably isn't going to work terribly well.

Saturday, November 13, 2010

Comparing the new algorithms to the first generation

It is interesting to compare the new algorithms to the original zeta1 algorithms released by Numenta back in 2007. In a May 2007 blog post, Numenta discussed the limitations of those algorithms. Here were the limitations noted at that time (see this link for the blog post):

1. The 2007 algorithms had no time-based inference, so inference was based on, for instance, a single snapshot of a picture to recognize an object. Now, of course, the algorithms fully employ time-based inference, which should make computer vision applications (and other applications) based on HTM much more powerful.

2. In 2007, time was used for learning, but it was only "first-order" time based learning. That meant that when the software was attempting to learn sequences of patterns, it would only account for the current time step and one prior time step. Imagine trying to learn invariant representations for dogs, cars, people, and other complex objects based on only two consecutive still pictures of data. Our brains learn by seeing many "movies" of objects in the world around us, so this was a very significant limitation on the power of HTM. Now, it appears that HTM can learn sequences of essentially unlimited length.

3. The 2007 algorithms had abrupt, discrete "nodes" without overlapping boundaries. According to the blog, this diminished the ability of the system to create invariant representations. Now, the levels of the HTM hierarchy are one continuous region (no more nodes). This is a big change that I actually wasn't expecting, which is good, because continuous regions of neurons, rather than discrete nodes, are the way the real brain is organized.

4. The 2007 algorithms did not use sparse distributed representations, which also severely limited their scalability due to memory requirements. Now, it goes without saying that sparse distributed representations are the key to making the new algorithms work. Not only does this make the algorithms much, much more scalable, it also facilitates generalization (see the sketch after this list).
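Here is the sketch promised above: a rough illustration (my own, with arbitrary sizes and sparsity) of why sparse distributed representations help with robustness and generalization. Two encodings of nearly the same input share most of their active bits, while unrelated inputs share almost none:

```python
import random

# Rough illustration of why sparse distributed representations (SDRs)
# facilitate generalization and noise robustness: similar inputs share
# many active bits, unrelated inputs share almost none. The sizes and
# sparsity below are arbitrary choices for illustration only.

random.seed(0)
N_BITS, N_ACTIVE = 2048, 40   # roughly 2% of bits active

def random_sdr():
    return set(random.sample(range(N_BITS), N_ACTIVE))

def noisy_copy(sdr, flips=5):
    """The same pattern with a few active bits swapped out (a noisy input)."""
    kept = set(random.sample(sorted(sdr), N_ACTIVE - flips))
    return kept | set(random.sample(range(N_BITS), flips))

a = random_sdr()        # one stored pattern
b = noisy_copy(a)       # a slightly corrupted version of the same pattern
c = random_sdr()        # an unrelated pattern

print("overlap with noisy copy:", len(a & b))   # large (around 35 of 40 bits)
print("overlap with unrelated :", len(a & c))   # tiny (usually 0 or 1 bit)
```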

In short, every single major listed shortcoming of the original HTM software has now been addressed. I expect to see many commercial applications start to come from Numenta's work. Hopefully my blog will soon be able to focus as much on applications as on the core technology. It will be interesting to see the extent to which this technology takes off over the next few years. Personally, I am particularly interested in robotics, and hope to see HTMs begin to be used to create robots that can intelligently perceive the world and perform useful tasks. Navigation, object recognition and manipulation, and language understanding are all things that could theoretically be done by HTM.

Friday, November 12, 2010

Dileep George's departure from Numenta is now permanent

That's a big loss, as Dileep was the guy who was able to take the ideas in On Intelligence and create a mathematical model and the first working software implementation of HTM.

According to Dileep's blog, he has now started a venture-capital funded company called Vicarious Systems, Inc. Its stated goal is to develop AI applications, starting with computer vision applications. If you go to vicariousinc.com, you can sign up for a corporate email newsletter.

It now makes sense that Numenta is trying to hire a Senior Software Engineer.

Wednesday, November 10, 2010

Paper on new learning algorithms now available on Numenta's website

This was a very interesting read that hopefully some of the Numenta skeptics will take a look at closely. A few points that I pulled out of the paper were as follows:

1. The new learning algorithms are very closely tied to the biology of the brain. The new HTM software models the hierarchical levels, the columns of neurons, the neurons themselves, and even the dendrites and synapses. Numenta clearly believes that HTM now learns in much the same way as the neocortex.

2. The algorithms appear to be scalable to any size. It sounds like the user can set the number of columns, the number of neurons per column, the number of levels, and so on, and the only real limits on scalability are the power of your computer and the amount of memory available.

3. For the first time, prediction is now at the center of the HTM algorithms. On Intelligence, of course, postulated that prediction is at the heart of what the brain does and is what makes us intelligent, but until now HTM didn't really implement prediction. Now that a more brain-like method is being used for learning sequences of patterns, HTM appears to have a powerful prediction mechanism. According to the paper, any time HTM recognizes that an input is part of a sequence, it will automatically predict future inputs. It can do this based on sequences that go far into the past, and can predict not just one but a number of time steps into the future (a toy sketch of this kind of multi-step prediction follows this list). These capabilities will be important when someone decides to use HTM to control a robot, since according to Numenta, prediction and directing motor behavior are very similar activities. For instance, when a robot has a goal to accomplish some task, it will use prediction, based on its memory of learned sequences of prior motor actions, to direct its future actions.

4. A number of theoretical HTM capabilities are not yet implemented in the software. Numenta specifically mentioned attention mechanisms, motor behavior for a robot or some other physical embodiment of an HTM, and specific timing for the learning of sequences that happen at particular speeds (such as music). Still, it will be very interesting to see the acceleration of commercial applications with the significant advance that these algorithms represent.

5. This paper is only a working draft. Several future chapters are planned for the book, including a chapter on the mapping to biology and a chapter on how the algorithms have been, and can be, applied in real applications.
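As promised under point 3, here is a toy sketch of multi-step prediction from learned sequences. It is my own simplification of the idea, not the cortical learning algorithm described in the paper, and the sequences are made up:

```python
# Toy sketch of multi-step prediction from learned sequences: when the
# recent inputs match part of a stored sequence, the following elements
# are returned as predictions. My own simplification of the idea in the
# paper, not the cortical learning algorithm itself.

learned_sequences = [
    ["wake", "coffee", "commute", "work"],
    ["commute", "gym", "dinner", "sleep"],
]

def predict_future(recent_inputs, steps=2):
    """Return up to `steps` predicted next elements for each matching sequence."""
    predictions = []
    n = len(recent_inputs)
    for seq in learned_sequences:
        for start in range(len(seq) - n):
            if seq[start:start + n] == recent_inputs:
                predictions.append(seq[start + n:start + n + steps])
    return predictions

print(predict_future(["commute"], steps=2))
# -> [['work'], ['gym', 'dinner']]  (an ambiguous context simply yields both branches)
```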


Here is the link:

http://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf

Thursday, November 4, 2010

New Whitepaper, etc.

The news keeps coming lately from Numenta. Their most recent Twitter post says that the new whitepaper describing the new algorithms (replacing the original one from March 2007) is going to be published before the end of the month.

Also, I ran across an interesting exchange between a Numenta defender and Ben Goertzel. I am sure you can guess which side of the argument I am on:

http://knol.google.com/k/angelo-c/opencog-numenta-and-artificial-general/1luetnln973wm/3#

Friday, October 29, 2010

Jeff Hawkins speech on November 12

For the first time in eight months, it looks like Jeff Hawkins will be speaking publicly about Numenta's work. The bad news is that, per the contact for the event, it will not be broadcast or recorded. I am hoping that that information turns out to be incorrect. Hawkins is giving the 2010 SmithGroup Lecture at the University of Illinois at Urbana-Champaign. Based on the abstract provided by Numenta, Hawkins will discuss Numenta's overall neocortical theory, the new learning algorithms, and how he believes hierarchical learning machine intelligence models will evolve in the future. Perhaps we will get lucky and someone will create an amateur video of the event. Definitely sounds interesting.

UPDATE: Interestingly, it looks like Hawkins is going to be giving the same talk at MIT's Center for Biological and Computational Learning on November 10. This is great to see, because MIT's CBCL is home to some of the leading research in the field of biologically inspired machine learning. For instance, Tomaso Poggio, whom I have mentioned on this blog, is part of CBCL. This is the type of publicity that I was hoping to see for Numenta. Here is a link:

http://www.facebook.com/CBCL.MIT#!/event.php?eid=150521674991840

Thursday, October 14, 2010

DARPA and hierarchical temporal memory

In a recent comment, I linked to an article entitled "Deep Machine Learning - A New Frontier in Artificial Intelligence Research."

In it, the creators of the DESTIN architecture, whom I have mentioned before on this blog, attempt to summarize the field of deep machine learning, the idea of using hierarchies to learn in a more brain-like manner. What was interesting to me about the article was that it mentioned a DARPA research effort involving deep machine learning architectures. In April 2009, DARPA put out a request for proposals for deep machine learning architectures. The military is increasingly worried about the vast amount of data it collects that goes unanalyzed simply because humans do not have the time to look at it all. DARPA wants an HTM-like algorithm that can find patterns in this flood of data. The DARPA announcement closed in April 2010, and to my shock, I don't see any indication that Numenta put in a proposal (it appears that the DESTIN group, among others, did). In a briefing, DARPA set out a list of properties it would like to see in the algorithms resulting from the multi-year research effort. Here is the list:

1. Single learning algorithm and uniform architecture for all applications

2. Unsupervised and supervised learning, including with attention mechanisms

3. Increasingly complex representations as you ascend the hierarchy (sparse representations were mentioned here)

4. The ability to learn sequences and recall them auto-associatively

5. Recognize novelties at each level and escalate them in the hierarchy

6. Feedback for predictions to fill in missing input

7. Online learning

8. Parameters set themselves and need no tweaking

9. Neuroscience insights to inform the algorithms and architecture

Essentially, that list of desirable features in DARPA's envisioned software is a description of the HTM algorithms. It's difficult to imagine why Numenta didn't throw its hat in the ring, given the amount of money potentially involved if the technology catches the eye of the military. In any event, DARPA's document was very interesting reading.

Monday, October 4, 2010

The complexity of the problem faced by Numenta

I have been following Dileep George's new blog, and he responded to a couple of posts by me and by Dave (perhaps the same Dave who occasionally posts here).

In my post, I asked Dileep how the traditional tree-shaped hierarchy can explain the vast number of qualities that come into play when we recognize, for instance, a shoe. For example, when we see a shoe, we recognize that it is a certain color and a certain texture, has a certain design on it, and has many other features. In other words, recognizing one object requires the brain to have connections to a number of other invariant representations of different types of objects and concepts. I couldn't see how a simple tree-structured hierarchy could represent this complexity, and Dileep confirmed that I was correct, stating that the brain likely has a number of different hierarchies that communicate with one another. Since then, I think I was mixing up how we recognize a particular instantiation of a shoe with how we recognize the invariant representation of "shoe" that is stored in the brain. The simple tree-shaped hierarchy might be sufficient to store the invariant "shoe" concept even if it cannot, by itself, recognize a particular shoe.

Dave's question to Dileep focused on whether a single HTM network could recognize both an object (such as a shoe) and an action (like running or walking). Surprisingly to me, Dileep answered that you would need two separate HTM networks to handle those two types of knowledge. My conclusion now is that the simple, traditional tree-shaped hierarchy is not sufficient even to represent all of the invariant concepts known by the brain, much less the particular instantiations of those representations that we learn (i.e., particular faces of persons as opposed to the general idea of "face").

This goes to show that even if Numenta's new algorithms have licked the problem of how the brain learns and infers within a region, we are only beginning to understand how the brain as a whole learns many types of objects and concepts, both invariantly and specifically, and ties all of that knowledge together in the amazing way that it does.

Thursday, August 26, 2010

Numenta's new website

Numenta redesigned its website. Here are a few nuggets from the new site:

1. Some new videos were added, including Hawkins' 2008 keynote from the HTM workshop and a speech by Subutai Ahmad from the 2009 workshop. Ahmad's talk was particularly interesting because he discussed a number of corporate partnerships and some early results from them. For instance, Numenta is/was working with a major automaker on a pedestrian detection system, in which the car looks for pedestrians in front of the vehicle. The early testing resulted in 96-97% accuracy, or closer to 99% if false positives (cases where the system detects a pedestrian that isn't there) are counted as acceptable outcomes (see the rough illustration after this list). The talk also mentioned some interesting work that Numenta did with Tyzx, which provides computer vision systems. They used an HTM network to look for objects and persons in security camera footage. Subutai specifically mentioned robotics as a potential application. Interestingly, only three days ago Tyzx announced a deal with iRobot to provide vision systems for its military robots, including person detection capabilities. The press release did not mention whether HTMs are a part of that technology. It would be interesting to see which companies Numenta has been working with in the 14 months since Subutai's talk.

2. The website also contains a basic description of its new learning algorithms. It is difficult not to notice how big a leap forward Numenta considers these algorithms to be. In one place, Numenta states that the new algorithms are a "radical" improvement. In another place, it states that the new learning algorithms are "far superior" to the old ones. One thing I wish the website contained is some experimental results demonstrating these improvements. One thing that I found confusing was its description of prediction in the new algorithms: it described prediction as something flowing up the hierarchy. That seems different from prediction as described in the original HTM theory, which envisioned incoming data flowing up the hierarchy and predictions flowing down. In any event, it was an interesting read.
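As for the pedestrian detection numbers in point 1, here is a back-of-the-envelope illustration (the counts are entirely made up by me, not from Numenta) of how the two accuracy figures could relate once false positives are treated as acceptable outcomes:

```python
# Back-of-the-envelope illustration with made-up counts: if false positives
# are treated as acceptable outcomes, they move from the error column to the
# "good" column and the reported accuracy rises. None of these numbers come
# from Numenta; they are chosen only to show how the two figures could relate.

total_frames    = 1000
correct_results = 965   # correct detections plus correct rejections
false_positives = 25    # a pedestrian detected where there wasn't one
false_negatives = 10    # a real pedestrian missed

strict_accuracy  = correct_results / total_frames
lenient_accuracy = (correct_results + false_positives) / total_frames

print(f"strict:  {strict_accuracy:.1%}")   # 96.5%
print(f"lenient: {lenient_accuracy:.1%}")  # 99.0%
```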

Thursday, August 19, 2010

Tidbits on Numenta

Wow, has it been a slow summer for HTM news. I have never seen a period in which Numenta's employees have made so few public appearances. Since Hawkins' talk in March, I haven't seen any mention of any Numenta speeches, interviews, or papers of any kind. A few things of note:

-In its June newsletter, Numenta mentioned that it decided not to attempt an HTM workshop this year, meaning that the next generation algorithms will not be out this year. In their words, they decided not to push for an "interim" release this year, but to delay the workshop and release to 2011. "Interim" was an interesting choice of words, suggesting that a more fully featured product will be the result when NUPIC 2.0 does come out.

-Dileep George has a new blog on his website. It is called Mind Matter, and is located at the following link.

-I saw an interesting blog post regarding a robot called Nao that can apparently show and understand emotion. The author claims that the creators of the new robot software were using HTM software in the robot. I have not been able to verify that claim. The only mention of HTM in the context of a Nao robot that I found was an article by Ben Goertzel in which he describes using an HTM for low level perception in a Nao robot. That article specifically states that he hasn't implemented the idea yet, however. Find the link here.

-Finally, Tomaso Poggio, a Professor in the Department of Brain and Cognitive Sciences at MIT and one of the creators of a biology-based hierarchical learning model known as HMAX, has created a software model that uses GPUs to greatly accelerate software designed to emulate the cortex, such as HTM or HMAX. Poggio claims that the software accelerates these biology-based models by an amazing 80-100 times. Poggio is listed as a technical advisor on Numenta's website, so hopefully they are aware of this, especially given the increased computational demands of NUPIC 2.0.

Wednesday, May 19, 2010

Dileep George leaving Numenta

Big Numenta news today. Numenta's website indicates that Dileep George is on an extended "personal" leave of absence. He, along with Jeff Hawkins, co-founded Numenta back in 2005. It is hard to overstate his importance to the company over the years. He was the guy who read "On Intelligence" and figured out how to turn Hawkins' neuroscience theories into a mathematical model that could be implemented in software. I went to George's website, and he says there that he left Numenta so that he could form a new company focused more on applications of the HTM technology.

I am not sure what to think about this. On the one hand, it could simply signal that George thinks the technology is finally in a state where serious commercial applications can be created with HTM. Numenta has always been more about the basics of the theory than about applications, so it might be that Dileep just wants to hurry along the commercialization process. If so, that would be a signal that the new algorithms really are that big of a step forward for AI.

Hopefully this move doesn't mean that some type of rift has opened between George and the company. Numenta could really use his talents down the road. Given George's stated reason for leaving, and given that it is called a "leave of absence" rather than an outright resignation, I am inclined to go with the more optimistic interpretation.

Friday, May 7, 2010

May 2010 Numenta newsletter

In case you missed it, this week Numenta issued a newsletter with an update on the status of the new algorithms. It's an interesting read, with a write-up by Jeff Hawkins himself. He talks about how, last fall, they decided to take a fresh look at their node learning algorithms, realizing that the current version had shortcomings that could not be overcome. They went back to the brain for inspiration on how to improve the learning capabilities. Here are a few key points I pulled out of the article:

1. The new algorithms can learn and infer at the same time. The old algorithms had separate learning and inference stages, so this will be a significant improvement for many applications with real-time data, where the system needs to be able to learn, infer, and predict in real time (like a real brain).

2. The sparse distributed nature of the system makes it scale much better to large problems, and makes it very robust to noise. In other words, the system will work very well with messy, incomplete data.

3. Variable-order sequence learning- A real brain can start listening to a song midway through and almost immediately identify it. Likewise, we can predict the future based on learned sequences of various lengths, whether they occurred a short time ago or years ago. The new software should be capable of these kinds of things (see the toy contrast after this list).

4. Much more biologically realistic- This is the first version of the software that will basically be emulating the cortex at the level of neurons and synapses. Of course, the downside is the higher system requirements. Hawkins notes that Numenta is having to spend a great deal more time optimizing the software so that it can run on something that isn't a supercomputer.
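To illustrate point 3, here is a toy contrast (my own, not Numenta's implementation) between first-order and variable-order sequence memory. With only the current element, the shared middle of two sequences makes the next step ambiguous; with a little more temporal context, the prediction becomes unambiguous:

```python
# Toy contrast between first-order and variable-order sequence memory,
# my own illustration rather than Numenta's implementation. The two
# sequences share the middle "BC", so the current element alone cannot
# disambiguate the next step, but a longer context can.

sequences = ["ABCD", "XBCY"]

def first_order_predictions(current):
    return {seq[i + 1] for seq in sequences
            for i in range(len(seq) - 1) if seq[i] == current}

def variable_order_predictions(context):
    return {seq[i + len(context)] for seq in sequences
            for i in range(len(seq) - len(context))
            if seq[i:i + len(context)] == context}

print(first_order_predictions("C"))       # {'D', 'Y'} -- ambiguous
print(variable_order_predictions("ABC"))  # {'D'}      -- context resolves it
```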

As an aside, I am surprised at the level of skepticism of some of the mainstream AI people regarding Numenta. Ben Goertzel, for one, seems determined to believe that Numenta is on the wrong track. He went out of his way recently to claim that Itamar Arel's DESTIN system is a better hierarchical pattern recognition system. I have looked into DESTIN, and it actually seems very similar to Numenta's work. It learns temporal sequences of spatial patterns in a hierarchical fashion and performs Bayesian inference. I have not been able to find any evidence showing that DESTIN has, so far, done more in the computer vision arena than HTM. If I am wrong, someone can correct me. For instance, in a December 2009 paper on DESTIN, Arel noted that they had conducted an experiment showing character recognition, but it was recognition of letters in a binary (black or white) environment. Numenta was demonstrating that level of work at least three years ago. My sense is that DESTIN is on the right track, and perhaps Arel and Hawkins will collaborate at some point (maybe they already are), but I have no idea how Goertzel reaches his conclusion.

I was happy to see that Shane Legg (another AI critic of Numenta) seemed to change his mind about Numenta after seeing Hawkins' recent talk in March on the new algorithms. If Numenta can come through in a big way with its next software release, I think that there will be many more converts to the HTM theory of AI.

Tuesday, April 20, 2010

2nd public commercial HTM application

Looks like a press release was issued today by EDSA Power Analytics, noting that it is teaming up with Numenta to develop autonomous monitoring of electrical power systems. Apparently, for some applications, electrical power failures are hugely expensive. EDSA develops software that monitors power systems to try to prevent such failures. EDSA can use HTM to learn the difference between normal and abnormal electrical activity, so that the HTM becomes increasingly able to predict a future power failure.

http://www.businesswire.com/portal/site/home/permalink/?ndmViewId=news_view&newsId=20100420005529&newsLang=en

Thursday, March 25, 2010

Hawkins explains next generation HTM algorithms

On March 18, Jeff Hawkins gave a talk to the computer science department at the University of British Columbia. In that talk, for the first time, he gave a detailed explanation of the upcoming HTM algorithms. To be honest, parts of it were difficult for a layperson (like me) to fully understand. It is a very interesting talk, though, and it shows that Numenta is getting ever closer to a truly intelligent computer. This version of the algorithms appears to mimic the brain's method of learning much more closely than ever before. Here is a link to the speech:

http://www.youtube.com/watch?v=TDzr0_fbnVk

Wednesday, February 17, 2010

Narrow versus broad AI

One interesting question to think about is whether Numenta's current focus on applying its HTM algorithms to narrow AI problems actually hurts its usefulness. I say that because AI is actually getting pretty good at certain narrow problems (handwriting and speech recognition, certain types of computer vision, game playing, etc.). For instance, in a recent study, HTMs were in the middle of the pack in recognition accuracy when tested on optical character recognition. Of course, the counterargument is that HTMs don't just do character recognition; they can be used for many, many other applications. Yet that gets to my point. One wonders if, for any narrow AI endeavor, HTMs will be outflanked by competing AI techniques designed specifically for that one narrow task.

If this is truly a problem, the obvious solution would be for Numenta to ultimately focus HTMs on broad-competence AI (i.e., robots that can carry on a conversation with you, reason intelligently about novel problems, make independent decisions, and otherwise learn like a human). Yet Jeff Hawkins himself, in his book, envisioned little or no role of this type for his technology, at least in terms of robots.

The difference between the human brain and any computer is not that the human brain can do all thinking and reasoning tasks better than all computers. In fact, computers are now much better than humans at certain tasks, such as number crunching and playing chess. What is different is that the human chess player can do that and a million other things, and can understand how those million different things relate to one another in a complete world model.

Dileep George of Numenta has actually publicly touched on a related topic in recent months. He discussed the No Free Lunch theorem, which states that no learning algorithm is inherently superior to any other. If an algorithm seems to be better at a certain task than other algorithms, it is only because that algorithm is written with certain assumptions about the world that happen to apply to that task. The more assumptions an algorithm makes about the world, the better it will be at tasks where it can exploit those assumptions. Yet that also means that while the algorithm will be better at the specific problems where its assumptions hold, it will be worse at other problems where those assumptions can't be made.

Numenta believes that HTMs take advantage of two properties of the world that are also exploited by the human brain: 1) the world is hierarchical in nature, and 2) all learning must be done through time. In other words, HTMs work because we live in a hierarchical world of space and time. In narrow domains such as chess playing, the AI algorithms are designed specifically and only for that single task. The engineer thus makes many, many assumptions about the world in coding the algorithm. For this reason, a chess-playing computer can beat the world's best human at chess, but knows absolutely nothing else about the world.

The No Free Lunch theorem brings me back to my initial point. Would Numenta be better off focusing on broad-competence AI? Assuming that HTMs rest on the same assumptions about the world as the human brain, HTMs should be the best means of emulating human-level AI. On the other hand, for the many, many different narrow AI problems out there, algorithms developed specifically for those problems might be better. Time will tell. So far, Numenta has mostly focused on computer vision, and that might actually be a broad rather than narrow AI problem, given how much a computer needs to know about the world to truly understand what it is seeing as well as a human does. It will be interesting to see the direction Numenta takes in coming years.

Wednesday, February 10, 2010

Vitamin D's surveillance software no longer in beta

Vitamin D has reached a milestone by becoming the first company to create a working product out of HTM technology that can now be purchased (it was released in beta form in November). See here for pricing information: http://vitamindinc.com/store/pricing.php

It looks like a single webcam is still free, while for $49 you get support for two webcams in either 320 by 240 or full VGA resolution. For $199, you get support for an unlimited number of cameras at full VGA resolution (although the company does not recommend using more than 3-4 cameras on a dual-core PC or 6-8 cameras on a quad-core PC).

For now, the software recognizes the presence of humans in video (as opposed to other moving objects like cars, animals, or tree branches). The future of this software will be quite interesting, as Vitamin D has already noted that it plans to upgrade it to detect more sophisticated actions in video.

Friday, February 5, 2010

New version of HTM algorithms in October

A couple of weeks ago, Numenta sent out a newsletter in which it revealed that it plans a major new release of its software implementation of HTM in October 2010. The newsletter said that the company has had some recent insights into the HTM learning algorithms based on a deeper understanding of the biology of the neocortex. Numenta said that the new algorithms have the potential for a "large" increase in scalability and robustness.

One of the things that makes Numenta such a solid AI company is that when they run into problems with issues like scalability and robustness, they look to the brain itself for solutions. Even to a non-expert like me, it is obvious that the field of artificial intelligence has floundered for decades precisely because it has ignored the only known example of real intelligence, the neocortex of the mammalian brain. Jeff Hawkins' book made this very point. He decided in the mid-1980s that he wanted to enter a PhD program to create intelligent machines using the brain as his guide. He applied to MIT, the leading AI lab in the country, and they basically laughed him out of the building for believing that it was necessary to understand how the brain works to create real AI. Now, 25 years later, MIT has a research group doing exactly what Hawkins suggested as a graduate student.

Hawkins' ideas may not all be correct, but the progress made in AI over the last five years or so seems much more biologically grounded than the work of 20 years ago, so Numenta is clearly on the right track in emulating the brain. If the new software really is a large improvement in its ability to scale (currently it is quite limited in many ways), we might actually begin to see it approach human-level ability at certain tasks, such as visual pattern recognition.