This is the interesting title of a short article presented by Dileep George (co-founder of Numenta) back in July 2009. It is available at the following link, but you need access to an educational database (such as through a university) to get to it:
A couple of interesting points were made in the article. As Subutai Ahmad mentioned publicly in September, Numenta's third generation of algorithms is now being developed, which means the algorithms are beginning to mature. As a result, Numenta is finally starting to think seriously about implementing HTM in hardware. Until now, HTM was such a new technology, and the algorithms were so early in their development, that a hardware implementation was not feasible.
It turns out that HTM in its software configuration is very CPU hungry, so a hardware implementation could be quite useful. For instance, Vitamin D's new beta webcam software, which can recognize people in video, can only analyze two 320 by 240 video feeds per dual-core 2 GHz processor. Think about that: since 640 by 480 has four times the pixels of 320 by 240, if Vitamin D were to use its HTM-based software to analyze 640 by 480 video, it would require a quad-core 2 GHz processor for EACH webcam feed. Extrapolating further, it would be difficult to find a home computer powerful enough to analyze a single 1080p or even a single 720p video feed.
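To make that extrapolation concrete, here is a back-of-the-envelope sketch. It assumes the processing cost scales linearly with pixel count (an assumption on my part; the real scaling could be worse), and takes the Vitamin D figure of two 320 by 240 feeds per dual-core 2 GHz chip, i.e. roughly one core per feed, as the baseline:

```python
# Rough core-count estimate per webcam feed, ASSUMING cost is
# linear in pixel count (not confirmed by the article).

BASELINE_PIXELS = 320 * 240  # one Vitamin D beta feed
BASELINE_CORES = 1           # two feeds per dual-core 2 GHz CPU -> ~1 core/feed

def cores_per_feed(width, height):
    """Estimate how many 2 GHz cores one feed at this resolution needs."""
    return BASELINE_CORES * (width * height) / BASELINE_PIXELS

for name, (w, h) in {"640x480": (640, 480),
                     "720p": (1280, 720),
                     "1080p": (1920, 1080)}.items():
    print(f"{name}: ~{cores_per_feed(w, h):.0f} cores per feed")
```

Under that linear assumption, a single 720p feed would need on the order of a dozen cores, and a 1080p feed nearly thirty, which is well beyond a 2009-era home computer.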
Perhaps software versions of HTM could be sped up significantly with GPU computing, however. HTM apparently scales to a very large number of parallel CPUs, so one would expect a GPU to improve significantly on a CPU's performance.