Surprisingly, Jeff Hawkins' recent lecture is now available online:
http://www.beckman.illinois.edu/gallery/video.aspx?webSiteID=o2RWiAWUQEKPgd_8QBTYOA&videoID=fYhfoB6NFE2ytFPl7XLnTA
Great video. I'd love to hear what they had to say at MIT.
I would bet that they are similar talks, although given the close ties that some of the MIT researchers, such as Poggio and DiCarlo, have to Numenta, there might have been some interesting audience questions. One question that would be interesting to me is how far these algorithms can be scaled. Hawkins mentioned that they are currently modeling "hundreds of thousands" of neurons in the regions, which translates to about a billion synapses. Even though that is only a tiny fraction of a whole brain's capacity, it could be quite taxing for today's computers. I would bet that to scale these algorithms to anywhere near a real cortex, they are going to need hardware implementations. I think I have read that hardware implementations of HTM could potentially speed things up by a factor of hundreds.
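To make that scale concrete, here is a rough back-of-envelope calculation in Python. The per-neuron synapse count and storage cost are illustrative assumptions on my part, not figures from the talk:

neurons = 300_000            # "hundreds of thousands" of modeled neurons
synapses_per_neuron = 3_000  # assumed average; real cortical neurons vary widely
bytes_per_synapse = 4        # assumed storage for one synapse weight/permanence

synapses = neurons * synapses_per_neuron        # ~9e8, about a billion
memory_gb = synapses * bytes_per_synapse / 1e9  # ~3.6 GB just for the weights

print(f"{synapses:.1e} synapses, ~{memory_gb:.1f} GB")

# A real cortex has on the order of 1e10 neurons and 1e14 synapses, so a
# full-scale model is roughly five orders of magnitude beyond this.

So even storing the current model's synapses takes gigabytes, before any computation, which is why hardware starts to look attractive.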
Yeah, I was thinking the same thing. Also, I'd like to know if they have found "optimal" settings for the parameters, e.g. the column/cell ratio, etc., and how those compare with what we actually see in biology.
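For concreteness, here is the sort of parameter set I mean, written as a Python dict. The values are illustrative guesses for discussion, not settings Numenta has published as optimal:

# Hypothetical HTM region parameters -- values are for illustration only.
htm_region_params = {
    "column_count": 2048,           # columns in the region
    "cells_per_column": 32,         # the column/cell ratio in question
    "active_columns_per_step": 40,  # ~2% of columns active at once
    "synapses_per_segment": 128,    # potential synapses on a dendrite segment
    "connected_permanence": 0.5,    # a synapse counts as connected above this
}

# For comparison, a biological cortical minicolumn is usually described as
# containing roughly 80-120 neurons across all layers, so values in the
# tens of cells per column are at least in a plausible range.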
It seems like GPUs would be a very logical fit for their algorithms...
(Sorry for my English, I'm from Hungary...)
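To sketch why a GPU fits: the spatial pooler's inner loop computes, for every column, its overlap with the current input, and each column's overlap is independent of the others. A minimal NumPy version (the shapes and names here are my own, not Numenta's API) reduces to a matrix-vector product:

import numpy as np

def column_overlaps(connected, input_bits):
    # connected:  (num_columns, input_size) 0/1 matrix of connected synapses
    # input_bits: (input_size,) 0/1 input vector
    # Each row's dot product is independent, so all columns can be
    # computed in parallel -- exactly the shape of work GPUs are built for.
    return connected @ input_bits

rng = np.random.default_rng(0)
connected = (rng.random((2048, 1024)) < 0.05).astype(np.int32)  # sparse connectivity
input_bits = (rng.random(1024) < 0.02).astype(np.int32)         # sparse binary input

overlaps = column_overlaps(connected, input_bits)  # shape: (2048,)
print(overlaps.max())

A matrix-vector product like this is exactly what GPU linear-algebra libraries accelerate, and array libraries such as CuPy expose the same interface, so in principle the identical code runs on a GPU.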
Yes, nice video, but you have to wait for the whole download even if you just want to see the last few minutes. (I don't know why they can't index the .flv files for seeking, like on YouTube; there is a lot of free software out there for this.) I can upload the whole video to Rapidshare if anyone needs it.