To replicate even a basic brain, it's important to first define the scope of what brains do.
At its core, a brain learns models of its senses and environment and makes predictions from them in order to take reasoned actions. This ability to represent reality is key to context-aware decisions.
Robots are no different. Whether the representation is implicitly programmed or learned, a robot must build an internal model of the external world from its sensory inputs in order to respond intelligently to the current situation.
In mammals, the neocortex is the part of the brain that does this. As the seat of perception, thought, and memory, the cortex contains hierarchical networks that learn models of the senses across time and space, and this is what enables an understanding of reality.
The cortex can be thought of as a common subspace for all of the "sensors": the cortical hierarchy integrates the different sensory inputs into a shared framework of knowledge about time and space, and this convergence is what enables inference and prediction.
Hierarchical Temporal Memory (HTM) boils the cortex down to a set of learning algorithms. It is a computational theory of the laminar neocortex that captures core functions such as spatial pooling, temporal memory, and sensor fusion.
This is how all representations of the environment will be stored for our robot. By implementing an HTM framework, our robot can build and use a learned model of reality much like animal brains do, enabling meaningful responses.
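Everything HTM learns is expressed as sparse distributed representations (SDRs): large binary vectors in which only a small fraction of bits are active at any moment. As a rough, hypothetical sketch (not Numenta's encoder implementation), a robot's scalar sensor reading could be turned into an SDR like this, with nearby values sharing many active bits:

```python
import numpy as np

def encode_scalar(value, min_val=0.0, max_val=100.0, size=400, active_bits=21):
    """Encode a scalar sensor reading (e.g. distance in cm) as a sparse
    binary vector: a contiguous block of `active_bits` ones whose position
    depends on the value. Similar values produce overlapping encodings."""
    value = np.clip(value, min_val, max_val)
    # Index of the first active bit, scaled across the available range
    start = int((value - min_val) / (max_val - min_val) * (size - active_bits))
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[start:start + active_bits] = 1
    return sdr

# Two nearby readings share many active bits; distant readings share none
a = encode_scalar(24.0)
b = encode_scalar(26.0)
c = encode_scalar(90.0)
print("overlap(24, 26):", int(np.sum(a & b)))
print("overlap(24, 90):", int(np.sum(a & c)))
```

This overlap property is what lets the downstream HTM layers treat "similar inputs" as similar patterns without any hand-written comparison logic.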
HTM in Robots
HTM has compelling advantages over conventional programming:
1. It learns invariances automatically from real-time data rather than relying on manually defined filters.
2. The hierarchical structure allows high-level inferences, and transfer of learning, built on sequences of low-level observations.
3. Sensor fusion happens intrinsically by matching patterns across data streams in time and space.
4. Making predictions, rather than just taking static observations, enables smarter real-time decisions.
5. The neocortical approach handles novel inputs better than task-specific systems.
By more closely replicating cortical structure, HTM-based robots should display more adaptive, animal-like responses. This project explores that potent concept through biologically inspired robotics!
HTM Overview
The HTM cortical learning algorithms contain three core functions:
Spatial pooling - recognizes patterns in the input data to identify common features regardless of where they appear, much like visual edge detectors (see the first sketch below).
Temporal memory - models time, discovering causes and effects by learning sequences of spatial patterns, analogous to grid cells activating at specific moments (see the second sketch below).
Sensor fusion - integrates the different sensory inputs spatially and temporally, like hearing a bark and seeing a dog at the same moment through different senses.
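To illustrate spatial pooling on its own, here is a minimal, hypothetical sketch. It leaves out learning and boosting, which a real spatial pooler also performs; it only shows the core idea that each column is randomly connected to a subset of input bits, and the columns overlapping most with the current input win:

```python
import numpy as np

rng = np.random.default_rng(42)

INPUT_SIZE = 400      # size of the encoded sensor SDR
NUM_COLUMNS = 256     # number of pooling columns
ACTIVE_COLUMNS = 10   # sparsity: how many columns win for each input

# Each column is randomly connected to a subset of the input bits
connections = rng.random((NUM_COLUMNS, INPUT_SIZE)) < 0.15

def spatial_pool(input_sdr):
    """Return the indices of the winning columns for one input SDR.
    Overlap = how many active input bits a column is connected to;
    the top-k columns by overlap form the sparse pooled output."""
    overlaps = (connections * input_sdr).sum(axis=1)
    return np.sort(np.argsort(overlaps)[-ACTIVE_COLUMNS:])

# A distance reading encoded as a block of active bits (as in the encoder sketch)
reading = np.zeros(INPUT_SIZE, dtype=np.uint8)
reading[50:71] = 1
print("active columns:", spatial_pool(reading))
```

Because the connections are fixed and sparse, similar inputs keep activating similar column sets, which is the stable code the temporal side then learns sequences over.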
By stacking regions and columns using these principles, HTM forms a hierarchical network that can capture increasingly sophisticated features and behaviors over broader timescales.
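Temporal memory is harder to compress into a few lines, but the flavour of "predict the next pattern from what came before" can be shown with a deliberately simplified, first-order stand-in. Real HTM temporal memory goes much further, using multiple cells per column to learn high-order context, so treat this only as an illustration:

```python
from collections import defaultdict, Counter

class FirstOrderSequenceMemory:
    """Toy stand-in for HTM temporal memory: it learns which pooled pattern
    tends to follow which, so it can predict the next input from the current
    one. (Real temporal memory learns high-order context within columns.)"""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.previous = None

    def learn(self, pattern):
        key = tuple(pattern)
        if self.previous is not None:
            self.transitions[self.previous][key] += 1
        self.previous = key

    def predict(self, pattern):
        followers = self.transitions[tuple(pattern)]
        return followers.most_common(1)[0][0] if followers else None

# Feed a repeating sequence of pooled column sets (A, B, C), then ask what follows A
A, B, C = (1, 5, 9), (2, 6, 10), (3, 7, 11)
memory = FirstOrderSequenceMemory()
for pattern in [A, B, C] * 5:
    memory.learn(pattern)
print("after A expect:", memory.predict(A))  # -> (2, 6, 10)
```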
For more information on Hierarchical Temporal Memory, watch this very useful playlist by Numenta.