I did a little research yesterday into the best way to have the robot pick up objects and place them into "the mouth". This weekend @ night I'll look more into this, but I may need to install PyBrain onto the Pi to assist; it seems like one way to do this. I'll also post more details on the mouth. I have some Roomba wheels which will pull objects in :).
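Nothing is settled yet, but here's roughly the kind of thing I'd try with PyBrain: a tiny network that learns to map a camera pixel coordinate to a few arm joint positions. The layer sizes and the "training data" below are made-up placeholders, not real calibration values.

```python
# Rough sketch of what I might try with PyBrain on the Pi: a small network
# mapping a normalized camera pixel coordinate to three arm joint positions.
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(2, 8, 3)                  # 2 inputs (x, y) -> 8 hidden -> 3 joints
ds = SupervisedDataSet(2, 3)
ds.addSample((0.2, 0.8), (0.1, 0.5, 0.3))    # fake (pixel -> joint) pairs
ds.addSample((0.7, 0.3), (0.6, 0.2, 0.4))

trainer = BackpropTrainer(net, ds)
for _ in range(100):                         # a few epochs of backprop
    trainer.train()

print(net.activate((0.5, 0.5)))              # predicted joints for a new pixel
```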
I am new to robotics, so I have a theory (which is probably wrong). Do I really need depth? Can it be simulated with a single camera? Is everything just a coordinate, since all distances are limited by the arm's reach (move the arm to the approximate coordinate, use a distance sensor on the arm to measure the remaining gap to the object, then finish the grasp)?
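In code, that idea might look something like the sketch below: the camera only gives a rough (x, y) target, and the on-arm distance sensor closes the gap. All the hardware helpers are hypothetical stubs standing in for whatever arm/sensor drivers end up on the Pi, and the 3 cm threshold is just a guess.

```python
# Sketch of the "no depth camera" grasp loop: coarse move from the camera
# estimate, then creep in using a distance sensor mounted on the arm.
GRASP_RANGE_CM = 3.0            # assumed: close the gripper within this range

_simulated_distance = 20.0      # stub state so the sketch runs without hardware

def move_arm_to(x, y):
    print("coarse move to (%.1f, %.1f) from the camera estimate" % (x, y))

def read_arm_distance_cm():
    return _simulated_distance  # real version: read an IR/ultrasonic sensor

def step_arm_forward():
    global _simulated_distance
    _simulated_distance -= 1.0  # real version: nudge the arm toward the object

def close_gripper():
    print("gripper closed")

def grasp_at(approx_x, approx_y):
    """Move to the camera's rough estimate, then finish on the sensor."""
    move_arm_to(approx_x, approx_y)
    while read_arm_distance_cm() > GRASP_RANGE_CM:
        step_arm_forward()
    close_gripper()

grasp_at(12.0, 8.0)
```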
Just in case, I'm keeping these links safe for later.
Kinect stuff:
http://openkinect.org/wiki/Main_Page
Machine Learning Links:
http://www.cs.ubc.ca/research/flann/
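FLANN is a library for fast approximate nearest-neighbor search, which could come in handy for matching image features between camera frames. A minimal sketch assuming the pyflann Python bindings are installed (the random arrays just stand in for real feature descriptors):

```python
import numpy as np
from pyflann import FLANN

# Random points stand in for feature descriptors; this only shows the call.
dataset = np.random.rand(1000, 32).astype(np.float32)
queries = np.random.rand(5, 32).astype(np.float32)

flann = FLANN()
# Build a kd-tree index and look up the 3 nearest neighbors for each query.
indices, dists = flann.nn(dataset, queries, 3,
                          algorithm="kdtree", trees=4, checks=64)
print(indices.shape, dists.shape)   # (5, 3) (5, 3)
```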