I started this project as a final-year EE student because I was curious about exploring the fields of Artificial Intelligence, Deep Learning and Machine Learning in general, and because I wanted to improve my electronics and CAD skills. But above all I wanted to create something that could one day help people. This is how I became interested in building a robotic hand that can ultimately be used as a prosthetic, but is also suitable for more general automatic grasping tasks. Given a very limited budget, I decided to explore how far I could push the functionality and intelligence of the hand with minimal sensory input. This led me to develop a hardware and software system that makes its own grasping choices based solely on the visual input provided by a webcam. In a nutshell, the functionality of the hand is as follows:
- point the hand at any object,
- Convolutional Neural Networks decide the best way to grasp it,
- the prosthetic hand grips the object as instructed by the networks (a rough code sketch of this pipeline follows below).
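The sketch below is only an illustration of that webcam-to-grasp flow, not the project's actual code: it assumes OpenCV for frame capture, a Keras CNN saved as `grasp_cnn.h5`, a hypothetical set of grasp labels, and a serial link to the hand's microcontroller on `/dev/ttyUSB0`.

```python
# Minimal sketch of the vision-to-grasp pipeline. All file names, grasp
# labels and the serial port are illustrative assumptions, not the
# project's real ones.
import cv2                                    # webcam capture (OpenCV)
import numpy as np
import serial                                 # pySerial link to the hand controller
from tensorflow.keras.models import load_model

GRASPS = ["power", "pinch", "tripod", "lateral"]   # hypothetical grasp classes

model = load_model("grasp_cnn.h5")                 # hypothetical trained CNN
hand = serial.Serial("/dev/ttyUSB0", 115200)       # hypothetical hand controller

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # Resize and normalise the frame to the CNN's assumed input size.
    x = cv2.resize(frame, (128, 128)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis, ...])[0]   # forward pass through the CNN
    grasp = GRASPS[int(np.argmax(probs))]          # pick the most likely grasp type
    hand.write((grasp + "\n").encode())            # command the hand to execute it
cap.release()
```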
The results so far have been incredibly promising, as can be seen in the video below.
Licenses:
http://opencv.org/license.html