Currently, the system estimates the proximity of an object to the palm simply from changes in the total area of the object's detected blob. I am planning to switch to stereo vision so that the distance is properly calculated and the object-handing automation improves. Here is a demo video of the stereo vision currently under development. More work needs to be done before this is actually realized, but for now you can more or less tell which objects are closer to the camera from the shading of the pixels in the disparity map.
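For reference, here is a minimal sketch of how such a shade-coded disparity map can be produced with OpenCV's block-matching stereo matcher. The image file names, focal length, and baseline are placeholder values for illustration, not the actual rig's calibration:

```python
import cv2
import numpy as np

# Rectified left/right frames from the stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparity scaled by 16; convert to pixels.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Shade-coded proximity map: larger disparity means closer to the camera.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", vis)

# With a calibrated rig, metric depth follows from Z = f * B / disparity.
f_px, baseline_m = 700.0, 0.06  # placeholder calibration values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]
```

Once depth per pixel is available like this, the blob-area heuristic can be replaced by reading the actual distance of the detected object from the depth map.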