There's probably an obvious reason why this won't work (e.g. the camera vibrates every time you type).
It's a fantasy of mine to be able to just tap on my legs and type (based on a QWERTY layout). There are many approaches to tracking your fingers/hands, but they're usually cumbersome (a lot of parts) or involve some magic.
This project isn't going straight for that; it's a physical platform to work on. I know almost nothing about computer vision/ML, but I have an idea of what I need to do to get this to work: combine vision and IMU data.
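The fusion idea can be sketched in a few lines: the IMU tells you *when* a tap happened (an acceleration spike), and the camera tells you *where* each fingertip was at that moment. Everything below is hypothetical — the key layout, the threshold, and the "lowest fingertip pressed the key" heuristic are all made-up placeholders, not anything measured on the actual hardware.

```python
# Toy QWERTY layout mapped onto leg-space coordinates (x, y), arbitrary units.
# A real version would calibrate these from where the hands rest.
KEY_CENTERS = {
    "q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0),
    "a": (0.0, 1.0), "s": (1.0, 1.0), "d": (2.0, 1.0),
}

TAP_THRESHOLD = 2.0  # acceleration magnitude that counts as a tap (made-up value)

def detect_tap(accel_magnitudes, threshold=TAP_THRESHOLD):
    """Return the index of the first IMU sample above the threshold,
    or None if no tap occurred in this window."""
    for i, a in enumerate(accel_magnitudes):
        if a > threshold:
            return i
    return None

def nearest_key(fingertip_xy, key_centers=KEY_CENTERS):
    """Map a fingertip position (from the camera) to the closest key center."""
    x, y = fingertip_xy
    return min(key_centers,
               key=lambda k: (key_centers[k][0] - x) ** 2
                           + (key_centers[k][1] - y) ** 2)

def resolve_tap(accel_magnitudes, fingertip_positions):
    """Fuse the two streams: if the IMU saw a tap at sample i, take the
    fingertip that was 'pressed down' (largest y, an assumed heuristic)
    in camera frame i and map it to a key."""
    i = detect_tap(accel_magnitudes)
    if i is None:
        return None
    tip = max(fingertip_positions[i], key=lambda p: p[1])
    return nearest_key(tip)
```

So a spike at frame 2 with a fingertip hovering near (1.0, 1.1) would resolve to "s". The point of the IMU here is debouncing: vision alone can't easily tell a hovering finger from a tapping one, but a contact tap shows up clearly as an accelerometer spike.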
As you can see in the sample image here, the pinkies are almost not visible, so I'm not sure how that will go.
This is using a Pi Zero 2 W and a Camera Module 3 Wide. Unfortunately this Pi's Wi-Fi is messed up, so I'm using a USB Wi-Fi dongle for development over SSH. Eventually it'll run on its own and hopefully use Bluetooth to work as a keyboard (possibly a trackpad too).
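The Bluetooth-keyboard end is at least well specified: Bluetooth HID reuses the USB boot-keyboard report format, an 8-byte report of [modifiers, reserved, up to 6 keycodes], where letter keycodes run from 0x04 ('a') through 0x1D ('z') per the USB HID Usage Tables. A minimal sketch of building those reports (the function name is mine; actually delivering the report over BlueZ is a separate project):

```python
def hid_report_for_letter(c: str) -> bytes:
    """Build a key-down boot-keyboard report for one lowercase letter.
    Byte 0: modifier bits (none here), byte 1: reserved,
    bytes 2-7: up to six concurrently pressed keycodes."""
    if not ("a" <= c <= "z"):
        raise ValueError("only lowercase letters in this sketch")
    keycode = ord(c) - ord("a") + 0x04  # 'a' is usage ID 0x04
    return bytes([0x00, 0x00, keycode, 0x00, 0x00, 0x00, 0x00, 0x00])

# An all-zero report means "no keys pressed" and must follow each
# key-down report, or the host will treat the key as held down.
RELEASE_ALL = bytes(8)
```

So typing "q" means sending the report with 0x14 in byte 2, then `RELEASE_ALL`.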
Pretty ambitious project but I gotta get this vision stuff down.