Dropping an idea here. Light up and track a ping pong ball on a stick with a stationary camera using OpenCV, with size as your depth cue. Re-create it as a 3D model, displayed rotating around its center. Allow changing the color of the ball, duplicated in the model.
3D light painting in realtime.
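A minimal OpenCV sketch of the idea, assuming a lit (here green) ball, the standard 40 mm ping pong ball diameter, and a focal length in pixels from a prior calibration; the HSV range, focal length, and thresholds are placeholders to tune:

```python
import cv2
import numpy as np

BALL_DIAMETER_MM = 40.0    # standard ping pong ball
FOCAL_LENGTH_PX = 800.0    # placeholder; use your camera calibration

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Isolate the lit ball by color (range is a guess for a green glow; tune it).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 120), (80, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), r = cv2.minEnclosingCircle(c)
        if r > 3:  # ignore specks
            # Pinhole model: depth = focal_length * real_diameter / pixel_diameter
            depth_mm = FOCAL_LENGTH_PX * BALL_DIAMETER_MM / (2 * r)
            # Back-project the pixel to a 3D point at that depth.
            X = (x - frame.shape[1] / 2) * depth_mm / FOCAL_LENGTH_PX
            Y = (y - frame.shape[0] / 2) * depth_mm / FOCAL_LENGTH_PX
            print(f"ball at ({X:.0f}, {Y:.0f}, {depth_mm:.0f}) mm")
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 0, 255), 2)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```

Each tracked point appended to a list would become a vertex of the light-painted stroke to render and rotate.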
Daren Schwenke 11:16 AM
Suppose you could use an IMU and a cell phone and guess the approximate location of the camera as well, to eliminate the stationary bit.
you can also use looming to determine distance
that's how insects avoid obstacles
you mean using just a single camera then?
sure, and low-res too
i'm curious how well using a single std. camera would work compared to stereo
one thing with looming is that it requires motion
ahh
works best when you move towards the object at a known speed
you basically see how fast it grows -- objects that are close grow faster than objects that are far
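Roughly, a sketch of the looming math, assuming you know the approach speed and can measure the ball's image radius in two frames (the function name and the numbers are just for illustration):

```python
def looming_distance(r_prev, r_now, approach_speed, dt):
    """Estimate current distance from how fast the image radius grows.

    r_prev, r_now: apparent radius (pixels) in two frames dt seconds apart
    approach_speed: how fast the camera closes on the object (m/s)

    For a fixed-size object the image radius is proportional to 1/distance,
    so d_now = approach_speed * dt * r_prev / (r_now - r_prev).
    """
    if r_now <= r_prev:
        return None  # not looming (no motion, or moving away)
    return approach_speed * dt * r_prev / (r_now - r_prev)

# e.g. radius grew from 20 px to 22 px in 1/30 s while closing at 1 m/s:
# distance is about 1 * (1/30) * 20 / 2 = 0.33 m
print(looming_distance(20, 22, 1.0, 1 / 30))
```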
could you simply use the diameter of the ping pong ball in pixels, from a single camera, and convert that to depth, which i thought is what @Daren Schwenke meant, or are there reasons that doesn't work well compared to 2 cameras
it's easier than stereo, because instead of finding the same object on two different photos taken at slightly different angles, you just need to find the same object at two different sizes
@anfractuosity that requires pretty large resolution, though, or very consistent lighting
and knowing the size of the ping pong ball in reality too maybe?
ping pong balls have a standardized size
oh true heh
i think ToF cameras sound pretty nifty, not sure how pricey they are though
Kinect
the first one used a pattern i thought, is the 2nd one ToF then?
if you can control the hardware in the wand, there are easier ways to do this
like the wii remote
oh that uses ir i think?
and a sensor bar thing right
doesn't matter if the light is visible or not
oh the 'bar' is actually the light emitter i think
the important thing is it's modulated, so you can tell it from the background easily
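Something like this, as a sketch of picking a modulated marker out of the background purely by frame differencing, assuming the LED toggles faster than anything else in the scene changes (the threshold value is a placeholder):

```python
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # A blinking marker changes between consecutive frames much more than
    # the static background does, so it dominates the difference image.
    diff = cv2.absdiff(gray, prev)
    prev = gray

    blurred = cv2.GaussianBlur(diff, (9, 9), 0)
    _, max_val, _, max_loc = cv2.minMaxLoc(blurred)
    if max_val > 50:  # tune for your marker brightness and blink rate
        print("marker at", max_loc)

    cv2.imshow("diff", diff)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```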
just found this, which sounds v. interesting, will have to read the paper
https://www.youtube.com/watch?v=ZolWxY4f9wc
The problem is that it only works well when you are near the camera. As you get farther from the camera, the precision drops fast.
yes
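A quick back-of-the-envelope on that, assuming a 40 mm ball, an ~800 px focal length, and a one-pixel error in the measured diameter:

```python
# How much a 1 px diameter error costs at different distances,
# for depth = f * D / d_px (f = 800 px focal length, D = 40 mm ball).
f, D = 800.0, 40.0
for depth_mm in (500, 1000, 2000, 3000):
    d_px = f * D / depth_mm              # apparent diameter in pixels
    err = f * D / (d_px - 1) - depth_mm  # depth error from a 1 px slip
    print(f"{depth_mm / 1000:.1f} m: ball is {d_px:.1f} px, 1 px error ~ {err:.0f} mm")
```

The error grows roughly with the square of the distance, which is why a bigger ball or higher resolution buys accuracy back.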
put the camera on the ball, and look for blinking light beacons in the environment
you will also get orientation info from it
and the farther they are, the more precision you actually get
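A sketch of that with cv2.solvePnP, assuming you already detect the beacons in the image and know where they sit in the room (all coordinates and intrinsics below are made-up placeholders):

```python
import cv2
import numpy as np

# Known beacon positions in the room, in meters (assumed layout).
beacons_3d = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 1.5, 0.0],
    [2.0, 1.5, 0.0],
], dtype=np.float32)

# Where those beacons were detected in the image, in pixels (example values).
beacons_2d = np.array([
    [310.0, 240.0],
    [560.0, 235.0],
    [305.0, 110.0],
    [555.0, 105.0],
], dtype=np.float32)

# Simple pinhole intrinsics: fx = fy = 800 px, principal point at (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(beacons_3d, beacons_2d, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)      # rvec also gives you the orientation
    cam_pos = -R.T @ tvec           # camera position in room coordinates
    print("camera position (m):", cam_pos.ravel())
```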
The problem is that any camera held by the user has to be oriented towards some reference to work. That limits the size of what you can create, versus a ball on a stick, pretty quickly. Also, once you put in multiple sources or points, you have to deal with orientation feedback, or the distance between your points is no longer your relative distance to the object.
A bigger ball would work and give you more accuracy at a greater distance.
I just liked the small size too. :)
A button to select each color/turn it on. On while held, so you can stop your lines. Or select the color via some interface and then only light up when selected, I guess.
Hmmm.. how about a tetrahedron with a ball on each vertex. Then the depth cue could be garnered from that. Probably the same level of accuracy increase as just using a larger ball, but then you could get the orientation of the 'brush' as well.
Or... if you are tracking multiple balls, when two are lit and the same color, draw that as a plane in space.
I did something similar for detecting distance to target with the Creeper project, using the distance between your eyes. It was only accurate out to about 10 ft at an acceptable framerate. Tracking an object like a lit ball is a whole lot easier though, and real-time at much better framerates.
and resolutions.
ooo.. in the virtual representation, you could have bins or areas in 3D space where you could position the target to select other features like line width, patterns, etc.
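For the bins idea, something as simple as axis-aligned boxes in the tracking volume would do; the bin names and bounds here are invented for illustration:

```python
# Map regions of the tracking volume (meters) to "menu" actions, e.g. park
# the ball in a corner box to change line width.
SELECTION_BINS = {
    "thin_line":  ((-0.5, -0.5, 0.5), (-0.3, -0.3, 0.7)),
    "thick_line": (( 0.3, -0.5, 0.5), ( 0.5, -0.3, 0.7)),
}

def bin_for_point(p):
    """Return the name of the bin containing 3D point p, or None."""
    for name, (lo, hi) in SELECTION_BINS.items():
        if all(lo[i] <= p[i] <= hi[i] for i in range(3)):
            return name
    return None

print(bin_for_point((-0.4, -0.4, 0.6)))  # -> "thin_line"
```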
I like the ball idea as well because it puts the expensive bits remote from the user. You could literally use a flashlight, gels, and a ball to draw, with no electronics needed on the end-user side. Handing them out for an interactive thingy would not be cringe-inducing.
I just had a flashback to the MS 3D pipes screensaver.