
Quick practical test

A project log for One more 3D mouse

I also want to make a 6DOF mouse

Vedran, 06/29/2024 at 22:05

After the last success, I had to give it a quick go in actual CAD software. So I set out to turn my test code from the previous examples into something that can roughly detect which gesture the mouse is currently performing, and inject the right command into Fusion 360 to zoom, pan and orbit an object.

I made a simple algorithm that represents each gesture as the sum of all sensor values comprising that gesture. For example, from the previous post:

The following gestures would be represented by the following sums (translate = pan):

    // Pan along +X: all X sensors add; the Y sensor signs pick the direction
    _sums[XP_PAN] = sensor(XpT) + sensor(XpB) + sensor(YmB) + sensor(YpT) + sensor(XmB) + sensor(XmT) - sensor(YmT) - sensor(YpB);
    // Orbit toward +X: only the X+ sensor pair adds, everything else subtracts
    _sums[XP_ORBIT] = sensor(XpT) + sensor(XpB) - sensor(YmB) - sensor(YpT) - sensor(XmB) - sensor(XmT) - sensor(YmT) - sensor(YpB);

    // Same pattern mirrored for the -X direction
    _sums[XN_PAN] = sensor(XpT) + sensor(XpB) + sensor(YpB) + sensor(YmT) + sensor(XmB) + sensor(XmT) - sensor(YpT) - sensor(YmB);
    _sums[XN_ORBIT] = sensor(XmB) + sensor(XmT) - sensor(YmB) - sensor(YpT) - sensor(XpT) - sensor(XpB) - sensor(YmT) - sensor(YpB);

XpT represents the sensor input X+, top, and XpB the sensor X+, bottom, as my design features two sensors on each axis. The algorithm then compares all the sums to find the largest one and takes that as the most likely gesture. The exact gesture is actually not very important, as long as we can reliably determine which of the following states we are in: idle, pan, orbit or zoom (zoom is represented by lifting and pressing on the knob, i.e. translation along the +/-Z axis).
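
To make that selection step concrete, here is a minimal sketch of the argmax idea. The enum entries, the idle threshold and the classify() helper are illustration names of mine, not the actual firmware (which also has the Y-axis and zoom gestures):

    // Hypothetical gesture list; the real firmware has more entries (Y axis, zoom...)
    enum Gesture { XP_PAN, XP_ORBIT, XN_PAN, XN_ORBIT, NUM_GESTURES, IDLE };

    // Pick the gesture with the largest sum; if even that one is weak, report idle
    Gesture classify(const long sums[NUM_GESTURES], long idleThreshold) {
        int best = 0;
        for (int i = 1; i < NUM_GESTURES; i++)
            if (sums[i] > sums[best]) best = i;
        return (sums[best] < idleThreshold) ? IDLE : (Gesture)best;
    }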


Once we know the gesture, we need to somehow extract X/Y motion magnitudes to move the mouse, and I should've spent more time on that, as the current example doesn't do a very good job of tracking when in panning mode. But overall, I got what I needed and I could somewhat do what I wanted with the part.
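
As for the injection itself, the idea is to emulate inputs Fusion 360 already understands: with the default bindings, a middle-button drag pans, Shift + middle-button drag orbits, and the scroll wheel zooms. A rough sketch using the Arduino Mouse/Keyboard HID libraries (the state names and magnitudes are placeholders, and Mouse.begin()/Keyboard.begin() are assumed to have run in setup()):

    #include <Mouse.h>
    #include <Keyboard.h>

    enum State { ST_IDLE, ST_PAN, ST_ORBIT, ST_ZOOM };  // hypothetical state names

    // Emit one frame of HID events for the detected state;
    // dx/dy are extracted motion magnitudes, zoomDir is +1/-1 from the knob
    void inject(State s, int dx, int dy, int zoomDir) {
        switch (s) {
            case ST_PAN:    // Fusion 360 default: middle-button drag pans
                Mouse.press(MOUSE_MIDDLE);
                Mouse.move(dx, dy);
                break;
            case ST_ORBIT:  // Shift + middle-button drag orbits
                Keyboard.press(KEY_LEFT_SHIFT);
                Mouse.press(MOUSE_MIDDLE);
                Mouse.move(dx, dy);
                break;
            case ST_ZOOM:   // a scroll-wheel tick zooms
                Mouse.move(0, 0, zoomDir);
                break;
            default:        // idle: release everything
                Mouse.release(MOUSE_MIDDLE);
                Keyboard.release(KEY_LEFT_SHIFT);
                break;
        }
    }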

Generally, the approach seems to work nicely. Orbiting works as I would expect, but panning can be a bit difficult, mostly because I didn't yet look into which data gives the best tracking when in panning mode. Also, zoom is going to need reworking: even though the current zoom speed is the slowest possible (one tick per refresh cycle), at 100 Hz that's still pretty fast. I am thinking that zoom will be incremented internally at a much lower rate, so that it ticks once every 100 - 500 ms.
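
One simple way to get there would be to gate the zoom tick on a millis() timer, as in the sketch below; the 250 ms interval is just an assumed value from that 100 - 500 ms range:

    // Hypothetical rate limiter: at most one zoom tick per interval,
    // even though the gesture loop itself runs at ~100 Hz
    const unsigned long ZOOM_INTERVAL_MS = 250;  // assumed; tune between 100 and 500
    unsigned long lastZoomTick = 0;

    void maybeZoom(int zoomDir) {                // zoomDir: +1 = in, -1 = out
        unsigned long now = millis();
        if (now - lastZoomTick >= ZOOM_INTERVAL_MS) {
            Mouse.move(0, 0, zoomDir);           // one scroll-wheel tick
            lastZoomTick = now;
        }
    }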

Currently, though, my biggest annoyance is that the test setup needs to be held down with one arm, as it's too light and would otherwise just move around.
