This is a "nothing obvious" progress update.
The next one will address:
- IMU feedback for leveling/motion/aiming
- Actual image segmentation with Python
- Actual motion tracking/navigation
- Web interface to get telemetry
I won't get all of that done right away, because I have to switch gears and learn something else for work/a hackathon.
Anyway, at this point I have successfully gotten everything talking to each other. I'm using a "class-based architecture", or OOP if you can call it that. Really I'm just building this thing as I go along; my OOP experience is pretty weak.
Top down, the robot code looks like this:
NavUnit
- boot
- motion (talks to the WiFi buggy over a websocket)
- sensors (addresses the camera, ToF, lidar, and IMU)
- the state/navigation logic
There's also a web interface that will get data from the nav unit.
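As a rough illustration, the layout above could be sketched in Python roughly like this. All class, method, and parameter names here are my assumptions for the sketch (the websocket connection and sensor reads are stubbed out), not the actual project code:

```python
# Hypothetical sketch of the NavUnit layout -- names and details are
# assumptions, not the real project code.

class Motion:
    """Talks to the WiFi buggy over a websocket (stubbed here)."""
    def __init__(self, url="ws://buggy.local:81"):  # assumed buggy address
        self.url = url
        self.connected = False

    def connect(self):
        # real code would open a websocket here (e.g. with the
        # `websockets` package); this stub just flags the connection
        self.connected = True

    def drive(self, left, right):
        # real code would send this frame over the socket
        return {"cmd": "drive", "left": left, "right": right}


class Sensors:
    """Addresses the camera, ToF, lidar, and IMU."""
    def __init__(self):
        self.devices = ["camera", "tof", "lidar", "imu"]

    def read_all(self):
        # placeholder readings; real code would poll each device
        return {name: None for name in self.devices}


class NavUnit:
    """Top-level robot object: boot, motion, sensors, state/navigation."""
    def __init__(self):
        self.motion = Motion()
        self.sensors = Sensors()
        self.state = "idle"

    def boot(self):
        self.motion.connect()
        self.state = "ready"


nav = NavUnit()
nav.boot()
print(nav.state)  # -> ready
```

The web interface would then query the NavUnit object (its state and sensor readings) rather than talking to the hardware directly.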
I mostly made this update for the video, since the individual video parts are very long and not something I can just sample 10 seconds of.