(This video is a close-up; check the demo on the project details page.)
Motivation
In 2023, I attended a robotics conference (Amii AI Week) and saw a group of students building Duckiebot-style autonomous cars. Surprisingly, they still relied on basic OpenCV pipelines for vision tasks.
When I asked why they weren’t using deep learning vision models and running them on a Jetson Nano, they said the Jetson felt too heavy for their project: it took time to find and train good models; the software stack (CUDA, drivers) took a long time to set up; and the power consumption and physical size didn’t fit small robots well. They also weren’t sure whether, if they used a Jetson Nano, they would run everything on it or still need a Raspberry Pi for the project logic.
It turns out it wasn’t a lack of interest in deep learning. The dilemma between project constraints and AI system complexity was what stopped them from trying edge AI solutions.
Where is the real issue?
Edge AI hardware is not rare. Jetson boards are powerful but heavy in cost, power, and system complexity. Raspberry Pi is easy to start with but struggles with real-time neural networks. OpenMV lowers the barrier further, but with limited model capability.
These tradeoffs point to the real issue: AI system complexity. In most embedded projects, vision is only a small part of a larger system. Yet once deep-learning vision is introduced, configuring the pipeline and keeping it usable, stable, and reliable often takes more effort than the rest of the project combined. AI vision stops being a component and starts to take over the project.
What’s missing is a better balance: hardware that is good enough, with easy-to-use AI vision software that stays in the background instead of taking over the system.
What I’m building
I’m building a compact edge AI camera board designed to run neural network vision tasks (e.g. object detection) entirely on-device. It runs a modern Linux OS, so you can use current development tools and languages, and it uses an AI accelerator (a Tensor Processing Unit, or TPU) for better inference performance. It is small enough to fit in the palm of your hand and mount directly on a robot.
More importantly, I’m building this board as a vision sensor rather than a dev board. That means you can treat it as a black box that turns vision into a simple stream of events, and feed those events into whatever your main project logic looks like.
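As a rough sketch of what that vision-to-event flow could look like from the robot side (the serial port, baud rate, and JSON event format below are assumptions for illustration, not the board’s actual interface), the main controller would only need a few lines of Python to consume detection events and react to them:

```python
# Hypothetical sketch: consuming detection events from the camera board
# over a serial link. The port name and the JSON event format are
# assumptions for illustration, not the board's actual interface.
import json
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumed serial device exposed by the camera board
BAUD = 115200

def handle_event(event: dict) -> None:
    """Map a vision event to project logic, e.g. stop when a person is near."""
    if event.get("label") == "person" and event.get("confidence", 0) > 0.6:
        print("Person detected, stopping motors")  # replace with your robot's motor call
    else:
        print(f"Ignoring event: {event}")

def main() -> None:
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        while True:
            line = link.readline().decode("utf-8", errors="ignore").strip()
            if not line:
                continue  # read timed out with no event
            try:
                # e.g. {"label": "person", "confidence": 0.82, "bbox": [x, y, w, h]}
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            handle_event(event)

if __name__ == "__main__":
    main()
```

The point of this split is that the camera board owns the entire model and runtime stack, while the host only deals with small, structured events.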
The goal is to make deep-learning vision practical for small robots and student projects, without forcing developers/users to take on unnecessary AI system complexity.
Looking for real feedback
I’m looking to talk with a small number of students or developers to better understand the real constraints and decision points in their robotics projects. If you’re willing to share your experience, especially why you did or didn’t use a Jetson, Raspberry Pi, or other platform for AI, I’d love to hear your perspective.