
aivision-H6

A Linux-based AI camera module that runs real-time deep-learning vision fully on-device for small robots and embedded projects.

I’m prototyping an Edge AI camera board built specifically for local, real-time vision with neural-network models. From object detection and tracking to more complex vision tasks, the goal is to run powerful AI models directly on the hardware: no PC (or Jetson), no cloud, no lag.

Local Inference: Optimized for running neural networks on-device with a TPU accelerator.
Real-Time Vision: Low-latency object detection and tracking.
Developer-Centric: Built for those who value performance and lean integration.

Whether you’re working in robotics or industrial automation, or you’re a student or engineer exploring the limits of Edge AI, I’d love to connect and hear about your use cases!

Here is the latest demo of this board. In this Phase 1 demo, the camera runs person detection fully on-device and directly controls a small robot: when a person is detected → GO; when no person is detected → STOP.

No Jetson, no PC, no cloud — just a camera, a neural network, and a control loop forming a compact, self-contained vision module that directly sends GO / STOP signals to a motor controller. The goal of this demo is to validate an end-to-end pipeline on real hardware: camera capture → on-device inference (TPU) → decision logic → motor control.
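
As a concrete picture of that pipeline, here is a minimal sketch of what the loop could look like in Python. It is illustrative only, not the board’s actual firmware: it assumes the Coral TPU is driven through the pycoral runtime with a pre-compiled SSD-MobileNet person-detection model, and the model path, camera index, serial port, and single-byte 'G'/'S' motor protocol are all placeholders.

```python
# Illustrative sketch of the Phase 1 loop: camera capture -> Edge TPU
# inference -> GO/STOP decision -> signal to a motor controller.
# Assumptions (not from the project): the pycoral runtime, an SSD-MobileNet
# model compiled for the Edge TPU, COCO-style labels where class 0 is
# "person", the camera on /dev/video0, and a motor controller that accepts
# single-byte 'G'/'S' commands on /dev/ttyS1.

import cv2
import serial
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

MODEL_PATH = "ssd_mobilenet_v2_coco_quant_edgetpu.tflite"  # placeholder path
PERSON_CLASS_ID = 0      # "person" in the Coral COCO label map
SCORE_THRESHOLD = 0.5

interpreter = make_interpreter(MODEL_PATH)
interpreter.allocate_tensors()

cam = cv2.VideoCapture(0)                    # OV5640 exposed through V4L2
motor = serial.Serial("/dev/ttyS1", 115200)  # hypothetical motor-controller link

last_cmd = None
while True:
    ok, frame = cam.read()
    if not ok:
        continue

    # Resize the frame to the model's input size and run inference on the TPU.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    _, scale = common.set_resized_input(
        interpreter, image.size,
        lambda size: image.resize(size, Image.LANCZOS))
    interpreter.invoke()

    objs = detect.get_objects(interpreter, SCORE_THRESHOLD, scale)
    person_seen = any(o.id == PERSON_CLASS_ID for o in objs)

    # Decision logic: person detected -> GO, otherwise STOP.
    cmd = b"G" if person_seen else b"S"
    if cmd != last_cmd:  # only notify the motor controller on state changes
        motor.write(cmd)
        last_cmd = cmd
```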

Rather than building autonomy or navigation, this phase focuses on proving that local vision alone can reliably drive physical actions without external compute or heavyweight setups. Phase 1 is about making the full loop work, stably and visibly, on real hardware.


  • Why I’m building an Edge AI camera for student robotics projects

    Blade Master · 01/30/2026 at 03:16 · 0 comments

    (This video is a close-up; see the full demo on the project details page.)

    Motivation

    In 2023, I attended a robotics conference (Amii AI Week) and saw a group of students building Duckiebot-style autonomous cars. Surprisingly, they still relied on basic OpenCV pipelines for their vision tasks.

    When I asked why they weren’t using deep-learning vision models and running them on a Jetson Nano, they said the Jetson felt too heavy for their project: it took time to find and train good models; the software stack (CUDA, drivers) took a long time to set up; and the power consumption and physical size didn’t fit small robots well. And even if they did use a Jetson Nano, would they run everything on it, or still need a Raspberry Pi for the project logic?

    It turns out it wasn’t a lack of interest in deep learning; rather, the tension between project constraints and AI system complexity kept them from trying edge AI solutions.

    Where is the real issue?

    Edge AI hardware is not rare. Jetson boards are powerful but heavy in cost, power, and system complexity. Raspberry Pi is easy to start with but struggles with real-time neural networks. OpenMV lowers the barrier further, but with limited model capability. 

    These tradeoffs point to the real issue: AI system complexity. In most embedded projects, vision is only a small part of a larger system. Yet once deep-learning vision is introduced, configuring the pipeline and keeping it usable, stable, and reliable often takes more effort than the rest of the project combined. AI vision stops being a component and starts to take over the project.

    What’s missing is a better balance: hardware that is good enough, with easy-to-use AI vision software that stays in the background instead of taking over the system.

    What I’m building

    I’m building a compact edge AI camera board designed to run neural-network vision tasks (e.g. object detection) entirely on-device. It runs a modern Linux OS, which allows the use of modern development tools and languages, and uses an AI accelerator (a Tensor Processing Unit, or TPU) for better performance. It is small enough to fit in the palm of your hand and mount directly on a robot.

    More importantly, I’m building this board as a vision sensor rather than a dev board. That means you can use it as a black box that turns vision into simple events, and feed those events into whatever your main project logic needs; a rough host-side sketch follows below.
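
    To make the vision-to-event idea concrete, here is a rough host-side sketch of how a main controller (e.g. a Raspberry Pi running the robot logic) might consume events from the module. The newline-delimited JSON event format and the serial transport are assumptions for illustration, not a final interface.

    ```python
    import json
    import serial

    # Hypothetical transport: the camera module streams newline-delimited JSON
    # events such as {"event": "person", "score": 0.87} over a serial link.
    link = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

    while True:
        line = link.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or malformed lines

        # The rest of the robot only reacts to events; the vision stack stays hidden.
        if event.get("event") == "person":
            print("person seen, score:", event.get("score"))
    ```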

    The goal is to make deep-learning vision practical for small robots and student projects, without forcing developers/users to take on unnecessary AI system complexity.

    Looking for real feedback

    I’m looking to talk with a small number of students or developers to better understand the real constraints and decision points in their robotics projects. If you’re willing to share your experience, especially around why you did or didn’t use a Jetson, Raspberry Pi, or another platform for AI, I’d love to hear your perspective.

  • Add a TPU module

    Blade Master · 01/01/2025 at 21:43 · 0 comments

    ## December 30, 2024 Year-End Update ## 

    The TPU module design for AI acceleration has been completed and is currently undergoing PCB testing. The module uses a Google Coral TPU with a stamp-hole (castellated) interface compatible with the peripheral board, so it can be added directly to the peripheral board and work with the core board, turning the combination into a capable CPU+TPU edge AI inference machine. Neural-network inference throughput can reach 4 TOPS (4 trillion operations per second), with a total board power consumption of only 6 W. If needed, two TPU modules can be added to reach 8 TOPS. Here are two design diagrams:

    [Image: TPU module 3D view]

    [Image: TPU module PCB raw design]
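
    For the dual-module (8 TOPS) option, here is a hedged sketch of how two Edge TPUs could be addressed from the pycoral runtime once both modules are populated; the model file names are placeholders.

    ```python
    # Sketch only: enumerate the Edge TPUs and pin one interpreter to each, so a
    # detector and a classifier (placeholder model names) can run in parallel
    # when both TPU modules are fitted. Device strings follow the pycoral
    # runtime convention (":0" and ":1" select the first and second TPU).

    from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter

    print(list_edge_tpus())  # expect two entries with both modules fitted

    detector = make_interpreter("detector_edgetpu.tflite", device=":0")
    classifier = make_interpreter("classifier_edgetpu.tflite", device=":1")
    detector.allocate_tensors()
    classifier.allocate_tensors()
    ```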

  • Initial prototype

    Blade Master · 01/01/2025 at 21:34 · 0 comments

    Project Overview

    This project aims to create an embedded AI module running a Linux system, combined with a camera, primarily focusing on AI vision applications. The module is divided into two parts: the main control core board and the peripheral board.

    What main control chip or circuit is used? --------- Materials 

    Core board basic composition: the main controller is an Allwinner H6 (1.8 GHz, V200-AI), memory is Samsung K4E6E304E-series LPDDR3 (2 GB), plus a Samsung eMMC (8 GB), the AXP805 power-management chip, and the related resistors, capacitors, and inductors. The core board connects to the peripheral board through a gold-finger edge connector.

    Peripheral board basic composition: USB Type-C power supply for the core board and peripheral board, the related DC-DC step-down modules, several control buttons, a Wi-Fi module (currently an RTL8723BU, with a planned switch to an MT-series part later), a TF card slot, and an OV5640 camera module. A TPU module (4 TOPS) will be added later for AI computation acceleration.

    What has been created? -------------------------- Product 

    By combining the core board and the peripheral board, this project can serve as an edge AI processor running a mainline linux-sunxi system with TPU neural-network acceleration, for deep-learning and computer-vision applications.

    What functions have been implemented? -------------------------- Features 

    Currently, the core board is stably running the sunxi-5.10.75 Linux system, with normal chip temperatures (comparable to equivalent Orange Pi boards), normal DDR memory frequency, normal TF-card system-image loading, a working Wi-Fi connection enabling SSH and other functions, and system interaction over the serial port. I’m currently debugging the device-tree part of the camera module, and camera test results are expected soon.
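
    Once the OV5640 shows up as a V4L2 device, a quick capture sanity check could look like the sketch below (the device index and output file name are assumptions):

    ```python
    # Quick capture check once /dev/video0 exists (device index and output
    # file name are assumptions).

    import cv2

    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    if ok:
        print("got a frame:", frame.shape)
        cv2.imwrite("ov5640_test.jpg", frame)
    else:
        print("no frame; the device tree / media pipeline needs more debugging")
    cam.release()
    ```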

    What are the potential applications? -------------------- Applications 

    The applications are broad. As far as I know, there are few edge AI modules on the market that are easy to use, easy to learn, and efficient enough to run AI vision applications. This board is mini-sized and powerful, and will come with detailed technical documentation and guidance, making it convenient to deploy in smart homes, IoT, robotics, and other fields.

    Schematic Design

    Attached is the partial schematic of the core board. The complete schematic documentation can be found in the related attachments section.
    [1] Main Controller Chip Allwinner H6 SoC

    SCH_Core Schematic_panel_1-SoC_2024-09-04.png

    [2] Memory Chip LPDDR3
    SCH_Core Schematic_panel_2-LPDDR3_2024-09-04.png

    [3] Storage eMMC
    SCH_Core Schematic_panel_3-eMMC_2024-09-04.png

    [4] Power Management Chip AXP805 and a DC-DC
    SCH_Core Schematic_panel_4-PWR_2024-09-04.png

    [5] Golden Finger Interface Definition
    SCH_Core Schematic_panel_5-mPCIe_2024-09-04.png

    PCB Layout (Non-source files)

    Top Layer of Core Board PCB
    core.png

    Peripheral Board PCB Top Layer
    base.png

    3D Rendering

    Core board 3D rendering
    3D_Core PCB_panel_2024-09-04.png

    Peripheral Board 3D rendering
    3D_Base PCB_panel_2024-09-04.png

    Circuit Debugging Instructions

    1. After receiving the board, write the Linux system image file (the image file can be found in attachments) to the TF card and insert it into the TF card slot.
    2. Use a USB-to-TTL serial module. Connect one end of the serial module to the serial pin-header interface on the peripheral board using three Dupont wires, and connect the other end (USB 2.0) to the PC. On Windows, use MobaXterm, create a new serial session, and wait for the board to start (a scripted alternative is sketched after this list).
    3. Press and hold the POWER ON button on the peripheral board until the red LED lights up. Release the button, and after a few seconds, the yellow LED will light up and the red LED will turn off, indicating successful board startup. At this time, the serial session should display the boot log output. Wait for the login prompt.
    4. Enter the username "orangepi" (the image is based on the orangepi3-lts system) and the password "orangepi" to enter the system command-line interface.
    5. Use the command-line interface over the serial connection to set up the board’s Wi-Fi. After this, you can disconnect the serial connection and operate the board directly over a wireless SSH connection....
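
    The scripted alternative mentioned in step 2 could be a small pyserial snippet that streams the boot log and stops at the login prompt; the port name (/dev/ttyUSB0) and 115200 baud are assumptions typical for USB-TTL adapters and Allwinner consoles, so adjust them to your setup.

    ```python
    # Scripted alternative to MobaXterm: stream the boot log from the USB-TTL
    # adapter and stop at the login prompt. /dev/ttyUSB0 and 115200 baud are
    # assumptions; adjust to your adapter and console settings.

    import serial

    console = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
    seen = b""
    while b"login:" not in seen:
        chunk = console.read(256)
        if chunk:
            seen += chunk
            print(chunk.decode("utf-8", errors="replace"), end="", flush=True)

    # From here you could write the credentials (console.write(b"orangepi\n"))
    # or switch to an interactive tool such as `python -m serial.tools.miniterm`.
    ```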
