You can train the object detection model locally if you have a GPU. If not, you can use Edge Impulse, or host the dataset on Roboflow and train in Colab. Data cleansing and augmentation are easier done with Roboflow, and you can export the data in multiple formats. Implementations of several object detection models are also available in Roboflow Models. Alternatively, you can extend the pre-trained models in the OpenVINO Open Model Zoo by following the steps here.
These are the steps to do custom object detection on a Raspberry Pi.
Custom Object Detection on Pi
You can train an object detection model on your own annotated data, optimize it for the hardware using OpenVINO, and deploy it on the Pi, as long as all of the model's layers are supported by OpenVINO. This lets you run an object detection model on the RPi to locate custom objects.
1. First, choose an efficient object detection model targeted at low-power hardware, such as SSD-MobileNet, EfficientDet, Tiny-YOLO, or YOLOX. I have experimented with all of these models on an RPi 4B, and SSD-MobileNet delivered the highest FPS.
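To compare models by FPS yourself, a crude timing loop over a batch of frames is enough. Below is a minimal sketch; `measure_fps` and the stand-in `time.sleep` "model" are purely illustrative, not part of any benchmark suite - substitute your real inference call.

```python
import time

def measure_fps(infer, frames, warmup=5):
    """Rough FPS estimate for an inference callable over a list of frames."""
    for f in frames[:warmup]:          # warm-up pass (lazy init, caches)
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Stand-in "model" that just sleeps ~20 ms per frame:
fps = measure_fps(lambda frame: time.sleep(0.02), frames=list(range(50)))
print(f"{fps:.1f} FPS")
```

Run the same loop with each candidate model on the Pi itself; desktop numbers do not transfer to the RPi's CPU.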
2. Do transfer learning on the object detection model with your custom data.
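For transfer learning with the TensorFlow Object Detection API, your custom classes go into a `label_map.pbtxt` for training, and the OMZ demo used later takes a plain `labels.txt`. A small sketch for generating both (the class names here are hypothetical placeholders - use your own):

```python
# Hypothetical class list for a custom dataset -- replace with your own.
CLASSES = ["helmet", "vest"]

def write_label_map(classes, path="label_map.pbtxt"):
    """TF Object Detection API label map: ids start at 1 (0 is background)."""
    with open(path, "w") as f:
        for i, name in enumerate(classes, start=1):
            f.write("item {\n  id: %d\n  name: '%s'\n}\n" % (i, name))

def write_labels_txt(classes, path="labels.txt"):
    """One name per line, as consumed by the demo's --labels flag."""
    with open(path, "w") as f:
        f.write("\n".join(classes) + "\n")

write_label_map(CLASSES)
write_labels_txt(CLASSES)
```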
3. Convert the trained *.pb file to Intermediate Representation (*.xml and *.bin) using the Model Optimizer:
export PATH="<OMZ_dir>/deployment_tools/inference_engine/demos/common/python/:$PATH"
python3 <OMZ_dir>/deployment_tools/model_optimizer/mo_tf.py --input_model <frozen_graph.pb> --reverse_input_channels --output_dir <output_dir> --tensorflow_object_detection_api_pipeline_config <location to ssd_mobilenet_v2_coco.config> --tensorflow_use_custom_operations_config <OMZ_dir>/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json
python3 object_detection_demo.py -d CPU -i <input_video> --labels labels.txt -m <location of frozen_inference_graph.xml> -at ssd
4. Finally, deploy the hardware-optimized model on the Pi.
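If you write your own inference script instead of using the OMZ demo, you will need to decode the model's output. SSD-style models converted as above typically emit a DetectionOutput blob of shape [1, 1, N, 7], where each row is [image_id, class_id, confidence, xmin, ymin, xmax, ymax] with coordinates normalized to 0..1. A pure-Python sketch of that post-processing (the function name and threshold are my own, not OpenVINO API):

```python
def parse_ssd_detections(raw, frame_w, frame_h, conf_threshold=0.5):
    """Filter SSD DetectionOutput rows and scale boxes to pixel coords.

    raw: iterable of [image_id, class_id, conf, xmin, ymin, xmax, ymax]
    Returns a list of (class_id, conf, xmin, ymin, xmax, ymax) tuples.
    """
    boxes = []
    for image_id, class_id, conf, xmin, ymin, xmax, ymax in raw:
        if image_id < 0:            # -1 marks the end of valid detections
            break
        if conf < conf_threshold:   # drop low-confidence detections
            continue
        boxes.append((int(class_id), conf,
                      int(xmin * frame_w), int(ymin * frame_h),
                      int(xmax * frame_w), int(ymax * frame_h)))
    return boxes
```

Feed it the flattened output blob along with your capture resolution, then draw the returned boxes on the frame with OpenCV or similar.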