To detect a custom object on mobile, we followed these steps:
- Data generation: capture images of the object
- Image annotation: draw bounding boxes around the object manually
- Train and validate the model: fine-tune a pre-trained detector via transfer learning (a training command is sketched after this list)
- Freeze the model: export a frozen inference graph for mobile deployment
- Deploy and run: on the mobile device
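The train-and-validate step is driven by the same pipeline config that appears in the freeze command below. As a minimal sketch, assuming the TF1 Object Detection API checkout used elsewhere in this log (the training script lives at object_detection/train.py or object_detection/legacy/train.py depending on the API version) and a hypothetical training/ output directory:

# Fine-tunes the SSD-MobileNet checkpoint named in the pipeline config
# and writes new checkpoints to training/ (hypothetical directory).
python models/research/object_detection/legacy/train.py \
    --logtostderr \
    --pipeline_config_path=data/ssd_mobilenet_v1_custom.config \
    --train_dir=training/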
To freeze the model, execute:

python models/research/object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path data/ssd_mobilenet_v1_custom.config \
    --trained_checkpoint_prefix <model ckpt path> \
    --output_directory object_detection_graph
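For instance, if training stopped at step 20000, the checkpoint argument would look like --trained_checkpoint_prefix training/model.ckpt-20000 (a hypothetical path; use your own checkpoint number). On success, the script writes frozen_inference_graph.pb, along with a saved_model/ directory and checkpoint copies, into object_detection_graph/; the .pb file is what gets copied into the app's assets below.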
To deploy the model on mobile:
- Use Android Studio to open this project (the TensorFlow Android demo) and update the WORKSPACE file with the location and API version of the SDK and NDK.
- Set "def nativeBuildSystem" in build.gradle to 'none', so prebuilt native libraries are used instead of a Bazel or CMake build.
- Download the quantized MobileNet-SSD TF Lite model and unzip mobilenet_ssd.tflite into the assets folder.
- Copy frozen_inference_graph.pb and label_map.pbtxt to the "assets" folder above. Edit the label file to reflect the classes to be identified (a sample label map is shown after this list).
- Update the variables TF_OD_API_MODEL_FILE and TF_OD_API_LABELS_FILE in DetectorActivity.java to the above filenames with the prefix "file:///android_asset/" (see the Java sketch after this list).
- Build the bundle as an APK in Android Studio and install it on your Android phone. Launch the TF Detect app to start object detection; the camera turns on and objects are detected in real time.
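Since the label file must match the classes the model was trained on, here is what label_map.pbtxt would look like for the single "chair" class in this project (a sketch in the pbtxt format used by the Object Detection API; ids start at 1):

item {
  id: 1
  name: 'chair'
}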
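And in DetectorActivity.java the two variables are plain string constants; after the edit they would read roughly like this (a sketch based on the TensorFlow Android demo sources, pointing at the files copied into assets above):

// Paths are resolved inside the APK's assets folder at runtime.
private static final String TF_OD_API_MODEL_FILE =
    "file:///android_asset/frozen_inference_graph.pb";
private static final String TF_OD_API_LABELS_FILE =
    "file:///android_asset/label_map.pbtxt";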
As a proof of concept, I took 300 images of household chairs, annotated them, and followed the steps above to deploy on an Android phone. The phone is able to detect generic chairs in real time.