The basic idea is to capture a close-up, real-time video stream of the instrument or sensor output and use image-recognition techniques to generate a digital representation of the value “seen” by the camera.
Powered by OpenCV, the software will either perform OCR (Optical Character Recognition) for LCD displays and the like, or full video analysis with feature tracking and foreground extraction to follow dials and other kinds of indicators.
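For the analog-dial case, once feature tracking has recovered the needle's angle, converting that angle to a reading is a linear mapping between the dial's calibrated endpoints. A minimal sketch of that last step, assuming a linear scale (the function and parameter names are illustrative, not part of the project):

```python
def angle_to_value(angle_deg, angle_min, angle_max, value_min, value_max):
    """Map a tracked needle angle (in degrees) to the value it indicates,
    assuming the dial scale is linear between its two calibrated endpoints."""
    if angle_max == angle_min:
        raise ValueError("calibration angles must differ")
    fraction = (angle_deg - angle_min) / (angle_max - angle_min)
    # Clamp so small tracking jitter past the end stops cannot
    # produce out-of-range readings.
    fraction = max(0.0, min(1.0, fraction))
    return value_min + fraction * (value_max - value_min)
```

For example, on a pressure gauge calibrated so that 0 bar sits at -45° and 10 bar at 225°, `angle_to_value(90, -45, 225, 0, 10)` yields 5.0 bar.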
It should be possible to customize these readings from a client application running on a PC, tablet or smartphone, with settings for:
- Image calibration.
- Range specification.
- Setting of units.
- Configuration of desired output.
- Backup setup and more.
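The settings above could be captured in a single configuration document pushed from the client app to the device. The JSON below is purely illustrative; the field names are assumptions, not a defined schema:

```json
{
  "calibration": { "angle_min_deg": -45, "angle_max_deg": 225 },
  "range": { "min": 0, "max": 10 },
  "units": "bar",
  "output": { "format": "json", "sample_rate_hz": 2 },
  "backup": { "target": "dropbox", "interval_min": 60 }
}
```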
Processed data will be stored on the device itself for as long as space is available, and it will be possible to extract it in different formats such as XML and JSON. Having a “full” computer and operating system could also enable the device to send this data to cloud services owned by the user, such as Dropbox or Google Drive. In principle, the device itself could host a small web service that can be queried through REST requests.
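Exporting the stored readings in multiple formats needs no third-party dependencies. A sketch using only the Python standard library, where the record fields (`ts`, `value`, `unit`) are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

def export_readings(readings, fmt="json"):
    """Serialize a list of reading dicts, e.g.
    {"ts": 1700000000, "value": 5.0, "unit": "bar"},
    to the requested text format."""
    if fmt == "json":
        return json.dumps(readings, indent=2)
    if fmt == "xml":
        root = ET.Element("readings")
        for r in readings:
            # Each reading becomes an element with its fields as attributes.
            ET.SubElement(root, "reading", {k: str(v) for k, v in r.items()})
        return ET.tostring(root, encoding="unicode")
    raise ValueError(f"unsupported format: {fmt}")
```

The same function could back a REST endpoint, picking the serializer from the request's `Accept` header or a query parameter.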
This is a great idea. Is there a working prototype?