Finally got around to sharing this project; hoping it inspires others to take it further. The system uses pulses of sound and infrared light emitted from the tracked device. Base stations at known locations measure the time difference between those two signals, effectively calculating point-to-point distance from the time-of-flight of sound. Because sound travels slowly, this doesn't require high temporal resolution to get accurate measurements, so a cheap microcontroller can be used. The audio signal is filtered in analog, and a comparator triggers an interrupt on an Arduino Nano when the received signal is strong enough.
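To make the base-station side concrete, here is a minimal sketch of the timing logic, assuming the IR receiver is wired to pin 2 and the comparator output to pin 3 (pin choices, trigger edges, and names are illustrative, not the actual schematic). The IR pulse arrives essentially instantly, so the gap between the two interrupts is the acoustic time-of-flight:

```cpp
// Minimal base-station timing sketch (illustrative; pins are assumptions).
// The IR ping marks "time zero"; the audio ping arrives later, and
// distance = time difference * speed of sound.

const byte IR_PIN    = 2;   // external interrupt 0 (assumed wiring)
const byte AUDIO_PIN = 3;   // external interrupt 1 (assumed wiring)
const float SPEED_OF_SOUND = 343.0; // m/s at ~20 C

volatile unsigned long irMicros = 0;
volatile unsigned long audioMicros = 0;
volatile bool gotIr = false;
volatile bool gotAudio = false;

void irIsr()    { irMicros = micros();    gotIr = true; }
void audioIsr() { audioMicros = micros(); gotAudio = true; }

void setup() {
  Serial.begin(115200);
  pinMode(IR_PIN, INPUT);
  pinMode(AUDIO_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(IR_PIN), irIsr, RISING);
  attachInterrupt(digitalPinToInterrupt(AUDIO_PIN), audioIsr, RISING);
}

void loop() {
  if (gotIr && gotAudio) {
    noInterrupts();                             // read timestamps atomically
    unsigned long dt = audioMicros - irMicros;  // microseconds
    gotIr = gotAudio = false;
    interrupts();
    float meters = (dt * 1e-6) * SPEED_OF_SOUND;
    Serial.println(meters);  // the real system sends this over the radio link
  }
}
```

Even with the Nano's 4 µs `micros()` resolution, sound only travels about 1.4 mm in that time, which is why cheap hardware is enough here.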
This project uses sound and light to perform low-cost indoor 3D localization. See the summary video below for an overview.
While the idea was to use infrared light and ultrasonic sound so that the system wouldn't be perceptible to humans, ultrasonic transducers are much more difficult to find than audible-range ones, so the system was built using audible sound. The update rate in the video above is relatively slow only because the noise gets annoying at higher rates; otherwise, this could easily be run much faster.
While it worked pretty well for short ranges, especially for linear distance measurement, there are some major limitations and challenges not dealt with here:
- Multi-path issues for the light and sound are not examined.
  - The system currently uses an analog filter for both signals and merely looks for a power level, but with better signal processing (likely via an FPGA), each ping could be coded and reflections ignored; a rough software sketch of this idea follows the list.
- The sound filter's amplification had to be tuned manually to achieve a reliable trigger without false positives.
  - The amplification could be made adjustable by the system, so it could sweep the gain until a signal was received, then adjust in real time based on the measured distance.
  - The signal could be processed through a bandpass filter with a more aggressive roll-off, with very high amplification used all the time. This may cause issues with noise, of course, but ideally only the signal we care about would be passed.
  - Using ultrasonic sound may help with this, including choosing a frequency that rarely occurs in the environment, since the ~5 kHz tone used here can also come from other sources.
- The calculation of position was performed assuming perfect range measurements.
  - Ideally, more than 3 base stations would be used, so the solution would be overdetermined and measurement errors could be averaged out; see the least-squares sketch after this list. If those range measurements were combined with IMU data in a Kalman filter, the results would be higher quality and orientation could also be estimated.
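On the coded-ping idea above, here is a minimal software sketch of a matched filter; this is not something the current analog system does. The specific code (a 7-chip Barker sequence), the one-sample-per-chip simplification, and the threshold are all assumptions for illustration. The point is that taking the first strong correlation peak picks out the direct path and rejects later reflections:

```cpp
// Sketch of the coded-ping idea (not in the current firmware): correlate the
// sampled signal against a known code and take the FIRST strong correlation
// peak as the direct path; later peaks are reflections and can be ignored.
#include <stddef.h>

// 7-chip Barker code as +/-1 chips (code choice is an assumption).
const int CODE[] = {1, 1, 1, -1, -1, 1, -1};
const size_t CODE_LEN = sizeof(CODE) / sizeof(CODE[0]);

// Returns the sample index of the first correlation peak above `threshold`,
// or -1 if none is found. Assumes one sample per chip for simplicity.
long firstArrival(const int *samples, size_t n, long threshold) {
  for (size_t i = 0; i + CODE_LEN <= n; i++) {
    long corr = 0;
    for (size_t j = 0; j < CODE_LEN; j++) {
      corr += (long)samples[i + j] * CODE[j];
    }
    if (corr > threshold) return (long)i;  // direct path; skip later echoes
  }
  return -1;
}
```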
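And on the position calculation: the build in the video solves three ranges directly, but with extra stations a simple iterative least-squares fit is one way to average out measurement error. This is a sketch, not the project's code; the gradient-descent approach, structure names, step size, and iteration count are all illustrative:

```cpp
// Sketch of least-squares multilateration with N >= 3 base stations:
// iteratively nudge the position estimate to minimize the sum of
// squared range errors.
#include <math.h>

struct Vec3 { float x, y, z; };

// stations: known base-station positions; ranges: measured distances.
// Starts from `guess` and refines it; a few iterations suffice in practice.
Vec3 multilaterate(const Vec3 *stations, const float *ranges, int n, Vec3 guess) {
  for (int iter = 0; iter < 20; iter++) {
    Vec3 grad = {0, 0, 0};
    for (int i = 0; i < n; i++) {
      float dx = guess.x - stations[i].x;
      float dy = guess.y - stations[i].y;
      float dz = guess.z - stations[i].z;
      float d = sqrtf(dx * dx + dy * dy + dz * dz);
      if (d < 1e-6f) continue;          // avoid division by zero at a station
      float e = (d - ranges[i]) / d;    // scaled range residual
      grad.x += e * dx; grad.y += e * dy; grad.z += e * dz;
    }
    // Simple gradient step; dividing by n averages the per-station pulls.
    guess.x -= grad.x / n;
    guess.y -= grad.y / n;
    guess.z -= grad.z / n;
  }
  return guess;
}
```

Seeding `guess` with the previous solution would let this converge in a couple of iterations at typical update rates, and the same residuals could feed a Kalman filter as measurement updates.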
Hi Jake. I am working on the same project, but using ultrasonic and radio signals instead of audio and infrared. If you are interested, we can collaborate to speed up the development process.
That just means I didn't explain it well enough. Actually, the IR channel doesn't transfer any data. The IR and audio pings occur at the same time, and the base stations measure the time differential between receiving the IR and audio signals. That time, given the speed of sound, determines the point-to-point distance; basically, it's the delay between lightning and thunder. The base stations and transmitter all have nRF24L01 radios as well, and the distance information is transferred over those radios; the 3D position is then calculated from the known locations of the base stations.
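(For a sense of scale: a 10 ms gap between the IR and audio arrivals works out to 343 m/s × 0.010 s ≈ 3.4 m.)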
Oh... I get it. Took me a while to figure out what you're doing. So you're using the IR as a channel to transfer data between devices? Then the audio ping is measured by everyone, time offsets are shared via IR, and the audio delay at a known-distance base station is compared against the mobile tag?