Can we build a Machine Learning enabled sensor for under 1 USD?
The first board version had provisions for both sound-based machine learning (using an analog MEMS microphone) and motion-based machine learning (using a digital MEMS accelerometer). However, to stay below the 1 USD BOM target, only one of these options can be populated.
As I have worked more on motion-type use-cases lately, I saw a need for a dedicated board revision for this use-case. The main pain points were the large size and the lack of a battery.
The main changes are as follows:
I also fixed a couple of design issues. For example, there was no 3.3V regulator (I thought running directly off the battery would be OK, but the voltage was too high for some components).
I might do a couple more tests on the rev1 board to make sure there are no other issues. But otherwise, I think this is ready to send to production.
Over the last 6 months I have worked mainly on the MicroPython support in emlearn. This is now getting into a usable shape, and I am focusing on practical examples and demos. One of them analyzes accelerometer data for Human Activity Detection. The example can detect/classify activities such as walking/standing/lying (using a standard dataset), or one can collect custom data to implement exercise classification (squats/jumping jacks/lunges/other). This is a good starting point for our low-cost board: we can port the feature extraction code from MicroPython to C, and then collect more data to enable specialized use-cases.
For training, we will need to collect raw accelerometer samples. I transmit this data over BLE, ideally at a 50 Hz sample rate, though 25 Hz might also be acceptable. However, the BC7161 supports only the advertisement part of the BLE stack, and a single advertisement only has 29 bytes of payload. With 3 axes at 8 bits each (3 bytes per sample), that is about 0.375 seconds of data at 25 Hz, and only 0.180 seconds at 50 Hz. That is theoretically doable with advertisements, but it is not typical to change the advertisement data so rapidly - so we will have to see if it works in practice when connected to a phone or computer.
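As a back-of-the-envelope check on those numbers, here is the same arithmetic in a few lines of Python (the 29-byte payload and 8 bits per axis are the assumptions from above):

```python
# Rough capacity of one BLE advertisement for accelerometer samples.
# Assumes 29 bytes of usable payload, 3 axes at 8 bits each.
PAYLOAD_BYTES = 29
BYTES_PER_SAMPLE = 3  # X, Y, Z at 8 bits per axis

samples_per_advertisement = PAYLOAD_BYTES // BYTES_PER_SAMPLE  # 9 whole samples

for samplerate in (25, 50):
    seconds = samples_per_advertisement / samplerate
    print(f"{samplerate} Hz: {seconds:.3f} s of data per advertisement")
# 25 Hz: ~0.36 s, 50 Hz: 0.18 s
# -> the advertisement payload must be rotated several times per second
```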
The backup plan is to use a cable to a computer for data collection, or to have a more powerful device piggyback on the extension headers to store or transmit the data, over a BLE connection (not just advertisements) or a WiFi connection.
The Puya PY32F003 is now available from LCSC with 8 kB RAM and 64 kB FLASH. That is double what we had previously, at the same cost (15 cents @ 3k). Specifically, I switched to the PY32F003F16U6, which comes in QFN-20 - so it also takes much less space than the TSSOP-20 used on the previous board. The extra RAM is not critical for analyzing accelerometer data, but it will come in very handy for audio analysis (which was a little cramped in 4 kB RAM).
Designing a rev2 board for audio will probably come some time later though.
TLDR: Audio data can be streamed to a computer over serial-to-USB. Using a virtual device in ALSA (or similar), we can then record from the device as if it were a proper audio soundcard/microphone.
In a previous post we described the audio input of the prototype board, using the Puya PY32F003 microcontroller. It consists of a 10 cent analog MEMS microphone, a 10 cent operational amplifier, and the internal ADC of the PY32. To check the audio input, we need to be able to record some audio data that we can analyze.
The preferred way to record audio from a small microcontroller system would be to implement audio over USB using the Audio Device Class, and record on a PC (or an embedded device like an RPi). This ensures plug & play with all operating systems, without needing any drivers. Alternatively, one could output the audio from the microcontroller on a standard audio protocol such as I2S, and then use a standard I2S-to-USB device (for example the MiniDSP USBStreamer) to get the data onto the computer.
However, the Puya PY32F003 (like most other sub-1 USD microcontrollers) supports neither USB nor I2S. So instead we will stream the audio over serial, and use a serial-to-USB adapter to get it onto the PC. This requires some custom code, as there is no standard for this (to my knowledge, at least).
Since the serial stream is also our primary logging stream, it is useful to keep it as readable text. This means that binary data, such as the audio PCM, must be encoded. There are several options here; I just went with the most widely supported one, base64. It is a bit wasteful (33% size increase), but good enough for our purposes.
The default baudrate of 115200 used in PY32 examples, on the other hand, will not do. The bandwidth needed for an 8 kHz sample rate of 16-bit PCM, base64 encoded, is at least 2*8000*(4/3)*8 = 170 kbaud - and with 8N1 serial framing (10 bits on the wire per data byte), closer to 213 kbaud - ignoring overheads for message framing. Furthermore, the standard printf/serial communication is blocking, so any time spent sending serial data is time the CPU cannot spend on other tasks.
It would probably be possible to set up DMA buffering here, but that would be additional complexity.
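For reference, the required baudrate can be worked out like this (a quick sketch, assuming 8N1 framing):

```python
# Minimum UART bandwidth for streaming base64-encoded 16-bit PCM audio.
SAMPLERATE = 8000         # Hz
BYTES_PER_SAMPLE = 2      # 16-bit PCM
BASE64_EXPANSION = 4 / 3  # base64 turns every 3 bytes into 4 characters
BITS_PER_UART_BYTE = 10   # 8N1: 1 start bit + 8 data bits + 1 stop bit

bytes_per_second = SAMPLERATE * BYTES_PER_SAMPLE * BASE64_EXPANSION
print(f"{bytes_per_second * BITS_PER_UART_BYTE / 1000:.0f} kbaud")
# ~213 kbaud before message framing -> 115200 is too slow, 460800+ is comfortable
```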
I tested the PY32 together with an FTDI serial-to-USB cable. It worked at least up to 921600 baud, which is ample.
The messages going over the serial port look like this. The data part is base64-encoded PCM for a single chunk of int16 audio.
audio-block seq=631 data=AAD///7/AgAAAP3//f/8//3/AwACAAYAAAD//wAAAgAAAAgAAAD//wAA///+/wIA///+//7///8CAAAACAD///3////8/wMA//8AAAIAAgD9/wEACAAAAAEAAAAGAAAAAAACAP//BAD9//3/FwABAP7///8AAAQA/v8CAP7/AAD9/wEA/f8GAAIAAAD6//3/AAAHAAQA+f/e/wEA/v8AAAAA/v/+//3/AAAGAAIAAAD+/wYAAAABAP//AAAAAP7/AAD+//r//v/+/wEA/f/9/wAA/f/+////AAABAAYAAAD9/wAAAQABAAAA/v8GAAAAAQD+/wAAAAAAAAwAAgAAAA==
Receiving the data is done with a Python script using pyserial. The script identifies which of the serial messages are PCM audio chunks, then decodes and processes them. Other messages from the microcontroller are logged as-is.
Getting the audio into our script on the PC side is useful. But preferably, we would like to use standard audio tools, and not have to invent everything ourselves. So the processing script takes the received audio data and writes it to an output sound device, using the sounddevice library. This allows playing it back on our speakers, for simple spot checking. Even more useful is to use a loopback device, to get a virtual sound card for our device.
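A minimal sketch of such a receiver is shown below. The actual script in the repository differs; the serial port, baudrate and ALSA device names here are just example values, and the message format follows the audio-block lines shown above:

```python
import base64

import numpy as np
import serial            # pyserial
import sounddevice as sd

SAMPLERATE = 8000

ser = serial.Serial('/dev/ttyUSB0', 921600, timeout=1.0)
out = sd.OutputStream(samplerate=SAMPLERATE, channels=1, dtype='int16',
                      device='hw:3,0')  # writing end of the ALSA loopback pair
out.start()

while True:
    line = ser.readline().decode('ascii', errors='replace').strip()
    if line.startswith('audio-block'):
        # Parse "audio-block seq=N data=..." and decode the base64 PCM payload
        fields = dict(part.split('=', 1) for part in line.split()[1:])
        pcm = np.frombuffer(base64.b64decode(fields['data']), dtype=np.int16)
        out.write(pcm.reshape(-1, 1))  # mono: one column per channel
    elif line:
        print(line)  # other microcontroller messages are logged as-is
```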
I tested this using ALSA loopback, which creates a pair of ALSA devices. The script can then write to one device, and a standard program that supports ALSA (which is practically everything on Linux) can read the audio stream from the other device.
# read data from serial, output to ALSA virtual device
python User/log.py --serial /dev/ttyUSB0 --sound 'hw:3,0'

# record audio from ALSA virtual device
arecord -D hw:3,1 -f S16_LE -c 1 -r 8000 recording.wav
Note: There is nothing ALSA specific...
This project recently hit the Hackaday front page. So this seems like a good time for a quick update, and maybe some clarifications.
What this project IS:
1. Research into ultra-low cost hardware for TinyML systems.
The motivation is to explore what is possible within an artificially constrained budget, and what the computational constraints of such an environment imply for the software and ML side.
2. A testing ground for the emlearn open-source software package. The software is mostly used on slightly more powerful microcontrollers - typically 0.5-5 USD for just the microcontroller, and similar amounts in sensors. But trying to scale down is a good torture test.
What this project is NOT:
1. NOT a good starting point for getting into ML on microcontrollers and sensors (TinyML).
For that, I recommend getting much beefier hardware, like an ESP32 with several megabytes of RAM and FLASH. That will be a lot more practical and fun. Adafruit, Seeed Studio, Sparkfun, Olimex etc. all have good options. Arduino with TensorFlow Lite for Microcontrollers is probably still the most practical software starting point. I am working on MicroPython bindings for emlearn, with the goal of being super accessible, but that project is still in its very early days.
2. NOT a ready-to-run board
Current rev0 boards have just been through basic HW bringup - with several critical problems for actual usage, but seemingly enough to continue testing on, which is all that matters for a rev0 board. A new board revision will come some time in the summer, after I have had time to test and develop some more. That one might actually be usable, if we are lucky.
The BLE driver and firmware are also just skeletons at this point in time.
CNN running on PY32. I have been testing some Convolutional Neural Networks on the Puya PY32. I was able to port TinyMaix successfully, and run a 3-layer CNN that takes 28x28 input. This complexity would be suitable for simple audio recognition - which is of interest in this project. However, it used 2 kB RAM and 25 kB FLASH - leaving only 2 kB RAM and 7 kB FLASH for the rest of the system. That would be a tight squeeze... But the TinyMaix authors claim the AVR8 port used only 12 kB FLASH - so maybe it can be optimized down. To be investigated.
emlearn + MicroPython presentation at PyData Berlin. The slides are available. Video is to be published in the coming weeks, I believe.
Going to TinyML EMEA 2024 in Milano, Italy in June. I will be presenting about the emlearn TinyML software project. And maybe also a little bit about this hardware project :)
TLDR: Using an analog MEMS microphone with an analog opamp amplifier, it is possible to add audio processing to our sensor.
The added BOM cost for audio input is estimated to be 20 cents USD.
A two-stage amplifier with software-selectable high/low gain is used to get the most out of the internal microcontroller ADC.
The quality is not expected to be Hi-Fi, but should be enough for many practical Audio Machine Learning tasks.
The go-to options for a microphone in a microcontroller-based system are a digital MEMS microphone (PDM/I2S/TDM protocol), an analog MEMS microphone, or an analog electret microphone.
The ultra-low cost microcontrollers we have found do not have peripherals for decoding I2S or PDM. It is sometimes possible to decode I2S or PDM using fast interrupts/timers or a SPI peripheral, but usually with quite some difficulty and CPU usage. Furthermore, the cheapest digital MEMS microphone we were able to find costs 66 cents. That is too large a part of our 100 cent budget, so a digital MEMS microphone is ruled out.
Below are some examples of analog microphones that could be used. All prices are in quantity 1k, from LCSC.
MEMS analog. SMD mount
Analog electret. Capsule
So there looks to be multiple options within our budget.
The sensitivity of the MEMS microphones is typically -38 dBV to -42 dBV, with noise floors of around 30-39 dB(A) SPL.
Any analog microphone will need an external pre-amplifier to bring the output up to a suitable level for the ADC of the microcontroller.
An opamp based pre-amplifier is the go-to solution for this. The requirements for a suitable opamp can be found using the guide in Analog Devices AN-1165, Op Amps for MEMS Microphone Preamp Circuits.
The key criteria, and their implications on opamp specifications, are as follows:
Furthermore, it must work at the voltages available in the system, typically 3.3V from a regulator, or 3.0-4.2V from Li-ion battery.
The standard bit-depth for audio is 16 bits, or 24 bits for high-end audio. To cover the full audible range, the sample rate should be 44.1/48 kHz. However, for many Machine Learning tasks 16 kHz is sufficient. Speech is sometimes processed at just 8 kHz, so this can also be used.
The Puya PY32F003 datasheet specifies power consumption at 750k samples per second. However, an ADC conversion takes 12 cycles, and the ADC clock is only guaranteed to be 1 MHz (typical is 4-8 MHz). That would leave 83k samples per second in the worst case, which is sufficient for audio. In fact, we could use an oversampling ratio of 4x or more - if we have enough CPU capacity.
The ADC resolution is specified as 12 bits. This means a theoretical max dynamic range of 72 dB. However, some of the lower bits will be noise, reducing the effective bit-depth. Realistically, we are probably looking at an effective bit-depth between 10 bits (60 dB) and 8 bits (48 dB). Practical sound levels at a microphone input vary quite a lot: the sound sources of interest may vary a lot in loudness, and the distance from source to sensor also has a large influence. Especially with a low dynamic range this is a challenge. If the input signal is low, we will have a poor Signal-to-Noise Ratio, due to quantization and ADC noise. Or, if the input signal is high, we risk clipping due to maxing out the ADC.
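The numbers above follow from two simple relations: sample rate = ADC clock / cycles per conversion, and roughly 6 dB of dynamic range per bit. A quick sketch:

```python
# Worst-case ADC throughput and theoretical dynamic range for the PY32 ADC.
ADC_CLOCK_HZ = 1_000_000      # minimum guaranteed ADC clock (typical 4-8 MHz)
CYCLES_PER_CONVERSION = 12

print(f"{ADC_CLOCK_HZ / CYCLES_PER_CONVERSION / 1000:.0f}k samples/s")  # ~83k

def dynamic_range_db(bits):
    # Rule of thumb: ~6 dB of dynamic range per bit of resolution
    return 6.02 * bits

for bits in (12, 10, 8):
    print(f"{bits} bits -> {dynamic_range_db(bits):.0f} dB")
# 12 -> 72 dB theoretical max; 10 -> 60 dB, 8 -> 48 dB effective
```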
The gain is a critical...
First prototype boards arrived this week.
Over the weekend I did basic tests of all the subsystems:
As always with a first revision, there are some issues here and there. But thankfully all of them have usable workarounds. So we can develop with this board.
Examples of issues identified:
Next step will be to write some more firmware to validate more in detail that the board is functional. This includes:
I made an initial development board. It supports both sound-based and accelerometer-based ML tasks, as well as using the LEDs as a color detector. This board is intended for developing and validating the tech stack; further cost-optimization will happen with later revisions.
These are the key components:
For Bluetooth Low Energy, we use a pre-built and FCC-certified module, the Holtek BM7161. This is a simple module based around the low-cost BC7161 chip.
An initial batch of 10 boards have been ordered from JLCPCB.
I also did a check of the BOM costs. At 200 boards, the components except for passives cost:
Additionally, there are around 20 capacitors, 1 small inductor, and 20 resistors needed.
This is estimated to be between 0.15 and 0.20 USD per board.
So it looks feasible to get below the 1 USD target BOM, for as low as 200 boards.
Also designed a small 3d-printed case, with holes for the microphone and LEDs / light sensor.
This looks to be just barely doable on the chosen microcontroller (4 kB RAM and 32 kB FLASH). Expected RAM usage is 0.5 kB to 3.0 kB, and FLASH usage between 10 kB and 32 kB.
There are accelerometers available that add 20 to 30 cents USD to the Bill of Materials.
Random Forest on time-domain features can do a good job at Activity Recognition.
The open-source library emlearn has an efficient Random Forest implementation for microcontrollers.
The most common sub-task of Activity Recognition using accelerometers is Human Activity Recognition (HAR). It can be used for recognizing Activities of Daily Living (ADL) such as walking, sitting/standing, running, biking etc. This is now a standard feature on fitness watches and smartphones.
But there are ranges of other use-cases that are more specialized. For example:
And many, many more. So this would be a good task to be able to do.
To have a sub 1 USD sensor that can perform this task, we naturally need a very low cost accelerometer.
Looking at LCSC (in January 2024), we can find:
The Silan SC7A20 chip is said to be a clone of LIS2DH.
So there looks to be several options in the 20-30 cent USD range.
Combined with a 20 cent microcontroller, we are still below 50% of our 1 dollar budget.
It seems that our project will use a 32-bit microcontroller with around 4 kB RAM and 32 kB FLASH (such as the Puya PY32F003x6). This sets the constraints that our entire firmware needs to fit inside. The firmware needs to collect data from the sensors, process the sensor data, run the Machine Learning model, and then transmit (or store) the output data. We would like to use under 50% of RAM and FLASH for buffers and model combined: under 2 kB RAM and under 16 kB FLASH.
We are considering an ML architecture where accelerometer samples are collected into fixed-length windows (typically a few seconds long) that are classified independently. Simple features are extracted from each of the windows, and a Random Forest is used for classification. The entire flow is illustrated in the following image, which is from A systematic review of smartphone-based human activity recognition methods for health research.
This kind of architecture was used in the paper Are Microcontrollers Ready for Deep Learning-Based Human Activity Recognition? The paper shows that it is possible to perform similarly to a deep-learning approach, but with resource usage that is 10x to 100x lower. They were able to run on Cortex-M3, Cortex-M4F and Cortex-M7 microcontrollers with at least 96 kB RAM and 512 kB FLASH. But we need to fit into 5% of that resource budget...
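To make the pipeline concrete, here is a rough sketch of the training side in Python. The feature set, window settings and file names are hypothetical, and the conversion step uses emlearn's convert/save flow - treat it as an illustration rather than the exact code used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import emlearn

WINDOW = 128  # samples per window: 2.56 s at 50 Hz

def extract_features(window):
    # Simple time-domain features per axis: mean, std, min, max (12 values)
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def make_windows(samples):
    # Split an (N, 3) array of accelerometer samples into fixed-length windows
    usable = (len(samples) // WINDOW) * WINDOW
    return samples[:usable].reshape(-1, WINDOW, 3)

# Placeholder data files: raw (N, 3) accelerometer samples, one label per window
raw = np.load('accelerometer.npy')
labels = np.load('window_labels.npy')

X = np.array([extract_features(w) for w in make_windows(raw)])
model = RandomForestClassifier(n_estimators=10, max_depth=8).fit(X, labels)

# Convert the trained forest to C code for the microcontroller
cmodel = emlearn.convert(model)
cmodel.save(file='har_model.h', name='har_model')
```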
Input buffers and intermediate buffers tend to take up a considerable amount of RAM. So an appropriate tradeoff between sampling rate, precision (bit width) and window length (in time) needs to be found. Because we are continuously sampling and also processing the data on the fly, double-buffering may be needed. The following table shows the RAM usage of input buffers holding the sensor data from an accelerometer. The first two configurations were used in the previously mentioned paper:
buffers | channels | bits | samplerate (Hz) | duration (s) | samples | size (bytes) | percent of 4 kB RAM
---|---|---|---|---|---|---|---
2.00 | 3 | 16 | 100 | 1.28 | 128 | 1536 | 37.5%
2.00 | 3 | 16 | 100 | 2.56 | 256 | 3072 | 75.0%
2.00 | 3 | 8 | 50 | 1.28 | 64 | 384 | 9.4%
2.00 | 3 | 8 | 50 | 2.56 | 128 | 768 | 18.8%
1.25 | 3 | 8 | 50 | 2.56 | 128 | 480 | 11.7%
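The sizes in the table follow directly from the buffer parameters: size = buffers × channels × bytes-per-sample × samples. A small helper that reproduces the table (assuming the 4 kB RAM of the PY32F003x6):

```python
RAM_TOTAL = 4096  # PY32F003x6 has 4 kB of RAM

def buffer_bytes(buffers, channels, bits, samplerate, duration):
    samples = int(samplerate * duration)
    return buffers * channels * (bits // 8) * samples

configs = [(2.00, 3, 16, 100, 1.28), (2.00, 3, 16, 100, 2.56),
           (2.00, 3, 8, 50, 1.28), (2.00, 3, 8, 50, 2.56),
           (1.25, 3, 8, 50, 2.56)]
for cfg in configs:
    size = buffer_bytes(*cfg)
    print(cfg, f"{size:.0f} bytes", f"{100 * size / RAM_TOTAL:.1f}%")
# Reproduces the 1536 / 37.5% ... 480 / 11.7% figures in the table above
```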
16 bits is the typical full range of accelerometers, so it preserves all the data. It may be possible to reduce this down to 8 bits without sacrificing much performance...
If the complete BOM for the sensor is to be under 1 USD, the microcontroller needs to be way below this - preferably below 25%, in order to leave budget for sensors, power and communication.
Thankfully, there have been a lot of improvements in this area in recent years. Looking at LCSC.com, we can find some interesting candidates:
There are also a few sub-1 USD microcontrollers that have integrated wireless connectivity.
It looks like if we budget 10-20 cents USD for the microcontroller, we get around:
At this price point the WCH CH32V003 or the Puya PY32F003x6 look like the most attractive options. Both have decent support in the open community: the WCH CH32 can be targeted with cnlohr/ch32v003fun, and the Puya with py32f0-template.
What kind of ML tasks can we manage to perform on such a small CPU? That is the topic for the next steps.
Hi allexoK, thank you for the comment. I have looked for microcontrollers with integrated BLE. We use NRF chips a lot at work and they are awesome, but yeah outside the budget here. The CC2340R52 and N32WB031 were new to me and may be relevant, thanks a lot for the tips!
Oh, I see, I wasn't reading carefully... You already mentioned some solutions with BLE integrated (WCH CH582F)!
Hey Jon,
The project is super cool! I noticed the Github project page mentions BLE advertisements. Have you considered other MCUs with integrated BLE? The ESP32-C3FH4 is quite cheap, and the NRF52832 has super low power advertisement, but unfortunately they don't fit your price requirements. Texas Instruments also recently released a fancy-pants cheap BLE MCU (CC2340R52E0RGER), which is available for 0.89 USD at 1000 quantity. There are also some Chinese solutions like the N32WB031KEQ6, but I'm not sure how easy they are to program.
Also check out my experiments (if you haven't seen them already) with activity recognition (https://www.youtube.com/shorts/THErT60AAR0) and machinery failure recognition (https://youtu.be/4Kl571AXN1U?si=KMrOhFqSmDt3WqJI). (Both using the NRF52840, and both extremely oversimplified.)
Alex
Awesome project! Very curious what you will do with the microphone.