OK, big crowd today, let's get started. Welcome to the Hack Chat everyone, thanks for coming along for a tour of what's possible with machine learning and microcontrollers. We've got @pt and @limor from Adafruit, along with @Meghna Natraj, @Daniel Situnayake, and Pete Warden from the Google TensorFlow team.
We've also got a livestream for demos -
Also, we hear that a class of sixth-graders is tuned in. Let's make sure we make them feel welcome - we always support STEM!
heyyyy everybooooody
Hey everyone!! I am so jealous of that sixth-grade class; they have an awesome teacher
Can everyone on the Adafruit and Google side just introduce themselves briefly?
hi everyone itsa me- ladyada
Hi! This is Pete from the TensorFlow team at Google
check out the youtube video for the live demoooos
and more chitchat :)
Hi everyone! I'm Meghna from the Google TensorFlow Lite team. Excited to be a part of this event! :)
i'm pt, work with limor at adafruit, and founded hackaday 15 years ago which is a coincidence
lets drop some LINKS
@Pete Warden and google folks ! When is TensorFlow 2.0 officially releasing?? :D
Hey, while y'all think of the questions ya wanna ask...
A happy coincidence ;-)
Hey!! I'm Dan, and I work on the TensorFlow Lite team at Google. TF Lite is a set of tools for deploying machine learning inference to devices, from mobile phones all the way down to microcontrollers. Before I was at Google, I worked on an insect farming startup called Tiny Farms :)
Hello everyone... sorry for the late appearance...
Hey Dan, so it's a Bug's Life?
Can we get a brief overview of ML on uCs for those of us unfamiliar with the space? What problems is it solving, and on generic microcontrollers or special parts?
So when does it start?
@pt Which neural network do you use in this process?
OK so, if you're not familiar with ML on MCUs, I can share a few thoughts!
Can we make our own models (e.g.: using an LSTM for timeseries accelerometer readings)?
@pt What problems can this Tiny ML solve?
Who invented the term 'confusion matrix'?
@Daniel Situnayake What is the size of a typical TensorFlow Lite network (in kB) and what is the inference latency?
@andres for raspberry pi we're using the mobilenet v2 models that are optimized for mobile device use - it's really a ton of work they put into optimizing them and we wouldn't be able to do a better job!
Re: TensorFlow 2.0 - I honestly don't know, but we did just publish a release candidate I believe, so it won't be long
@Christopher Bero while @Daniel Situnayake is answering, here are 2 examples, one for the pi 4 and one for a samd https://learn.adafruit.com/running-tensorflow-lite-on-the-raspberry-pi-4?view=all & https://learn.adafruit.com/tensorflow-lite-for-microcontrollers-kit-quickstart?view=all
i'm interested in fall detection for my (very elderly) parents.
Okey thanks!!!
Don't forget to tune into the livestream everyone:
Thanks!
Re: fall detection - We're working on an accelerometer-based example for gesture recognition, and hope to release that soon; it might be a good starting point
@happyday.mjohnson I'm interested in home electricity consumption patterns for older people too, ditto water usage. (eg nobody used the toilet today)
thank you. i'm interested also in the preprocessing. For example, one bunch-oh-data for human activity w/ accel used a 4th-order Butterworth filter...whatever that means...
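Since the Butterworth filter came up: it's just a smoothing step applied to the raw sensor stream before training or inference. Here's a minimal Python sketch with SciPy - the sample rate and cutoff below are made-up values for illustration, not ones from any particular dataset:

```python
# 4th-order Butterworth low-pass filter: smooths raw accelerometer data,
# keeping slow body motion and dropping high-frequency jitter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0    # sample rate in Hz (assumed value)
cutoff = 5.0  # cutoff frequency in Hz (assumed value)

# Design the filter; Wn is the cutoff normalized to the Nyquist frequency
b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="low")

raw = np.random.rand(500)       # stand-in for one accelerometer axis
smoothed = filtfilt(b, a, raw)  # zero-phase filtering (no time shift)
```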
@Arthur Lorencini Bergamaschi for us, we can do non-net connected voice recognition for microcontrollers, for example - we made a low cost example that can control a servo to move up or down, based on voice only, good start for folks with mobility issues
@pt and @limor thanks for answering!
@Pete Warden Accelerometer only detection would be impressive. I've found that a gyro adds a lot of valuable signal with that type of task.
OK, so for a brief overview of ML on microcontrollers, I guess we should introduce ML first? here is the demo for that -
@Arthur Lorencini Bergamaschi using TFLite means you don't have to do a lot of the optimizations required for heuristic-based pattern recognition
so there is preprocessing, training, making the model...maintaining...all these steps. Not sure what goes in the cloud and what goes on the edge. e.g.: I plop an accel/gyro on my parents....somehow get the readings...then do analysis in-dah-cloud? Then "compile" for edge?
I'm wondering, how much further development does Google see for TensorFlow.js? MCUs can use that easily with a web server.
ML is the idea that you can "train" a piece of software to output predictions based on data. You do this by feeding it data, along with the output you'd like it to produce.
@Duncan - definitely, we're trying to keep it very simple so that you don't need an IMU, but so far using just accelerometer data seems to be effective for our use case
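For anyone wondering how a stream of accelerometer readings becomes model input at all, a common approach is to slice it into fixed-length, overlapping windows, with one window per prediction. A generic sketch - the window length and stride here are assumptions, not values from the upcoming gesture example:

```python
# Slice a continuous (N, 3) stream of x/y/z readings into overlapping
# windows; each window becomes one input to the model.
import numpy as np

def windows(samples, length=128, stride=64):
    """Yield overlapping windows of shape (length, 3)."""
    for start in range(0, len(samples) - length + 1, stride):
        yield samples[start:start + length]

stream = np.random.rand(1000, 3)        # stand-in for raw sensor data
batch = np.stack(list(windows(stream)))
print(batch.shape)                      # (14, 128, 3)
```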
instead you can take advantage of the optimizations that ARM & google have done, and you don't need to re-invent the matching algorithms - you just need to tune your model
what about keras? It seems far easier to me than TF?
2: Once your model is trained, you can feed in new data that it hasn't seen before, and it will output predictions that are hopefully somewhat accurate!
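To make points 1 and 2 concrete, here's a minimal tf.keras training sketch - the dataset, labels, and model size are all made up for illustration, not taken from any of the demos:

```python
# Train on (data, desired output) pairs, then predict on unseen data.
import numpy as np
import tensorflow as tf

# Fake dataset: 1000 three-axis sensor readings with two made-up classes
x_train = np.random.rand(1000, 3).astype(np.float32)
y_train = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)          # "training"

# "inference": predictions on data the model hasn't seen
print(model.predict(np.random.rand(1, 3).astype(np.float32)))
```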
@happyday.mjohnson we love Keras, and TF 2.0 is based around it as an API
Can you run Keras on micro/circuitpython?
yah - keras on micro/cp?
Question for LadyAda: What software and hardware was used to build the demo and how many people hours did it take to reach a stable state?
right now the code for TF Lite on microcontrollers is super streamlined
3: This second part, making predictions, is called inference. When we talk about ML on microcontrollers, we're generally talking about running the second part on microcontrollers, not the training part. This is because training takes a lot of computation, and only needs to happen once, so it's better to do it on a more powerful device
Ah, OK that makes sense.
You guys think TPUs will make continuous training on the edge possible?
@happyday.mjohnson @Sébastien Vézina we're focused on running models that have already been trained on MCUs, and Keras is all about training, so Keras isn't a great fit for our embedded use cases
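To show the hand-off Dan is describing: you train in Keras on a desktop, then convert the result to a TFLite flatbuffer for deployment. A sketch using the TF 2.0-style converter API, assuming `model` is a trained tf.keras model like the one above:

```python
# Convert a trained Keras model into a .tflite flatbuffer for deployment
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```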
4: So, why would we want to run inference on microcontrollers?! "Traditionally", meaning the last few years (since this stuff is all pretty new), ML inference has been done on back-end servers, which are big and powerful and can run models and make predictions really fast
@Dick Brooks to get the micro speech demos working took maybe 2 weekends of work, about 20 hours total - but it was in a much less stable state. now the code is nearly ready for release so ya don't have to relearn all the stuff i did. also, i wrote a guide on what i learned on training new speech models
@daniel that makes sense. I hope it is as easy as making the model on the desktop and pushing a "compile"/run/restart to get it onto CP?
@Daniel Situnayake - so how compact can the trained models be? In terms of storage, processing power needed, etc.
(chuck from youtube asks) "Are you limiting yourself to only neural networks? ML methods based on decision trees like Haar Classifiers might be more appropriate for microcontrollers"
5: But this means that if you want to make a prediction on some data, you have to send that data to your server. This involves an internet connection, and has big implications around privacy, security, and latency. It also requires a lot of power!
from discord "can I use my Google AIY Vision setup for this?"
Hello!
please lemme know when there is a tutorial/stuff on accel/gyro human activity recognition....
we did some stuff with a parallax propeller for this, but it was in like 2009. GA generating a learning capable proggie
or perhaps human activity via processing videos (like when a "smart camera" is watching my mother and it notifies me she fell).
Did you make the training phase in the microcontroller?
Re: Neural-networks only? We're fans of other types of ML methods (this paper from Arm on using Bonsai trees is great, for example: https://www.sysml.cc/doc/2019/107.pdf ) but for now we have our hands full with NN approaches, so that's what we're focused on
6: If we can run inference on the device that captures the data, we avoid all these issues! If the data never leaves the device, it doesn't need a network connection, there's no potential for privacy violations, it's not going to get hacked, and there's no need to waste energy sending stuff over a network
There is a scroll bar over to the right - it only appears when you're over it
its hard for a computer to wreck a nice beach. plus, auto car wreck is sum thymes a problem
for chuck's question, we're using NNs here because it lets us take advantage of the huge resources available with tensorflow! you can do tests on cpython to tune things and then deploy to a smaller device
@happyday.mjohnson we can show people detection now on vid... it's just a pi 4 too...
7: But on tiny devices like microcontrollers, there's not a lot of memory or CPU compared to a big back-end server, so we have to train tiny models that fit comfortably in that environment
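One common way to get models down to MCU scale (an assumption here, not necessarily what every demo uses) is post-training quantization, which stores weights as 8-bit ints instead of 32-bit floats for roughly a 4x size reduction:

```python
# Post-training quantization: shrink the flatbuffer so it fits in MCU flash
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # trained model
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
print(len(tflite_model), "bytes")  # check it fits your flash budget
```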
have you tried tiny-yolov3 yet on the Pi 4?
@pt i saw image detection (cat, parrot..) but not movement detection (ooh - looky - the parrot fell off its perch and is rolling around the floor...).
8: Despite the need for tiny models, it's possible to do some pretty amazing stuff with all types of sensor input: think image, audio, and time-series data captured from all sorts of different places
Besides the great Adafruit resources where can you find other available micro-ML models?
9: Hopefully that's a useful intro!! I will now stop going on and on and answer some questions :)
@happyday.mjohnson check what Bristol university are doing re: fall detection https://www.irc-sphere.ac.uk/100-homes-study
@daniel i think of those possibilities....i just don't understand the preprocessing -> training -> model building -> compiling into CP...
@Daniel Situnayake Awesome, thank you!
@happyday.mjohnson yah, that is possible, could look for parrot and then look for movement
yes the google AIY vision uses the movidius accelerator and a pi zero - totally will work with tensorflow lite. we wanted to experiment with non-accelerated raspi 4's cause they're available, low cost, and we think that these SBCs will only get faster!
@somenice we're busy adding examples to the TensorFlow Lite repo, you can see them here at http://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/
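Before flashing anything, it can help to sanity-check the converted model on the desktop with the TFLite Python interpreter - a sketch, with `model.tflite` standing in for whatever file your converter produced:

```python
# Run the .tflite model on the desktop to verify it before deploying
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed random data of the right shape, just to confirm the model runs
interpreter.set_tensor(inp["index"],
                       np.random.rand(*inp["shape"]).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```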
@PaulG - thank you.
Is it possible to train for particular people? IOW, differentiate between pt and limor when you're in frame together?
@Daniel Situnayake @Pete Warden How tiny are the models in kB? What is the inference latency on a SAMD level mcu for something like gesture recognition?
2 Questions:
1. Is this only targeted for Cortex M boards?
2. Will there be any additional guidance on setting up MCUs that are not already available/vetted in the TFLite repo or in the book? A coworker and I set up TFLite for Azure Sphere and we got stuck on changes needed for the linker. If we didn't have the ability to ping a guy on the Sphere team we wouldn't have made progress. I would love guidance on trying this on different chipsets.
i.e., not just "person"
we'll soon have models and training scripts for speech hotword detection, image classification, and gestures captured with accelerometer data
Hey everyone - hope you don't mind if I make a shameless plug while we're here! If you're interested in running TensorFlow's Object Detection API on the Raspberry Pi to detect, identify, and locate objects from a live Picamera or webcam stream, I've made a video/written guide that gives step-by-step instructions on how to set it up. It's very useful for creating a portable, inexpensive "Smart Camera" that can detect whatever you want it to. Check it out here: