-
Hack Chat Transcript, Part 2
05/01/2019 at 20:06
@deshipu Absolutely! AI is a very broad term; it can mean lots of things, and the implementation doesn't have to be "deep learning". Deep learning is particularly interesting because of its simplicity and performance :)
@Inderpreet Singh Do you mean digging into a lower level, as in "what's inside the neural network"?
People ask about Arduinos and not microcontrollers; hence terms like AI or deep learning, which are more broad.
neurons, mostly...
Could focus on say vision based stuff. It is difficult for people to wade through ALL the information out there
some connections, a bunch of weights heh...
In other news, I arrived an hour late to the Hack Chat again. Really looking forward to daylight saving time.
I assumed this Hack Chat was for vision-based stuff
@Dan Maloney That's probably the best way to learn. A lot of the concepts you can learn with very simple networks (the simplest is basically just a matrix multiplication!). When it comes to making projects, the neural networks designed to be good at image processing are particularly useful, which is why we provide higher-level tutorials (we think it can kickstart making projects)
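To ground the "just a matrix multiplication" remark, here is a minimal sketch of a single-layer network as a plain matrix multiply plus an activation; the layer sizes and the sigmoid activation are illustrative choices, not anything specified in the chat.

```python
import numpy as np

# Toy single-layer "network": 4 inputs -> 3 outputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))             # weights
b = np.zeros(3)                         # biases
x = rng.normal(size=4)                  # one input sample

logits = W @ x + b                      # the core operation: a matrix multiplication
output = 1.0 / (1.0 + np.exp(-logits))  # sigmoid activation
print(output)
```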
@Josh Lloyd Are you on the reminder email list? You get a reminder 30 minutes before the chat.
Types of networks and then types of tools like TF vs PyTorch. It can be very overwhelming. I have taught ANNs to engineering students, but the current stuff on the net is just too much UNLESS you focus on a particular problem to be solved
@Inderpreet Singh Right, it can be a lot to take in. There are many components. I think it's important to try to learn the fundamentals (like a single-layer ANN) as well as how to really apply them to practical problems using existing architectures
@Dan Maloney The issue is that I'm asleep. Timezones :)
In the JetBot project we actually focus on a component of deep learning that can sometimes go under the radar, which is actually collecting a dataset
https://github.com/dusty-nv/jetson-inference
I spent a lot of time meandering all over the net until I found this. Following the linked tutorial. Best intro ever.
@Tegwyn☠Twmffat agreed.
It's a good place to start IF you have a Jetson board.
@John I really enjoy the idea of running models on low-power hardware, because once the model is trained it really is orders of magnitude less expensive to run inference. Is there a suggested means of training at this point? Is NVIDIA offering cloud-based training, or should it just be done by one's own means, on one's own computer perhaps, for now?
@John Is it likely that something such as Training as a Service might be offered in the future? At a reasonable cost that would be competitive versus me just running it on my own GTX?
@Josh Lloyd use an NVIDIA container on AWS
@Josh Lloyd I think it depends what stage you are at. As a gamer, I train on my desktop with GPU to allow me to easily iterate, experiment, etc. Once you've got a lot of data and a complex pipeline you might consider a cloud pipeline or something else.
@Josh Lloyd You can even train some smaller datasets on the Jetson Nano itself when you're just getting started :)
@John I was hoping you'd say that
@Tegwyn☠Twmffat we provide containers that you can launch on a cloud provider that come with deep learning software (like TensorFlow) pre-installed
We're getting to the top of the hour, which is the official end of the chat, but if @John wants to stick around and answer questions, that's fine. Of course he may need to get back to work, so we'll leave it up to him.
Either way, I want to say a huge thanks to John for taking time out of his busy day for us. This was really helpful, both to AI noobs like me and the more seasoned vets.
I did training on Jetson TX2, which was 'OK'.
@John I've built a smart doll house that recognizes IMU gesture patterns to activate items in the doll house and hoping to port it over to Jetson Nano / Tensorflow, it uses really simple architecture: https://maxoffsky.com/research-progress/project-myhouse-a-smart-dollhouse-with-gesture-recognition/
Thanks @John. I would assume that anyone with a gaming PC has a far more capable piece of hardware in their desktop than the Jetson Nano. I have a very outdated (at this point) GTX 760, and that has about three times as many CUDA cores.
@John Should it be expected that newer CUDA cores on newer hardware are more performant? In whatever the GFX equivalent of DMIPS is?
Sadly, I don't have a Hack Chat lined up for next week yet - we had a host lined up but they had to reschedule. So watch for announcements in case I get a host. Thanks everyone!
@Dan Maloney I'd love to keep talking with everyone. I do need to grab lunch soon :) Please feel free to send me direct messages, but I'll also come back and check this log. This has been awesome!
@Dan Maloney and @John Thanks
thanks all
I'll be pulling a transcript of the session and posting it on the event page. I'll throw a link in here later.
-
Hack Chat Transcript, Part 1
05/01/2019 at 20:05
Never seen so many people join the room right before a chat session
Like bidding on ebay
Hey everyone, welcome to the Hack Chat. Today we have John Welsh from NVIDIA here to talk about all the exciting stuff that's going on with AI at the Edge.
Welcome John! Can you tell us a little about yourself and how you came to be working in AI?
Hey everyone! Of course. As for my job with NVIDIA - I'm an engineer on the Jetson product team focusing on how to apply deep learning with NVIDIA Jetson
I got into AI during my master's back in Maryland when working on my thesis. I tried a few computer vision techniques, but wanted to give it a shot given all of the material coming out :)
Ultimately I was trying to make a robot follow me around campus
like a body guard?
More like a pet I think
I'm hoping to hear more about all the project ideas everyone has
I think it's an exciting time with modern AI coming to such a small form factor
How close did you get to succeeding?
The robot followed me around the lab on campus. It was a pretty fun demo, but nothing we deployed anywhere yet
how many selfies did you have to take ...
to get it to discriminate against your colleagues?
Hah, well. Not too many actually.
I'll chip in with my idea: driveway security camera that can differentiate between wildlife and humans/vehicles. Reduced false alarms would be the goal.
We used an existing dataset for person re-identification to learn important features for distinguishing people. So the neural network actually learned how to recognize people reasonably from a single camera shot
@Dan Maloney This sounds very cool. Is the goal ultimately to send pictures or alerts when one of these is detected?
When the robot follows you, it only sees your back. I imagine it would be very difficult to differentiate people that way, given that even normal people struggle with that?
I'm looking to build a handheld wireless monitor for use in Broadcast, and wanted to use the Nano for encoding/decoding and streaming the video. Do you know the latency off hand?
I'm thinking more of a tiered response. Keep track of wildlife intrusions (like a game camera) but send alerts for people. Send a high alert if you see a vehicle that's not known to the system, maybe via character recognition of license plates?
@Max-Felix Müller Absolutely. Face recognition wouldn't work in that context. Person re-identification is actually using the entire body (all orientations), so it learns features from your general appearance (clothes are helpful). We planned to combine this with face recognition for short term / long term recognition
As far as CNNs developed and trained with TensorFlow, MATLAB, R, etc., how is the portability of the network supported by the NVIDIA hardware?
@Dan Maloney I used openalpr for license plate recognition on the Nano. It works.
@alangixxer - Sweet, good to know. Thanks!
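For reference, a rough sketch of how the OpenALPR Python bindings are typically called; the country code, config path, runtime-data path, and image filename below are common defaults and placeholders, and may differ on a Jetson install.

```python
from openalpr import Alpr

# Region, config path, and runtime-data path: typical Ubuntu defaults, adjust as needed.
alpr = Alpr("us", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

# Hypothetical frame captured from the driveway camera.
results = alpr.recognize_file("driveway_frame.jpg")
for plate in results["results"]:
    print(plate["plate"], plate["confidence"])

alpr.unload()
```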
@Dan Maloney This may help for general object detection with good performance on Nano. https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md . You can fine tune existing object detectors if you put together your own dataset
@FrazzledBadger , that can be a cool project.
The latency for the encoder/decoder depends on the resolution and other settings, but I guess there are many other factors like Wi-Fi communication, so eventually you would need to just try things out
We have some content on how to accelerate it on the Nano as well, in case you need to do it with real-time feedback
@jamesonbeebe I'm most familiar with TensorFlow and PyTorch. Many of the models will just work; we provide pre-compiled versions of this software on our download center and forums
perfect! That leads me to my next question, How can I accelerate real-time data processing with a big CNN?
@Dan Maloney I know our deepstream team has also done work with vehicle and license plate recognition. I believe we have sample apps out there
My favourite is the dog detector - it's incredibly responsive!
What is the feed-forward speed of one of the Jetson boards like compared to something like an RPi? A decently new CPU?
@John - Cool, thanks! Can't wait to start working this up. It tickles my security/paranoia spot.
@jamesonbeebe Do you know which model you're using? Sometimes (due to memory constraints) it can be trickier. I can point you to some content for optimizing TensorFlow models with TensorRT
@John, Did you run into any interesting problems with your follower bot?
I do not; however, I was considering using a Jetson board for real-time edge training
but I'm not too sure what the best way of doing that would be, well... it would be edge training and edge detection as well
@John Keras "just works". You have to install CUDA and cuDNN, and the GeForce card is detected and used automatically
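As a quick illustration of the "it just works" point: with CUDA and cuDNN installed, a stock TensorFlow/Keras setup finds the GPU on its own. This is a minimal sketch assuming a recent TensorFlow 2.x; the tiny model is just a placeholder.

```python
import tensorflow as tf
from tensorflow import keras

# With CUDA and cuDNN installed, the GPU shows up without extra configuration.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# A small Keras model will then train on that GPU automatically.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```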
@Tom Kelley sometimes it depends on the model you're running. I believe we have some benchmarks out there let me try to pull them up :)
@John you're the man.
@John, what network model would you recommend for real-time object detection in, say, a 720p video stream?
@jamesonbeebe We actually guide people on how to train our 'collision avoidance' model in the JetBot project on the Jetson Nano :) https://github.com/NVIDIA-AI-IOT/jetbot/wiki/examples#option-1---train-on-jetson-nano
TensorRT sounds like that's more of where I want to be looking. Thanks
@jamesonbeebe I find PyTorch very flexible and well suited for this, as are the training workflows we have in that project
@jamesonbeebe As a note, TensorRT is for inference (feed forward only) so you'll have to train in PyTorch and then optimize afterwards for TensorRT use
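One concrete way to follow that train-in-PyTorch-then-optimize split is NVIDIA's torch2trt converter; it isn't named in the chat, so treat this as an assumption, and the model and input size below are placeholders.

```python
import torch
from torchvision.models import resnet18
from torch2trt import torch2trt  # github.com/NVIDIA-AI-IOT/torch2trt

# Train (or load) the model in PyTorch first...
model = resnet18(pretrained=True).eval().cuda()

# ...then convert the trained model into a TensorRT-optimized module for inference only.
x = torch.ones((1, 3, 224, 224)).cuda()
model_trt = torch2trt(model, [x])

# Inference uses the same call signature as the original module.
y = model_trt(x)
```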
@Inderpreet Singh For object detection I used SSD MobileNet V2 which resizes input to 300x300 pixels. This is fairly accurate, but definitely depends on how you're using it. We have content to optimize these models with TensorRT, works pretty well on Jetson Nano
@Inderpreet Singh You can see an example doing inference with this model here https://github.com/NVIDIA-AI-IOT/jetbot/blob/master/notebooks/object_following/live_demo.ipynb
@Inderpreet Singh You can train the model on your own data too, let me know if you have questions on that
I have lots of data
I have the Jetson Nano on a live feed of a highway stretch
@Tom Kelley Here's the benchmarks, they have the RPi V3 in there I believe https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks
my current software is OpenCV-based and could use deep learning to do better identification of objects and vehicles
@Inderpreet Singh This sounds like a similar use case to what our DeepStream team focuses on. They have some samples for traffic anomaly detection. Let me see if I can find those
@John Thanks man, I'll check that out!
How come you don't use DetectNet?
@Inderpreet Singh https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps
Sigh, fine - take my money. Just ordered a Nano dev kit off Amazon ;-)
Brilliant. Any pointers to re-training with more data? My OpenCV routine has dumped gigs of data over the last three months.
@John, I was thinking about organizing some workshops in my company with the aim of exploring the Jetson Nano and, more generally, embedded computer vision. Do you have any projects to suggest other than robot navigation?
I have problems when lighting conditions change AND objects are further away... like 200 meters
@Tegwyn☠Twmffat I did use DetectNet when I first joined NVIDIA, I think a lot of new content has come out since then including our support for some of the TensorFlow object detection API models on Jetson. There was a bit more to choose from along the computational / accuracy scale there
https://github.com/msurguy/awesome-jetson-nano
Hi all! I've been following AI on the edge for many years now, including exploring GPU-accelerated inference on the Raspberry Pi, and was so happy to see the Jetson Nano, which comes very close to being an affordable workbench for experiments. I just wanted to say that I've started a list called Awesome NVIDIA Jetson Nano (linked above) that you all are welcome to look at and contribute to. Ideas for which projects to try without too much effort are something I'd like to compile in one place for everyone to quickly refer to
@borelli.g92 Yes, I think there are tons! I think anything that needs to take video in and output processed data in real time. We have an interesting example of this where our interns used a projector to guide people in our cafeteria how to throw away their trash
@borelli.g92 https://news.developer.nvidia.com/from-munch-to-hunch-ai-classifies-your-waste-at-lunch/
@borelli.g92 I think things involving gesture interaction are also well suited, since the latency needs to be low and video needs to be streamed continuously
Very interesting! Thank you very much for sharing this.
@Maksim Surguy Awesome thanks for sharing this!
Do you have any repository where you share this kind of "intern project" with the public? I believe it might be quite interesting if someone wants to dive into the topic.
@John what is the performance and feature gap between Python and C++ for development?
@John you're welcome! Enjoy, contribute and share pls
@borelli.g92 Yes! We have a github channel I contribute to myself https://github.com/NVIDIA-AI-IOT
@borelli.g92 There's lots of other projects there to check out
@John Thanks, I missed that one.
Over the top!
@Inderpreet Singh Interesting question! The feature gap has actually narrowed a lot with our latest SW release (that came along with Jetson Nano). For example, we have the TensorRT Python API which makes it much easier to prototype inference pipelines (and then deploy using C++ API if needed)
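To give a flavor of the TensorRT Python API mentioned here, a minimal sketch that deserializes a previously built engine; the engine filename is hypothetical, and exact API details vary across TensorRT releases.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load a serialized engine (e.g. one produced earlier by a builder or converter).
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

with engine.create_execution_context() as context:
    # Binding setup, host/device buffers, and the execute calls would go here;
    # that part depends on the model's specific inputs and outputs.
    pass
```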
@Maksim Surguy Thank you, already on my watch list
@John any ETA on when you will release the product manual for the Jetson Nano SoM? I'd like to get started on a carrier board for my tablet projects, but that's a bit hard with the limited information available so far.
Let's suppose I'm a poor student (the main thing is that the Nano is currently unavailable in Germany) who wants to get into AI. How/where would you recommend to start?
You can buy it directly from Nvidia now in Germany
@Max-Felix Müller We have tutorials / projects that we created to teach full AI workflows also (like JetBot https://github.com/NVIDIA-AI-IOT/jetbot)
@Prof. Fartsparkle it's sold out
@John cool. Thank you
@Prof. Fartsparkle NVIDIA is almost done with the documentation and will publish the collateral soon
just pre-order it, mine shipped a week earlier than estimated
@Chitoku awesome!
These Nvidia jetson products will not disappoint!
I'm personally curious, is anyone here currently incorporating AI in any of their projects? Or if not, what do you feel is stopping you (maybe other than Nanos being sold out :P)
I'm very good at being disappointed
anyone here currently incorporating AI in any of their projects? ……… +3
@John - For me it's the learning curve that's stopping me. Or it will be once my dev kit gets here on Friday.
@John you know, "AI" is a very broad term. There is a game of Reversi on one of my projects, you can play against the computer — that is arguably AI...
my robots also have some simple state machines with decision trees for their behavior — that is AI as well...
@Dan Maloney Ah. What would you say is the barrier you're facing when learning? Intimidating software APIs? The underlying concepts?
While I did buy the Nano, I am not entirely sure how to get started. I have been tasked with digging up how we could develop low power edge devices. So my answer would be: I am not sure how to get started. Any tips? I am really interested in playing with NNs on MCUs
@John The major hindrance is the apparent complexity. The JetBot is a great starter project, but there need to be more projects that help people/students LEARN deep learning and understand it better.
@John It's a totally new area for me. I'm planning on working through the tutorials and building from there, then asking @Tegwyn☠Twmffat for help, lol
If you Google "deep learning tutorial" or "TensorFlow tutorial", it starts with demos