Transcript
08/03/2018 at 20:21
Haddington Dynamics joined the room. 2:56 PM
Okay! Let's get started. Welcome to the Hack Chat, @Haddington Dynamics, it's great to have you here!
Great to be here
It would be great if people could put their questions at https://hackaday.io/event/160038-training-robots-by-touch-hack-chat so we can keep the chat going smoothly. While we wait for questions to roll in, can you tell us a bit about your work?
Kent will answer that directly - he is signing in now.
Hello, and thanks for the invite to talk about Dexter. I have been doing tech for about 35 years (yeah, I'm old), starting with early 8080 processors and eventually working with FPGAs and supercomputers.
I have always loved robots, and after I sold my supercomputer company I decided to build an FPGA-controlled robot.
Cool! Our first question is from @Lutetium: "What makes a robot 'trainable'? What software do you use for this?"
Kent is answering this, but in short: we wrote our own software to do it.
He is very slow at typing.
Is it rude if I ask how long the software took to write? It's totally in jest. ;) Lol.
He codes fast; creating sentences, not so much :)
I would expect the software uses some type of AI Classifiers. What kind?
Lol
Nice! You need to be efficient at the right things.
@Orlando Hoilett let's keep questions at https://hackaday.io/event/160038-training-robots-by-touch-hack-chat so that the chat stays easy to follow :)
No AI classifiers.
I had an idea for creating a very high resolution optical encoder that could change the way robots are controlled, by precisely measuring each axis directly at the point of rotation. This needs to be done at the arc-second scale (1,296,000 points per revolution), and to do that you need to calculate very fast (that's why we need an FPGA). Because we can measure so precisely, we can measure the difference between where we told the stepper motors to go and where the axis actually is. This is an emergent torque sensor. So we can grab the robot, and it feels us and moves out of the way.
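A minimal sketch of that emergent torque idea. The names, units, and linear-stiffness model below are assumptions for illustration; the real computation runs in FPGA fabric, not TypeScript:

```typescript
// Sketch of the emergent torque sensor idea. Names, units, and the
// linear-stiffness model are assumptions for illustration only; the real
// computation runs in FPGA fabric at the arc-second scale.
const COUNTS_PER_REV = 1_296_000;                       // arc-second resolution
const RAD_PER_COUNT = (2 * Math.PI) / COUNTS_PER_REV;

interface JointState {
  commandedCounts: number;   // where the stepper was told to go
  measuredCounts: number;    // where the encoder says the axis actually is
  stiffnessNmPerRad: number; // assumed joint stiffness, N·m/rad
}

// External force deflects the axis away from the commanded position;
// deflection times stiffness approximates the applied torque.
function estimateTorque(j: JointState): number {
  const deflectionRad = (j.measuredCounts - j.commandedCounts) * RAD_PER_COUNT;
  return deflectionRad * j.stiffnessNmPerRad;
}

// Compliance: if you push hard enough, the robot yields in the
// direction of the push instead of fighting it.
function complyStep(j: JointState, thresholdNm: number): number {
  const torque = estimateTorque(j);
  if (Math.abs(torque) > thresholdNm) {
    return j.commandedCounts + Math.sign(torque) * 100; // step size is tunable
  }
  return j.commandedCounts;
}
```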
Our next question is from @Kevin Cheng "Have you ever tried to use any ROS stack? Why or why not?"
So, training is just pushing the robot around, recording all of the paths, and then playing them back.
We have used ROS, and we plan on supporting it even more in the near future.
We also have our own JavaScript development environment, DDE (Dexter Development Environment).
And we have a TCP/IP socket that exposes the API on the robot, so that any language can be used.
One of our Kickstarter backers built a Unity3D asset stack that interfaces with the socket API
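As a rough illustration, a client for such a socket API can be a few lines of Node code. The address, port, and command string below are assumptions, not Dexter's documented protocol:

```typescript
// Hypothetical TCP client for a robot socket API. The address, port, and
// command string are made up; a real client would follow whatever wire
// format the robot's API actually defines.
import * as net from "node:net";

const client = net.createConnection({ host: "192.168.1.142", port: 50000 }, () => {
  client.write("move_all_joints 0 45 90 0 0\n"); // illustrative command only
});

client.on("data", (data) => {
  console.log("robot replied:", data.toString());
  client.end();
});

client.on("error", (err) => console.error("socket error:", err.message));
```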
Interesting! Our next question is from @Josh Starnes: "How does a robot 'learn'? I mean, does it record data to a table and eventually use that data somehow?"
As we move the robot around and capture paths and points, we store this in a JSON file that is replayable. We can also send this movement data over to a second robot connected through the internet and control the remote robot. This is where we plan to use AI and machine learning, by using the human movement data to train an AI.
Is the JSON file or program specific to a given physical environment, or general to a class of actions?
This remote robot training is very exciting to us. Humans will be able to interact remotely with each other through their robots
The JSON file is xyz and pose data, so you could use it to program another type of robot.
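For illustration, a recorded path file might look something like this. The field names are assumptions, since the transcript only says it stores replayable xyz and pose data:

```typescript
// Illustrative shape of a recorded path file. The field names are
// assumptions; the transcript only says it stores replayable xyz and
// pose data.
interface Waypoint {
  t: number;                      // seconds since recording started
  xyz: [number, number, number];  // end-effector position
  pose: [number, number, number]; // orientation, e.g. roll/pitch/yaw
}

const recordedPath: Waypoint[] = [
  { t: 0.00, xyz: [0.30, 0.00, 0.25], pose: [0, 0, 0.00] },
  { t: 0.05, xyz: [0.30, 0.01, 0.25], pose: [0, 0, 0.02] },
];

// Because the data is robot-agnostic xyz/pose, replaying it on a different
// arm just means running that arm's own inverse kinematics per waypoint.
async function replay(
  path: Waypoint[],
  moveTo: (w: Waypoint) => Promise<void>,
): Promise<void> {
  for (const w of path) await moveTo(w);
}
```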
So the concept of agency will be expanded to include the physical instantiation of one's personal robot. That is an interesting development.
Our next question is from @Frank Buss: "Do you think an AI needs to control a robot that interacts with the physical world to become self-conscious and sentient? One with torque feedback sounds perfect for this."
Yes, we talk about being able to project your presence all over the globe.
You mean, through having multiple robots in various geographical locations?
Rebecca: yes. And once there are millions of robots around the world, anyone can go anywhere and help out.
https://io9.gizmodo.com/watch-as-a-hacker-frees-this-telepresence-robot-from-it-1663636319
Telepresence robots can be a security problem: imagine an AI hacks them; the Singularity will happen soon after that.
Frank: I think AI will need to experience things similar to what humans experience in order to have context that we will recognize as sentience.
@Frank Buss you are absolutely right. That is where we are focused: on a ground-truth system. These robots will have a PUF (physically unclonable function) and a keystore.
but it could be in an emulated world
@Haddington Dynamics - I'm curious whether silicon PUFs can be forged with a FIB
True, but then we have the brain-in-the-vat computational problem. The real world is much easier to simulate.
What is an FIB?
Focused ion beam.
Sentience as humans have it is a rather high-level challenge: goals, dreams, moral principles, pragmatic rules, self-preservation. You could certainly imagine using this system as proposed in a fantastic but much more limited way: for instance, a distributed army of robots that just clean the streets and apply bioremediation chemicals at Superfund sites, but that don't have scripts that involve being "a real person".
There are multiple ways to create a PUF. I assume FIB could be one of those.
@kent gilson could you tell us a bit about future plans?
I mean to clone a PUF using a FIB.
Then the original PUF is not "provably" unique.
And someone controls the robots; that can in theory be made unhackable with public/private keys, but what if the controlling computer is hacked?
http://users.sec.t-labs.tu-berlin.de/~nedos/host2013.pdf
Heh, apparently it's been done already.
We are putting the PUF and the keystore directly connected to the FPGA, not accessible by the processor and OS.
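For context, PUF authentication is usually a challenge-response scheme. A conceptual sketch with hypothetical interfaces (a real PUF is physical silicon behavior, and here the keystore would be reachable only from the FPGA fabric):

```typescript
// Conceptual sketch of PUF challenge-response authentication. The
// interfaces are hypothetical: a real PUF is physical silicon behavior,
// and here the keystore would be reachable only from the FPGA fabric.
import { randomBytes } from "node:crypto";

interface PufDevice {
  // Only the genuine chip reproduces its response to a given challenge.
  respond(challenge: Buffer): Buffer;
}

// Enrollment at manufacture: record challenge/response pairs in a trusted table.
function enroll(device: PufDevice, count: number): Map<string, string> {
  const table = new Map<string, string>();
  for (let i = 0; i < count; i++) {
    const challenge = randomBytes(16);
    table.set(challenge.toString("hex"), device.respond(challenge).toString("hex"));
  }
  return table;
}

// Verification: replay one unused challenge and check the response.
// Challenges are single-use so an eavesdropper cannot replay them.
function verify(device: PufDevice, table: Map<string, string>): boolean {
  for (const [cHex, expected] of table) {
    table.delete(cHex);
    const response = device.respond(Buffer.from(cHex, "hex")).toString("hex");
    return response === expected;
  }
  return false; // no enrolled challenges left
}
```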
Could you explain a bit more about FIB and its role in physically unclonable functions?
@Rebecca E. Skinner we don't know whether a highly complex AI/robot system might develop sentience on its own; it might emerge in any sophisticated AI system.
@kent gilson so what are the results from the PUF used for, out of interest?
Understood. You could also get something incredibly useful in the real world without reaching for any sort of general AI sentience.
Or we could produce a paperclip AI which gets superintelligent and consumes the whole universe producing paperclips.
The book Life 3.0 has this dystopian examination.
right, just finished reading it :-)
Well... it all boils down to the definition of life, and what is a better life... Who is to say that the clips don't deserve to take over the world... :-D
I don't think it is very dystopian, but realistic. Better than the all-optimistic and happy views of Kurzweil.
We like the idea of humans interacting with other humans through haptic interfaces. This may give us some time to further explore the implications of AIs using our robot infrastructure to do nefarious things.
Alright. I know @kent gilson wanted to show us a live feed of training the robot, so let's wrap up the questions and head over to the Zoom feed! https://zoom.us/j/811893179
Thanks everyone, we will start the Zoom demo in 5 minutes so everyone who wants to join has time.
It's not going to install one of them remote controlled robots that are out to kill us... will it ? (The install you posted ;-) )
lol
Sorry, had to put a shirt on.
I am in California :-D