Nine months ago, the machine was using a Pixy2 camera to track barcode labels thrown down on the ground. The machine would steer towards the label and then stop dead when the barcode appeared at a pre-defined location on the x axis. The code revolves around calculating a simple 'panError' for the y axis, which is a slightly confusing name as it's not a debugging error or anything like that, so I'm going to rename it 'deviance' from now on.
Since only one barcode label was being recognised at a time, the calculation was very simple - just subtract the actual camera y coordinate from the desired coordinate. The same is true for the upgraded camera, where it is detecting the coordinate position of the individual crops, except that there are normally going to be about six plants in frame and some of them might not be recognised properly, or might just have died or been eaten by a pigeon!
The principle is exactly the same as for the Pixy2 and the solution is equally simple - just add up each plant's offset from the desired coordinate and divide by the number of plants detected to get an average 'deviance'. The main difference is that the same type of calculation is now going to be used for both the x and y axes.
The calculations themselves are made on the Jetson TX2 during detection and the results are output via I2C to an Arduino Nano intermediary as simple steering or drive commands, e.g. the integer '3' means 'stop dead'.
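As a rough sketch of the idea (not the actual WEEDINATOR code - the summing really happens on the Jetson, and 'desiredX', 'deadband' and every command value other than '3' are invented for the example), the averaging and the command decision look something like this in Arduino-style C++:

// Illustrative only: average the per-plant deviances on one axis and map the
// result to one of the simple integer drive commands described above.
int driveCommand(const int boxCentreX[], int numBoxes, int desiredX, int deadband)
{
  if (numBoxes == 0) return 3;                 // nothing detected: '3' = stop dead

  long sum = 0;
  for (int i = 0; i < numBoxes; i++)
  {
    sum += (long)(desiredX - boxCentreX[i]);   // each plant's offset from the desired coordinate
  }
  long deviance = sum / numBoxes;              // average 'deviance'

  if (deviance > deadband)  return 1;          // hypothetical 'steer one way'
  if (deviance < -deadband) return 2;          // hypothetical 'steer the other way'
  return 0;                                    // hypothetical 'carry straight on'
}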
The following code uses the term 'nonant', which refers to a 3 x 3 matrix, but it really should be called 'sextant' as the matrix currently detected is 3 x 2. I'm just too lazy to change the name yet.
The Jetson TX2 developer board is fine for getting started but, really, I should have bought the TX2 Jetson module separately and bolted it straight onto the Connect Tech Orbitty carrier as shown above.
The system has all the features required for deployment and can even be used for training custom image sets. Most important are the Ethernet connector for flashing CUDA etc. and USB3 for connecting the camera.
Some workarounds are better than others and ideally I'd just send the bounding box data from the Jetson TX2 directly to the TC275, which controls the WEEDINATOR motors. However, there's a critical constraint: both the Jetson and the TC275 will only work in 'Master' mode and will not communicate with each other through the I2C bus in any shape or form!
The first workaround I researched was using an Arduino as an intermediator on the I2C bus, acting as a slave for both the Jetson and the TC275 …… and this might just have worked if I'd included an auxiliary RTC clock and a digital 'tie line'. I spent a few days researching this and eventually realised that, as workarounds go, this was a very poor one …. lots of coding and wiring, and still the possibility, if somewhat unlikely, that the whole thing would fail and lock up the I2C bus when the two masters tried to access it at the same time.
After a bit more head scratching, the solution became clearer - use I2C to receive data into the intermediator and then hardware serial to send it out again !!! This proved to be by far the simplest solution and I managed to simulate the whole thing on my living room dining table:
Intermediator: (NB. There's no 'Serial.print' here as this would slow things down excessively.)
#include <Wire.h>

void setup()
{
  Wire.begin(0x70);             // join the I2C bus as a slave with address 0x70
  Wire.onReceive(receiveEvent); // register the receive event handler
  Serial.begin(115200);         // start hardware serial for output
}

void loop()
{
  delay(100);                   // Must have delay here.
}

void receiveEvent(int howMany)
{
  while (Wire.available())
  {
    int x = Wire.read();        // receive each byte as an integer
    Serial.write(x);            // forward it straight out over hardware serial
  }
}
TC275 (simulated):
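// My reading of the byte values used in the simulation below (inferred from the code, not from the original log):
//   200 + n         -> number of boxes detected
//   140 + n         -> current box number
//   120             -> start of a box: the next four bytes are the digits of corner 'a'
//   121 / 122 / 123 -> markers after corners 'a', 'b' and 'c'; each is followed by four more digit bytes
//   digit bytes     -> sent most significant digit first, reassembled as d*1000 + d*100 + d*10 + d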
int incomingByte = 0; //for incoming serial data
long y[4][4];
int a;
int b;
int c;
int d;
long x =0;
int i;
int j;
int numberOfBoxes;
int xMax;
void setup()
{
Serial.begin(115200); // opens serial port at 115200 bps
Serial.println("TEST ");
}
void loop()
{
if (Serial.available() > 0)
{
x = Serial.read(); // read the incoming byte:
/////////////////////////////////////////////////////////////////////////////////
if (x > 199)
{
  numberOfBoxes = x - 200;
}
if ((x > 139) && (x < 200))
{
  j = x - 140;
  Serial.print("Number of boxes: ");
  Serial.print(numberOfBoxes);
  Serial.print(", Box number: ");
  Serial.println(j);
}
if(x==120){ i =-1; }
if(i==0){ y[0][0] = x*1000; }
if(i==1){ y[0][1] = x*100; }
if(i==2){ y[0][2] = x*10; }
if(i==3){ y[0][3] = x;}
a= y[0][0]+y[0][1]+y[0][2]+y[0][3];
if(x==121){ i = 4; Serial.print(" corner a: ");Serial.println(a);}
if(i==5){ y[1][0] = x*1000; }
if(i==6){ y[1][1] = x*100; }
if(i==7){ y[1][2] = x*10; }
if(i==8){ y[1][3] = x; }
b = y[1][0]+y[1][1]+y[1][2]+y[1][3];
if(x==122){ i = 9; Serial.print(" corner b: ");Serial.println(b);}
if(i==10){ y[2][0] = x*1000; }
if(i==11){ y[2][1] = x*100; }
if(i==12){ y[2][2] = x*10; }
if(i==13){ y[2][3] = x; }
c= y[2][0]+y[2][1]+y[2][2]+y[2][3];
if(x==123){ i = 14; Serial.print(" corner c: ");Serial.println(c);}
if(i==15){ y[3][0] = x*1000; }
if(i==16){ y[3][1] = x*100; }
if(i==17){ y[3][2] = x*10; }
if(i==18){ y[3][3] = x; }
d= y[3][0]+y[3][1]+y[3][2]+y[3][3];
if(i==18){ Serial.print(" corner d: ");Serial.println(d);Serial.println("");}
i++;
}
}
After a few days of frantic code writing, I managed to cobble together a functional set of programs to send and receive the four coordinates of each box, the number of boxes detected simultaneously and the current box number …. all in a user-friendly format that can later be processed into commands to steer the WEEDINATOR machine.
After a few days' work, I finally managed to get data out of the Jetson TX2 through the I2C bus. I started off using a tutorial from JetsonHacks that runs a 4 digit LED display and then stripped out most of the code to keep only the few lines that transmit the data. It was a bit tricky to compile the code along with the main 'inference' program, which is called detectnet-camera.cpp. This basic code can only transmit one byte at a time, so an integer such as 463 cannot be sent whole as the upper limit of a single byte is 255 - we get something like 46 instead of 463. This is not an unsolvable problem as there is already I2C code within the WEEDINATOR software repository for doing this between the Arduino Mega and the TC275, so it should just be a case of re-purposing it for this new I2C task. It's also a chance for me to try and understand what Slash Dev wrote !!!!
Here are some excerpts from my 'basic' I2C code:
void OpenI2C()
{
    int length;
    unsigned char buffer[60] = {0};

    //----- OPEN THE I2C BUS -----
    char *filename = (char*)"/dev/i2c-1";

    if ((kI2CFileDescriptor = open(filename, O_RDWR)) < 0)
    {
        // ERROR HANDLING: you can check errno to see what went wrong
        printf("*************** Failed to open the i2c bus ******************\n");
        //return;
    }

    if (ioctl(kI2CFileDescriptor, I2C_SLAVE, PADDYADDRESS) < 0)
    {
        fprintf(stderr, "Failed to set slave address: %m\n");
        //return 2;
    }
}
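The obvious way round the one-byte limit is to do exactly what the TC275 test code above expects: split each coordinate into separate digit bytes and send them most significant digit first. Here's a minimal sketch of the idea (my own illustration, not part of the repository - it assumes the kI2CFileDescriptor opened in OpenI2C() above and sends one byte per I2C transaction so the intermediator forwards each one as it arrives):

#include <unistd.h>   // write()
#include <stdio.h>    // printf()

// Illustrative only: send a coordinate in the range 0 - 9999 as four separate
// digit bytes so the far end can rebuild it as d*1000 + d*100 + d*10 + d.
void sendCoordinate(int value)
{
    unsigned char digits[4];
    digits[0] = (value / 1000) % 10;
    digits[1] = (value / 100) % 10;
    digits[2] = (value / 10) % 10;
    digits[3] = value % 10;

    for (int k = 0; k < 4; k++)
    {
        if (write(kI2CFileDescriptor, &digits[k], 1) != 1)   // one byte per transaction
        {
            printf("Failed to write digit %d to the i2c bus\n", k);
        }
    }
}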
To detect different crops, a large set of photos needs to be taken with bounding boxes 'drawn' around the actual plants to help determine where they are in the camera frame. Since we don't actually have any newly planted crops at this time of year, I've used a ready-prepared set of dog photos as a practice run. What follows are accurate step-by-step instructions, assuming all the relevant software is already installed on the Jetson:
Prerequisites:
Jetson TX2 flashed with JetPack 3.3.
Caffe version: 0.15.14
DIGITS version: 6.1.1
Check that all the software is installed correctly using the pre-installed dog detection model that comes with JetPack, by running this in a terminal:
$ sudo ~/jetson_clocks.sh && cd jetson-inference/build/aarch64/bin && ./detectnet-camera coco-dog
It will take a few minutes to load up before the camera footage appears.
To start from scratch with a set of photos, first turn on the DIGITS server:
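The server is started from the DIGITS directory (these are the paths from my install notes further down - adjust to suit your own install):

export CAFFE_ROOT=/home/nvidia/caffe/
cd ~/digits
./digits-devserver

Then open the DIGITS page in a web browser and create the dataset and the model using the settings below: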
Training epochs = 16
Snapshot interval (in epochs) = 16
Validation interval (in epochs) = 16
Subtract Mean: none
Solver Type: Adam
Base learning rate: 2.5e-05
> Show advanced learning options
Policy: Exponential Decay
Gamma: 0.99
batch size = 2
batch accumulation = 5 (for training on Jetson TX2)
Specifying the DetectNet Prototxt:
> Custom Network > Caffe
The DetectNet prototxt is located at /home/nvidia/jetson-inference/data/networks/detectnet.prototxt in the repo.
> Pretrained Model = /home/nvidia/jetson-inference/data/networks/bvlc_googlenet.caffemodel
>Create
Location of epoch snapshots: /home/nvidia/digits/digits/jobs
You should see the model being created through a series of epochs. Make a note of the final epoch.
Navigate to /home/nvidia/digits/digits/jobs and open the latest job folder and check it has the 'snapshot_iter_*****.caffemodel' files in it. Make a note of the highest '*****' number then copy and paste the folder into here for deployment: /home/nvidia/jetson-inference/build/aarch64/bin.
Rename the folder to reflect the number of epochs that it passed, e.g. myDogModel_epoch_30.
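In a terminal, that copy / rename step looks something like this (the job folder name here is just a placeholder for whatever DIGITS created):

cp -r /home/nvidia/digits/digits/jobs/<latest-job-folder> /home/nvidia/jetson-inference/build/aarch64/bin/
cd /home/nvidia/jetson-inference/build/aarch64/bin
mv <latest-job-folder> myDogModel_epoch_30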
For the Jetson TX2, delete the layer named 'cluster' at the end of deploy.prototxt.
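If I remember the jetson-inference examples correctly, the custom model is then selected on the command line with flags something like these (flag names may differ between versions of the repo, and the snapshot number below is a placeholder):

$ cd ~/jetson-inference/build/aarch64/bin
$ ./detectnet-camera --prototxt=myDogModel_epoch_30/deploy.prototxt --model=myDogModel_epoch_30/snapshot_iter_*****.caffemodel --input_blob=data --output_cvg=coverage --output_bbox=bboxes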
Obviously, we're not going to be detecting dogs in the field, but there is no publicly available, ready-made inference model for detecting vegetable seedlings - yet.
A lot of AI models were trained on cats and dogs, so, not wanting to break with tradition, I thought it relevant to test the Jetson TX2 object recognition system on my dog. Actually, the correct term is 'inference', and searching the net for 'object recognition' is fairly useless.
Run this in the terminal (see video):
$ ./detectnet-camera coco-dog # detect dogs in the camera
Next thing to do is to try and get the bounding box coordinates exported into the real world via the I2C bus, then, sometime next year, train some models with plant images that represent what is actually grown here in the fields.
Building the image set for the vegetables is no easy task and requires thousands of photos to be taken in different lighting conditions. Previous experience with the Pixy2 camera showed that bright sunlight causes relatively dark, sharp shadows, which were a bit of a problem. With AI, we can incorporate photos with various shadow permutations to train the model, but we need to do some research to make sure that we do it properly.
I really thought that there could not be any more files to upload after the marathon 4 month JetPack install debacle ..... but, as might be expected, there were still many tens of thousands more to go. The interweb points to using a program called 'DIGITS' to get started 'quickly', yet this later turned out to mean a mere '2 days' of work !!!! Anyway, after following the instructions at https://github.com/NVIDIA/DIGITS/blob/master/docs/BuildDigits.md I eventually had some success. Not surprisingly, DIGITS needed a huge load of dependencies and I had to backtrack through each one, through 'dependencies of dependencies of dependencies' ....... A dire task for a relative Ubuntu beginner like myself.
Fortunately, I had just about enough experience to spot the mistakes in each instruction set - usually a missing 'sudo' or a failure to cd into the right directory. A total beginner would have absolutely no chance! For me, at least, deciphering the various error messages was extremely challenging. I made a note of most of the steps / problems, pasted at the end of this log, which will probably make very little sense to anyone as very often I had to backtrack to get dependencies installed properly, e.g. libprotobuf.so.12.
Anyway, here is my first adventure with AI:
Notes:
File "/usr/local/lib/python2.7/dist-packages/protobuf-3.2.0-py2.7-linux-aarch64.egg/google/protobuf/descriptor.py", line 46, in <module> from google.protobuf.pyext import _message ImportError: libprotobuf.so.12: cannot open shared object file: No such file or directory
$ git clone https://github.com/protocolbuffers/protobuf.git
$ cd protobuf
$ git submodule update --init --recursive
$ ./autogen.sh

To build and install the C++ Protocol Buffer runtime and the Protocol Buffer compiler (protoc), execute the following:

$ ./configure
$ make
$ make check
$ sudo make install
$ sudo ldconfig # refresh shared library cache
$ cd python
$ sudo python setup.py install --cpp_implementation
Download Source: DIGITS is currently compatible with Protobuf 3.2.x
# example location - can be customized
export PROTOBUF_ROOT=~/protobuf
cd $PROTOBUF_ROOT
git clone https://github.com/google/protobuf.git $PROTOBUF_ROOT -b '3.2.x'

Building Protobuf:

cd $PROTOBUF_ROOT
./autogen.sh
./configure
make "-j$(nproc)"
make install
ldconfig
cd python
sudo python setup.py install --cpp_implementation

This will ensure that Protobuf 3 is installed.
cd $CAFFE_ROOT
mkdir build
cd build
cmake ..
-------------- originally: Could NOT find Protobuf (missing: PROTOBUF_LIBRARY PROTOBUF_INCLUDE_DIR) - but now corrected
make -j"$(nproc)"
---------------- /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
I solved this problem by modifying CMakeLists.txt. Original:
Traceback (most recent call last):
  File "/usr/local/bin/pip", line 7, in <module>
    from pip._internal import main
ImportError: No module named _internal
sudo easy_install pip
sudo pip install -e $DIGITS_ROOT
Starting the server:
export CAFFE_ROOT=/home/mx/caffe/
cd digits
./digits-devserver
ValueError: Caffe executable not found in PATH ...........
export CAFFE_ROOT=/home/mx/caffe/
echo "export CAFFE_ROOT=/home/nvidia/caffe/" >> ~/.profile
source ~/.profile
echo $CAFFE_ROOT
ImportError: libprotobuf.so.12: cannot open shared object file: No such file or directory
To fix the problem, all you need to do is to remove the lock files. You can do that easily using the commands below:
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock

After that, reconfigure the packages:
sudo dpkg --configure -a
The command to remove an apt repository is apt-add-repository with the -r option which will remove instead of add the repository. So in your case, the full command would be:
About 4 months ago I bought the Jetson TX2 development board and tried to install the JetPack software to it …….. but after many hours of struggle, I got pretty much nowhere. Fortunately, the next release, JetPack 3.3, worked a lot better and I finally managed to get a working system up and running:
The installation uses two computers running Ubuntu and the tricks that I used are:
Make a fresh install of Ubuntu 16.04 (2018) on the host computer
Use the network settings panel to set up the USB interface, particularly the IPv4 settings. The documentation gives an address of 192.168.55.2, so enter this, then 255.255.255.0, then 255.255.255.0 again. When the install itself asks for the address, use 192.168.55.1.
There must be an internet connection !
Make sure the install knows which internet device to use, e.g. Wi-Fi / Bluetooth / whatever. A router switch is NOT required as the install will automatically switch between the internet and USB connection whenever it needs to, as long as it was told beforehand which connection to use.
The plan is to spend the colder winter months developing an object-based navigation system for the machine so that, for example, it can use the plants themselves to enhance the overall navigation accuracy. We'll still be using GNSS, electrical cables, barcodes etc. but will eventually give mathematical weighting to the techniques that prove to be more useful.