-
Re-visiting Code from 9 Months Ago
06/12/2019 at 10:12 • 0 comments

This entry is just a code snippet: it sorts each detected bounding-box centre into one of six regions ('sextants') of a 1920 x 1080 frame and records how far the centre deviates from the middle of its region:

//printf(" Box Area (%i) = %i \n",n,myBoxArea[n]);        // Print the area (size) of the box.
//printf(" Box Centre (x,%i) = %i \n",n,myBoxCentreX[n]); // Print the box centre (x) coordinate.
//printf(" Box Centre (y,%i) = %i \n",n,myBoxCentreY[n]); // Print the box centre (y) coordinate.

// Divide up into 'nonants' (the 9 version of quadrants): 1200/3 = 400, 720/3 = 240.
// Or divide into sextants, eg 1920/3 = 640:
if (( myBoxCentreX[n] <= 640 ) && ( myBoxCentreY[n] <= 540 ))
{
    nonant = 0;
    nonantDevianceX[0] = myBoxCentreX[n] - 320;
    nonantDevianceY[0] = myBoxCentreY[n] - 270;
}
if (( myBoxCentreX[n] >= 641 ) && ( myBoxCentreX[n] <= 1280 ) && ( myBoxCentreY[n] <= 540 ))
{
    nonant = 1;
    nonantDevianceX[1] = myBoxCentreX[n] - 960;
    nonantDevianceY[1] = myBoxCentreY[n] - 270;
}
if (( myBoxCentreX[n] >= 1281 ) && ( myBoxCentreY[n] <= 540 ))
{
    nonant = 2;
    nonantDevianceX[2] = myBoxCentreX[n] - 1600;
    nonantDevianceY[2] = myBoxCentreY[n] - 270;
}
if (( myBoxCentreX[n] <= 640 ) && ( myBoxCentreY[n] >= 541 ))
{
    nonant = 3;
    nonantDevianceX[3] = myBoxCentreX[n] - 320;
    nonantDevianceY[3] = myBoxCentreY[n] - 810;
}
if (( myBoxCentreX[n] >= 641 ) && ( myBoxCentreX[n] <= 1280 ) && ( myBoxCentreY[n] >= 541 ))
{
    nonant = 4;
    nonantDevianceX[4] = myBoxCentreX[n] - 960;
    nonantDevianceY[4] = myBoxCentreY[n] - 810;
}
if (( myBoxCentreX[n] >= 1281 ) && ( myBoxCentreY[n] >= 541 ))
{
    nonant = 5;
    nonantDevianceX[5] = myBoxCentreX[n] - 1600;
    nonantDevianceY[5] = myBoxCentreY[n] - 810;
}
//printf(" Nonant (%i) = %i \n",n,nonant);
//printf("..................................................... \n");
//I2CDataHandler();
}  // end of the loop over detected boxes
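Looking at it again, the six if-blocks can be collapsed into a little integer arithmetic. A minimal sketch of the same mapping (my own rewrite, not what's running on the machine), using the same 640 x 540 cells and the same boundary conventions as above:

// Same 3 x 2 grid over a 1920 x 1080 frame, done with integer division.
// (x - 1) / 640 reproduces the boundaries above: x <= 640 -> column 0,
// 641..1280 -> column 1, 1281+ -> column 2 (and likewise for rows).
int col = (myBoxCentreX[n] - 1) / 640;   // 0, 1 or 2
int row = (myBoxCentreY[n] - 1) / 540;   // 0 or 1
nonant = row * 3 + col;                  // 0..5, same numbering as above
nonantDevianceX[nonant] = myBoxCentreX[n] - (col * 640 + 320);  // offset from cell centre
nonantDevianceY[nonant] = myBoxCentreY[n] - (row * 540 + 270);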
-
Major Control Board Upgrade
06/05/2019 at 17:07 • 0 comments

Installed! Only one bug was apparent: a bad earth connection on the LHS steering controller. Worked pretty much out of the box.
-
Time to Shrink the Jetson
03/06/2019 at 09:35 • 0 comments
-
Getting Detection Data from Jetson TX2 to TC275
10/25/2018 at 15:23 • 0 comments

Some workarounds are better than others, and ideally I'd just send the bounding box data from the Jetson TX2 directly to the TC275, which controls the WEEDINATOR motors. However, there are a couple of critical constraints: both the Jetson and the TC275 will only work in 'Master' mode, so they will not communicate with each other through the I2C bus in any shape or form!
The first workaround I researched was using an Arduino as an intermediator on the I2C bus, acting as a slave to both the Jetson and the TC275, and this might just have worked if I'd included an auxiliary RTC clock and a digital 'tie line'. I spent a few days researching it and eventually realised that, as workarounds go, this was a very poor one: lots of coding and wiring work, and still the possibility, however unlikely, that the two masters would try to access the I2C bus at the same time and lock the whole thing up.
After a bit more head scratching, the solution became clearer: use I2C to receive data into the intermediator, then hardware serial to send it out again! This proved to be by far the simplest option, and I managed to simulate the whole thing on my living room dining table:
The code, as always, is on GitHub HERE.
Intermediator: (NB: there's no 'Serial.print' here, as it would slow things down excessively.)
#include <Wire.h>

void setup()
{
  Wire.begin(0x70);              // join I2C bus with address 0x70
  Wire.onReceive(receiveEvent);  // register event
  Serial.begin(115200);          // start serial for output
}

void loop()
{
  delay(100);                    // Must have delay here.
}

void receiveEvent(int howMany)
{
  int x = Wire.read();           // receive byte as an integer
  Serial.write(x);               // send a byte
}
TC275 (simulated):
int incomingByte = 0;   // for incoming serial data
long y[4][4];           // one row of digits per box corner
int a; int b; int c; int d;
long x = 0;
int i; int j;
int numberOfBoxes;
int xMax;

void setup()
{
  Serial.begin(115200);  // opens serial port, sets data rate to 115200 bps
  Serial.println("TEST ");
}

void loop()
{
  if (Serial.available() > 0)
  {
    x = Serial.read();   // read the incoming byte

    if (x > 199) { numberOfBoxes = x - 200; }   // 200+ = total number of boxes
    if ((x > 139) && (x < 200))                 // 140+ = current box number
    {
      j = x - 140;
      Serial.print("Number of boxes: "); Serial.print(numberOfBoxes);
      Serial.print(", Box number: ");    Serial.println(j);
    }

    if (x == 120) { i = -1; }                   // 120..123 = corner markers
    if (i == 0) { y[0][0] = x * 1000; }
    if (i == 1) { y[0][1] = x * 100;  }
    if (i == 2) { y[0][2] = x * 10;   }
    if (i == 3) { y[0][3] = x;        }
    a = y[0][0] + y[0][1] + y[0][2] + y[0][3];
    if (x == 121) { i = 4; Serial.print(" corner a: "); Serial.println(a); }

    if (i == 5) { y[1][0] = x * 1000; }
    if (i == 6) { y[1][1] = x * 100;  }
    if (i == 7) { y[1][2] = x * 10;   }
    if (i == 8) { y[1][3] = x;        }
    b = y[1][0] + y[1][1] + y[1][2] + y[1][3];
    if (x == 122) { i = 9; Serial.print(" corner b: "); Serial.println(b); }

    if (i == 10) { y[2][0] = x * 1000; }
    if (i == 11) { y[2][1] = x * 100;  }
    if (i == 12) { y[2][2] = x * 10;   }
    if (i == 13) { y[2][3] = x;        }
    c = y[2][0] + y[2][1] + y[2][2] + y[2][3];
    if (x == 123) { i = 14; Serial.print(" corner c: "); Serial.println(c); }

    if (i == 15) { y[3][0] = x * 1000; }
    if (i == 16) { y[3][1] = x * 100;  }
    if (i == 17) { y[3][2] = x * 10;   }
    if (i == 18) { y[3][3] = x;        }
    d = y[3][0] + y[3][1] + y[3][2] + y[3][3];
    if (i == 18) { Serial.print(" corner d: "); Serial.println(d); Serial.println(""); }

    i++;
  }
}
-
Dog Detector Transmitting All Data to Arduino
10/23/2018 at 17:01 • 0 comments

After a few days of frantic code writing, I managed to cobble together a functional set of programs to send and receive the four coordinates of each box, the number of boxes detected simultaneously, and the current box number, all in a user-friendly format that can later be processed into commands to steer the WEEDINATOR machine.
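To make the format concrete, here is the byte stream for one hypothetical box, worked out from the transmit code below (the numbers are my own illustration, not captured output):

// Hypothetical stream: 2 boxes on screen, this is box number 1, corner a = 463
// 202          -> 200 + numberOfBoxes (2)
// 141          -> 140 + boxNumber (1)
// 120          -> marker: corner 'a' digits follow
// 0, 4, 6, 3   -> 463 split into thousands, hundreds, tens and units
// 121          -> marker: corner 'b' digits follow, and so on through 122 and 123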
Here's the code used on the Jetson TX2:
int i2cwrite(int writeValue)
{
  int toReturn = i2c_smbus_write_byte(kI2CFileDescriptor, writeValue);
  if (toReturn < 0)
  {
    printf(" ************ Write error ************* \n");
    toReturn = -1;
  }
  return toReturn;
}

void OpenI2C()
{
  int length;
  unsigned char buffer[60] = {0};

  //----- OPEN THE I2C BUS -----
  char *filename = (char*)"/dev/i2c-1";
  if ((kI2CFileDescriptor = open(filename, O_RDWR)) < 0)
  {
    // ERROR HANDLING: you can check errno to see what went wrong
    printf("*************** Failed to open the i2c bus ******************\n");
    //return;
  }
  if (ioctl(kI2CFileDescriptor, I2C_SLAVE, PADDYADDRESS) < 0)
  {
    fprintf(stderr, "Failed to set slave address: %m\n");
    //return 2;
  }
}

void I2CDataHandler()
{
  printf(" My box number = %i \n", myBoxNumber);
  for (int j = 0; j < 4; j++)
  {
    if (j == 0) { i2cwrite(200 + myNumberOfBoxes); }  // Total number of bounding boxes.
    if (j == 0) { i2cwrite(140 + myBoxNumber); }      // Designates bounding box number.
    i2cwrite(120 + j);                                // Designates box corner number.
    printf(" intBB[j] = %i \n", intBB[j]);
    top = intBB[j];
    myArray[j][0] = static_cast<int>(top / 1000);     // thousands digit
    printf(" myArray[j][0] = %i \n", myArray[j][0]);
    i2cwrite(myArray[j][0]);
    top = top - myArray[j][0] * 1000;
    myArray[j][1] = static_cast<int>(top / 100);      // hundreds digit
    printf(" myArray[j][1] = %i \n", myArray[j][1]);
    i2cwrite(myArray[j][1]);
    top = top - myArray[j][1] * 100;
    myArray[j][2] = static_cast<int>(top / 10);       // tens digit
    printf(" myArray[j][2] = %i \n", myArray[j][2]);
    i2cwrite(myArray[j][2]);
    top = top - myArray[j][2] * 10;
    myArray[j][3] = static_cast<int>(top);            // units digit
    printf(" myArray[j][3] = %i \n", myArray[j][3]);
    i2cwrite(myArray[j][3]);
  }
}
And the code for receiving the data on an Arduino:
#include <Wire.h>

long y[4][4];
int a; int b; int c; int d;
long x = 0;
int i; int j;
int numberOfBoxes;
int xMax;

void setup()
{
  Wire.begin(0x70);               // join I2C bus with address 0x70
  Wire.onReceive(receiveEvent);   // register event
  //Wire.begin(0x50);             // join i2c bus with address
  //Wire.onReceive(receiveEvent); // register event
  Serial.begin(9600);             // start serial for output
}

void loop()
{
  delay(100);
}

// function that executes whenever data is received from master;
// this function is registered as an event, see setup()
void receiveEvent(int howMany)
{
  //delay(50);
  int x = Wire.read();  // receive byte as an integer
  //Serial.print(" Integer: "); Serial.println(x);  // print the integer

  if (x > 199) { numberOfBoxes = x - 200; }
  if ((x > 139) && (x < 200))
  {
    j = x - 140;
    Serial.print("Number of boxes: "); Serial.print(numberOfBoxes);
    Serial.print(", Box number: ");    Serial.println(j);
  }

  if (x == 120) { i = -1; }
  if (i == 0) { y[0][0] = x * 1000; }
  if (i == 1) { y[0][1] = x * 100;  }
  if (i == 2) { y[0][2] = x * 10;   }
  if (i == 3) { y[0][3] = x;        }
  a = y[0][0] + y[0][1] + y[0][2] + y[0][3];
  if (x == 121) { i = 4; Serial.print(" corner a: "); Serial.println(a); }

  if (i == 5) { y[1][0] = x * 1000; }
  if (i == 6) { y[1][1] = x * 100;  }
  if (i == 7) { y[1][2] = x * 10;   }
  if (i == 8) { y[1][3] = x;        }
  b = y[1][0] + y[1][1] + y[1][2] + y[1][3];
  if (x == 122) { i = 9; Serial.print(" corner b: "); Serial.println(b); }

  if (i == 10) { y[2][0] = x * 1000; }
  if (i == 11) { y[2][1] = x * 100;  }
  if (i == 12) { y[2][2] = x * 10;   }
  if (i == 13) { y[2][3] = x;        }
  c = y[2][0] + y[2][1] + y[2][2] + y[2][3];
  if (x == 123) { i = 14; Serial.print(" corner c: "); Serial.println(c); }

  if (i == 15) { y[3][0] = x * 1000; }
  if (i == 16) { y[3][1] = x * 100;  }
  if (i == 17) { y[3][2] = x * 10;   }
  if (i == 18) { y[3][3] = x;        }
  d = y[3][0] + y[3][1] + y[3][2] + y[3][3];
  if (i == 18) { Serial.print(" corner d: "); Serial.println(d); Serial.println(""); }

  i++;
}
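The decode logic is easier to follow if you read 'i' as a state counter that steps through "which digit of which corner comes next". The same reassembly written as a compact sketch (my own rewrite, not the production code, assuming the marker-then-four-digits protocol described above; handleByte is a hypothetical helper):

long corners[4];                  // a, b, c, d
int corner = -1;                  // no corner active until a marker arrives
int digitCount = 4;

void handleByte(int x)
{
  if (x >= 120 && x <= 123)       // corner marker: 120 = a ... 123 = d
  {
    corner = x - 120;
    corners[corner] = 0;
    digitCount = 0;
  }
  else if (corner >= 0 && digitCount < 4)   // next decimal digit (0..9)
  {
    corners[corner] = corners[corner] * 10 + x;  // accumulate, eg 0, 4, 46, 463
    digitCount++;
  }
  // bytes of 140+ and 200+ (box number / box count) are handled separately
}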
All files are on GitHub HERE.
-
Getting bounding box coordinates transmitted to Arduino over I2C
10/17/2018 at 16:14 • 0 comments

After a few days' work, I finally managed to get data out of the Jetson TX2 through the I2C bus. I started off with a tutorial from JetsonHacks that drives a 4-digit LED display, then stripped out most of the code, keeping only the few lines that transmit the data. It was a bit tricky to compile the code along with the main 'inference' program, which is called detectnet-camera.cpp. This basic code can only transmit one byte at a time, so an integer such as 463 cannot be sent directly: a single byte tops out at 255, and we get something like 46 instead of 463. This is not an unsolvable problem, as the WEEDINATOR software repository already contains I2C code that does this between the Arduino Mega and the TC275, so it should just be a case of repurposing it for this new I2C task. It's also a chance for me to try and understand what Slash Dev wrote!
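The fix, which the later logs use, is to split each integer into decimal digits and send those one byte at a time. A minimal sketch of the idea, assuming coordinate values below 10,000 (sendAsDigits is a hypothetical helper of mine, built on the i2cwrite() excerpt below):

// 463 goes out as the four bytes 0, 4, 6, 3 and is rebuilt on the far side.
void sendAsDigits(int value)
{
  i2cwrite( value / 1000);        // thousands
  i2cwrite((value / 100) % 10);   // hundreds
  i2cwrite((value / 10)  % 10);   // tens
  i2cwrite( value        % 10);   // units
}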
Here are some excerpts from my 'basic' I2C code:
void OpenI2C()
{
  int length;
  unsigned char buffer[60] = {0};

  //----- OPEN THE I2C BUS -----
  char *filename = (char*)"/dev/i2c-1";
  if ((kI2CFileDescriptor = open(filename, O_RDWR)) < 0)
  {
    // ERROR HANDLING: you can check errno to see what went wrong
    printf("*************** Failed to open the i2c bus ******************\n");
    //return;
  }
  if (ioctl(kI2CFileDescriptor, I2C_SLAVE, PADDYADDRESS) < 0)
  {
    fprintf(stderr, "Failed to set slave address: %m\n");
    //return 2;
  }
}
int i2cwrite(int writeValue)
{
  int toReturn = i2c_smbus_write_byte(kI2CFileDescriptor, writeValue);
  if (toReturn < 0)
  {
    printf(" ************ Write error ************* \n");
    toReturn = -1;
  }
  return toReturn;
}
writeValue = static_cast<int>(bb[0]);
printf(" writeValueZero = %i \n", writeValue);
i2cwrite(writeValue);
writeValue = static_cast<int>(bb[1]);
printf(" writeValueOne = %i \n", writeValue);
i2cwrite(writeValue);
writeValue = static_cast<int>(bb[2]);
printf(" writeValueTwo = %i \n", writeValue);
i2cwrite(writeValue);
writeValue = static_cast<int>(bb[3]);
printf(" writeValueThree = %i \n", writeValue);
i2cwrite(writeValue);
The full code is on GitHub.
-
Step by Step Instructions for Turning Sets of Images into a Model for Object Detection on the Jetson TX2
10/14/2018 at 09:58 • 0 comments

To detect different crops, a large set of photos needs to be taken, with bounding boxes 'drawn' around each actual plant to help determine where it is in the camera frame. Since we don't actually have any newly planted crops at this time of year, I've used a ready-prepared set of dog photos as a practice run. These are accurate step-by-step instructions, and the text assumes all the relevant software is already installed on the Jetson.
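For reference, the label files that pair with the training images use the KITTI text format: one line per object, with the class name, four bounding-box coordinates (left, top, right, bottom, in pixels), and the remaining fields zeroed. A made-up example line for one dog box (the coordinates are invented for illustration):

// dog.txt - one object per line, KITTI field order
dog 0.0 0 0.0 100.0 120.0 400.0 500.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0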
Prerequisites:
Jetson TX2 flashed with JetPack 3.3.
Caffe version: 0.15.14
DIGITS version: 6.1.1
Check that all software is installed correctly by using the pre-installed dog-detection model that comes with JetPack, running this in a terminal:
$ sudo ~/jetson_clocks.sh && cd jetson-inference/build/aarch64/bin && ./detectnet-camera coco-dog
It will take a few minutes to load up before the camera footage appears.
To start from scratch with a set of photos, first turn on the DIGITS server:
$ sudo ~/jetson_clocks.sh && cd digits && export CAFFE_ROOT=/home/nvidia/caffe && ./digits-devserver
Now we're going to build the model using actual images of dogs with their associated text files:
In a browser, navigate to http://localhost:5000/
Importing the Detection Dataset into DIGITS:
> Datasets > Images > Object Detection
Training image folder: /media/nvidia/2037-F6FA/coco/train/images/dog
Training label folder: /media/nvidia/2037-F6FA/coco/train/labels/dog
Validation image folder: /media/nvidia/2037-F6FA/coco/val/images/dog
Validation label folder: /media/nvidia/2037-F6FA/coco/val/labels/dog
Pad image (Width x Height): 640 x 640
Custom classes: dontcare, dog
Group Name: MS-COCO
Dataset Name: coco-dog
> Create > Home > Models > Images > Object Detection
> Select Dataset: coco-dog
Training epochs = 16
Snapshot interval (in epochs) = 16
Validation interval (in epochs) = 16
Subtract Mean: none
Solver Type: Adam
Base learning rate: 2.5e-05
> Show advanced learning options
Policy: Exponential Decay
Gamma: 0.99
batch size = 2
batch accumulation = 5 (for training on Jetson TX2)
Specifying the DetectNet Prototxt:
> Custom Network > Caffe
The DetectNet prototxt is located at /home/nvidia/jetson-inference/data/networks/detectnet.prototxt in the repo.
> Pretrained Model = /home/nvidia/jetson-inference/data/networks/bvlc_googlenet.caffemodel
> Create
Location of epoch snapshots: /home/nvidia/digits/digits/jobs
You should see the model being created through a series of epochs. Make a note of the final epoch.
Navigate to /home/nvidia/digits/digits/jobs, open the latest job folder and check that it contains the 'snapshot_iter_*****.caffemodel' files. Make a note of the highest '*****' number, then copy the folder into /home/nvidia/jetson-inference/build/aarch64/bin for deployment.
Rename the folder to reflect the number of epochs that it passed, eg myDogModel_epoch_30.
For Jetson TX2, at the end of deploy.prototxt, delete the layer named cluster:
layer {
  name: "cluster"
  type: "Python"
  bottom: "coverage"
  bottom: "bboxes"
  top: "bbox-list"
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
    param_str: "640, 640, 16, 0.6, 2, 0.02, 22, 1"
  }
}
Open a terminal and run the following, changing the '*****' number accordingly:
$ cd jetson-inference/build/aarch64/bin && NET=myDogModel_epoch_30 && ./detectnet-camera \
    --prototxt=$NET/deploy.prototxt \
    --model=$NET/snapshot_iter_*****.caffemodel
Hit return twice and you'll see various messages, including:
[TRT] attempting to open cache file dogPoo_epoch_8/snapshot_iter_3088.caffemodel.2.tensorcache
[TRT] cache file not found, profiling network model
This is not an error!
If you've got:
[TRT] building CUDA engine
Then all is good - just wait a few minutes for it to complete and then the camera should activate.
Now find / borrow a dog and test for bounding boxes!
-
Dog Detector
10/13/2018 at 08:49 • 0 comments

Obviously, we're not going to be detecting dogs in the field, but there isn't a publicly available, ready-made inference model for detecting vegetable seedlings - yet.
A lot of AI models were trained on cats and dogs, so, not wanting to break with tradition, I thought it relevant to test the Jetson TX2 object recognition system on my dog. Actually, the correct term is 'inference', and searching the net for 'object recognition' is fairly useless.
The demo used is found on the Nvidia GitHub page: https://github.com/dusty-nv/jetson-inference - the best thing to do is scroll about three-quarters of the way down and run this:
$ cd jetson-inference/build/aarch64/bin
$ ./detectnet-camera coco-dog # detect dogs in the camera
in the terminal (see video):
Next thing to do is to try and get the bounding box coordinates exported into the real world via the I2C bus, then, sometime next year, train some models with plant images that represent what is actually grown here in the fields.
Building the image set for the vegetables is no easy task and requires thousands of photos taken in different lighting conditions. Previous experience with the Pixy2 camera showed that bright sunlight casts relatively dark, sharp shadows, which were a bit of a problem. With AI, we can incorporate photos with various shadow permutations to train the model, but we need to do some research to make sure we do it properly.
-
First Steps With AI on Jetson TX2
10/13/2018 at 08:40 • 0 comments

I really thought that there could not be any more files to upload after the marathon four-month JetPack install debacle. But, as might be expected, there were still many tens of thousands to go. The interweb points to using a program called 'DIGITS' to get started 'quickly', yet this was later defined to be a mere '2 days' work! Anyway, after following the instructions at https://github.com/NVIDIA/DIGITS/blob/master/docs/BuildDigits.md I eventually had some success. Not surprisingly, DIGITS needed a huge load of dependencies, and I had to backtrack through each one, through 'dependencies of dependencies of dependencies'. A dire task for a relative Ubuntu beginner like myself.
Fortunately, I had just about enough experience to spot the mistakes in each instruction set - usually a missing 'sudo' or a failure to cd into the right directory. A total beginner would have absolutely no chance! For me, at least, deciphering the various error messages was extremely challenging. I made a note of most of the steps and problems, pasted at the end of this log. They will probably make very little sense to anyone, as very often I had to backtrack to get dependencies installed properly, eg libprotobuf.so.12.
Anyway, here is my first adventure with AI - recognising a '0' from the MNIST handwritten digits dataset:
File "/usr/local/lib/python2.7/dist-packages/protobuf-3.2.0-py2.7-linux-aarch64.egg/google/protobuf/descriptor.py", line 46, in <module>
from google.protobuf.pyext import _message
ImportError: libprotobuf.so.12: cannot open shared object file: No such file or directory

Procedure:
# For Ubuntu 16.04
CUDA_REPO_PKG=http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb

# Install repo packages
wget "$CUDA_REPO_PKG" -O /tmp/cuda-repo.deb && sudo dpkg -i /tmp/cuda-repo.deb && rm -f /tmp/cuda-repo.deb
wget "$ML_REPO_PKG" -O /tmp/ml-repo.deb && sudo dpkg -i /tmp/ml-repo.deb && rm -f /tmp/ml-repo.deb
# Download new list of packages
sudo apt-get update
sudo apt-get install --no-install-recommends git graphviz python-dev python-flask python-flaskext.wtf python-gevent python-h5py python-numpy python-pil python-pip python-scipy python-tk
------------------DONE------------------------------
sudo apt-get install autoconf automake libtool curl make g++ git python-dev python-setuptools unzip
------------------DONE------------------------------
$ git clone https://github.com/protocolbuffers/protobuf.git
$ cd protobuf
$ git submodule update --init --recursive
$ ./autogen.sh
To build and install the C++ Protocol Buffer runtime and the Protocol Buffer compiler (protoc), execute the following:

$ ./configure
$ make
$ make check
$ sudo make install
$ sudo ldconfig # refresh shared library cache.
cd python
sudo python setup.py install --cpp_implementation

Download Source
DIGITS is currently compatible with Protobuf 3.2.x.

# example location - can be customized
export PROTOBUF_ROOT=~/protobuf
cd $PROTOBUF_ROOT
git clone https://github.com/google/protobuf.git $PROTOBUF_ROOT -b '3.2.x'
Building Protobuf
cd $PROTOBUF_ROOT
./autogen.sh
./configure
make "-j$(nproc)"
make install
ldconfig
cd python
sudo python setup.py install --cpp_implementation
This will ensure that Protobuf 3 is installed.

------------------ DONE -------------------------
sudo apt-get install --no-install-recommends build-essential cmake git gfortran libatlas-base-dev libboost-filesystem-dev libboost-python-dev
----------- DONE -----------------------------------

sudo apt-get install libboost-system-dev libboost-thread-dev libgflags-dev libgoogle-glog-dev libhdf5-serial-dev libleveldb-dev liblmdb-dev libopencv-dev libsnappy-dev python-all-dev python-dev python-h5py python-matplotlib python-numpy python-opencv python-pil python-pip python-pydot python-scipy python-skimage python-sklearn
----------------- DONE -------------------------------
export CAFFE_ROOT=~/caffe
git clone https://github.com/NVIDIA/caffe.git $CAFFE_ROOT -b 'caffe-0.15'

------------- Done ------------------
sudo pip install -r $CAFFE_ROOT/python/requirements.txt
----------------- DONE with errors -------------------
????????????? pip install --upgrade pip
apt-get install protobuf-compiler
cmake -DBLAS=open
cd $CAFFE_ROOT
mkdir build
cd build
cmake .. -------------- originally Could NOT find Protobuf (missing: PROTOBUF_LIBRARY PROTOBUF_INCLUDE_DIR) but now corrected
make -j"$(nproc)" ---------------- /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line

I solved this problem by modifying CMakeLists.txt.
Original:

# ---[ Flags
if(UNIX OR APPLE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -Wall")
endif()

Modified:
# ---[ Flags
if(UNIX OR APPLE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -Wall -std=c++11")
endif()

make install
DIGITS_ROOT=~/digits
git clone https://github.com/NVIDIA/DIGITS.git $DIGITS_ROOT
sudo pip install -r $DIGITS_ROOT/requirements.txt
Traceback (most recent call last):
File "/usr/local/bin/pip", line 7, in <module>
from pip._internal import main
ImportError: No module named _internal

sudo easy_install pip
sudo pip install -e $DIGITS_ROOT
Starting the server:
export CAFFE_ROOT=/home/mx/caffe/
cd digits
./digits-devserver

ValueError: Caffe executable not found in PATH ........ fixed with:
export CAFFE_ROOT=/home/mx/caffe/
echo "export CAFFE_ROOT=/home/nvidia/caffe/" >> ~/.profile
source ~/.profile
echo $CAFFE_ROOT

ImportError: libprotobuf.so.12: cannot open shared object file: No such file or directory
Starts a server at http://localhost:5000/.
python -m digits.download_data mnist ~/mnist
Other notes:
To fix the problem, all you need to do is to remove the lock files. You can do that easily using the commands below:
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock
After that, reconfigure the packages:

sudo dpkg --configure -a
The command to remove an apt repository is apt-add-repository with the -r option which will remove instead of add the repository. So in your case, the full command would be:
sudo add-apt-repository -r ppa:colingille/freshlight
-
AI Object-Based Navigation Takes One Step Forwards
10/13/2018 at 08:37 • 0 comments

About 4 months ago I bought the Jetson TX2 development board and tried to install the JetPack software on it, but after many hours of struggle I got pretty much nowhere. Fortunately, the next release, JetPack 3.3, worked a lot better, and I finally managed to get a working system up and running:
- Make a fresh install of Ubuntu 16.04 (2018) on the host computer
- Use the network settings panel to set up the USB interface, particularly the IPv4 settings. The documentation gives an address of 192.168.55.2, so enter this, then 255.255.255.0, then 255.255.255.0 again. When the install itself asks for the address, use 192.168.55.1.
- There must be an internet connection !
- Make sure the install knows which internet device to use, eg Wi-Fi/Bluetooth/whatever. A router switch is NOT required, as the install will automatically switch between the internet and the USB connection whenever it needs to, as long as it was told beforehand which connection to use.
The plan is to spend the colder winter months developing an object-based navigation system for the machine so that, for example, it can use the plants themselves to enhance overall navigation accuracy. We'll still be using GNSS, electrical cables, barcodes etc., but will eventually give mathematical weighting to the techniques that prove most useful.