An extension to the WEEDINATOR project (https://hackaday.io/project/53896-weedinator-2018), this system uses an Nvidia Jetson TX2 / Xavier to detect the locations of individual plants that have previously been accurately planted in a grid, reconstruct that grid in software, and use it for orientation and navigation of the robot.

Previously, navigation was attempted by means of GPS, coloured ropes, and wires carrying high-frequency AC current, but none of these proved effective due to poor accuracy and impracticality. 'Models' can be trained to recognise the individual plants using so-called 'neural networks', and previous tests suggest that results will be very good, as the background will generally be uniform, clean soil and maybe a few stones.
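To give a flavour of how the reconstructed grid could be used for navigation, here is a minimal sketch (my illustration, not the project's actual code): it assumes plant detections have already been projected onto the ground plane, and estimates the robot's sideways offset from the nearest row of the planting grid.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Ground-plane position of a detected plant, in inches.
struct Point { float x; float y; };

// Spacing between plants in the grid (11" matches the robot's weeding pattern).
const float GRID_SPACING = 11.0f;

// Average each detection's deviation from the nearest grid line to get the
// robot's lateral offset from the ideal path.
float estimateLateralOffset(const std::vector<Point>& plants)
{
    if (plants.empty())
        return 0.0f;

    float sum = 0.0f;
    for (const Point& p : plants)
    {
        // Fold the x coordinate into (-spacing/2, +spacing/2] so it
        // becomes a signed distance to the nearest grid line.
        float r = std::fmod(p.x, GRID_SPACING);
        if (r >   GRID_SPACING / 2.0f) r -= GRID_SPACING;
        if (r <= -GRID_SPACING / 2.0f) r += GRID_SPACING;
        sum += r;
    }
    return sum / plants.size();
}

int main()
{
    // Three plants detected slightly to the left of their grid lines.
    std::vector<Point> detections = { {10.2f, 0.0f}, {21.5f, 11.0f}, {32.8f, 22.0f} };
    printf("steering correction: %.2f inches\n", estimateLateralOffset(detections));
    return 0;
}
```

A real implementation would also estimate heading from the row direction, but the same fold-by-the-spacing idea applies.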
It's only taken me 2 months to work out how to get the camera working without buying a 4K monitor, mostly thanks to a reply on the Nvidia community forum, which is pretty fantastic.
Place the above snippet before the CUDA(cudaNormalizeRGBA()) call in the draw section at the bottom of the main loop.
In the section near the top where the code creates the display and texture, either set the texture size to a custom value or divide it by whatever factor brings it within the size of your display. I divided the camera size by 2 for my needs, as in the sketch below.
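For reference, this is roughly what the change looks like in jetson-inference's detectnet-camera.cpp (glTexture and gstCamera are that repo's classes; the division by 2 is my modification, not stock code):

```cpp
// Create the openGL texture at half the camera resolution so a 4K
// camera feed fits on an ordinary HD monitor.
texture = glTexture::Create( camera->GetWidth() / 2,
                             camera->GetHeight() / 2,
                             GL_RGBA32F_ARB /*GL_RGBA8*/ );

if( !texture )
	printf("detectnet-camera:  failed to create openGL texture\n");
```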
The camera frame now needs to be split into a grid of six cells at the new resolution, with calculations made to account for perspective each time the camera moves to a new position. A rough sketch of the split follows.
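As an illustration only (a 2 x 3 layout is assumed here; this is not the project's actual code), splitting the frame into six regions of interest might look like this, with each cell then getting its own perspective correction:

```cpp
#include <array>
#include <cstdio>

// A rectangular region of interest within the camera frame, in pixels.
struct ROI { int x, y, w, h; };

// Divide the frame into a 2-row x 3-column grid of equal cells.
std::array<ROI, 6> makeGrid(int frameW, int frameH)
{
    const int rows = 2, cols = 3;
    const int cw = frameW / cols;
    const int ch = frameH / rows;

    std::array<ROI, 6> cells{};
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            cells[r * cols + c] = { c * cw, r * ch, cw, ch };
    return cells;
}

int main()
{
    for (const ROI& roi : makeGrid(1920, 1080))
        printf("cell at (%d,%d), size %dx%d\n", roi.x, roi.y, roi.w, roi.h);
    return 0;
}
```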
I spent a bit of time taking about 1,000 photos of some yellow plastic discs I had lying around, using them to simulate grids of plants in the workshop rather than out in a field.
This proved to be a great investment and has made testing the machine much easier.
After adding about 1,000 'labels' as described in the previous log, rather surprisingly, the detection now works very well in bright sunlight with strong shadows:
It's all about the number of labels, not the number of images. A proportion of the images should be close-up and high resolution, but quite possibly a large number can be lower resolution, so I decided to include photographs of the seedlings in groups of 9, as below:
On a relatively small dataset of just 2,064 images, we're already getting good results detecting swede plants. The boxes are not yet tight on the crops, which can probably be cured by adding a load of null images of bare soil. Shadows are also a problem, and additional images with shadows will probably be added to counter that.
350 swedelings have been planted. The weather is dry and hot. Each plant is exactly 11" apart to match the weeding pattern of the robot. A giant wooden set square and carefully placed string lines are used for positioning.
From experience using computer vision last year, the cameras got very confused by bits of dry vegetable matter lying on the surface of the soil, particularly long thin bits of 'straw'. The previous log shows a very scrappy plot, mainly because this straw had been turned over near the surface rather than buried. A pass with the plough turns the soil over to a depth of about 8" and should help bury the rubbish. The test plot is now left to dry out, and any remaining weeds will get blasted by the strong sunlight we're getting at the moment: