-
Log 10: Control Methods
07/01/2017 at 12:45 • 0 comments
The following steps are executed by this robotic system to achieve object detection and following. First, the object is extracted using an image processing method. Second, errors such as the heading angle error and the distance error between the detected object and the robot are calculated. Third, controllers are designed to minimize these errors.
A. Image Processing Method
In this method, a color-based object detection algorithm is developed for the Kinect camera sensor. The AForge.NET C# framework is used, which provides a useful set of image processing filters and tools for developing image processing algorithms [10]. The method is executed as follows.
1) With the help of the Kinect camera, both color (RGB) and depth information are collected. In this algorithm, the first step is to detect the specified colored object and then get its position and dimensions. Finally, the object is located in the image.
2) The simplest object detection is achieved by performing color filtering on the Kinect RGB images. The color filtering process keeps the pixels inside (or outside) a specified RGB color range and fills the rest with a specified color (black is used in this paper). In this process, only the object of the color of interest is kept and everything else is removed.
3) Now, the next step is to find the coordinates of the colored object of interest. This is done using the 'Blob counter' tool, which counts and extracts stand-alone objects in images using a connected-components algorithm [10]. The connected-components algorithm treats all pixels with values less than or equal to a 'background threshold' as background, while pixels with higher values are treated as object pixels. However, the 'Blob counter' tool works only with grayscale images, so grayscaling is applied to the images before using this tool.
4) The last step is to locate the detected object in the image. Once the object is located, we get the coordinates (X, Y) of the center of the object.
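In the project these steps are done with AForge.NET's color filter and Blob counter; purely as an illustration of the idea (the RGB range and black fill below are assumptions, not the project's values), the filtering and center-finding steps might be sketched in plain Python like this:

```python
def color_filter(pixels, lo, hi, fill=(0, 0, 0)):
    """Keep pixels whose (R, G, B) values all fall inside [lo, hi];
    fill everything else with the background color (black here)."""
    def inside(p):
        return all(l <= c <= h for c, l, h in zip(p, lo, hi))
    return [[p if inside(p) else fill for p in row] for row in pixels]

def centroid(pixels, fill=(0, 0, 0)):
    """Return the (X, Y) center of all non-background pixels, or None."""
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, p in enumerate(row):
            if p != fill:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Example: a 3x3 image with one red pixel in the middle
image = [[(10, 10, 10)] * 3 for _ in range(3)]
image[1][1] = (250, 20, 20)
filtered = color_filter(image, lo=(200, 0, 0), hi=(255, 60, 60))
print(centroid(filtered))  # -> (1.0, 1.0)
```

A real implementation would of course operate on the Kinect's RGB frames rather than nested lists; the point is only the keep-inside-range, fill-the-rest, then average-the-remaining-pixels pipeline.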
Note that the desired range for an object is chosen to be from 1 m to 1.2 m, and the reference distance for an object is chosen as 1.1 m, as shown in Figure 1. The distance between the object and the robot is obtained from the Kinect depth map. If the object is located 1.2 m or further away, the robot moves forward. If the object is closer than 1 m, the robot moves backward. And if the object is within this range, the robot stops. Figure 2 shows an image frame with a detected object, with its center coordinates (X, Y), within the desired area.
Figure 1 Desired range of area for an object
Figure 2 Image showing a detected object within desired area
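The forward/backward/stop behavior described above can be sketched as a small decision function (the function name and the string commands are mine, not from the project code; the 1.0 m / 1.2 m bounds are from Figure 1):

```python
def motion_command(distance_m, near=1.0, far=1.2):
    """Map a Kinect depth reading (in meters) to a motion command,
    using the desired range described in the text."""
    if distance_m >= far:        # 1.2 m or further: approach the object
        return "forward"
    if distance_m < near:        # closer than 1 m: back away
        return "backward"
    return "stop"                # inside the desired range: hold position

print(motion_command(1.5))   # -> forward
print(motion_command(0.8))   # -> backward
print(motion_command(1.1))   # -> stop
```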
B. Error Measurement
This section explains the definition and measurement methods for the heading error and the distance error as follows:
Figure 3 Image showing a detected object with its center coordinates (X1, Y1) outside the desired area
1) Heading angle error:
Consider an object detected in the right corner of an image frame with its center coordinates (X1, Y1) outside the desired object area, as shown in Figure 3. To make the robot (quadcopter) turn towards the detected object, it should change its current angular position to the desired angular position to achieve object following. Therefore, a heading angle error e is defined as

e = φD − φc

where φD is the desired angular position of the robot and φc is the current angular position of the robot, as shown in Figure 4.
Figure 4 Heading angle error definitions
Figure 5 shows an extended view of the heading angle error e from the Kinect camera, with the detected object in an image frame having center coordinates (xm, ym) in pixels.
According to the imaging principle explained in [4] and [11], the heading angle error e between the center of the detected object and the center of the RGB image is given by

e = tan⁻¹(a·n / f)

where a is the pixel size of the color image with the detected object, f is the focal length of the Kinect camera, and n is the pixel difference between the center of the detected object and the center of the RGB image frame.
In this project, only the heading angle is considered for error measurement. Therefore, n in the equation above can be replaced by xm. Thus, the heading angle error can be calculated as

e = tan⁻¹(a·xm / f)
Figure 5 Heading angle error measurement
2) Distance Error
The distance error is defined as the difference between a reference distance from the robot to the object and the current distance between them. The distance between the object and the robot is obtained from the depth map of the Kinect camera sensor.
The distance error can be defined from Figure 6. The current distance between the object and the robot is yb and the reference distance between them is yD. The distance error is defined as

ed = yb − yD
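Putting the two error definitions together, the measurement step might be sketched as follows (this is a reconstruction from the text and the standard pinhole camera model, so treat the exact equation forms, and the sample pixel-size and focal-length numbers, as assumptions):

```python
import math

def heading_error(x_m, pixel_size, focal_length):
    """Heading angle error e = atan(a * x_m / f), where x_m is the pixel
    offset of the object center from the image center, a is the pixel
    size, and f is the focal length of the Kinect color camera."""
    return math.atan(pixel_size * x_m / focal_length)

def distance_error(y_b, y_d=1.1):
    """Distance error: current distance y_b minus the 1.1 m reference y_D."""
    return y_b - y_d

# An object centered in the image at the reference distance gives zero error
# (the pixel-size and focal-length values below are made-up examples):
print(heading_error(0, pixel_size=2.8e-6, focal_length=2.9e-3))  # -> 0.0
print(distance_error(1.1))                                       # -> 0.0
```

The signs fall out conveniently: a positive heading error means the object is to one side of the image center, and a positive distance error means the object is further than the 1.1 m reference, so the robot should move forward.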
Reference:
[4] G. Xing, S. Tian, H. Sun, W. Liu, H. Liu, "People-following system design for mobile robots using Kinect sensor," in Proc. of 2013 25th Chinese Control and Decision Conference (CCDC), 2013, pp. 3190-3194.
[11] C. D. Herrera, J. Kannala, J. Heikkila, "Accurate and practical calibration of depth and color camera pair," in Proc. of 14th International Conference on Computer Analysis of Images and Patterns, 2011, pp. 437-445.
A. V. Gulalkari, G. Hoang, H. K. Kim, S. B. Kim, P. S. Pratama and B. H. Jun, "Object Following Control of Six-legged Robot Using Kinect Camera," ICACCI, South Korea, 2014.
-
Log 9: Kinect Sensor
07/01/2017 at 12:09 • 0 comments
The Microsoft Kinect camera sensor is a revolutionary RGB-D camera, primarily built as an input device for the Xbox gaming console [8]. Due to its capability of producing decent quality images and depth information, this low-cost device became popular in scientific study, especially in the fields of computer vision and robotics.
Kinect's Software Development Kit (SDK) for Windows offers API interfaces to help users create and develop their own applications [9].
Figure 1: Kinect Xbox camera sensor
Figure 1 shows the Kinect camera sensor, consisting of an IR (infrared) projector, an IR camera, an RGB (color) camera, a four-microphone array, a tilting system, and an image processing microchip known as PrimeSense's PS1080-A2. The depth camera consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions. The sensing range of the depth sensor is adjustable with its two modes of operation: default mode and near mode. The Kinect Xbox works only in default mode, with a range from 80 centimeters to 4 meters. The RGB camera operates at 30 Hz and offers images with 8 bits per channel. Using the tilting system, the camera can be tilted up to 27° either up or down.
Reference:
A. V. Gulalkari, G. Hoang, H. K. Kim, S. B. Kim, P. S. Pratama and B. H. Jun, "Object Following Control of Six-legged Robot Using Kinect Camera," ICACCI, South Korea, 2014.
-
Log 8: Testing OpenCV Programs
02/14/2016 at 12:55 • 0 comments
Part 1:
Download the OpenCVTest_1.py from the files.
This program opens the file named "cam.jpg" in the same directory and displays the original image and the Canny edges of the original image.
Just like we've done in the previous log, create a file called OpenCVTest1.py using the following command:
nano OpenCVTest1.py
Then copy/paste the code into OpenCVTest1.py, press Ctrl+O to save and Ctrl+X to exit. (Now you should be back at the command line; if you're unsure about this stage, see Log 6.)
And execute the code using the following command.
python OpenCVTest1.py
P.S. The Pi camera comes with a protective cover on its lens; make sure that you've removed it, otherwise you'd probably get a completely black picture instead of the Canny edges of the original image.
Part 2:
Download the OpenCVTest_2.1.py from the files.
This program opens a picam stream, attempts to change to 320x240 resolution, and shows the original image of each frame as well as the Canny edges of each frame.
Now repeat the same process as in Part 1: create a file called OpenCVTest2.py using the following command:
nano OpenCVTest2.py
Then copy/paste the code into OpenCVTest2.py, press Ctrl+O to save and Ctrl+X to exit. (Now you should be back at the command line; if you're unsure about this stage, see Log 6.)
And execute the code using the following command.
python OpenCVTest2.py
Part 3:
Download the OpenCVTest_3.1.py from the files.
This program tracks a red ball and outputs its location in terms of coordinates.
Now repeat the same process as in Part 1: create a file called OpenCVTest3.py using the following command:
nano OpenCVTest3.py
Then copy/paste the code into OpenCVTest3.py, press Ctrl+O to save and Ctrl+X to exit.
(Now you should be back at the command line; if you're unsure about this stage, see Log 6.)
And execute the code using the following command.
python OpenCVTest3.py
Having done this, we've almost concluded the project; in the next log I'll be working on the flight control of the quadcopter.
-
Log 7: Get a LED Blinking
02/13/2016 at 13:51 • 0 comments
At this stage I'm going to explain how to create a file containing a simple program (to get an LED blinking), and we will conclude by executing it. This will be helpful particularly at the next stage, where we'll have a go with OpenCV.
First of all, breadboard the circuit shown in blink_led.png (you can find the circuit diagram in the files).
(I assume that you have a certain level of experience with breadboarding circuits, so I'm not going to go into too much detail on that.)
If you are wondering how to calculate the resistor value in this circuit, see resistor for LED calculation.pdf
See "RaspberryPi2_J8_pinout.png" for a complete RPi 2 connector J8 pinout
Alternatively see "RasPiB-GPIO_lightbox.png"
Continuing at the RPi command line: (Execute the following commands one by one)
nano blink_led.py # open the file blink_led.py with the nano editor
Some resources use the "touch" command to create a file and then the "nano" command to edit it. However, nano will create the file if it does not already exist, so there is no need for the "touch" command.
Now copy/paste in blink_led.py (located in the files), then press Ctrl+O to save, then Ctrl+X to exit nano.
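For reference, a minimal sketch of what blink_led.py might contain (the real file is in the project files; the BCM pin 18 wiring is an assumption, so check the circuit diagram). The GPIO object is passed in as a parameter so the blink logic can also be exercised off the Pi:

```python
import time

def blink(gpio, pin=18, interval=0.5, count=10):
    """Toggle `pin` high/low `count` times via a GPIO-like object."""
    gpio.setup(pin, gpio.OUT)
    for _ in range(count):
        gpio.output(pin, True)     # LED on
        time.sleep(interval)
        gpio.output(pin, False)    # LED off
        time.sleep(interval)

if __name__ == "__main__":
    try:
        import RPi.GPIO as GPIO    # only importable on the Raspberry Pi
    except ImportError:
        GPIO = None
    if GPIO is not None:
        GPIO.setmode(GPIO.BCM)     # use BCM pin numbering
        try:
            blink(GPIO)            # the real file likely loops until Ctrl+C
        finally:
            GPIO.cleanup()         # release the pin on exit
```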
sudo python blink_led.py
Run the program with this command. Note "sudo" ("super user do"): root access is necessary to perform hardware I/O on the RPi. You should now see the LED on your board blinking. Press Ctrl+C to exit this program.
(For future reference Ctrl+C exits most programs when ran from a Linux command line.)
This next step is absolutely crucial for those of you who want to make the program run when Raspbian boots (i.e. for a headless embedded application). Proceed as follows . . .
sudo nano /etc/rc.local # open rc.local in the nano editor
In rc.local, just before "exit 0", add the following:
sudo python /home/pi/blink_led.py & # add this to rc.local, just before "exit 0"
Do NOT forget the "&" to start your program as a separate process, or the RPi will run it indefinitely and will not continue to boot! Forgetting the "&" could put the RPi in an unrecoverable state, necessitating re-formatting the SD card (if that happens, take a look at Log 1).
/etc/rc.local is run by the RPi as root during boot-up, so you don't really need to include "sudo" in the command, even when accessing GPIO pins.
sudo shutdown -r now # reboot, the LED should start blinking during RPi boot-up
To return to the regular boot-up, simply open rc.local again and remove the "sudo python /home/pi/blink_led.py &" line:
sudo nano /etc/rc.local # remove the "sudo python /home/pi/blink_led.py &" line
-
Log 6: Installing OpenCV on the RPI 2
02/11/2016 at 21:20 • 0 comments
Before we get started, I'd like to warn you that this stage takes about 3.5 hours in total on an RPi 2.
Also, during this process you cannot use PuTTY, because it will lose the connection with a "connection timed out" error. Therefore you need to connect the RPi to a separate monitor via the HDMI cable (just like you did in Log 1).
Once that's done, power up the RPi and log in with the default username / password, which is pi / raspberry (unless you've changed the login details via the "sudo raspi-config" command).
Then execute the following commands one by one:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python-numpy python-scipy python-matplotlib
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install default-jdk ant
sudo apt-get install libgtkglext1-dev
sudo apt-get install v4l-utils
sudo apt-get install libjpeg8 \
 libjpeg8-dev \
 libjpeg8-dbg \
 libjpeg-progs \
 libavcodec-dev \
 libavformat-dev \
 libgstreamer0.10-0-dbg \
 libgstreamer0.10-0 \
 libgstreamer0.10-dev \
 libxine2-dev \
 libunicap2 \
 libunicap2-dev \
 swig \
 libv4l-0 \
 libv4l-dev \
 python-numpy \
 libpython2.7 \
 python-dev \
 python2.7-dev \
 libgtk2.0-dev \
 libjasper-dev \
 libpng12-dev \
 libswscale-dev
wget http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/3.0.0/opencv-3.0.0.zip
unzip opencv-3.0.0.zip
cd opencv-3.0.0
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D INSTALL_C_EXAMPLES=ON \
 -D INSTALL_PYTHON_EXAMPLES=ON \
 -D BUILD_EXAMPLES=ON \
 -D CMAKE_INSTALL_PREFIX=/usr/local \
 -D WITH_V4L=ON ..
The next command takes about 3 hours on the RPi 2. I used a mini 5V DC fan to facilitate air circulation and heat dissipation; otherwise the RPi may overheat, because this process pushes it to the limits of what it is capable of.
sudo make
sudo make install
sudo nano /etc/ld.so.conf.d/opencv.conf
# opencv.conf will be blank; add the following line (in opencv.conf, NOT at the command line):
/usr/local/lib
(leave a blank line at the end of opencv.conf, then save and exit nano)
Back at the command line:
sudo ldconfig
sudo nano /etc/bash.bashrc
# add the following two lines at the bottom of bash.bashrc, NOT at the command line:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH
(leave a blank line at the end of bash.bashrc)
Save the bash.bashrc changes, then, back at the command line, reboot:
sudo shutdown -r now
After rebooting, verify the OpenCV install:
python
# this enters an interactive Python prompt session
>>> import cv2
>>> cv2.__version__
# should print your OpenCV version, i.e. '3.0.0'; press Ctrl+D to exit the Python prompt session
-
Log 5: Streaming live video with the Picam
02/11/2016 at 12:58 • 0 comments
Make sure you have enabled the camera!
(If you are unsure about this take a look at the log 2)
Execute:
sudo apt-get install vlc
command to install VLC on the Raspberry Pi. Then install VLC on your Windows PC; follow this link:
https://ninite.com/ then check VLC and click on Install. Then get back to PuTTY and execute:
raspivid -o - -t 0 -hf -w 800 -h 400 -fps 24 |cvlc -vvv stream:///dev/stdin --sout '#standard{access=http, mux=ts,dst=:8160}' :demux=h264
Now open VLC (on your Windows PC) >> Media >> Open Network Stream, and type in the IP address followed by :8160, i.e. (http://111.111.0.11:8160)
Remember that you've obtained the IP address by
hostname -I
(If you are unsure about this, take a look at Log 2.)
-
Log 4: Verify that Picamera Works
02/11/2016 at 12:54 • 0 comments
Make sure you have enabled the camera by
sudo raspi-config
(If you don't know how to do this, take a look at Log 2.)
raspistill -o cam.jpg # take a picture with the Raspberry picam
raspivid -o video.h264 -t 10000 # record a video (for 10 s) with the picam
Use the
ls -l
command to verify that "cam.jpg" and "video.h264" are there. Then use the
pcmanfm &
command to open the file manager of the Raspberry Pi.
There should be the picture you've just taken (named cam.jpg); double-click on it to open the picture.
The video you've recorded (named video.h264) is located in the same directory.
-
Log 3: Installing and setting up PuTTY and Xming
02/11/2016 at 12:50 • 0 comments
Install PuTTY; follow this link:
(http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html, choose "putty.exe")
(then create a PuTTY shortcut on your desktop)
Install Xming and Xming-fonts; follow this link:
(http://sourceforge.net/projects/xming)
(create an Xming shortcut on your desktop)
Reboot after installing PuTTY, Xming, and Xming-fonts.
Use this command for rebooting:
sudo shutdown -r now
Start PuTTY and set the following settings:
-your RPi IP address
-Terminal -> Bell -> None (bell disabled)
-Connection -> Seconds between keepalives -> set to "30"
-Connection -> expand SSH -> X11 -> check "Enable X11 forwarding"
then save these settings by entering a preferred name in the "Saved Sessions" box, for example "my_default", then choosing Save.
To begin a PuTTY session, load your preferred settings and click "Open".
Start Xming before (or just after) beginning a PuTTY session if you would like to see Raspbian windows rendered on your Windows desktop computer.
To verify Xming is running, look for the Xming icon in the lower right corner of your Windows screen.
To verify PuTTY and Xming are working, start PuTTY and try the following commands:
pwd # present working directory, should say "/home/pi" as this is the default location for the user "pi"
ls -l # lists the files in the current directory
pcmanfm & # the graphical Raspbian file browser; should open as a separate window
epiphany-browser & # the default graphical Raspbian internet browser, which can also be used to browse files, FTP, etc.
To paste into a PuTTY window simply right-click anywhere in the PuTTY window.
To copy from a PuTTY window, simply highlight what you would like to copy (it is not necessary to press Ctrl+C).
-
Log 2: First Time Boot-up
02/11/2016 at 12:41 • 0 comments
Insert the flashed SD card into the RPi, then connect:
-USB keyboard
-USB mouse
-PiCamera
-USB Wireless adapter
-HDMI monitor cable
-Power (at last)
The newest version of Raspbian (Raspbian Jessie) boots directly into the graphical desktop. Once boot-up is complete, bring up a command line and type
sudo raspi-config
and set the following options:
1 Expand Filesystem - set OS to fill SD card
3 Boot Options - set to "B1 Console"
Choose "Finish". When asked "Would you like to reboot?" choose "Yes". If you need to reboot from the command line, type "sudo shutdown -r now". For future reference, if you need to shut down without rebooting, type "sudo shutdown -h now".
log in with the default username / password, which is pi / raspberry
startx # start the graphical desktop
choose the wireless icon at the top right, enter the wireless router password, and verify that networking works
hostname -I
This command will display the IP address; write it down (you'll need it while setting up PuTTY in the next step). Then type
sudo shutdown -r now
# reboot; you don't need to log in on the RPi after rebooting
-
Log 1: Making the Raspbian SD Card
02/11/2016 at 12:36 • 0 comments
Download the latest .zip version of Raspbian from www.raspberrypi.org, then unzip the file.
Download and install Win32DiskImager; follow this link: here
If your PC does not have a SD card slot you can purchase a separate USB SD card reader.
Insert and format your SD card (right-click on the SD card icon and click on Format).
Then open Win32DiskImager and flash Raspbian to the SD card.
(Image file is the unzipped raspbian file and device is the SD card that you've inserted then click on "Write")
When flashing is complete, before removing the SD card, make sure to right-click on the SD card drive letter and choose "Eject"; then remove the SD card.