-
Testing of the device
10/03/2016 at 20:31

Here I'm testing the device for lack of movement.
The program detects whether the person has not moved for a set time period. In a real situation the time period would be around two minutes if the person is on the floor, with longer limits elsewhere: perhaps two hours for the sofa and twelve hours for the bed. The current version does not yet use different detection times for the sofa or the bed.
Demonstrating logging the alerts to a web service (on the feed the person gets id 7):
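A minimal sketch of how such alert logging could work, assuming a hypothetical web service endpoint and payload format (the URL, fields and the per-location limits below are illustrative, not the actual service):

import json
import time
import urllib2

# Illustrative per-location limits in seconds (see the note above about
# different detection times; these are not in the current version)
LIMITS = {"floor": 120, "sofa": 2 * 60 * 60, "bed": 12 * 60 * 60}

def log_alert(person_id, location):
    # Hypothetical endpoint; replace with the real web service URL
    url = "http://example.com/api/alerts"
    payload = json.dumps({"person_id": person_id, "location": location})
    request = urllib2.Request(url, payload, {"Content-Type": "application/json"})
    urllib2.urlopen(request)

def check_movement(person_id, location, last_movement_time):
    # Raise an alert when the person has not moved within the limit
    if time.time() - last_movement_time > LIMITS[location]:
        log_alert(person_id, location)

check_movement(7, "floor", time.time() - 180)  # still for 3 minutes -> alert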
-
Monitoring a person
10/03/2016 at 20:27

With background subtraction we have detected the foreground, a.k.a. the objects of interest. What should we do with this information?
Activity detection
There are different methods to monitor people. Activity detection is used to determine what activity the person in the video is performing. In static analysis the person's posture is analyzed at a specific point in time. Posture is a good indicator of what the person is doing, e.g. lying, standing or sitting. This information alone is not very useful, which is why in dynamic analysis the outcome of the static approach is combined with the earlier static outcomes. In this way we can analyze movement patterns: if the person was standing in the last frame and is detected as lying in the current frame, the person has probably suffered a fall.
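As a sketch of static posture analysis, one common simple approach is to classify the aspect ratio of the person's bounding box extracted from the foreground mask (the thresholds below are illustrative, not the actual system's values):

def classify_posture(width, height):
    # width/height: bounding box of the detected person, in pixels
    ratio = float(height) / float(width)
    if ratio > 1.5:
        return "standing"   # tall and narrow
    elif ratio < 0.7:
        return "lying"      # wide and flat
    return "sitting"        # roughly square

Dynamic analysis then compares these labels across consecutive frames.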
In the real world this is not as easy as it looks. One study shows that three features usually occur when a person falls: the incident happens in a short time period, typically in the range of 0.4-0.8 seconds; the person's centroid changes rapidly and significantly; and the vertical projection of the person changes significantly.
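A rough sketch of how these three features could be checked between two samples, assuming the person's centroid and vertical projection have already been extracted from the foreground (the function and the threshold values are illustrative):

import math

FALL_WINDOW = 0.8        # seconds; falls typically take 0.4-0.8 s
CENTROID_SHIFT = 50.0    # pixels; illustrative threshold
PROJECTION_RATIO = 0.5   # illustrative threshold

def looks_like_fall(previous, current, dt):
    # previous/current: dicts with "centroid" (x, y) and "projection"
    # (the person's vertical projection); dt: seconds between samples
    if dt > FALL_WINDOW:
        return False
    dx = current["centroid"][0] - previous["centroid"][0]
    dy = current["centroid"][1] - previous["centroid"][1]
    shift = math.sqrt(dx * dx + dy * dy)
    drop = current["projection"] < previous["projection"] * PROJECTION_RATIO
    return shift > CENTROID_SHIFT and drop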
Position and motion analysis
While posture analysis is a good way to detect the person's state, it is hard for it to detect what activity, more specific than just sitting, standing or lying, the person is performing. That is why the person's position could be used to determine which ADL (activity of daily living) or IADL (instrumental activity of daily living) the person is currently performing. With this technique the daily routines could be monitored and taught to the system, and if something abnormal is detected, the system could raise an alarm.
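As a sketch of the idea, the room could be divided into labeled zones and the person's centroid mapped to the zone they are in; the zone rectangles below are made up for illustration:

# Hypothetical room zones as (x, y, width, height) rectangles in pixels
ZONES = {
    "bed": (0, 0, 200, 150),
    "sofa": (250, 0, 150, 100),
    "kitchen": (0, 200, 180, 120),
}

def zone_of(centroid):
    # Return the name of the zone the centroid falls into, or None
    cx, cy = centroid
    for name, (x, y, w, h) in ZONES.items():
        if x <= cx < x + w and y <= cy < y + h:
            return name
    return None

Logging how long the person spends in each zone per day gives a routine profile that deviations can be compared against.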
Combination
Because the presented methods do not always achieve the sensitivity needed for a robust system, they could be combined. The results from static analysis, dynamic analysis, and position and motion analysis can be combined with simple AND or OR rules. The final decision could also be generated by weighting each output by its certainty, which may produce a more robust solution.
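A weighted combination could look like the following sketch, where each detector reports a binary decision and a confidence (the weighting scheme and threshold are illustrative):

def combined_alarm(outputs, alarm_threshold=0.6):
    # outputs: list of (decision, confidence) pairs from each detector;
    # decision is 1 (fall) or 0 (no fall), confidence is in [0, 1]
    total = sum(confidence for _, confidence in outputs)
    if total == 0:
        return False
    score = sum(decision * confidence for decision, confidence in outputs)
    return score / total >= alarm_threshold

# static says fall (0.9), dynamic says fall (0.6), position says no (0.4)
print(combined_alarm([(1, 0.9), (1, 0.6), (0, 0.4)]))  # True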
Currently the system only detects whether a person is not moving enough within a time period. The methods presented above are the features I am currently developing for the second version of this product.
-
Fall detector installation to Raspberry Pi
10/03/2016 at 20:13

Fall detector installation
The fall detector is installed on a Raspberry Pi 3 Model B. A step-by-step guide for the installation follows.
For debugging purposes RASPBIAN JESSIE (the full desktop image based on Debian Jessie) is installed as the operating system. For the final version RASPBIAN JESSIE LITE (the minimal image based on Debian Jessie) with Python and OpenCV installed would be better.
Installing the operating system on the SD card is simple, and the Raspberry Pi Foundation has it covered on their website.
After the OS is running there are a few things that should be done. Localisation options can be set, if needed, with raspi-config. The following sequence sets the locale to Finnish.
sudo raspi-config
  5 Internationalisation Options
  I1 Change locale
  fi_FI.UTF-8

Keyboard layout can be set to Finnish, or any other language, with setxkbmap.
setxkbmap fi
Now everything can be updated. Connect the device to the internet via WiFi or Ethernet. After the connection is established the package list should be updated, every installed package upgraded and, lastly, the downloaded package files cleaned up. This can be done with the following commands.
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get clean

After this the latest firmware should be installed. Raspbian has a tool called rpi-update pre-installed that can be used for this purpose.
sudo rpi-update
After the firmware is updated, a restart of the system is needed.
sudo shutdown -r 0
Python should already be installed on the system. This can be verified by running python from the terminal.
python
Next OpenCV can be installed with apt-get.
sudo apt-get install libopencv-dev python-opencv
NumPy should already be installed. This can be verified by trying to install it with the Python package manager pip, a recursive acronym for "Pip Installs Packages".
pip install numpy
The fall detector repository is cloned from GitHub.
git clone https://github.com/infr/falldetector-public.git
After this the system can be tested by running main.py.
cd falldetector-public/fall-detector-v1/
python main.py
-
Basic video analysis: What is background subtraction?
07/07/2016 at 21:44

Usually the interesting part of a video scene is not the background but the objects in the foreground. These objects of interest could be anything: humans, cars, animals etc. Foreground detection, also called background subtraction, is a method where these objects of interest are separated from the background in a video.
If the background of a scene remained unchanged, detecting foreground objects would be easy: just take a picture of the empty scene at the beginning and then compare future frames to that first picture. The first picture can be called the background model.
This method is not really useful in real life. In almost every scene the background changes, or at the very least there is video noise. That is why a threshold should be applied to the detection.
You can test this non-adaptive background subtraction with a threshold using the following script, written in Python (2.7.x) and OpenCV (2.4.x).
import sys
import cv2

threshold = 100

camera = cv2.VideoCapture(0)

# Capture the first frame as the static background model
_, backgroundFrame = camera.read()
backgroundFrame = cv2.cvtColor(backgroundFrame, cv2.COLOR_BGR2GRAY)

while 1:
    _, currentFrame = camera.read()
    currentFrame = cv2.cvtColor(currentFrame, cv2.COLOR_BGR2GRAY)

    # Pixels that differ from the background model by more than the
    # threshold are marked as foreground
    foreground = cv2.absdiff(backgroundFrame, currentFrame)
    foreground = cv2.threshold(foreground, threshold, 255, cv2.THRESH_BINARY)[1]

    cv2.imshow("backgroundFrame", backgroundFrame)
    cv2.imshow("foreground", foreground)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        cv2.destroyAllWindows()
        camera.release()
        sys.exit()
As soon as the background changes, e.g. someone opens a curtain in the room, this method fails. That is why one could use an adaptive background model that adapts to changes in the environment. Here is a variation of this adaptive model.
import sys
import cv2

threshold = 10

camera = cv2.VideoCapture(0)

_, backgroundFrame = camera.read()
backgroundFrame = cv2.cvtColor(backgroundFrame, cv2.COLOR_BGR2GRAY)

i = 1
while 1:
    _, currentFrame = camera.read()
    currentFrame = cv2.cvtColor(currentFrame, cv2.COLOR_BGR2GRAY)

    foreground = cv2.absdiff(backgroundFrame, currentFrame)
    foreground = cv2.threshold(foreground, threshold, 255, cv2.THRESH_BINARY)[1]
    cv2.imshow("foreground", foreground)

    # Update the background model as a running average: the new frame
    # gets weight 1/i, so the model slowly adapts to background changes
    alpha = (1.0 / i)
    backgroundFrame = cv2.addWeighted(currentFrame, alpha, backgroundFrame, 1.0 - alpha, 0)
    cv2.imshow("backgroundFrame", backgroundFrame)
    i += 1

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        cv2.destroyAllWindows()
        camera.release()
        sys.exit()
This is the basic idea of background subtraction. You can read more about video analysis in my thesis (still a work in progress). If you want to look into modern background modeling methods, you can start with the Gaussian mixture model; for further reading, see Xu et al. (2016), "Background modeling methods in video analysis: A review and comparative evaluation".
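As a starting point for the Gaussian mixture model, OpenCV 2.4.x ships a built-in mixture-of-Gaussians subtractor; here is a minimal sketch (in OpenCV 3+ the constructor is cv2.createBackgroundSubtractorMOG2() instead):

import cv2

camera = cv2.VideoCapture(0)

# Gaussian mixture model based background subtractor (OpenCV 2.4.x API)
subtractor = cv2.BackgroundSubtractorMOG2()

while 1:
    _, frame = camera.read()
    # apply() updates the mixture model and returns the foreground mask
    foreground = subtractor.apply(frame)
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()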