
Brainmotic

EEG to control a smart house, oriented to people with disabilities.

Brainmotic comes from the union of two keywords within our project:

Brain: we capture the bioelectric activity of the brain through EEG to control some elements of the common areas of a house, and Domotic: home automation. Joining the two gives us the word Brainmotic!

Brainmotic is a project born from the idea that a person with a disability, or anyone in general, could control their home through EEG, used as a Brain-Computer Interface (BCI) with the technique of Motor Imagery.

General Purpose:

Control a house through EEG, using the method of motor imagery, to improve the quality of life of people with disabilities.



GOALS:


  • Develop and evaluate the UI.
  • Set the position of the helmet's electrodes, acquire EEG signals, and define the patterns with which we will work.
  • Develop an assistive unit that can be placed in any room of the user's home.
  • Develop the communication between the RPi2, ThingSpeak, and the assistive units.
  • Install ThingSpeak and MySQL on the RPi2.
  • Install TensorFlow on the RPi3.
  • Evaluate the system.

brainmotic-ii-highres.png

Storyboard

Portable Network Graphics (PNG) - 668.60 kB - 10/03/2016 at 06:44


BrainMotic-master.zip

Copy of the GitHub repository

Zip Archive - 125.76 kB - 10/03/2016 at 06:43


  • 1 × Raspberry Pi 3: processes the user's EEG signal using the TensorFlow library.
  • 1 × Raspberry Pi 2: runs the UI and ThingSpeak.
  • 1 × Arduino Uno: bridge between the ADS1299 and the RPi3.
  • 1 × TI ADS1299: analog-to-digital converter IC (ADC) for the EEG acquisition.
  • 3 × PSoC® 4 CY8CKIT-049 (4xxx family): microcontrollers for the assistive units.


  • Interface design for Brainmotic

    Daniel Felipe Valencia V, 10/03/2016 at 14:00

    We have made a simple interface design; our inspiration is the interface of the Pebble smartwatch. Figure 1 shows the development and the interface levels, and Figure 2 shows the steps to turn off the bedroom light. We want to emphasize that this interface is currently running on the screen of the RPi: the left button corresponds to the "left" motor imagery event, and the enter button to the "right" motor imagery event. We should also point out that the usability evaluation of this interface is still pending. An intermediate design is shown in Figure 3, and the interface as it is now appears in the video.

    Figure 1.

    Figure 2.

    Figure 3.

    Video: current interface.

  • Future work for Assistive Unit and User interface (UI)

    Daniel Poveda, 10/03/2016 at 13:45

    Right now we are developing the assistive units and the user interface to achieve the goals of this project. We would like to say that we already have some prototypes and different versions of both. Please be patient; we will start to upload the corresponding logs and instructions for the assistive units and the user interface.

  • Future work with TensorFlow and the BCI

    Daniel Felipe Valencia V, 10/03/2016 at 07:10

    We plan to capture the intention of the user (left, right, up or down) to create a database of these actions and later use TensorFlow on it. To encourage the user to generate patterns that we can recognize, we created a maze in Blender, which we share at the following link: https://github.com/dfvalen0223/BrainMotic/blob/master/laberinto_maze.blend

    The labyrinth is played in first person, currently using the up, down, left and right keys to move until reaching the wall marked as the goal.

  • EEG acquisition and test of BCI with motor imagery

    Daniel Felipe Valencia V, 10/03/2016 at 07:08

    We have started capturing EEG data with the TI ADS1299: eight channels, with positions C3, P3, T7, and P7 for the odd channels (1, 3, 5, and 7) and positions T4, P4, T8, and P8 for the even channels.

    The ADS1299 has differential analog-to-digital converters, so we have connected the negative terminal of every channel to the BIAS terminal, which in turn is connected to the earlobe of the person using the device.
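
    For reference, the montage described above can be written down as a small Python dictionary (electrode labels copied from this log; the channel ordering is only illustrative):

    # ADS1299 channel -> 10-20 electrode label, as listed in this log.
    # Odd channels cover the left hemisphere, even channels the right one.
    ELECTRODE_MAP = {
        1: "C3", 3: "P3", 5: "T7", 7: "P7",   # odd channels (left hemisphere)
        2: "T4", 4: "P4", 6: "T8", 8: "P8",   # even channels (right hemisphere)
    }

    LEFT_CHANNELS = [ch for ch in ELECTRODE_MAP if ch % 2 == 1]
    RIGHT_CHANNELS = [ch for ch in ELECTRODE_MAP if ch % 2 == 0]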

    The ADS1299 is bridged to the RPi3 through an Arduino UNO; the Arduino is responsible for configuring the converter. For this we have used the OpenBCI code available at: https://github.com/OpenBCI/OpenBCI-V2hardware-DEPRECATED
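
    On the RPi side, a minimal sketch of how the stream forwarded by the Arduino could be read over USB serial is shown below; the port name, baud rate, and the assumption of one comma-separated ASCII line per sample are ours (the real OpenBCI firmware uses a binary packet format), so adapt it to the actual framing:

    import serial  # pyserial

    PORT = "/dev/ttyACM0"   # assumed: the Arduino UNO usually enumerates here on Raspbian
    BAUD = 115200           # assumed baud rate

    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            # Assumed framing: one sample per line, eight comma-separated channel values
            try:
                sample = [float(v) for v in line.split(",")]
            except ValueError:
                continue  # skip malformed or partial lines
            if len(sample) == 8:
                print(sample)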

    With the ADS connected to the Arduino and the latter to the RPi, we have begun to compare the amplitudes generated in each hemisphere; for example, when the user thinks about moving the left arm, we expect the voltage of the electrodes in the right hemisphere to be greater than in the left.

    We capture 10 seconds of data from the eight channels at 250 samples per second, process each channel with the FFT, then sum the even and the odd channels separately and compare the results of these sums. We borrow the concept of voltage levels in TTL gates to define three levels or thresholds: a middle range that corresponds to high impedance, i.e. the rest position of the user; another indicating the action "left", similar to an electrical low level; and "right" as the electrical high level.
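
    A condensed sketch of that comparison is shown below, assuming eeg is a NumPy array of shape (8, 2500) holding the 10 s capture at 250 SPS; the two threshold values are placeholders to be tuned:

    import numpy as np

    def classify_motor_imagery(eeg, low_thr=0.8, high_thr=1.2):
        """eeg: array of shape (8, 2500), the eight ADS1299 channels for 10 s."""
        spectra = np.abs(np.fft.rfft(eeg, axis=1))   # FFT magnitude per channel
        odd_energy = spectra[0::2].sum()             # channels 1, 3, 5, 7 (left hemisphere)
        even_energy = spectra[1::2].sum()            # channels 2, 4, 6, 8 (right hemisphere)
        ratio = even_energy / odd_energy
        # Three TTL-like levels: "rest" in the middle band, "left"/"right" outside it
        if ratio > high_thr:
            return "left"    # right hemisphere more active -> imagined left movement
        if ratio < low_thr:
            return "right"   # left hemisphere more active -> imagined right movement
        return "rest"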

    To show that the "left" action has been identified, we give the user audio feedback; this is done with the sounds "left" and "right" generated by a Python text-to-speech library. The "pause" action makes no sound, because the user is expected to stay in this state most of the time and it would be annoying if the system kept saying "pause".
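
    The feedback could look roughly like this; we show pyttsx3 here only as one possible text-to-speech library, not necessarily the one wired into our scripts:

    import pyttsx3

    engine = pyttsx3.init()

    def announce(action):
        # Only "left" and "right" are spoken; "rest"/"pause" stays silent on purpose
        if action in ("left", "right"):
            engine.say(action)
            engine.runAndWait()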

    Finally, we want the "left" action to be associated with the button that slides through the interface options, the "right" action with the button that enters the selected option (which controls or reads data from ThingSpeak and the sensors or actuators connected to the PSoC microcontrollers), and the "pause" option to generate no action on the interface.

    Tests of the connection between the user (via the ADS and the Arduino) and the RPi have proved satisfactory: the strategy of comparing the voltages of each brain hemisphere allowed the RPi to interpret the imagined action correctly. For example, when the user thinks about moving the right arm, after 10 seconds the RPi answers "right".

  • Pattern recognition

    Daniel Felipe Valencia V, 10/03/2016 at 07:06

    We noticed that 59 channels are too many for a pattern recognition algorithm on the RPi, so we decided to switch to the BCI Competition II data, dataset III. This database has only 3 data channels, which lets us simulate the behavior of the ADS1299 biosignal analog-to-digital converter from Texas Instruments.

    We created a Python script that calculates the FFT of each channel of the database and plots it (https://github.com/dfvalen0223/BrainMotic/blob/master/plot_BCI_II_competition_dataset_III.py). We executed it on the RPi, and in the next pictures we show the result of an FFT applied to a test signal (in Matlab, Figure 1, and in Python with the PyCharm IDE, Figure 2); this signal is high for half of the time and low for the other half.

    Figure 1.

    Figure 2.

    Then we installed TensorFlow (TF) on the RPi3 and started with the TF tutorial at https://www.tensorflow.org/versions/r0.8/tutorials/mnist/beginners/index.html

    We want to share the code that we developed and tested, which performs the task of recognizing handwritten digits (28x28-pixel images).

    The pattern recognition algorithm used in the GitHub code (https://github.com/dfvalen0223/BrainMotic/blob/master/Tensorflow_MNIST.py) is a simple neural network. The code at https://github.com/dfvalen0223/BrainMotic/blob/master/Tensorflow_MNIST_advance.py uses a deep neural network. We share these files because we worked through the TF beginner and advanced tutorials, and we believe they may be useful for anyone starting out with this library in Python.
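
    For readers who do not want to open the repository, this is roughly what the beginners tutorial boils down to: a softmax regression written against the old graph-mode API of the TF 0.x wheel we run on the RPi3 (see the repository files for our actual versions):

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data

    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

    x = tf.placeholder(tf.float32, [None, 784])    # 28x28 images, flattened
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)         # predicted class probabilities
    y_ = tf.placeholder(tf.float32, [None, 10])    # one-hot labels

    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.Session()
    sess.run(tf.initialize_all_variables())        # TF 0.x style initializer
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))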

    We expect to arrange the vectors resulting from the FFT, i.e. the energy of the 0-57 Hz bands of the 8 ADS1299 channels, as follows: for channel 1, the amplitudes of the 0-27 Hz frequencies go in the first row of a 28x28 matrix and the amplitudes of the 28-57 Hz frequencies in the second row; this continues until we complete 16 rows, and the remaining 12 rows are filled with zeros.
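
    A sketch of that packing step is shown below; it assumes a 1 s window at 250 SPS so each FFT bin is 1 Hz wide, and it uses 28-55 Hz for the second row of each channel so that exactly 28 bins fit per row (both details are our assumptions, not fixed choices of the project):

    import numpy as np

    def fft_to_mnist_shape(eeg):
        """Pack the low-frequency FFT amplitudes of 8 channels into a 28x28 matrix.

        eeg: array of shape (8, 250), one second per channel at 250 SPS.
        """
        image = np.zeros((28, 28), dtype=np.float32)
        spectra = np.abs(np.fft.rfft(eeg, axis=1))      # shape (8, 126), 1 Hz per bin
        for ch in range(8):
            image[2 * ch] = spectra[ch, 0:28]           # 0-27 Hz  -> one row
            image[2 * ch + 1] = spectra[ch, 28:56]      # 28-55 Hz -> next row
        # Rows 16..27 stay at zero, i.e. the 12 zero-padded rows described above
        return image.reshape(1, 784)                    # flattened, MNIST-style input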

    However, when we began to capture the ADS data and feed it through the stage that reshapes it so the TF neural network can do its job, we learned that the RPi gets stuck, so we decided to change the motor imagery analysis strategy.

    The new strategy is to use the voltage potentials as they are reflected in the FFT when the user wants to perform an action with a limb on one side; that is, if we want to raise the right arm, the electrodes of the left hemisphere reflect more electrical activity than those of the right, and vice versa. Code at: https://github.com/dfvalen0223/BrainMotic/blob/master/patternsBCI_MotorImagery_left_right.py

  • Feature extraction

    Daniel Felipe Valencia V, 10/03/2016 at 07:02

    We have begun to ask ourselves: which signal characteristics should we recognize? We decided to do an analysis of EEG frequencies. But what do the waveforms and frequencies say about the state or physical activity of a person? We have found the following:

    Name  | Frequencies [Hz] | When it is observed
    ------|------------------|---------------------------------------------------------------------
    alpha | 8-13             | evident during the absence of visual stimuli
    beta  | 12-30            | seen in the frontal region of the brain, observed during concentration
    gamma | 30-100           | seen during motor activities
    delta | 0.5-4            | observed at stages 3 and 4 of sleep
    theta | 4-8              | occur during light sleep and are observed during hypnosis
    mu    | 8-12             | Motor Imagery (MI) BCI paradigm

    Source: [1]
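
    As a concrete example of how we use those bands, the sketch below estimates the power inside a given band (for instance alpha, 8-13 Hz) from the FFT of one channel; the sampling rate is just the one we use elsewhere in the project:

    import numpy as np

    def band_power(signal, fs, band=(8.0, 13.0)):
        """Approximate power of `signal` inside the given frequency band (Hz)."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return spectrum[mask].sum()

    # Example: alpha power of a 2 s window of one channel sampled at 250 SPS
    # alpha = band_power(channel_data, fs=250, band=(8, 13))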

    The lines in Figure 1 are the FFT (Fast Fourier Transform) of the 59 channels of the BCI Competition IV database, dataset 1, during one second after a visual stimulus, and Figure 2 shows the second right after the visual stimulus is removed. It seems that the visual stimulation can be identified, because there is a lot of energy in the 8-13 Hz frequency band, while in Figure 2 there appears to be an absence of visual stimulation.

    Figure 1.

    Figure 2.

    [1] S. Sanei and J. Chambers, EEG signal processing. John Wiley & Sons, 2007.

  • Electrode positions

    Daniel Felipe Valencia V, 10/03/2016 at 06:56

    We needed to understand the electrode positions and their nomenclature, so we decided to look into scientific articles, and we found the following information:

    “Normally during an examination, a set of 19 EEG electrodes is used, according to the so-called 10-20 system, which is recommended by the International Federation of Clinical Neurophysiology (IFCN) (Fig. 5). In a brain-computer interface which does not have to comply with medical standards, a different number of electrodes can be used, sometimes up to 512, according to need.” [1]

    “1. The alphabetical part should consist preferably of one but no more than two letters.

    2. The letters should be derived from names of underlying lobes of the brain or other anatomic landmarks.

    3. The complete alphanumeric term should serve as a system of coordinates locating the designated electrode.” [1]

    Author: [1]

    [1] トマトン124, “Electrode locations of International 10-20 system for EEG (electroencephalography) recording,” 2010.

    [2] S. Sanei and J. Chambers, EEG Signal Processing. John Wiley & Sons, 2007.

    [3] American Clinical Neurophysiology Society, “Guideline 5: Guidelines for Standard Electrode Position Nomenclature,” 2006.

    We then began to explore the BCI Competition IV database, dataset 1, and plotted the first data in Matlab.

    Top view of channels:

    3D view of channels:

    Matlab code:

    % Load the dataset file and copy its variables into the base workspace
    newData1 = load('-mat', fileToRead1);  % fileToRead1: path to the dataset .mat file
    vars = fieldnames(newData1);
    for i = 1:length(vars)
        assignin('base', vars{i}, newData1.(vars{i}));
    end
    % Electrode coordinates come with the dataset metadata (nfo structure)
    X = nfo.xpos;
    Y = nfo.ypos;
    % cnt holds the EEG samples; transpose so each column is one time instant
    cnt = cnt';
    % 3D plot of the amplitude of every channel at sample 2091
    plot3(X, Y, double(cnt(:,2091)))

  • Installation of Kivy and first model of the interface

    María Camila Guarín M., 10/03/2016 at 06:47

    This time we will talk about the interface that we are designing for BrainMotic, which the user will control through the patterns found in the EEG signals. At the same time, the application will send the actions the actuators must perform (like turning a bulb on or off), or it will read data from the sensors (such as temperature).

    We discussed which library (dedicated to computer graphics) we would use in the project. Since all the members of the team know how to program in Python, we decided to use the Kivy graphics library; it was chosen among multiple libraries, such as Matplotlib, because its use is more consistent with a user interface.

    According to our reading of the Kivy pages (https://kivy.org/), Kivy has multiple tools oriented to user interface development, so the final execution of an interface does not consume a lot of system resources. Another attractive point we found in Kivy is that an application can be developed and run natively on different operating systems (Windows, Raspbian, Ubuntu, macOS, etc.). This allows us to write the code on Windows and later check it on Raspbian, without depending on the Raspberry Pi (RPi).

    For the installation of Kivy on the RPi, we followed the commands thoroughly described on its official pages. The installation was simple but a little slow.

    After reading about the management of the library, the Kivy language, and the tools it provides, we created the first interface of the project. This application was developed to handle three lights and two sensors: one for illumination and another for temperature. A minimal sketch of this kind of Kivy application is shown below.
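
    This small example (names and layout are ours, purely illustrative) shows the structure of a Kivy screen with a label and a button toggling one light, similar in spirit to the first interface described above:

    from kivy.app import App
    from kivy.uix.boxlayout import BoxLayout
    from kivy.uix.button import Button
    from kivy.uix.label import Label

    class RoomScreen(BoxLayout):
        """One room: a status label plus a toggle button for the light."""
        def __init__(self, **kwargs):
            super(RoomScreen, self).__init__(orientation='vertical', **kwargs)
            self.light_on = False
            self.status = Label(text='Bedroom light: OFF')
            self.add_widget(self.status)
            toggle = Button(text='Toggle light')
            toggle.bind(on_press=self.toggle_light)
            self.add_widget(toggle)

        def toggle_light(self, instance):
            self.light_on = not self.light_on
            self.status.text = 'Bedroom light: ON' if self.light_on else 'Bedroom light: OFF'

    class BrainmoticApp(App):
        def build(self):
            return RoomScreen()

    if __name__ == '__main__':
        BrainmoticApp().run()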

  • Project Block Diagram

    Daniel Poveda, 10/03/2016 at 06:44

    At our first meeting we defined the scope of this project: we dropped the originally proposed specific objectives and set goals achievable within the time of this challenge, but we never lost sight of the general purpose of the project, which is "controlling a house through EEG, using the method of motor imagery, improving the quality of life for people with disabilities".

    The elimination of those objectives is due to their complexity and to the limited number of people who make up the Brainmotic team. We decided to prioritize three key aspects of this project that allow us to reach the main objective:

    • The capture and processing of the EEG signal for pattern recognition.
    • The user interface (UI).
    • The control of some elements of the common areas (kitchen, bathroom and bedroom) of the user's home.

    Considering these aspects, we began to reshape the project and drew a sketch of the block diagram that explains it.

    As shown in the figure, each of these aspects is assigned to a member of the Brainmotic team:

    • Team EEG: led by the engineer Daniel Felipe Valencia, who is responsible for capturing the EEG signals, processing them, and recognizing their patterns.
    • Team User Interface: led by the student María Camila Guarín, who is in charge of designing the user interface.
    • Team Assistive Unit: led by the student Daniel Poveda, who is responsible for creating a generic unit that will be distributed in the common areas of the user's home.

    The Brainmotic team is made up of people from the Universidad de San Buenaventura-Cali, Colombia. :)

    To explain the behavior of the system and the order of the functional blocks that compose it, we should start with the block on the left, which represents a user wearing a helmet with electrodes.

    The system begins with the acquisition of the brain's bioelectric activity (EEG). The user must wear a helmet with eight (8) electrodes connected to an ADS1299 (an analog-to-digital converter designed for this purpose). The ADS1299 is wired to the RPi2 and uses the SPI protocol to send the captured EEG signal. Once this signal is in the RPi2, it is analyzed by the TensorFlow software embedded in it.

    The block that represents the RPi2 contains a block representing the UI that the user will operate with his or her "brain". This UI has a menu displaying the elements to be controlled, depending on the room where the assistive unit is installed. To exchange information between the assistive units and the RPi2, both use WiFi, operating with an ESP8266 module. The data obtained from the assistive units are read by the RPi2 and displayed on the UI of the system.

    Each assistive unit distributed around the house will have a PSoC microcontroller, a lux sensor (light sensor), a temperature sensor, and two circuits to switch on and off either a light bulb or an electrical outlet.

    After many meetings, we have improved the design of the system by proposing improvements to its functional block diagram.

    This new scheme performs the same tasks as the first layout. We decided to group the processing of the EEG and the UI into a block called the "central unit". This central unit (CU) is distributed over two RPis, dividing the tasks so that a single RPi is not overloaded with all the work. We decided to use the Jessie Lite OS on both RPis. For simplicity, we replaced the ESP8266 on the RPi with a TP-Link Ethernet modem, which helps the CU communicate with the assistive units; besides, we will use the default router in the user's home to facilitate the connection of all the modules (CU and assistive units).
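
    As an illustration of that exchange, the CU could poll the local ThingSpeak instance over plain HTTP; the server address, channel number, field, and read key below are hypothetical placeholders, and the endpoint simply mirrors the public ThingSpeak REST API:

    import requests

    THINGSPEAK = "http://localhost:3000"   # assumed address of the local ThingSpeak server
    CHANNEL_ID = 1                         # hypothetical channel of one assistive unit
    READ_KEY = "XXXXXXXXXXXXXXXX"          # hypothetical read API key

    def read_last_temperature():
        url = "{}/channels/{}/fields/1/last.json".format(THINGSPEAK, CHANNEL_ID)
        resp = requests.get(url, params={"key": READ_KEY}, timeout=5)
        resp.raise_for_status()
        return resp.json().get("field1")   # e.g. the temperature reported by the PSoC unit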

    To make this system work, the Brainmotic team, taking into account the three (3) aspects already mentioned, defined some goals to achieve the primary objective of the project. These goals are:

    • Develop and evaluate the UI.
    • Set the position of the helmet's electrodes, acquire EEG signals, and define the patterns with which we will work.
    • Develop an assistive unit that can be placed in any room of the...

  • Brainmotic's storyboard

    Daniel Felipe Valencia V, 10/03/2016 at 06:32

    Please download the picture to enhance the view.


  • 1
    Step 1

    These are the instructions to install the environment for computing the FFT on the RPi:

    $ sudo apt-get install libblas-dev
    $ sudo apt-get install liblapack-dev 
    $ sudo apt-get install python-dev 
    $ sudo apt-get install libatlas-base-dev
    $ sudo apt-get install gfortran 
    $ sudo apt-get install python-setuptools 
    $ sudo easy_install scipy
    $ sudo apt-get install python-matplotlib

    Source: http://wyolum.com/numpyscipymatplotlib-on-raspberry-pi/
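
    A quick way to confirm the stack works afterwards is a small FFT test; the 10 Hz sine below is just a made-up signal for checking the installation:

    import numpy as np
    from scipy.fftpack import fft
    import matplotlib
    matplotlib.use("Agg")          # render to a file; the RPi may be headless
    import matplotlib.pyplot as plt

    fs = 250                                      # same rate we use for the ADS1299
    t = np.arange(0, 1, 1.0 / fs)
    signal = np.sin(2 * np.pi * 10 * t)           # 10 Hz test tone
    spectrum = np.abs(fft(signal))[: fs // 2]
    freqs = np.fft.fftfreq(len(signal), 1.0 / fs)[: fs // 2]

    plt.plot(freqs, spectrum)
    plt.xlabel("Frequency [Hz]")
    plt.savefig("fft_test.png")                   # a single peak at 10 Hz means the install works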

  • 2
    Step 2

    These are the instructions to install TensorFlow on the RPi:

    $ sudo apt-get update
    $ python --version
    $ sudo apt-get install python-pip python-dev
    $ wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/raw/master/bin/tensorflow-0.9.0-cp27-none-linux_armv7l.whl
    $ sudo pip install tensorflow-0.9.0-cp27-none-linux_armv7l.whl
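
    After the wheel installs, a short session confirms that the ARM build actually loads (the API shown is the graph-mode one of this 0.9 wheel):

    import tensorflow as tf

    hello = tf.constant("TensorFlow is working on the RPi3")
    sess = tf.Session()
    print(sess.run(hello))
    print(tf.__version__)   # should report 0.9.0 for this wheel
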
  • 3
    Step 3

    These are the instructions to install a local ThingSpeak server on the RPi:

    $ sudo apt-get -y install build-essential git mysql-server mysql-client \
        libmysqlclient-dev libxml2-dev libxslt-dev libssl-dev libsqlite3-dev
    
    $ mysql --user=root mysql -p      # enter your own MySQL root password when prompted
    mysql> CREATE USER 'thing'@'localhost' IDENTIFIED BY 'speak';
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'thing'@'localhost' WITH GRANT OPTION;
    mysql> commit;
    mysql> exit;
    
    $ wget http://cache.ruby-lang.org/pub/ruby/2.1/ruby-2.1.5.tar.gz
    $ tar xvzf ruby-2.1.5.tar.gz
    $ cd ruby-2.1.5 && ./configure
    $ make && sudo make install && cd ..
    $ echo "gem: --no-rdoc --no-ri" >> ${HOME}/.gemrc
    $ sudo gem install rails -v 4.1.10
    
    $ sudo bundle update
    $ sudo chmod -R 777 /usr/local/lib/ruby/gems/2.1.0/   # gem directory of the Ruby 2.1.5 built above
    $ gem install json
    
    $ git clone https://github.com/iobridge/thingspeak.git
    $ cp thingspeak/config/database.yml.example thingspeak/config/database.yml
    $ cd thingspeak
    $ bundle install -V
    $ bundle exec rake db:create
    
    $ mysql --user=root mysql -p
    mysql> show databases;
    #+------------------------+
    #| Database               |
    #+------------------------+
    #| information_schema     |
    #| mysql                  |
    #| performance_schema     |
    #| thingspeak_development |
    #| thingspeak_test        |
    #+------------------------+
    
    mysql> exit;
    $ bundle exec rake db:schema:load
    
    $ rails server webrick
    
    Source: http://www.esp8266-projects.com/2015/11/raspberry-pi2-thingspeak-on-jessie.html
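
    Once WEBrick is up, a minimal write to the local instance checks the whole chain; the write API key comes from the channel created in the local web UI (the value below is a placeholder):

    import requests

    resp = requests.post(
        "http://localhost:3000/update",                      # local ThingSpeak started above
        data={"key": "YOUR_WRITE_API_KEY", "field1": 42},    # placeholder key and test value
        timeout=5,
    )
    print(resp.status_code, resp.text)   # a non-zero entry id in the body means it worked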



Discussions

Ember Leona wrote 03/31/2017 at 23:09:

I like your logo. How will you prevent hacking of controllable items... Is your fridge running? Better stop it from smashing the oven.

nirajbagh169 wrote 03/22/2017 at 07:53:

Hi Sir,

My name is Niraj. I am a PhD research scholar, and my PhD work is based on motor imagery BCI. I want to use left- and right-hand motor imagery to move a cursor. I have an ADS1299 and an Arduino board, and I have already installed the Processing software. The problem is that I do not know how to set up the experiment or what kind of stimulus to design. I have read some papers and your log "EEG acquisition and test of BCI with motor imagery"; you explained it very well. I want to know more about it. Could you send the project document explaining how to acquire the EEG signal, what kind of stimulus to use, and the code to my mail id? It would be very helpful for me. Thank you.
My mail id is nirajbagh169@gmail.com

pranav wrote 10/12/2016 at 05:26:

I really like this project, and I am right now in India launching a similar project for senior citizens at a very affordable price. If you are keen, I am interested in taking this project commercial on a global scale. Let me know what would be required to commercialize it and how we can work together.

Angie Escarria wrote 04/12/2016 at 20:57:

Very good, my best wishes for this project.

Christian Salazar Bravo wrote 04/08/2016 at 06:48:

good luck!


alexmanjarres1896 wrote 04/07/2016 at 01:18:

I like this project


Ricardo Sandoval wrote 04/06/2016 at 02:03:

I like it a lot; I hope it goes well for you.

DiegoGarzonCasas wrote 04/03/2016 at 21:08:

I like your project; I would like to learn more about it through a private message.

Edgar Andres Chavez V. wrote 03/30/2016 at 03:34:

It is an excellent project; hopefully it will soon be available so that people can appreciate it.
