Step 1
What we're doing is exploring the data rate limitations of Glass by streaming high-quality video to its heads-up display (HUD) while sending orientation data back to the BeagleBone. Both data paths exercise Glass's communication capabilities: video flows from the camera through the BeagleBone to Glass, and orientation readings flow from Glass through the BeagleBone, where they're translated into servo commands. This setup tests data I/O and processing capabilities on both ends. By adjusting the video compression on the BeagleBone before sending, we change how much decoding work Glass has to do, and tweaking the compression ratios also probes the BeagleBone's own limits. This is one of the reasons we need a Logitech C920 as the camera: the C920 has an on-board H.264 compression IC. Compressing the stream in the camera DRAMATICALLY decreases the workload on the BeagleBone, allowing not just smooth operation, but operation at all. Without it, the stream is almost unusable.
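To give a sense of how the BeagleBone side can work, here's a minimal sketch (not our actual code; the device node, resolution, and port are assumptions) of grabbing the C920's already-compressed H.264 stream and relaying it over TCP without re-encoding, using ffmpeg driven from Python:

#!/usr/bin/env python
# Relay the C920's hardware-encoded H.264 stream over TCP.
# Sketch only: assumes ffmpeg and the uvcvideo driver are present and
# that the camera appears as /dev/video0. Because the camera already
# outputs H.264, the BeagleBone just copies the bitstream instead of
# re-encoding it.
import subprocess

DEVICE = "/dev/video0"                 # assumed camera device node
OUTPUT = "tcp://0.0.0.0:5000?listen"   # the Glass-side client connects here

cmd = [
    "ffmpeg",
    "-f", "v4l2",
    "-input_format", "h264",    # request the camera's compressed stream
    "-video_size", "1280x720",
    "-framerate", "30",
    "-i", DEVICE,
    "-c:v", "copy",             # no re-encode: just relay the bytes
    "-f", "mpegts",
    OUTPUT,
]

subprocess.call(cmd)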
We started by screwing three servos together and mounting the camera on the end of the servo arm. Next, we designed and printed an adapter to secure the NERF gun, along with its fire servo, to the bottom of the camera. Finally, we hooked everything up to the BeagleBone, launched our app on Glass, and took control.
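The servo side is just PWM from the BeagleBone's header pins. Here's a minimal sketch (pin name, pulse range, and angle mapping are assumptions, not our exact values) of driving one servo with the Adafruit_BBIO library:

#!/usr/bin/env python
# Drive a hobby servo from a BeagleBone PWM pin.
# Sketch only: assumes the Adafruit_BBIO library, a servo on P9_14,
# and a standard 1-2 ms pulse at 50 Hz; real values depend on the servo.
import Adafruit_BBIO.PWM as PWM

SERVO_PIN = "P9_14"   # assumed pan-servo pin
FREQ_HZ = 50.0        # 20 ms frame, standard for hobby servos


def angle_to_duty(angle_deg):
    # Map 0-180 degrees to a 1-2 ms pulse, expressed as a duty-cycle %.
    pulse_ms = 1.0 + (angle_deg / 180.0)
    return pulse_ms / 20.0 * 100.0


def set_angle(angle_deg):
    PWM.set_duty_cycle(SERVO_PIN, angle_to_duty(angle_deg))


if __name__ == "__main__":
    PWM.start(SERVO_PIN, angle_to_duty(90.0), FREQ_HZ)  # start centered
    set_angle(45.0)                                     # example move
    PWM.stop(SERVO_PIN)
    PWM.cleanup()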
All of our code is available here:
https://github.com/yacoman89/GlassNerfTurret
Now that we have a starting point, we've begun designing and printing a larger GLaDOS head to house the camera and servos (see pictures). All that's left is a digital low-pass filter to smooth the motion data before we send it to the servos, and we SHOULD be good to go. The new NERF design is next on our plate.
The low-pass filter is finished; we used MATLAB to build the function that smooths the Glass orientation data. The new head is printed, custom 3D-printed servo extensions are installed, and an extra servo has been added to mimic GLaDOS's motion more realistically. A new feature we've just finished is voice mimicry: Glass voice-to-text -> AT&T text-to-speech -> audio file -> Melodyne (driven by a Python script to automate the autotune step) -> Raspberry Pi speakers.
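Our filter is a MATLAB function, but for anyone curious what this kind of smoothing looks like, here's a small Python sketch of a single-pole (exponential) low-pass filter applied to raw pan angles before they become servo commands. The smoothing factor and sample values are placeholders, not our tuned numbers:

# Single-pole (exponential) low-pass filter for orientation data.
# Sketch only: alpha and the sample stream are placeholders.
# Smaller alpha = smoother output but more lag.


class LowPass(object):
    def __init__(self, alpha):
        self.alpha = alpha    # 0 < alpha <= 1
        self.value = None     # last filtered output

    def update(self, sample):
        if self.value is None:
            self.value = sample   # seed with the first sample
        else:
            # new = alpha * raw + (1 - alpha) * previous
            self.value = self.alpha * sample + (1.0 - self.alpha) * self.value
        return self.value


if __name__ == "__main__":
    pan_filter = LowPass(alpha=0.2)
    raw = [90.0, 93.0, 88.0, 95.0, 91.0, 89.0]   # jittery pan angles from Glass
    for angle in raw:
        print("raw %.1f -> filtered %.1f" % (angle, pan_filter.update(angle)))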
We'll be working on the project in our free time between classes and will post updates as they become available. Next up: polishing the project and adding internet control for anyone with Glass who's interested in trying it out. Voice control in GLaDOS's voice will be possible too!