Print, Paint, and Program a Guardian to Track Humans and Dogs Using a Pi, Camera, and Servo

This project creates a functional guardian with a servo, a camera, some LEDs, and the Viam ML Model service and Vision Service.
Project videos:
- guardian-detection.mp4: Guardian detects me (MPEG-4 Video, 8.69 MB, 05/18/2023)
- ernieandtheguardian.mp4: Guardian detects and follows my dog around (MPEG-4 Video, 13.98 MB, 05/18/2023)
- timelapse.mp4: Timelapse of the build (MPEG-4 Video, 28.19 MB, 05/18/2023)
- guardian-finished.mp4: The finished guardian project (MPEG-4 Video, 6.95 MB, 05/18/2023)
Assemble for testing
To assemble the guardian, start with the head: use four M2 screws to attach the camera, with its ribbon cable connected, to the front half of the head. Optionally, if the green camera board is visible from the outside, use a marker to color it. Then put both parts of the head together.
Your servo probably came with mounting screws and a plastic horn for the gear. Use the screws to attach the horn to the base of the head.
Next, connect the servo to the Raspberry Pi: connect the PWM wire to pin 12, the power wire to pin 2, and the ground wire to pin 8. Then attach the head to the servo.
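These pin numbers refer to physical pins on the Pi's 40-pin header, not GPIO numbers: physical pin 12 is GPIO 18, one of the Pi's hardware-PWM-capable pins, and physical pin 2 supplies 5V. For the ground wire, any of the header's ground pins (for example, physical pin 6 or 9) works.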
Next, get the three 10mm RGB LEDs ready. Attach the common cathode of each LED to a ground pin on your Raspberry Pi, and attach the red and blue legs to GPIO pins. The code later in this tutorial assumes the red legs are wired to physical pins 22, 24, and 26, and the blue legs to pins 11, 13, and 15.
Before continuing with assembly, you should test that your components work as expected. To do so, you need to install viam-server and configure your components.
Go to the Viam app and create a new robot called guardian.
Go to the Setup tab of your new robot’s page and follow the steps to install viam-server on your Raspberry Pi.
Navigate to the Config tab of your robot’s page in the Viam app. Click on the Components subtab and use the Create component menu to add a board component named local for the Raspberry Pi, a camera component named cam for the ribbon camera, and a servo component named servo. Give the servo the following attributes so that it uses pin 12 on the board local:

{
  "pin": "12",
  "board": "local"
}
Click Save config in the bottom left corner of the screen.
Navigate to your robot’s Control tab to test your components.
Click on the servo panel and increase or decrease the servo angle to test that the servo moves.
Next, click on the board panel. The board panel allows you to get and set pin states. Set the state of each LED pin to high to test that the LEDs light up.
Next, click on the camera panel and toggle the camera on to test that you get video from your camera.
Now that you have tested your components, disconnect them, then paint and decorate your guardian before putting the rest of it together. Remove the servo horn and place one LED in the back of the guardian’s head, leaving the wires hanging out behind the ribbon camera.
Then place the servo inside the guardian’s body and attach the horn on the head to the servo’s gear. Carefully place the remaining two LEDs inside the body, facing in opposite directions. Thread all the cables through the hole in the lid of the guardian’s base, and close the lid.
Place your guardian on a suitable base with a hole, such as a box with a hole cut into the top, and reconnect all the wires to the Raspberry Pi.
At this point also connect the speaker to your Raspberry Pi.
Then test the components on the robot’s Control tab again to ensure everything still works.
For the guardian to be able to detect living beings, you can use this Machine Learning model. The model can detect a variety of objects, which are listed in the associated labels.txt file.
You can also train your own custom model based on images from your robot, but the provided model is a good starting point.
To use the provided Machine Learning model, copy the effdet0.tflite file and the labels.txt to your Raspberry Pi:
scp effdet0.tflite pi@guardian.local:/home/pi/effdet0.tflite
scp labels.txt pi@guardian.local:/home/pi/labels.txt
Next, navigate to the Config tab of your robot’s page in the Viam app. Click on the Services subtab and use the Create service menu to add an ML Model service that runs the effdet0.tflite model, and a vision service named detector that uses the ML Model service to find detections. Then add a transform camera that overlays those detections on the camera stream, giving it the following attributes:

{
  "source": "cam",
  "pipeline": [
    {
      "type": "detections",
      "attributes": {
        "detector_name": "detector",
        "confidence_threshold": 0.6
      }
    }
  ]
}
Navigate to your robot’s Control tab to test the transform camera. Click on the transform camera panel and toggle the camera on, then point your camera at a person or pet to test if the vision service detects them. You should see bounding boxes with labels around different objects.
With the guardian completely configured and the configuration tested, it’s time to make the robot guardian behave like a “real” guardian by programming the person and pet detection, lights, music, and movement.
The full code is available at the end of the tutorial.
We are going to use virtualenv to set up a virtual environment for this project, in order to isolate its dependencies from those of other projects. Run the following commands in your command line to install virtualenv, set up an environment called env, and activate it:

python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate
Now, install the Python Viam SDK and the VLC module. Note that python-vlc is a binding to the VLC media player, so VLC itself must also be installed on your system:
pip3 install viam-sdk python-vlc
Next, go to the Code Sample tab on your robot page and select Python.
Copy the boilerplate code. This code snippet imports all the necessary packages and sets up a connection with the Viam app in the cloud.
Next, create a file named main.py and paste the boilerplate code from the Code Sample tab of the Viam app into your file. Then, save your file.
Run the code to verify that the Viam SDK is properly installed and that the viam-server instance on your robot is live.
You can run your code by typing the following into your terminal:
python3 main.py
The program prints a list of robot resources.
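If you're curious what to expect, the boilerplate's main() is roughly a sketch like the following (the exact snippet the Code Sample tab generates for your robot may differ slightly):

async def main():
    # Connect to the robot and list every configured resource.
    robot = await connect()
    print('Resources:')
    print(robot.resource_names)
    # Close the connection when done.
    await robot.close()

if __name__ == '__main__':
    asyncio.run(main())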
On top of the packages that the code sample snippet imports, add the random and the vlc package to the imports. The top of your code should now look like this:
import asyncio
import random
import vlc

from viam.robot.client import RobotClient
from viam.rpc.dial import Credentials, DialOptions
from viam.components.board import Board
from viam.components.camera import Camera
from viam.components.servo import Servo
from viam.services.vision import VisionClient


async def connect():
    creds = Credentials(
        type='robot-location-secret',
        payload='SECRET_FROM_VIAM_APP')
    opts = RobotClient.Options(
        refresh_interval=0,
        dial_options=DialOptions(credentials=creds)
    )
    return await RobotClient.at_address('ADDRESS_FROM_VIAM_APP', opts)
You will update the main() method later.
Next, you’ll write the code to manage the LEDs. Underneath the connect() function, add the following class which allows you to create groups of LEDs that you can then turn on and off with one method call:
class LedGroup:
    def __init__(self, group):
        print("group")
        self.group = group

    async def led_state(self, on):
        for pin in self.group:
            await pin.set(on)
If you want to test this code, change your main() method to:
async def main():
    robot = await connect()
    local = Board.from_robot(robot, 'local')
    red_leds = LedGroup([
        await local.gpio_pin_by_name('22'),
        await local.gpio_pin_by_name('24'),
        await local.gpio_pin_by_name('26')
    ])
    blue_leds = LedGroup([
        await local.gpio_pin_by_name('11'),
        await local.gpio_pin_by_name('13'),
        await local.gpio_pin_by_name('15')
    ])
    await blue_leds.led_state(True)
You can test the code by running:
python3 main.py
Your guardian lights up blue.
Now, you’ll add the code for the guardian to detect people and pets. If you are building this for people, cats, or dogs, you’ll want to use the labels Person, Dog, Cat, and, if you have a particularly teddy-bear-like dog, Teddy bear. You can also choose different labels from those available in labels.txt.
Above the connect() method, add the following variable which defines the labels that you want to look for in detections:
LIVING_OBJECTS = ["Person", "Dog", "Cat", "Teddy bear"]
Then, above the main() method add the following function which checks detections for living creatures as they are defined in the LIVING_OBJECTS variable.
async def check_for_living_creatures(detections):
    for d in detections:
        if d.confidence > 0.6 and d.class_name in LIVING_OBJECTS:
            print("detected")
            return d
Underneath the check_for_living_creatures() function, add the following function. It gets images from the guardian’s camera and checks them for living creatures; if none are detected, it moves the servo to a random position. If a creature is detected, the red LEDs light up and music starts playing.
async def idle_and_check_for_living_creatures(cam, detector, servo, blue_leds, red_leds, music_player):
    living_creature = None
    while True:
        # Check a random number of frames before moving again;
        # keep checking longer if music is already playing.
        random_number_checks = random.randint(0, 5)
        if music_player.is_playing():
            random_number_checks = 15
        for i in range(random_number_checks):
            img = await cam.get_image()
            detections = await detector.get_detections(img)
            living_creature = await check_for_living_creatures(detections)
            if living_creature:
                await red_leds.led_state(True)
                await blue_leds.led_state(False)
                if not music_player.is_playing():
                    music_player.play()
                return living_creature
        # No creature found: switch back to blue, stop the music,
        # and move the head to a random position.
        print("START IDLE")
        await blue_leds.led_state(True)
        await red_leds.led_state(False)
        if music_player.is_playing():
            music_player.stop()
        await servo.move(random.randint(0, 180))
There is one last function to add before you can write the full main() function: a function that focuses on a given creature. It calculates the center of the detected object’s bounding box and checks whether that center is close to the middle of the image. If it is not, the function moves the servo left or right to attempt to center the object.
Add the following function above your main() function:
async def focus_on_creature(creature, width, servo):
    # Compare the bounding box midpoint to the image midpoint.
    creature_midpoint = (creature.x_max + creature.x_min) / 2
    image_midpoint = width / 2
    # Treat the middle 40% of the image as "centered".
    center_min = image_midpoint - 0.2 * image_midpoint
    center_max = image_midpoint + 0.2 * image_midpoint
    # Normalized offset in [-1, 1], scaled to at most 20 degrees.
    movement = (image_midpoint - creature_midpoint) / image_midpoint
    angular_scale = 20
    print("MOVE BY: ")
    print(int(angular_scale * movement))
    servo_angle = await servo.get_position()
    if creature_midpoint < center_min or creature_midpoint > center_max:
        servo_angle = servo_angle + int(angular_scale * movement)
        # Clamp to the servo's valid range.
        if servo_angle > 180:
            servo_angle = 180
        if servo_angle < 0:
            servo_angle = 0
        if servo_angle >= 0 and servo_angle <= 180:
            await servo.move(servo_angle)
    servo_return_value = await servo.get_position()
    print(f"servo get_position return value: {servo_return_value}")
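For intuition, take a hypothetical 640-pixel-wide image: image_midpoint is 320 and the "centered" band runs from 256 to 384. If a creature’s bounding box midpoint sits at 480, then movement = (320 - 480) / 320 = -0.5, and the servo turns by int(20 * -0.5) = -10 degrees, swinging the head toward the creature (assuming the servo’s rotation direction matches the camera’s view).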
Now it’s time to write the main logic for the guardian robot. Before you do, copy a suitable music file into the directory where your code runs and name it guardian.mp3. Then replace your main() function with the following:
async def main():
    robot = await connect()
    local = Board.from_robot(robot, 'local')
    cam = Camera.from_robot(robot, "cam")
    img = await cam.get_image()
    servo = Servo.from_robot(robot, "servo")
    red_leds = LedGroup([
        await local.gpio_pin_by_name('22'),
        await local.gpio_pin_by_name('24'),
        await local.gpio_pin_by_name('26')
    ])
    blue_leds = LedGroup([
        await local.gpio_pin_by_name('11'),
        await local.gpio_pin_by_name('13'),
        await local.gpio_pin_by_name('15')
    ])
    await blue_leds.led_state(True)
    music_player = vlc.MediaPlayer("guardian.mp3")

    # grab Viam's vision service for the detector
    detector = VisionClient.from_robot(robot, "detector")

    while True:
        # move head periodically left and right until movement is spotted.
        living_creature = await idle_and_check_for_living_creatures(
            cam, detector, servo, blue_leds, red_leds, music_player)
        await focus_on_creature(living_creature, img.width, servo)

    # Don't forget to close the robot when you're done!
    await robot.close()


if __name__ == '__main__':
    asyncio.run(main())
Now, run the code:
python3 main.py
If everything works, your guardian will now idle, and when it detects a human, dog, or cat, it will turn red, start the music, and focus on the detected being.