Big Mouth Billy Bass Offline AI

Turned a thrift shop Big Mouth Billy Bass into a self-contained AI! Now he chats with folksy wisdom—check it out!

I turned a thrift store Big Mouth Billy Bass into a fully functional AI unit! Using a Raspberry Pi, motor controllers, speech-to-text (VOSK), text-to-speech (Piper), and a lightweight language model, I brought Billy to life. He listens, chats with folksy wisdom, and moves his mouth, head, and tail in sync with his words. From thrift shop find to interactive wall companion, this was a super fun project to create. Now, Billy hangs on my wall, ready to chat daily!

The Big Mouth Billy Bass has been a part of American culture for decades. For a complete history of this novelty item, check out the Wikipedia page: https://en.wikipedia.org/wiki/Big_Mouth_Billy_Bass

After finding Billy in a thrift shop, I was able to take him home for $5. I'd seen other projects modifying Billy Bass to sing arbitrary songs, or to act as the front end for an Alexa or Google Home. I had other plans in mind.

My plan was to turn Billy Bass into a completely self-contained AI unit. 

Overall, I used a Raspberry Pi 5 (8 GB) SBC running Raspberry Pi OS. Motor controller boards drive the several DC motors for the mouth, head, and tail. Two separate AC-to-5 V DC power supplies power the electronics. A small powered speaker provides audio output, and an electret microphone picks up voice input.

VOSK (https://github.com/alphacep/vosk-api) was used for the offline speech-to-text. The medium-sized model was chosen for performance.
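For anyone curious, here is a minimal sketch of the listening side, assuming a 16 kHz USB microphone and the VOSK model unpacked to ./vosk-model (the path, block size, and function name are placeholders, not the exact script):

```python
import json
import queue

import sounddevice as sd
from vosk import Model, KaldiRecognizer

SAMPLE_RATE = 16000        # VOSK English models expect 16 kHz mono audio
MODEL_PATH = "vosk-model"  # assumed location of the unpacked medium model

audio_q = queue.Queue()

def _callback(indata, frames, time_info, status):
    # Push raw microphone bytes into a queue for the recognizer loop.
    audio_q.put(bytes(indata))

def listen_once():
    """Block until VOSK finalizes one utterance and return it as text."""
    model = Model(MODEL_PATH)
    rec = KaldiRecognizer(model, SAMPLE_RATE)
    with sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=8000,
                           dtype="int16", channels=1, callback=_callback):
        while True:
            data = audio_q.get()
            if rec.AcceptWaveform(data):
                return json.loads(rec.Result()).get("text", "")

if __name__ == "__main__":
    print("You said:", listen_once())
```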

Piper (https://github.com/rhasspy/piper) was used for the text-to-speech.
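Piper is easiest to call as a command-line tool that reads text on stdin; a rough sketch (the voice file name is just an example, not necessarily the one Billy uses):

```python
import subprocess

def speak(text, voice="en_US-lessac-medium.onnx", wav_path="/tmp/billy.wav"):
    """Render text to a WAV with Piper, then play it through the speaker."""
    # Piper reads the text to synthesize from stdin and writes a WAV file.
    subprocess.run(["piper", "--model", voice, "--output_file", wav_path],
                   input=text.encode("utf-8"), check=True)
    # Same playback path as the main script: the aplay utility.
    subprocess.run(["aplay", wav_path], check=True)

if __name__ == "__main__":
    speak("Well now, patience catches more fish than hurry ever did.")
```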

A smaller LLM was chosen in order to generate quick responses. I set it up with a system prompt letting Billy know that he is a fish mounted on the wall and that he should answer all questions with some folksy wisdom!
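I won't claim this is the exact setup, but as an illustration of the persona prompt, here is how it could look against a local Ollama server (the model name, URL, and prompt wording are placeholders):

```python
import json
import urllib.request

SYSTEM_PROMPT = ("You are Billy, a Big Mouth Billy Bass mounted on the wall. "
                 "Answer every question briefly, with folksy fishing wisdom.")

def ask_billy(question, model="llama3.2:1b",
              url="http://localhost:11434/api/chat"):
    """Send one question to a local LLM and yield streamed reply chunks."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        "stream": True,
    }
    req = urllib.request.Request(url, data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            if not line.strip():
                continue
            yield json.loads(line).get("message", {}).get("content", "")

if __name__ == "__main__":
    print("".join(ask_billy("Any advice for a Monday morning?")))
```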

From there, I used an Arduino-compatible board to sample the audio output. Based on the amplitude levels, it generates the motor control signals that drive the tail flapping, head turning, and mouth movements.
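The actual motion logic lives in the Arduino board's firmware, but the idea is simple envelope thresholding. Here is the same idea sketched in Python against a Piper WAV file, with made-up threshold values, just to show the amplitude-to-movement mapping:

```python
import wave
import numpy as np

CHUNK_MS = 30            # analysis window per motor decision
MOUTH_THRESHOLD = 1500   # assumed level that opens the mouth
TAIL_THRESHOLD = 6000    # assumed louder level that also flaps the tail

def movement_plan(wav_path):
    """Turn a mono speech WAV into (mouth_open, tail_flap) decisions per chunk."""
    with wave.open(wav_path, "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    chunk = int(rate * CHUNK_MS / 1000)
    plan = []
    for start in range(0, len(samples), chunk):
        window = samples[start:start + chunk].astype(np.float64)
        rms = np.sqrt(np.mean(window ** 2)) if len(window) else 0.0
        plan.append((rms > MOUTH_THRESHOLD, rms > TAIL_THRESHOLD))
    return plan

if __name__ == "__main__":
    for mouth, tail in movement_plan("/tmp/billy.wav"):
        print("mouth open" if mouth else "mouth closed",
              "+ tail flap" if tail else "")
```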

I put this all together with a small Python script that listens for audio input, converts it to text, sends it to the LLM, and streams the responses to Piper for text-to-speech conversion. From there, the audio is played using the aplay utility.
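The main loop ends up being just a few lines of glue. This is the rough shape of it, reusing the placeholder helpers from the sketches above rather than the actual script:

```python
def main():
    # Billy's conversation loop: listen, think, speak.
    while True:
        question = listen_once()              # VOSK speech-to-text sketch above
        if not question:
            continue
        reply = "".join(ask_billy(question))  # local LLM with the folksy prompt
        speak(reply)                          # Piper TTS, played back via aplay
        # While the audio plays, the Arduino-compatible board watches the
        # speaker signal and moves the mouth, head, and tail to match.

if __name__ == "__main__":
    main()
```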

Overall, this was a super fun project to work on! Good old Billy is now hanging on my wall, chatting away with me daily!

20241210_204853.mp4

MPEG-4 Video - 19.31 MB - 12/11/2024 at 02:23


Discussions

Laurens Corijn wrote 12/19/2024 at 10:05 point

Just a note, but the LLM you mention "uDelphip" does not seem to exist anywhere on the web but here on this page. Probably a typo?


Russell Courtenay wrote 12/18/2024 at 17:20 point

I gotta see video of this!


Steve Hernandez wrote 12/18/2024 at 17:21 point

Look in the files section. I couldn't figure out how to get it posted in the gallery
