
Update: Llama3, wake word handling and a new face!

A project log for Automatic Speech Recognition AI Assistant

Turning a Raspberry Pi 4B into a satellite for a self-hosted language model, all with a sprinkle of ASR and NordVPN Meshnet

Adam - Meshnet • 05/31/2024 at 07:29

With this update, I have added wake word handling, given the assistant a nicer face, created a GitHub repository, and tidied up the code a little.

Finally, the assistant only starts processing queries once it hears the magic "Hey robot" words, instead of constantly trying to process everything it hears.

Additionally, I have moved from LocalAI to Ollama as the AI framework and now run the chat completions on the GPU. Switching to Meta's llama3 LLM on top of all that has greatly reduced inference times, so the assistant responds very quickly.
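Talking to Ollama boils down to posting JSON to its /api/chat endpoint. Here is a rough sketch of what that request looks like, using only the standard library; the helper names are mine, and `localhost:11434` is just Ollama's default address:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default port

def build_chat_request(query: str, model: str = "llama3") -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": query}],
        "stream": False,  # one complete reply instead of streamed chunks
    }

def ask(query: str) -> str:
    """Send the query to Ollama and return the assistant's reply text."""
    body = json.dumps(build_chat_request(query)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

With `"stream": False` the server returns a single JSON object, which keeps the satellite-side code simple; streaming would lower perceived latency but needs chunked parsing.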

If you want to give it a try, please see my GitHub repository: https://github.com/RoseywasTaken/ASR-AI

Also, the update has had its true real-life trial, because I'm currently 500 km away from my desktop PC running the llama3 model. Luckily, accessing the Ollama API is super easy thanks to NordVPN's Meshnet feature.
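On the Pi's side, the only change needed to reach the remote model is pointing the client at the desktop's Meshnet address instead of localhost. A tiny sketch, with a placeholder hostname (Meshnet assigns each device its own name):

```python
def ollama_base_url(host: str, port: int = 11434) -> str:
    """Build the base URL for an Ollama server on a given host."""
    return f"http://{host}:{port}"

# Local testing vs. the remote desktop reached over Meshnet
# ("desktop-pc.nord" is a placeholder for the real Meshnet hostname):
local_api = ollama_base_url("localhost")
remote_api = ollama_base_url("desktop-pc.nord")
```

Because Meshnet gives both devices stable addresses on a private network, no port forwarding or public exposure of the Ollama API is needed.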
