After covering the hardware in the last post, let's dive into what makes it tick: the software architecture.
When I refactored this project from its old Arduino-based monolith, my main goal was to create a clean, flexible, and scalable system. I built the entire architecture around FreeRTOS and a layered, event-driven model.
Here’s a breakdown of the key layers:
1. The Foundation: BSP & HAL
This is the lowest layer, separating the "application logic" from the physical hardware.
BSP (Board Support Package): This is the core of the hardware abstraction. It’s defined by a single struct (bsp_t) containing function pointers for all hardware operations. A global pointer (g_bsp) is used by all other modules to access hardware.
HAL (Hardware Abstraction Layer): These are the drivers using the BSP. They manage the specific peripherals (hal_display, hal_audio, hal_encoder).
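To give a feel for how this works in practice, here's a stripped-down sketch of the bsp_t idea. The member names and signatures below are illustrative only, not the exact ones in the firmware:

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

// Simplified sketch of the BSP "vtable"; real members and signatures differ
typedef struct {
    // Display
    void    (*display_write)(const uint8_t *framebuffer, size_t len);
    // Audio (I2S)
    size_t  (*audio_read)(int16_t *samples, size_t max_samples);
    size_t  (*audio_write)(const int16_t *samples, size_t n_samples);
    void    (*speaker_enable)(bool on);
    // Input
    int32_t (*encoder_read_delta)(void);
    bool    (*key_is_pressed)(uint8_t key_code);
} bsp_t;

// One global pointer, filled in by the board-specific init code.
// HAL modules talk to the hardware only through it.
extern const bsp_t *g_bsp;

The payoff is that hal_display, hal_audio and hal_encoder never include a vendor driver header directly; porting to a different board means providing a different bsp_t instance.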
2. The Heart: The Main Event Queue
This is the true center of the entire system. Instead of tasks calling each other directly, most modules send messages to a single, central queue: main_queue_event.
An app_event_t structure is as follows:
typedef struct {
    app_event_source_t source;
    uint8_t CmdCode;
    app_event_payload_t payload;
} app_event_t;
An event can come from various sources:
typedef enum {
    EVENT_SOURCE_INPUT,
    EVENT_SOURCE_WIFI,
    EVENT_SOURCE_CHANNEL,
    EVENT_SOURCE_GUI,
    EVENT_SOURCE_SNTP
    // ... more sources are easy to add
} app_event_source_t;
For CmdCode, each source can define its own enum of command codes:
typedef enum { INPUT_ENCODER_TURN, INPUT_KEY_PRESS } input_event_type_t;
typedef enum { WIFI_EVENT_CONNECTED, WIFI_EVENT_DISCONNECTED, WIFI_EVENT_STA_READY } wifi_event_type_t;
typedef enum { SNTP_EVENT_TIME_SYNCED } sntp_event_type_t;
typedef enum { CHANNEL_EVENT_PEER_SPEAKING, CHANNEL_EVENT_PEER_SPEAKING_END, CHANNEL_EVENT_PEER_LIST_CHANGED } channel_event_type_t;
typedef enum { GUI_EVENT_CONNECT_TO_SERVER, GUI_EVENT_PEER_SELECTED } gui_event_type_t;
The payload union is also easy to extend:
typedef union {
    struct {
        uint8_t key_code;
        int16_t press_type;
        uint32_t timestamp_ms;
    } input;
    struct {
        bool is_connected;
    } wifi;
    struct {
        uint64_t peer_speaking;
    } channel;
    struct {
        uint16_t selected_peer;
        peer_info_t peer_info;
    } gui;
} app_event_payload_t;
This means we have one primary handler in main() that receives all these events and decides what to do next.
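Roughly, that handler is just a blocking receive on the queue plus a switch on the event source. A simplified sketch (the header and handler names here are placeholders, not the exact functions in the project):

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "app_events.h"   // app_event_t & friends from above (assumed header name)

extern QueueHandle_t main_queue_event;

// Per-source handlers; hypothetical names
void handle_input_event(uint8_t cmd, const app_event_payload_t *p);
void handle_wifi_event(uint8_t cmd, const app_event_payload_t *p);
void handle_channel_event(uint8_t cmd, const app_event_payload_t *p);
void handle_gui_event(uint8_t cmd, const app_event_payload_t *p);

static void event_loop(void)
{
    app_event_t evt;
    for (;;) {
        // Block until any module posts an event
        if (xQueueReceive(main_queue_event, &evt, portMAX_DELAY) != pdTRUE) {
            continue;
        }
        switch (evt.source) {
        case EVENT_SOURCE_INPUT:   handle_input_event(evt.CmdCode, &evt.payload);   break;
        case EVENT_SOURCE_WIFI:    handle_wifi_event(evt.CmdCode, &evt.payload);    break;
        case EVENT_SOURCE_CHANNEL: handle_channel_event(evt.CmdCode, &evt.payload); break;
        case EVENT_SOURCE_GUI:     handle_gui_event(evt.CmdCode, &evt.payload);     break;
        case EVENT_SOURCE_SNTP:    /* e.g. refresh the clock on screen */           break;
        default: break;
        }
    }
}

Adding a new feature usually means adding a source, a payload member, and one more case here; no task has to know who consumes its events.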
3. Synchronization: The System Event Group
If the main_queue is for sending messages, the system_event_group is for signaling states. This is crucial for synchronizing tasks. For example, the lan_task waits for the WIFI_CONNECTED_BIT (set by hal_net) before it starts running.
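In code that pattern is just xEventGroupWaitBits(); the bit values below are placeholders:

#include "freertos/FreeRTOS.h"
#include "freertos/event_groups.h"

extern EventGroupHandle_t system_event_group;

#define WIFI_CONNECTED_BIT (1 << 0)   // set by hal_net once the station has an IP
#define PTT_PRESSED_BIT    (1 << 1)   // set while the push-to-talk key is held

static void lan_task(void *arg)
{
    (void)arg;

    // Park here until hal_net signals that Wi-Fi is up
    xEventGroupWaitBits(system_event_group,
                        WIFI_CONNECTED_BIT,
                        pdFALSE,        // leave the bit set; other tasks wait on it too
                        pdTRUE,         // wait for all requested bits
                        portMAX_DELAY);

    // ... UDP discovery and the rest of the networking start only from here ...
}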
4. The "Worker" Modules
These are the other key modules that run as separate tasks and interact with the event system:
- Wi-Fi & Provisioning (hal_net): Manages the Wi-Fi connection in APSTA mode, including the web portal for setup.
- LAN Task & Peer Manager: Manages our UDP-based discovery protocol and keeps a live list (peer_info_t) of all other active 'Stray' devices.
- NVS Manager: Saves our configuration (device name, Wi-Fi credentials) to Non-Volatile Storage.
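For reference, one entry of the live peer list kept by the Peer Manager might look something like this; the fields are my best guess from how the list is used (display name, serial number, address, aging), so treat them as assumptions:

#include <stdint.h>
#include <stdbool.h>
#include "lwip/ip_addr.h"   // ip4_addr_t from ESP-IDF's lwIP

// Hypothetical shape of one peer manager entry
typedef struct {
    uint64_t   sn;             // unique serial number from the discovery response
    char       name[16];       // display name announced by the peer
    ip4_addr_t addr;           // where to send audio for this peer
    int64_t    last_seen_us;   // timestamp of the last packet, for aging peers out
    bool       is_speaking;    // drives the "peer speaking" indication in the GUI
} peer_info_t;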
5. The Audio Pipeline
This is the most complex part of the project. It's a two-way process (Record/Transmit and Receive/Play) managed by FreeRTOS tasks, queues, and a shared memory pool.
- Transmit Pipeline (Record & Send)
- Trigger: The audio_task waits for the PTT_PRESSED_BIT. When the bit is set, the task mutes the speaker.
- Record Loop: While PTT is held, the audio_task:
- Grabs a free buffer from the shared_buffer_pool.
- Reads the I2S (RX) data into a stereo buffer.
- Converts the stereo sample to mono.
- Sends this mono audio_chunk_t to the mic_to_net_queue.
- Network Send: A separate lan_tx_task waits for chunks on that queue. It wraps the audio in a UDP packet, adding a header:

typedef struct {
    uint32_t magic_number;      // "secret knock" (e.g. 0xDEADBEEF)
    udp_packet_type_t type;     // Packet type from enum
    uint64_t sender_sn;         // Unique sender serial number
    char sender_name[16];       // Sender name to display in UI
    uint32_t sequence_number;   // Packet sequence number
} udp_packet_header_t;

The packet types are:

typedef enum {
    UDP_PACKET_DISCOVERY_QUERY,    // Query "who is here?"
    UDP_PACKET_DISCOVERY_RESPONSE, // Response "I'm here!"
    UDP_PACKET_AUDIO,              // Packet with audio data
    UDP_PACKET_AUDIO_END_TX,       // Final packet signaling the end of transmission
} udp_packet_type_t;

- Release: The buffer is returned to the shared_buffer_pool, and a UDP_PACKET_AUDIO_END_TX is sent when PTT is released.
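Put together, the sending side looks roughly like the sketch below. It is only a sketch: the chunk layout, the global externs, and the chunk size are assumptions, and the real lan_tx_task differs in the details:

#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "lwip/sockets.h"
#include "udp_protocol.h"   // udp_packet_header_t / udp_packet_type_t from above (assumed header name)

#define SAMPLES_PER_CHUNK 960   // illustrative chunk size

// Hypothetical shape of a chunk travelling through the queues
typedef struct {
    int16_t *samples;    // points into a buffer taken from shared_buffer_pool
    size_t   n_samples;  // mono samples in this chunk
} audio_chunk_t;

extern QueueHandle_t mic_to_net_queue;
extern QueueHandle_t shared_buffer_pool;    // pool of free buffers (see the end of this post)
extern int g_udp_sock;                      // already-bound UDP socket (hypothetical)
extern struct sockaddr_in g_dest_addr;      // broadcast/peer address (hypothetical)
extern uint64_t g_device_sn;                // this device's serial number (hypothetical)
extern char g_device_name[16];              // this device's display name (hypothetical)

static void lan_tx_task(void *arg)
{
    (void)arg;
    static uint8_t packet[sizeof(udp_packet_header_t) + SAMPLES_PER_CHUNK * sizeof(int16_t)];
    uint32_t seq = 0;

    for (;;) {
        audio_chunk_t chunk;
        if (xQueueReceive(mic_to_net_queue, &chunk, portMAX_DELAY) != pdTRUE) {
            continue;
        }

        // Build the header, then append the mono samples
        udp_packet_header_t hdr = {
            .magic_number    = 0xDEADBEEF,
            .type            = UDP_PACKET_AUDIO,
            .sender_sn       = g_device_sn,
            .sequence_number = seq++,
        };
        strncpy(hdr.sender_name, g_device_name, sizeof(hdr.sender_name) - 1);
        memcpy(packet, &hdr, sizeof(hdr));
        memcpy(packet + sizeof(hdr), chunk.samples, chunk.n_samples * sizeof(int16_t));

        sendto(g_udp_sock, packet, sizeof(hdr) + chunk.n_samples * sizeof(int16_t), 0,
               (struct sockaddr *)&g_dest_addr, sizeof(g_dest_addr));

        // Give the buffer back to the pool so audio_task can reuse it
        xQueueSend(shared_buffer_pool, &chunk.samples, 0);
    }
}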
- Receive Pipeline (Receive & Play)
- Network Receive: The lan_rx_task listens for UDP packets.
- On Audio Packet: It validates the magic number. If it's a UDP_PACKET_AUDIO packet:
- It grabs a free buffer from the shared_buffer_pool.
- Copies the packet data into the buffer.
- Sends the audio_chunk_t to the net_to_speaker_queue.
- It also generates a CHANNEL_EVENT_PEER_SPEAKING event for the UI to update its focus.
- Playback Loop: The audio_task (in receive mode) waits for chunks on the net_to_speaker_queue.
- Play: When a chunk arrives, it:
- enables the speaker (speaker_handle_playback),
- converts the mono sample to stereo,
- writes the audio data to the I2S (TX) channel,
- returns the buffer to the shared_buffer_pool.
- Timeout: If no new audio packets arrive within a timeout (SPEAKER_OFF_TIMEOUT_US), a handler automatically mutes the speaker.
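The playback half of audio_task then boils down to something like the sketch below. I've folded the speaker-off timeout into the queue wait for brevity (the real code may use a dedicated handler/timer), and the helper names and signatures are assumptions:

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "app_audio.h"   // audio_chunk_t as sketched above (assumed header name)

#define SPEAKER_OFF_TIMEOUT_MS 300   // stand-in for SPEAKER_OFF_TIMEOUT_US; value is illustrative

extern QueueHandle_t net_to_speaker_queue;
extern QueueHandle_t shared_buffer_pool;

void speaker_handle_playback(void);                               // enables the speaker path (signature assumed)
void speaker_mute(void);                                          // hypothetical mute helper
void play_mono_as_stereo(const int16_t *mono, size_t n_samples);  // expand to stereo + I2S TX write (hypothetical wrapper)

static void playback_loop(void)
{
    for (;;) {
        audio_chunk_t chunk;
        // Wait for the next chunk, but no longer than the speaker-off timeout
        if (xQueueReceive(net_to_speaker_queue, &chunk,
                          pdMS_TO_TICKS(SPEAKER_OFF_TIMEOUT_MS)) != pdTRUE) {
            speaker_mute();   // nothing arrived in time: mute and keep waiting
            continue;
        }

        speaker_handle_playback();                            // make sure the speaker is on
        play_mono_as_stereo(chunk.samples, chunk.n_samples);  // mono -> stereo -> I2S (TX)

        // Hand the buffer back so lan_rx_task can reuse it
        xQueueSend(shared_buffer_pool, &chunk.samples, 0);
    }
}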
The key feature here is the shared_buffer_pool. Using this and two independent queues (mic_to_net_queue and net_to_speaker_queue) allows the I2S, network, and audio tasks to pass data around efficiently without extra memory copies.
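A simple way to build such a pool, and roughly what happens here (the exact implementation and sizes may differ), is a FreeRTOS queue that holds pointers to statically allocated buffers:

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

#define POOL_BUFFERS      8     // illustrative
#define SAMPLES_PER_CHUNK 960   // illustrative

static int16_t s_buffers[POOL_BUFFERS][SAMPLES_PER_CHUNK];
QueueHandle_t shared_buffer_pool;

void buffer_pool_init(void)
{
    // The "pool" is just a queue of free pointers
    shared_buffer_pool = xQueueCreate(POOL_BUFFERS, sizeof(int16_t *));
    for (int i = 0; i < POOL_BUFFERS; i++) {
        int16_t *buf = s_buffers[i];
        xQueueSend(shared_buffer_pool, &buf, 0);
    }
}

// Producer side: take a free buffer (blocks if all of them are in flight)
int16_t *pool_acquire(void)
{
    int16_t *buf = NULL;
    xQueueReceive(shared_buffer_pool, &buf, portMAX_DELAY);
    return buf;
}

// Consumer side: return the buffer once it has been sent or played
void pool_release(int16_t *buf)
{
    xQueueSend(shared_buffer_pool, &buf, 0);
}

Only the pointer travels through mic_to_net_queue and net_to_speaker_queue; the samples themselves are written once by the producer and read once by the consumer.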
If you have any further questions, feel free to reach out and ask.