Software Architecture Perspective
The software architecture of the system could be analyzed in depth, and we will do so in a future post. However, given that it is "just a servo," its complexity and size are not significant; most of the effort goes into perfecting it. When discussing software architecture, we should consider the structure of the firmware, the requirements from the controlling side, how tests fit into this framework, and what testing orchestration looks like.
Firmware
The firmware can be described in the following simple terms: servo firmware should be a sophisticated control loop governed by messages. This simplification forms the basis of our firmware.
For effective testing, we focused on abstraction and decomposition. Our goal was to ensure that details specific to embedded hardware do not excessively infiltrate the codebase, enabling extensive testing on our computers.
The firmware structure comprises three layers. The bottom layer includes platform-specific code, which is minimal. This layer contains the Hardware Abstraction Layer (HAL) from the manufacturer and our setup functions for that platform. Each firmware version includes precisely one platform module in its source code.
The second layer is board-specific code, ideally encapsulated in a single .cpp file with pin and peripheral configurations. This layer typically invokes the platform-specific functions configured for the specific board.
The remainder resides in an independent layer, ideally agnostic to any platform or board-specific code. Currently, we cannot run 100% of this code on non-embedded hardware, but the percentage that requires embedded execution is extremely low. This limitation partly results from my laxity and our use of nanopb exclusively on the embedded side.
The independent layer contains the majority of the code, but we will delve into this in a future blog post, as it is not immediately relevant.
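To give a rough idea of the decomposition, the independent layer sees only abstract interfaces, and the board layer supplies the implementations. The names below are made up for illustration and are not the actual servio interfaces:

// Illustrative sketch only; these are not the actual servio interfaces.

// Independent layer: code depends only on an abstract interface.
struct pwm_iface
{
        virtual void set_duty( float duty ) = 0;
        virtual ~pwm_iface()                = default;
};

// Independent layer: testable on the host by passing a mock implementation.
void apply_control_output( pwm_iface& pwm, float output )
{
        pwm.set_duty( output );
}

// Board layer: a single .cpp wires the interface to platform-specific calls.
struct board_pwm final : pwm_iface
{
        void set_duty( float duty ) override
        {
                // forwards to the manufacturer HAL configured for this board
        }
};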
Interfacing
We chose protobuf messages as the format for communicating with the servo. Our long-term goal is flexibility in the underlying layer used for transmitting these messages. Currently, we support framing protobuf messages in COBS and transmitting them via full-duplex UART.
This approach means that software interfacing with the servo must create a valid protobuf message, encapsulate it in COBS, send it over UART, and await a response.
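A minimal sketch of the send path might look like this; `cobs_encode` is a hypothetical helper and the message type comes from the generated protobuf code, so treat it as an outline rather than the actual servio implementation:

// Sketch only: cobs_encode() is a hypothetical helper, not the servio API.
#include <boost/asio.hpp>
#include <cstdint>
#include <string>
#include <vector>
// plus the generated servio protobuf header for servio::HostToServio

// Hypothetical helper: COBS-encodes the payload and appends the 0x00 delimiter.
std::vector< std::uint8_t > cobs_encode( const std::string& payload );

boost::asio::awaitable< void >
send_request( boost::asio::serial_port& uart, const servio::HostToServio& msg )
{
        std::string payload = msg.SerializeAsString();               // protobuf -> bytes
        std::vector< std::uint8_t > frame = cobs_encode( payload );  // COBS framing

        co_await boost::asio::async_write(
            uart, boost::asio::buffer( frame ), boost::asio::use_awaitable );

        // the reply travels back the same way: read until 0x00, COBS-decode, parse protobuf
}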
The servio repository includes the scmdio utility binary, facilitating communication with the servo through a CLI interface. For example, we can command the servo to switch to position control mode and move to position 0. This utility is primarily for configuration, as it implements a frontend for the configuration API in the protobufs.
The bash command would look like this:
$ scmdio mode position 0
C++ code using boost asio could look like this:
boost::asio::awaitable< void > set_mode_position( cobs_port& port, float angle )
{
        servio::Mode m;
        m.set_position( angle );

        servio::HostToServio hts;
        hts.mutable_set_mode()->CopyFrom( m );  // copy the mode into the request

        co_await exchange( port, hts );         // send it and await the servo's reply
}
Tests
We employ various tests: unit tests, simulation tests, firmware tests, control tests, and blackbox tests, each with a unique focus.
Unit tests evaluate small, independent code segments on host devices (our laptops) without hardware access. For instance, we test our algorithm for storing configurations in flash memory using a raw memory buffer. Currently, the number of unit tests is lower than desired, but our extensive testing through other methods mitigates the need for more unit tests. Only code from the independent layer is tested here.
    Start 1: cfg_utest_test
1/5 Test #1: cfg_utest_test ...................   Passed    0.02 sec
    Start 2: cfg_storage_utest_test
2/5 Test #2: cfg_storage_utest_test ...........   Passed    0.01 sec
    Start 3: control_utest_test
3/5 Test #3: control_utest_test ...............   Passed    0.01 sec
    Start 4: kalman_utest_test
4/5 Test #4: kalman_utest_test ................   Passed    0.00 sec
    Start 5: metrics_utest_test
5/5 Test #5: metrics_utest_test ...............   Passed    0.00 sec
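The shape of such a test is roughly the following; the store/load helpers here are trivial stand-ins rather than the actual configuration code:

#include <gtest/gtest.h>

#include <array>
#include <cstdint>
#include <cstring>

// Stand-ins for the routines under test; the real code serializes the full config.
void store_u32( std::array< std::uint8_t, 64 >& page, std::uint32_t value )
{
        std::memcpy( page.data(), &value, sizeof value );
}

std::uint32_t load_u32( const std::array< std::uint8_t, 64 >& page )
{
        std::uint32_t value;
        std::memcpy( &value, page.data(), sizeof value );
        return value;
}

TEST( cfg_storage, roundtrip_in_raw_buffer )
{
        std::array< std::uint8_t, 64 > page{};  // raw buffer standing in for a flash page

        store_u32( page, 42 );
        EXPECT_EQ( load_u32( page ), 42u );     // no hardware access needed
}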
Simulation tests assess control loops. They involve test scenarios that provide inputs like "start at velocity `0rad/s`, after 1 second move to velocity `0.7rad/s`, and remain steady," along with criteria for successful outcomes. By integrating our control loop with a simple motor simulator, we can test without hardware, focusing solely on code from the independent layer.
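The scenarios themselves can be pictured as simple data, something along these lines (hypothetical types; the real definitions live with the simulation tests):

#include <vector>

// Hypothetical scenario description, not the actual servio types.
struct velocity_step
{
        float time_s;          // when the setpoint changes
        float velocity_rad_s;  // new target velocity
};

struct scenario
{
        std::vector< velocity_step > steps = { { 0.f, 0.f }, { 1.f, 0.7f } };
        float                        duration_s             = 3.f;
        float                        max_steady_error_rad_s = 0.05f;  // success criterion
};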
Firmware tests, similar in nature to unit tests, are conducted exclusively on the embedded device. They are ideal for evaluating aspects closely tied to the hardware, such as the reliability of periodic timers. These tests are beneficial for both long-term maintenance and assessing new PCB designs.
Benchmarks form part of the firmware tests, analyzing quantitative properties like control loop frequency or CPU time spent in interrupts. These benchmarks do not directly influence test outcomes but are monitored to ensure modifications do not introduce issues. They primarily focus on the platform or board layers but also include the independent layer due to its significant impact.
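The measurement pattern itself is simple: count how much work happens in a fixed interval and report the number. On the device a hardware timer does the timing, but conceptually it is something like this host-side sketch with std::chrono, not the firmware code:

#include <chrono>
#include <cstddef>
#include <cstdio>

int main()
{
        using clock = std::chrono::steady_clock;

        std::size_t iterations = 0;
        const auto  start      = clock::now();

        while ( clock::now() - start < std::chrono::seconds( 1 ) )
        {
                // one control-loop step would run here
                ++iterations;
        }

        // reported, not asserted on; the CI only watches the trend
        std::printf( "control loop frequency: %zu Hz\n", iterations );
}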
Control tests, which I highly value, are essentially integration tests that employ real hardware and control loops. We use test scenarios from the simulation tests on fully powered servos, allowing us to observe the hardware’s response to various scenarios in real-time. A future blog post will cover this in detail.
Blackbox tests are conducted on the final firmware build. They involve interfacing with the servo via protobuf messages to test the frontend. The focus here is on the integration of different components.
The public repository includes unit and blackbox tests, while simulation, firmware, and control tests are in a private repository.
Test Orchestration
To manage our diverse range of tests, we developed a unified approach for test execution, crucial for effective CI integration. We created joque, a library that orchestrates test execution by running tasks concurrently while respecting their dependencies.
The test_orchestrator binary scans the servio build for unit tests, generates simulation tests, identifies firmware tests, produces control tests, and detects blackbox tests. These tasks are then executed by joque. Post-execution, the orchestrator compiles a comprehensive report from all tests.
We will provide more details on this later, as each test type requires a distinct approach within the orchestration framework. The key is that joque's abstraction is versatile enough to optimally manage each test type.
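As a rough illustration of that abstraction (this is only an illustration of the idea, not joque's actual API), each test boils down to a named task with dependencies and a callable:

#include <functional>
#include <string>
#include <vector>

// Illustration only; joque's real interface differs.
struct task
{
        std::string                name;
        std::vector< std::string > depends_on;  // must finish before this task starts
        std::function< bool() >    run;         // returns success or failure
};

// The orchestrator builds one such task per discovered test and hands the whole
// set to the scheduler, which runs tasks whose dependencies are satisfied concurrently.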