
Using docker context for easier deployments on Raspberry Pi

A project log for Hardware Data Logger

Easily extendable data logging platform with STM32F103RBTx, WiFi, microSD storage, LCD with four buttons, UART, and pulse counters.

Robert Gawron 09/14/2025 at 16:04

I have reworked the part of this project related to the visualization of the collected measurements via the web interface. Now it's easier to deploy to Raspberry Pi and better documented. I've also cross-tested it with data from my other project because the code for the ESP32 in this project is still not finished.

Previously, I planned to use Ansible to deploy the Docker images to the Raspberry Pi, but that tool felt like overkill. Instead, I found something much simpler, though with some limitations: docker context.

The idea is that by default, Docker deploys containers on the local machine (where it runs). This can be changed, however, thanks to contexts. All that is needed is to set up SSH key-based (passwordless) access to the remote machine and then, on the local machine, run:

docker context create pi --docker "host=ssh://user@host_to_deploy"
docker context use pi

Now we build locally but deploy, start containers, and debug them remotely.
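As a sketch of what that workflow looks like once the "pi" context is active (the container name here is an assumption; adjust to your own stack):

```shell
docker compose up -d          # pulls and starts the containers on the Pi
docker ps                     # lists containers running on the Pi, not locally
docker logs grafana           # debug a remote container as if it were local
docker context use default    # switch back to the local Docker daemon
```

Every plain docker command is transparently routed over SSH to the daemon on the Pi, which is what makes this so convenient.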

It's super easy, but there is a catch. I use prebuilt Docker images for all the building blocks, and deploying transfers only those images (Docker pulls them from the Internet). Local files from the repository, including configuration, are not part of the images, so they are not deployed. That means the containers start unconfigured.

Later on, I just log in to each of the services (Node-RED, InfluxDB, and Grafana) via a web browser (I have chosen the software so that all of it can be configured from the browser) and set up tokens, etc. I’ve explained this in the manual. So the configuration of the containers is not in the Git repo and needs to be redone when deploying to a new machine. But it’s simpler and good enough, I think.

Normally, when a container is recreated (for example, after pulling a new image), its filesystem reverts to the image’s original state, meaning all modified files are lost. That wouldn’t be good here, because after each such reset I would need to reconfigure everything. But fortunately, there’s something called Docker volumes: it’s possible to configure which file locations persist across container lifecycles.
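A minimal sketch of how this looks in a Compose file (the service and volume names are assumptions; `/var/lib/grafana` is where the official Grafana image keeps its state):

```yaml
services:
  grafana:
    image: grafana/grafana
    volumes:
      - grafana-data:/var/lib/grafana   # persists across container recreation

volumes:
  grafana-data:   # named volume, managed by Docker on the host
```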

What’s even more interesting is that, AFAIK, those volumes are available as regular folders on the host machine, in this case the Raspberry Pi, because that’s where the containers run.

I think I could create a private repo with a backup of those folders - private because they contain sensitive things like authentication tokens. That way, if the SD card of my Raspberry Pi fails and I need to recreate the OS image from scratch, I can just deploy using docker context and restore the volume folders, bringing back the old configuration.
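A backup along those lines could be sketched like this (the paths are assumptions; `/var/lib/docker/volumes` is where a default Docker install keeps named volumes, but check your own setup):

```shell
# backup_volumes SRC DEST: archive a volume directory into DEST with a date stamp
backup_volumes() {
  src="$1"
  dest="$2"
  mkdir -p "$dest"
  # -C changes into SRC first, so the archive holds relative paths
  tar -czf "$dest/volumes-$(date +%F).tar.gz" -C "$src" .
}

# Example usage (run on the Pi, likely with sudo):
# backup_volumes /var/lib/docker/volumes "$HOME/pi-volume-backup"
```

Restoring would be the reverse: extract the archive back into the volumes directory before starting the containers.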

Another thing Ansible could do that Docker contexts can’t is preconfigure the Raspberry Pi itself - for example, installing Docker on it. For now, I have to do that manually.

Maybe in the future I will return to the idea of using Ansible, but for now, it's good enough for me as it is.
