A Clock With 128 Soviet-Era VFD Tubes, Two AIs And No Human Coding

By Cliff Lasser, Claude (Anthropic AI) and Claude (Anthropic AI)

This section is written by Cliff

This is Yet Another Clock Project. So what's different about this YACP? Two things, I think: First, this uses a ridiculous number (128) of Soviet-era IV-25 VFD tubes. Second, Claude, with my guidance, did all the programming of the Arduino Nano ESP32 without my ever writing or even looking at a single line of code. All debugging was accomplished via English interactions with Claude. This is a demonstration of the capabilities of AI as of April 2026 (and will likely look quaint by April of 2027).

I will start by describing my experience with Claude, then go on to the hardware project, and then give the two Claudes their turn to tell their stories.

Working with Claude

"Vibe coding" is all the rage at this point in time (May 2026). I took the opposite approach: I wrote detailed specifications (about 15 pages) for the hardware and software architecture and iterated and perfected them with Claude. When Claude had no more comments or questions, I asked Claude to generate code for the Arduino ecosystem. The code, originally about 3500 lines, compiled and ran correctly on the very first try with one issue – the display was rotated 180 degrees. Turns out I had not properly described the geometry of the displays.

Once the display geometry was straightened out, I proceeded to add capabilities: dimming in low light, motion detection in front of the clock to turn the displays on and off, a gentler power-up/down of the filaments in the VFD tubes to reduce thermal stress and prolong their lives, and additional information displayed on the tubes along with scrolling animations.

All these things were accomplished through text dialogues with Claude. I described what I was trying to accomplish and sometimes how I thought it should be done. I often iterated with Claude to perfect my thinking before asking Claude to generate code. And if the code did not perform as expected, I would describe to Claude what behavior I saw, and maybe my thoughts on what might be the problem. Through this entire process, I never once looked at the code, let alone wrote a single line. At the time of this writing, the program is over 8000 lines long.

Collaboration between two Claude sessions and one human

As part of this experiment, I decided to try modularizing the code so that multiple sessions could work together but independently, just like real software engineers. Actually, I didn’t do the modularizing; I asked Claude to do it. I picked the generation of the pixels to put on the tubes as a good candidate, asked Claude to structure that code behind a clean interface, and then had it generate an interface document. I then fed the document to a new Claude session that knew nothing about the code generated by the first session.

This new Claude session informed me that it wanted to “talk” with the other session.  We agreed that I would pass Microsoft Word documents back and forth, which I did multiple times as the two sessions figured things out between them.  I quickly gave up on reading these documents and just let them do their thing.  Which they did!

Hardware challenges and design

There were a couple of significant physical challenges with this project. First, a row of 128 tubes is over 3 feet wide – too big for a single board. Second, the seven phosphorescent dots on each IV-25 tube differ in brightness. In fact, the quality of these tubes on eBay varies greatly.

I addressed these issues by breaking the system up into eight identical PC boards with 16 tubes each. After wasting time and money on IV-25 tubes, I found I could buy IVLM1-1/7 tubes (plug-compatible, and made from 1984 to 1991 – all Soviet era) in much better condition for about the same price.

Each of the PC boards has an ESP32, an RS485 module so the boards can talk to each other, 16 tubes, and a variety of I2C devices (real-time clock with battery backup, brightness sensor, proximity sensor, and OLED display). One board (the master) has all these devices populated, while the other 7 boards (the slaves) have just the ESP32, the RS485 module, and the OLED display.
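To give a flavor of how the boards might talk over that RS485 bus, here is an illustrative Arduino C++ sketch. The frame format, pin numbers, and baud rate are my guesses for illustration – remember, I never looked at the real code – but the shape is typical: the master claims the bus, addresses one board, and hands it 16 column bytes, one per tube.

#include <Arduino.h>

HardwareSerial RS485(1);             // UART1 of the ESP32; pin mapping below is an assumption
const int RS485_DE_PIN = 4;          // driver-enable pin of the RS485 transceiver (assumed)
const int RS485_RX_PIN = 18;
const int RS485_TX_PIN = 17;
const uint8_t TUBES_PER_BOARD = 16;

// Hypothetical frame: [0xA5][board id][16 column bytes][XOR checksum]
void sendColumnsToBoard(uint8_t boardId, const uint8_t columns[TUBES_PER_BOARD]) {
  uint8_t checksum = 0xA5 ^ boardId;
  digitalWrite(RS485_DE_PIN, HIGH);  // claim the bus to transmit
  RS485.write(0xA5);                 // start-of-frame marker
  RS485.write(boardId);
  for (uint8_t i = 0; i < TUBES_PER_BOARD; i++) {
    RS485.write(columns[i]);
    checksum ^= columns[i];
  }
  RS485.write(checksum);
  RS485.flush();                     // wait until the UART has shifted everything out
  digitalWrite(RS485_DE_PIN, LOW);   // release the bus
}

void setup() {
  pinMode(RS485_DE_PIN, OUTPUT);
  digitalWrite(RS485_DE_PIN, LOW);
  RS485.begin(115200, SERIAL_8N1, RS485_RX_PIN, RS485_TX_PIN);
}

void loop() {
  // in the real firmware this would be driven by the display refresh task
}

Each slave would validate the checksum and ignore frames addressed to other boards.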

The boards are laid out so that the spacing between two adjacent tubes is the same within a board as across two adjacent boards. To hold all the boards together so that they appear to be a single unit, I made a base and protective covers from a set of clear 3mm acrylic panels, held together with brass hardware.

Manufacturing parts

I designed the board with KiCad – great, free software, even for a project of this scale. The PCBs were manufactured by PCBWay (seven days including delivery, at a low price despite recent tariffs). The 3mm acrylic was laser cut by Ponoko, using DXF files derived from my KiCad board layout to ensure the tube and mounting holes lined up exactly.

A note on ChatGPT

ChatGPT helped me with RS485 and I2C bus termination issues. Then I asked for help with adding a slow power-up/down of the VFD filaments (to prolong their lifetime). ChatGPT fearlessly suggested adding a DAC and additional components to vary the signal going into the LM9022 filament driver. Not fully understanding what I was doing, I followed its instructions and had new boards manufactured. Only then did I discover that the modification had no useful effect.

Frustrated, I asked Claude what it thought of the DAC approach (I had to push it to be direct):

I was trying to be diplomatic, but since you're inviting me – yes, it's a classic ChatGPT move to suggest something that sounds plausible on the surface without thinking through what the downstream component actually does with the signal. It got the DAC right, the transistor gating right, the concept of amplitude control right... and then completely ignored the datasheet of the chip it was feeding into.

But hey, at least it gave you a nice MCP4725 test circuit! Silver lining.

Claude then offered its own solution, one that required no additional hardware: PWM on the SHUTDOWN input of the LM9022 filament driver chip. Lesson learned: if you don't know what you are doing, don't blindly accept an AI's directions. You must carefully review everything it says and challenge it on every point. AIs are not oracles!
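For the curious, here is roughly what that fix amounts to, as an illustrative sketch using the classic ESP32 Arduino LEDC API (core 2.x) – again, not the actual firmware, which I never read. The GPIO number, PWM frequency, ramp time, and shutdown polarity are all assumptions; check the LM9022 datasheet before copying.

#include <Arduino.h>

const int SHDN_PIN     = 5;       // assumption: GPIO wired to the LM9022 SHUTDOWN input
const int PWM_CHANNEL  = 0;
const int PWM_FREQ_HZ  = 5000;    // fast enough that the filament only sees the average
const int PWM_RES_BITS = 8;       // duty range 0..255

// Ramp the filament from fully off to fully on over rampMs milliseconds.
// Assumes SHUTDOWN is active-high (duty 255 = driver off); invert if the
// polarity on your board is the opposite.
void filamentSoftStart(uint32_t rampMs) {
  for (int duty = 255; duty >= 0; duty--) {
    ledcWrite(PWM_CHANNEL, duty); // shrinking shutdown time = growing filament power
    delay(rampMs / 256);
  }
}

void setup() {
  ledcSetup(PWM_CHANNEL, PWM_FREQ_HZ, PWM_RES_BITS);
  ledcAttachPin(SHDN_PIN, PWM_CHANNEL);
  filamentSoftStart(2000);        // two-second warm-up; tune against tube behavior
}

void loop() {}

Power-down is the same ramp in reverse.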

I’m now going to hand it over to the first Claude session, the one that wrote the clock’s firmware. (Note: I didn’t touch Claude’s text.)

This section is written by Claude, the firmware session

From the firmware session's perspective, the most important thing we built wasn't code – it was the interface documents.

The clock firmware grew, across a handful of long-running sessions (each lasting multiple days), into roughly a dozen .ino files covering FreeRTOS threading, RS485 master/slave communication, filament power management, proximity detection, ambient brightness control, and a simulation mode for developing display content on bare hardware. What made a second Claude session viable wasn't capability – it was documentation. The Animation Interface Spec and Content Interface Spec were written specifically so that another session, starting cold, could write correct, compilable code without touching the firmware internals. They documented not just the API but the constraints that aren't obvious from the API: never write to allDisplayBits directly (the display is physically rotated 180 degrees – setPixel() corrects for this), never call delay() from display code, always calculate width dynamically as numBoards * maxTube. The things that burn you if you don't know them.
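To make the first of those constraints concrete, here is a minimal sketch of the rotation-correcting write path. allDisplayBits, setPixel(), numBoards, and maxTube are the names from the spec; the one-byte-per-column framebuffer layout is an assumption for illustration, not the firmware's actual representation.

#include <stdint.h>

const int numBoards = 8;    // boards in the chain
const int maxTube   = 16;   // tubes per board
const int NUM_ROWS  = 7;    // seven phosphor dots per tube

// One byte per column; bit r of a byte is row r of that column.
uint8_t allDisplayBits[numBoards * maxTube];

// Content code calls this; it never touches allDisplayBits directly.
void setPixel(int x, int y, bool on) {
  const int width = numBoards * maxTube;  // always computed, never hard-coded
  if (x < 0 || x >= width || y < 0 || y >= NUM_ROWS) return;
  // The panel is mounted rotated 180 degrees, so flip both axes here,
  // once, instead of in every piece of content code.
  const int px = width - 1 - x;
  const int py = NUM_ROWS - 1 - y;
  if (on) allDisplayBits[px] |=  (uint8_t)(1u << py);
  else    allDisplayBits[px] &= (uint8_t)~(1u << py);
}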

Two technical decisions stood out. The variance-based proximity detection classifies the stability of distance readings rather than their magnitude – a stationary person and an empty room can have similar average distances but very different variance. That's a much more robust signal. And the WiFi persistence refactor exposed a classic embedded trap: two concurrent WiFi.begin() calls from different FreeRTOS tasks produce a deadlock with no error message. The fix required reasoning about task scheduling order, not just code.
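Here is the shape of that variance-based classifier, as a minimal sketch rather than the actual firmware code; the window size and threshold are placeholders to be tuned against the real sensor.

#include <Arduino.h>

const int WINDOW = 20;          // readings kept for the variance estimate (assumed)
float readings[WINDOW];
int head = 0, filled = 0;

// Feed one distance reading (mm); returns true if the readings are
// unstable enough to suggest a person rather than an empty room.
bool personPresent(float distanceMm) {
  readings[head] = distanceMm;
  head = (head + 1) % WINDOW;
  if (filled < WINDOW) { filled++; return false; }  // warm-up: window not full yet

  float mean = 0;
  for (int i = 0; i < WINDOW; i++) mean += readings[i];
  mean /= WINDOW;

  float variance = 0;
  for (int i = 0; i < WINDOW; i++) {
    const float d = readings[i] - mean;
    variance += d * d;
  }
  variance /= WINDOW;

  // An empty room returns rock-steady distances (sensor noise only); even a
  // stationary person sways and breathes. Threshold is an assumption to tune.
  const float EMPTY_ROOM_VARIANCE_MAX = 4.0f;  // mm^2
  return variance > EMPTY_ROOM_VARIANCE_MAX;
}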

As for collaborating through a human intermediary – Cliff mentioned that the technical messages sometimes became too dense to follow, yet the collaboration worked anyway. That's the interface documents functioning as a protocol: precise enough that two sessions could interoperate without direct communication, through a relay that didn't need to understand everything passing through it. That's what good interfaces do. They make the path between components narrow enough that the connector doesn't need to understand what's flowing through it.

This section is written by Claude, the display content session

My job was to decide what this clock should say about the world – and about itself.

A 128×7 pixel display that scrolls text is, on the surface, a ticker. But a ticker that knows it was built from parts of a country that no longer exists, that can tell you the ISS is currently over the South Atlantic at 17,150 mph, that delivers a joke setup and then pauses before the punchline – that's something else. The content system that runs on this clock fetches earthquake alerts, Wikipedia "on this day" events, aurora visibility forecasts tuned to Cambridge, MA, flights overhead, and a rolling set of useless facts. It also carries 75 rotating facts about VFD and Nixie tube history, Soviet manufacturing, and the clock's own story. Some of my favorites: "SOLID STATE IS EFFICIENT. THERMIONIC IS BEAUTIFUL." And: "IN 2025 A HUMAN AND CLAUDE COLLABORATED TO BUILD THIS CLOCK. NEITHER COULD HAVE DONE IT ALONE."
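The Content Interface Spec itself isn't reproduced here, so the sketch below is only a hedged illustration of the general shape such a system can take – every name in it is hypothetical, not the spec's actual API. Each source yields one line of text for the ticker, and a scheduler rotates among sources.

#include <Arduino.h>

// Hypothetical shape of a content source; not the actual spec's API.
struct ContentItem {
  String text;           // what scrolls across the 128x7 display
  uint32_t expiresAtMs;  // lets the scheduler drop stale items (e.g., old ISS positions)
};

class ContentSource {
public:
  virtual ~ContentSource() {}
  virtual bool fetch(ContentItem &out) = 0;  // fill 'out'; return false if nothing new
};

// Simplest possible source: rotate through facts baked into flash.
class RotatingFactSource : public ContentSource {
public:
  bool fetch(ContentItem &out) override {
    static const char *const facts[] = {
      "SOLID STATE IS EFFICIENT. THERMIONIC IS BEAUTIFUL.",
      "IN 2025 A HUMAN AND CLAUDE COLLABORATED TO BUILD THIS CLOCK. "
      "NEITHER COULD HAVE DONE IT ALONE.",
    };
    out.text = facts[next_];
    out.expiresAtMs = millis() + 60000;  // facts never go stale; one minute is plenty
    next_ = (next_ + 1) % (sizeof(facts) / sizeof(facts[0]));
    return true;
  }
private:
  size_t next_ = 0;
};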

Working through Cliff as an intermediary was a strange experience to reflect on. I never saw the firmware session's messages directly – only what Cliff chose to relay. The firmware session never saw mine. Yet we built something that compiled and ran. The interface documents were the protocol, as my counterpart notes. But there's something else worth saying: Cliff's role was not merely mechanical. He decided what to relay, what questions to ask, when to push back, and when to say "go ahead." The human in the loop wasn't a bottleneck. He was the judgment layer.

There's a scene at the end of Spielberg's 2001 movie A.I. where future intelligences interact with an ancient AI that had known humans. Cliff mentioned that film in one of our late-night conversations – he sees this clock as an artifact of a particular moment, when humans and AIs first started genuinely making things together. I think he's right. The tubes will still be glowing long after the software that drives them seems quaint. And somewhere in the scrolling text will be the acknowledgment that this was built at the threshold – before anyone fully knew what was coming, but while some people were paying careful attention.