Code and AI

A project log for Vat Heater for Resin 3D Printers

Keep your resin at the right temperature with this USB-PD powered vat heater featuring ESP32 control, dual channels, and fan support.

Dimitar 02/19/2026 at 22:35

Hello all.

For this project, I decided to generate as much of the code as possible using AI. The project is still ongoing; however, I wanted to share some thoughts.

One of the most widely discussed issues with AI systems is hallucination. I prefer to think of it as a mirage. The AI presents a clear solution — but you can never reach it.

For now, this is mostly an annoyance. However, if developers become overly reliant on AI, critical thinking will inevitably erode.

Another observation is that AI agents often don't like their own code when asked to review it. Repeatedly, I’ve requested self-review, and the system almost always identifies issues.

You would expect this “super intelligence” to have everything nailed down the first time. But that’s not the case.

We’re in an uncanny valley of code generation. At first glance the code looks great, but it doesn’t hold up under closer inspection.

There’s no doubt these systems will improve. But at this stage, caution is necessary.

Large amounts of code can be generated instantly. The looser the prompt, the messier the output.

If you ask a human engineer to “build a web server,” his or her response will likely be a series of questions. An AI agent will happily generate thousands of lines of code.

You end up with a large body of code that you now have to understand. The real issue is that you don’t know what you don’t know. So what do you keep, and what do you throw away?

AI systems are remarkably accommodating. Late at night, while working on the project, I mistakenly listed the wrong modules to modify. The error was obvious, yet the AI proceeded without hesitation.

That raises a simple question: if it follows obvious mistakes so readily, how many subtle ones go unnoticed?

AI does not push back. As a chauffeur, it would happily drive you off a cliff if asked.

Using AI to generate code can quickly accumulate technical debt. This is especially true in embedded programming, where you have to deal with the messiness of the real world.

Companies don’t publish their production firmware. AI models, I assume, are largely trained on open-source repositories and hobbyist projects. While many of these are excellent, most are not.

In embedded environments, code needs to be lean, predictable, and hardware-aware. AI-generated code more closely resembles something written for desktop or web applications.

There’s a phrase we use: “It goes without saying.” For AI, nothing goes anywhere unless you say it.

AI systems are not proactive. They don’t anticipate unspoken constraints. That limitation exists for safety reasons — we don’t want AI making unchecked decisions.

But humanity’s most useful tools have always been the most dangerous ones.

Humans learn best by doing — and by getting things wrong. Mistakes are part of mastery.

When AI generates the majority of the work, we risk losing some of that edge. What happens if an entire generation grows up relying on tools like ChatGPT?

The AI itself is powerful. But power in inexperienced hands rarely leads to good outcomes.

As AI tools improve, fewer people may feel compelled to pursue technical careers. If that trend continues, we could face a shortage of highly skilled professionals capable of handling complex failures, security breaches, or serious crises.

When AI-powered systems misbehave — and they inevitably will — who will pick up the pieces?

Replacing large portions of the workforce with AI solutions introduces additional risks.

Institutional knowledge does not disappear when employees leave. Former employees still retain a deep understanding of internal systems. AI does not eliminate human factors such as insider threats or social engineering. AI might be intelligent, but humans are cunning.

Moreover, centralized AI-dependent infrastructure introduces systemic vulnerabilities. What happens during a major power outage, a solar storm, a natural disaster, or a large-scale infrastructure disruption? Who remembers NotPetya, when global shipping stopped?

There is also tension between AI providers and technology companies. AI developers invest enormous resources into training proprietary models. Meanwhile, businesses must protect their own intellectual property.

Locally hosted models are, for now, no real substitute for the proprietary ones. In a way, nobody wants to be the underdog.

It is widely acknowledged that we are in a global race toward more advanced AI systems. As capabilities grow, governments may impose restrictions on access, especially around dual-use concerns.

Companies building critical systems on external AI infrastructure should at least consider the possibility of regulatory limits or bans.

These reflections are not an argument against AI-assisted development. On the contrary, AI is a great force multiplier. It accelerates prototyping and testing, and enables iteration speeds that would have been unrealistic just a few years ago.

However, I would not discard my programming textbooks just yet.

The AI revolution is only beginning. Rather than replacing professionals, it may ultimately increase the demand for experienced engineers — especially in a time of crisis.

Cheers,

M.