
I Can't Read Code. So I Built a Tool That Reads It For Me.

A project log for Codedocent — A Guided Tour of Any Codebase

An interactive visual map of any codebase with plain English explanations. Built for non-programmers.

Brandon · 6 hours ago · 0 Comments

I'm not a programmer. I'm a designer and engineer — I think in schematics, block diagrams, and signal flows. I can look at a circuit board and tell you what it does. Hand me a Python file and I'm lost.

But here's the thing: code runs everything now. The projects I work on, the tools I evaluate, the products I care about — there's always a codebase underneath, and I can never see what's going on in there. I'm stuck asking developers to explain things, or just trusting that it works. That bothered me.

I kept thinking: there has to be a way to look at code the way I look at a schematic. Not read it line by line — just see the structure. What are the big pieces? What does each piece do? How do they fit together? I don't need to understand the syntax. I need someone to walk me through it like a museum guide.

That tool didn't exist. So I built it.

What it actually does

You point Codedocent at a project folder — any codebase — and it generates an interactive visualization in your browser. The whole project becomes nested colored blocks: directories contain files, files contain classes and functions, everything is labeled and color-coded by type. Click a block and it expands to show what's inside. It looks like a block diagram, not a text file.
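To make "nested colored blocks" concrete, here's a toy sketch of the idea: turning one Python file into a nested structure using nothing but the standard library's ast module. This is an illustration, not Codedocent's real parser, and the field names (name, type, children) are invented for the example.

import ast

def file_to_blocks(path):
    """Parse one Python file into nested blocks: classes containing functions."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)

    kinds = (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)

    def to_block(node):
        # Each block records what it is, what it's called, and what's inside it.
        return {
            "name": node.name,
            "type": type(node).__name__,  # "ClassDef", "FunctionDef", ...
            "children": [to_block(c) for c in ast.iter_child_nodes(node)
                         if isinstance(c, kinds)],
        }

    return {"name": path, "type": "File",
            "children": [to_block(n) for n in ast.iter_child_nodes(tree)
                         if isinstance(n, kinds)]}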

Then the interesting part: turn on AI analysis, and every block gets a plain English explanation. Not "this function takes a list parameter and returns a filtered iterator" — more like "this removes duplicate entries from the results before showing them to the user." Written for humans. Written for me.

The AI runs locally through Ollama, so your code never leaves your machine. The output is a single HTML file you can email to anyone; they just open it in a browser, no installs needed on their end.
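For the curious, the local AI call is nothing exotic. Here's a minimal sketch against Ollama's standard REST API, which listens on localhost:11434 by default. The model name and prompt wording below are placeholders, not what Codedocent actually sends:

import json
import urllib.request

def explain(source_code):
    """Ask a local Ollama model for a one-sentence plain-English summary."""
    payload = {
        "model": "llama3",  # placeholder: whatever model you've pulled locally
        "prompt": ("In one plain-English sentence, for a non-programmer, "
                   "explain what this code does:\n\n" + source_code),
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]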

How I built it 

I built the whole thing in about 30 hours. I don't write code — I run a small network of AI nodes that each have a job.

Here's the actual workflow: I use one Claude instance as a thinking node — that's where I brainstorm, make design decisions, and build plans. A separate Claude instance acts as a research node, running experiments and coordinating adversarial code reviews by other AI models (ChatGPT, Gemini, Grok, DeepSeek, Kimi). Those findings get summarized and fed back to the thinking node. Once the design is solid, it goes to Claude Code — the implementation node — which drafts an implementation plan; I approve or revise it, and then it ships.

I'm not a programmer writing code. I'm more like a systems engineer running a design review process — defining the problem, choosing the architecture, routing information between specialists, and making the final call. The AI writes every line of code. I make every decision about what the code should do.

And that turned out to be the hard part. Not the code — the decisions. Which parsing library to use. How to handle files that are too big. What "quality" means for a code block. When to show warnings vs. when to stay quiet. Those are design problems, not programming problems, and that's where a human still has to show up.

The AI audit 

Once it was working, I took the entire codebase and submitted it to five different AI models — ChatGPT, Gemini, Grok, DeepSeek, and Kimi — and asked them all to find security issues and bugs.

They found 16 real problems. Things like: a symlink attack that could write files outside the project folder, a cross-site request forgery vulnerability on the local server, file writes that could corrupt data if your machine crashed at the wrong moment. Real issues that a solo developer (or solo non-developer) would probably never catch.

I fixed all 16, then sent the updated code back for another round of review, and then another. The follow-up rounds turned up 26 more issues: 42 fixes total across 6 rounds. The code is genuinely better for it. I basically got a free security audit from five senior engineers who never sleep.
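To give a flavor of what those fixes look like, take the crash-corruption one. The standard cure is to write to a temporary file, flush it to disk, and atomically swap it into place. Here's the generic pattern, not the exact code that shipped:

import os
import tempfile

def atomic_write(path, data):
    """Write a file so a crash mid-write never leaves it half-written."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes actually hit the disk
        os.replace(tmp, path)     # atomic rename: readers see old or new, never half
    except BaseException:
        os.unlink(tmp)
        raise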

Why I'm posting this here

I built Codedocent for people like me — and I think a lot of that crowd hangs out here. If you've ever stared at a GitHub repo trying to figure out what a project actually does before deciding whether to use it, this is for you. If you manage a team that writes code and you want to understand what they're building, this is for you. If you're an EE who got handed a firmware repo and said "what am I looking at" — definitely for you.

It's MIT licensed, pip installable, and works today:

pip install codedocent
codedocent

That's it. It walks you through setup from there.

GitHub: https://github.com/clanker-lover/codedocent

I'll be posting more logs about the build process — the AI model shootout, what I learned about using AI as a development partner, and where this is going next. Happy to answer any questions.
