Codedocent — A Guided Tour of Any Codebase

An interactive visual map of any codebase with plain English explanations. Built for non-programmers.

The problem: If you're like me, you work alongside code every day but you don't write it. You read schematics, not source files. When you need to understand what a codebase actually does — maybe you're managing a project, evaluating a tool, or just trying to have an informed conversation with your dev team — you're stuck.
What Codedocent does: Point it at any project folder. It parses the code structure and generates an interactive visualization — nested colored blocks showing how everything is organized, from directories down to individual functions. Click any block and it expands. Turn on AI analysis and each block gets a plain English summary of what it does, written for humans, not programmers.
How it works: Tree-sitter handles the parsing, local AI through Ollama writes the explanations. Everything runs on your machine — no code leaves your computer. The output is a self-contained HTML file you can share with anyone. MIT licensed, pip installable.
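
To give a sense of what the parsing step involves, here's a minimal sketch using the tree-sitter Python bindings (pip install tree-sitter tree-sitter-python). It's an illustration of the technique, not Codedocent's actual source:

# Minimal sketch: list the classes and functions tree-sitter sees in a
# Python file. API as documented for recent py-tree-sitter releases.
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

parser = Parser(Language(tspython.language()))

def outline(source: bytes, node=None, depth=0):
    """Yield (depth, kind, name) for each class and function definition."""
    if node is None:
        node = parser.parse(source).root_node
    if node.type in ("class_definition", "function_definition"):
        name = node.child_by_field_name("name")
        yield depth, node.type, name.text.decode() if name else "?"
        depth += 1
    for child in node.children:
        yield from outline(source, child, depth)

source = open("example.py", "rb").read()  # placeholder file name
for depth, kind, name in outline(source):
    print("  " * depth, kind, name)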

What is Codedocent?

Codedocent is a guided tour of any codebase. You point it at a project folder and it generates an interactive visualization in your browser — nested colored blocks representing the structure of the code, from directories down to individual functions. Click any block to expand it. Turn on AI analysis and each block gets a plain English explanation of what it does.

It's built for people who need to understand code but don't write it: project managers, founders, hardware engineers, designers, technical writers, or anyone evaluating an open source project before committing to it.

Key features

  • Interactive visualization — Nested colored blocks you can click to drill into. Directories, files, classes, functions — all visible at a glance.
  • AI-powered explanations — Local AI (via Ollama) generates plain English summaries of what each piece of code does. No jargon.
  • Code quality indicators — Color-coded warnings flag overly complex or oversized code blocks so you can spot problem areas without reading a line (one possible heuristic is sketched after this list).
  • Runs locally — Everything stays on your machine. No cloud, no API keys, no code ever leaves your computer.
  • Shareable output — Generates a single self-contained HTML file. Email it to anyone and they can open it in a browser.
  • GUI or terminal — Use the graphical launcher or the command line. Both work.
  • Setup wizard — Run codedocent with no arguments and it walks you through everything.
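
The quality indicators don't need anything exotic. Here is one plausible heuristic, sketched purely for illustration — the thresholds and keyword list are assumptions, not Codedocent's actual rules:

# Illustrative heuristic for flagging complex or oversized blocks.
# Thresholds and keywords are assumptions, not Codedocent's rules.
BRANCH_KEYWORDS = ("if ", "elif ", "for ", "while ", "except ", "case ")

def quality_flag(source: str, max_lines: int = 80, max_branches: int = 10) -> str:
    """Return 'ok', 'warning', or 'alert' based on size and rough branchiness."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    branches = sum(ln.lstrip().startswith(k) for ln in lines for k in BRANCH_KEYWORDS)
    if len(lines) > max_lines or branches > max_branches:
        return "alert"
    if len(lines) > max_lines // 2 or branches > max_branches // 2:
        return "warning"
    return "ok"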

Quick start

###b class="inline-flex items-center justify-center relative shrink-0 can-focus select-none disabled:pointer-events-none disabled:opacity-50 disabled:shadow-none disabled:drop-shadow-none border-transparent transition font-base duration-300 ease-[cubic-bezier(0.165,0.85,0.45,1)] h-8 w-8 rounded-md active:scale-95 backdrop-blur-md Button_ghost__BUAoh" type="button" aria-label="Copy to clipboard" data-state="closed"###

pip install codedocent
codedocent

That's it. The wizard handles the rest.

Requirements

  • Python 3.10+
  • Ollama installed locally (for AI features — the tool works without it, you just won't get the English explanations)

Links

  • I Can't Read Code. So I Built a Tool That Reads It For Me.

Brandon

    I'm not a programmer. I'm a designer and engineer — I think in schematics, block diagrams, and signal flows. I can look at a circuit board and tell you what it does. Hand me a Python file and I'm lost.

    But here's the thing: code runs everything now. The projects I work on, the tools I evaluate, the products I care about — there's always a codebase underneath, and I can never see what's going on in there. I'm stuck asking developers to explain things, or just trusting that it works. That bothered me.

    I kept thinking: there has to be a way to look at code the way I look at a schematic. Not read it line by line — just see the structure. What are the big pieces? What does each piece do? How do they fit together? I don't need to understand the syntax. I need someone to walk me through it like a museum guide.

    That tool didn't exist. So I built it.

    What it actually does

    You point Codedocent at a project folder — any codebase — and it generates an interactive visualization in your browser. The whole project becomes nested colored blocks: directories contain files, files contain classes and functions, everything is labeled and color-coded by type. Click a block and it expands to show what's inside. It looks like a block diagram, not a text file.
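
    If you're curious what that structure looks like as data, it can be as simple as a recursive dictionary. A hypothetical sketch in Python (not the project's actual representation):

    # Hypothetical sketch: walk a folder and build the nested-block tree.
    from pathlib import Path

    def build_blocks(path: Path) -> dict:
        """Map a folder to nested blocks: directories contain files,
        and each node carries a label and a kind for color-coding."""
        node = {"name": path.name,
                "kind": "directory" if path.is_dir() else "file",
                "children": []}
        if path.is_dir():
            node["children"] = [build_blocks(p) for p in sorted(path.iterdir())]
        return node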

    Then the interesting part: turn on AI analysis, and every block gets a plain English explanation. Not "this function takes a list parameter and returns a filtered iterator" — more like "this removes duplicate entries from the results before showing them to the user." Written for humans. Written for me.

    The AI runs locally through Ollama, so your code never leaves your machine. The output is a single HTML file you can email to anyone and they can open it in a browser. No installs needed on their end.
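
    For a flavor of what "runs locally through Ollama" means, here's a minimal sketch of the kind of request involved, using Ollama's standard REST endpoint on its default port. The model name and prompt are placeholders, and this isn't Codedocent's actual code:

    # Minimal sketch: ask a local Ollama model to explain a code snippet.
    import json
    import urllib.request

    def explain(code: str, model: str = "llama3.2") -> str:
        prompt = ("Explain in plain English, for a non-programmer, "
                  "what this code does:\n\n" + code)
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt,
                             "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]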

    How I built it 

    I built the whole thing in about 30 hours. I don't write code — I run a small network of AI nodes that each have a job.

    Here's the actual workflow: I use one Claude instance as a thinking node — that's where I brainstorm, make design decisions, and build plans. A separate Claude instance acts as a research node, running experiments and coordinating adversarial code reviews by other AI models (ChatGPT, Gemini, Grok, DeepSeek, Kimi). Those findings get summarized and fed back to the thinking node. Once we have a solid plan, it goes to Claude Code — the implementation node — which drafts the implementation; I approve or revise it, and then it ships.

    I'm not a programmer writing code. I'm more like a systems engineer running a design review process — defining the problem, choosing the architecture, routing information between specialists, and making the final call. The AI writes every line of code. I make every decision about what the code should do.

    And that turned out to be the hard part. Not the code — the decisions. Which parsing library to use. How to handle files that are too big. What "quality" means for a code block. When to show warnings vs. when to stay quiet. Those are design problems, not programming problems, and that's where a human still has to show up.

    The AI audit 

    Once it was working, I took the entire codebase and submitted it to five different AI models — ChatGPT, Gemini, Grok, DeepSeek, and Kimi — and asked them all to find security issues and bugs.

    They found 16 real problems. Things like: a symlink attack that could write files outside the project folder, a cross-site request forgery vulnerability on the local server, file writes that could corrupt data if your machine crashed at the wrong moment. Real issues that a solo developer (or solo non-developer) would probably never catch.
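
    To make the first and third of those concrete, fixes for those classes of bug typically look something like the following. This is an illustrative sketch, not the project's actual patches:

    # Illustrative fixes for two of the issue classes above (not the
    # project's actual patches).
    import os
    import tempfile
    from pathlib import Path

    def safe_output_path(project_root: Path, relative: str) -> Path:
        """Refuse paths that escape the project folder via .. or symlinks."""
        candidate = (project_root / relative).resolve()
        if not candidate.is_relative_to(project_root.resolve()):
            raise ValueError(f"refusing to write outside the project: {relative}")
        return candidate

    def atomic_write(path: Path, data: str) -> None:
        """Write to a temp file, fsync, then rename, so a crash mid-write
        never leaves a half-written file at the destination."""
        fd, tmp = tempfile.mkstemp(dir=path.parent)
        try:
            with os.fdopen(fd, "w") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, path)
        except BaseException:
            os.unlink(tmp)
            raise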

    I fixed all 16, then sent the updated code back for a second round of review. They found 26 more things to fix. 42 fixes total across 6 rounds. The code is genuinely better for it. I basically got a free security audit from five senior engineers who never sleep.

    Why I'm posting this here

    I built Codedocent for people like me — and I think a lot of that...
