It’s 11:47 PM on a Tuesday. I’m in my home office on my personal computer. There’s a good cup of coffee in the mug — my third. The kids are asleep. On screen is a production deploy log for a SaaS I just shipped — RevelAI, an AI chat app at revelai.ai — and I’m watching the green checkmarks roll in, slightly stunned that any of this exists.
I’m Anders. A slice of the Microsoft community knows me from an open-source script called ConfigMgr Client Health. I built RevelAI on evenings and weekends around a day job. Almost entirely with an AI agent. Here’s what actually happened.
What I built with my Claude Code workflow
RevelAI is an AI chat app at revelai.ai. It routes your conversations to the model best suited to the question, with a clean interface and a few opinions baked in about how the defaults should behave.
The features I cared about getting right:
- Multi-model routing — access to all the major frontier models through one subscription.
- Conversation history that actually works, and that you own.
- Image, document, and PDF input — drop in a screenshot, a contract, or a research paper and ask questions about it.
- Image and video generation — generate images and short videos directly from the chat.
- Voice input and voice output — speak to the app and have it read responses back to you.
- Context compaction so long conversations do not run out of room.
- A mobile-friendly UI that does not feel like an afterthought.
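Context compaction deserves a word, since it is the least obvious feature on that list. A minimal sketch of the idea in TypeScript — not RevelAI’s actual implementation, and with a summarizer stubbed out where a real app would make an LLM call:

```typescript
// Minimal sketch of context compaction (illustrative, not RevelAI's code).
// When the estimated token count exceeds a budget, older messages are
// collapsed into one summary message and only the recent tail is kept.

type Message = { role: "user" | "assistant" | "system"; content: string };

// Rough token estimate: ~4 characters per token for English text.
const estimateTokens = (msgs: Message[]): number =>
  msgs.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);

function compact(
  history: Message[],
  budget: number,
  summarize: (msgs: Message[]) => string // in practice, an LLM call
): Message[] {
  if (estimateTokens(history) <= budget) return history;

  // Drop from the front until the tail fits in half the budget, leaving
  // the other half for the summary and the next model reply.
  const tail = [...history];
  const dropped: Message[] = [];
  while (tail.length > 1 && estimateTokens(tail) > budget / 2) {
    dropped.push(tail.shift()!);
  }
  const summary: Message = {
    role: "system",
    content: `Summary of earlier conversation: ${summarize(dropped)}`,
  };
  return [summary, ...tail];
}
```

The point is only the shape: the conversation never hard-stops, it just gets lossier at the far end.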
On the stack side: Next.js with React and TypeScript on the frontend, Tailwind and shadcn/ui for styling, a managed authentication provider, hosted billing, Postgres with Prisma on top, OpenRouter as the LLM gateway, and a self-hosted VPS running Docker, with GitHub Actions wired up for push-to-deploy.
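For context, a push-to-deploy setup like that can be sketched as a GitHub Actions workflow along these lines. Every concrete name here — registry, secrets, compose file — is a placeholder, not RevelAI’s actual pipeline:

```yaml
# Sketch of push-to-deploy to a Docker VPS (placeholder names throughout).
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t registry.example.com/app:${{ github.sha }} .
          docker push registry.example.com/app:${{ github.sha }}
      - name: Restart containers on the VPS
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            docker pull registry.example.com/app:${{ github.sha }}
            docker compose up -d
```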
Infrastructure runs in the low double digits of dollars per month. LLM cost is pay-as-you-go and tracks usage. It only takes a few paying subscribers before this project is profitable.
What the Claude Code workflow actually did
Claude Code wrote almost all the code for RevelAI. I wrote the workflow that runs it.
I want to be specific about that, because it’s the most useful thing I learned building this app. The skill that mattered turned out not to be writing code. It was setting up the agents, the prompts, and the review gates so the agents could write code without me babysitting every line.
I run five Claude Code agents, each with a specific job:
- Architecture agent — designs the change before any code is written. Decides what to build, what to touch, and what to leave alone.
- Development agent — writes the actual code based on the architectural plan.
- Testing agent — writes the tests, runs them, and reports failures back into the loop.
- Code-review agent — reads the diff like a senior engineer would, and rejects work that doesn’t pass.
- Security-review agent — looks for the categories of bug I don’t want shipping to production.
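The division of labor can be sketched as a simple loop. Everything below is a stand-in of my own — the role names, the `Agent` shape, the retry budget — not Claude Code internals or my actual prompts:

```typescript
// Illustrative sketch of the five-role pipeline. Each agent would be an
// LLM call with a role-specific system prompt; here they are typed as
// plain functions so the control flow is visible.

type Review = { approved: boolean; feedback: string };
type Agent<In, Out> = (input: In) => Out;

function runPipeline(
  task: string,
  architect: Agent<string, string>,   // task -> plan
  developer: Agent<string, string>,   // plan (+ feedback) -> diff
  tester: Agent<string, Review>,      // diff -> test results
  reviewer: Agent<string, Review>,    // diff -> code review
  security: Agent<string, Review>,    // diff -> security review
  maxRounds = 3
): string | null {
  const plan = architect(task);
  let input = plan;
  for (let round = 0; round < maxRounds; round++) {
    const diff = developer(input);
    const gates = [tester(diff), reviewer(diff), security(diff)];
    if (gates.every((g) => g.approved)) return diff; // ready for human PR review
    // Bounce back with concrete feedback instead of starting over.
    const feedback = gates.filter((g) => !g.approved).map((g) => g.feedback);
    input = `${plan}\nPrevious attempt rejected:\n${feedback.join("\n")}`;
  }
  return null; // escalate to the human
}
```

The important property is that nothing reaches me unless all three gates pass, and rejections carry feedback rather than forcing a cold restart.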
Before any of them touch a file, I spend a lot of time planning the change with Claude Code. Honestly, more time than I used to spend planning anything as a solo developer. That is where the work actually happens. If the plan is wrong, the agents will execute the wrong thing very efficiently. If the plan is right, they ship it without me.
There is still a human in the loop. I review the pull requests, I make the product decisions, and I tune the agents when they start drifting. They are not a stand-in for a development team; they are the developers, and I am the lead.
The honest claim about all this is not “2x faster” or “10x faster”. It is that I would not have built this app at all without this workflow. That is a different claim, and I think it is a better one.
The weekend that almost killed the project
A few weekends into the build, the project started going backwards.
The agents were producing what I would call AI slop. Code that worked on the first run and quietly broke something else two changes later. Features that had worked the week before stopped working. Bugs I had already fixed came back. Every time I asked Claude Code to fix one issue, two new ones showed up somewhere else. I didn’t realize it at the time, but the problem was my workflow.
I spent almost the entire weekend chasing individual bugs and getting nowhere. By Sunday evening, I was ready to give up on the whole project.
Instead of asking Claude Code to fix the next bug, I stopped and looked at the workflow itself. The dev agent was producing slop because nothing in the loop was catching it before it landed. The code-review agent was letting through changes it should have rejected. There was no real gate between “the dev agent thinks this is done” and “this is actually done”.
So I rebuilt the loop:
- The code-review agent got stricter. It now rejects anything that introduces regressions, dead code, or sloppy patterns, and bounces the work back to the dev agent with specific feedback.
- For hard bugs, I run multiple debugger agents in parallel on the same issue. Each one investigates from a different angle and presents a competing root cause analysis. I pick the strongest one, and the dev agent implements the fix.
- Nothing merges until the change passes both architecture review and code review without introducing new issues.
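The parallel-debugger pattern in particular is easy to express in code. A sketch under my own names — the angles, the judge, the `Hypothesis` shape are all illustrative:

```typescript
// Sketch of the parallel-debugger pattern. N agents investigate the same
// bug from different angles; a judge ranks the competing root-cause
// analyses and the strongest one goes to the dev agent for the fix.

type Hypothesis = { angle: string; rootCause: string; confidence: number };

async function debugInParallel(
  bugReport: string,
  angles: string[], // e.g. "data layer", "race condition", "recent diff"
  investigate: (bug: string, angle: string) => Promise<Hypothesis>,
  judge: (hs: Hypothesis[]) => Hypothesis // in practice, another LLM call
): Promise<Hypothesis> {
  // Fan out: every angle is investigated concurrently.
  const hypotheses = await Promise.all(
    angles.map((angle) => investigate(bugReport, angle))
  );
  // Fan in: pick the strongest competing analysis.
  return judge(hypotheses);
}
```

In my setup the "judge" is me reading the competing write-ups, but the fan-out/fan-in shape is the same.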
The frustrating weekend that almost killed the project ended up being the most useful weekend of the build. Once the loop was rebuilt, the agents stopped producing slop and started shipping clean changes that stuck.
The lesson, if there is one: when your AI is producing bad code, the bug is not in the code. The bug is in your process for letting code in.
Why I built this
The first reason was practical. I had been paying for an AI chat wrapper app, and I noticed my credits were going less and less far month over month. Same plan, same price, less usage. The subscription was being quietly devalued. So I decided to build my own.
Once I was already building my own, I figured I might as well solve a second problem I had been bumping into.
I am a Christian, and I use AI tools every day for work. When I started using them for Bible study, I noticed they would soften passages I did not want softened — adding caveats, reinterpreting hard texts as metaphor when the text is not metaphorical. RevelAI is built so it does not do that. It handles scripture the way the people who wrote it intended.
What I would do differently
A few honest reflections in case any of this is useful.
- Invest in the agent workflow earlier. I treated it as something to set up “once I knew what I was building.” That was wrong. The workflow is what lets you figure out what you are building.
- Put Stripe webhooks behind a queue from day one. Debugging webhook idempotency at one in the morning in production is a kind of pain you do not need to experience to learn from.
- Do not build the marketing site too early. I built mine before I had talked to enough potential users. The marketing site can wait. The user research cannot.
- Write things down as you go. This post would have been a lot easier to write if I had been keeping notes the whole time. I will fix that for the next one.
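On the Stripe point specifically, the shape I wish I had started with looks roughly like this. It is a sketch with in-memory stand-ins — a `Set` for what would be a unique index on the event id in Postgres, an array for the job queue — not the real handler:

```typescript
// Sketch of "webhooks behind a queue" with idempotency. The endpoint only
// records the event and acks; a worker does the heavy lifting later.
// Dedup on the Stripe event id is what makes retried deliveries safe.

type StripeEvent = { id: string; type: string };

const seen = new Set<string>(); // stand-in for a unique index on event id
const queue: StripeEvent[] = []; // stand-in for a jobs table

// Webhook endpoint: dedupe, enqueue, return 200 immediately.
function handleWebhook(event: StripeEvent): number {
  if (seen.has(event.id)) return 200; // duplicate delivery: ack, do nothing
  seen.add(event.id);
  queue.push(event);
  return 200; // the billing logic runs in the worker, not here
}

// Worker: drains the queue outside the request path.
function drain(process: (e: StripeEvent) => void): number {
  let n = 0;
  while (queue.length > 0) {
    process(queue.shift()!);
    n++;
  }
  return n;
}
```

With this shape, a retried delivery at one in the morning is a no-op instead of a double charge.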
If there is a meta-lesson, it is that the first paying user taught me more than four weekends of careful planning ever did. Plan less. Ship sooner. The plan you write before shipping is mostly wrong; the plan you write after shipping is mostly right.
What is next
The next big thing is mobile apps. Native iOS and Android. My Apple developer account was just approved, so the iOS build is unblocked and underway. Bible study is a phone-first habit for most people, and a web-only tool is leaving a lot on the table.
I will write a follow-up on the agent workflow itself — prompts, roles, how I tune them — once I have shipped a couple more features through it. The mobile apps will be the first big test of whether the workflow scales beyond the web.
The harder thing, honestly, is distribution. I have built things before, but I have never had to market a SaaS to actual customers. That is a different skill set, and I am going to have to learn it. I have some ideas I want to try that do not cost a fortune the way targeted ads do.
Try it / follow along
RevelAI is live at revelai.ai. There is a free trial. Sign up, take it for a spin, and tell me what is broken — that is genuinely useful feedback.
If you want to see how the agent workflow shakes out as the project grows, follow me here on the blog or on X at @andersrodland.
And if you ship something using a similar Claude Code workflow, I would love to hear about it. Let me know on X/Twitter.