
Almost Building a Production Microblog in One Weekend: The Receipts

Most people claim they "built something with AI." Few show the receipts.

This weekend I built a full-stack microblog for this site. Not a toy. Not a demo. A production-ready application with 72 passing tests, S3 image storage, SEO optimization, and social sharing, deployed and running at kennwilliamson.org/blog.

But here's what makes this different: every commit is public. You can see exactly what AI-assisted development looks like in the real world, not just the highlight reel.

The Build: By the Numbers

Let's start with the receipts:

Timeline:

  • Sunday, Nov 16 (7:12 PM - 8:29 PM): Phases 0-4 (repository, service, API)
  • Tuesday, Nov 18 (morning): Phase 5 (full UI implementation)
  • Tuesday, Nov 18 (evening): CI fixes and validation script
  • Wednesday, Nov 19 (early morning): S3 environment variables and documentation
  • Wednesday, Nov 19 (late morning/afternoon): Word wrapping debugging and markdown renderer pivot
  • Total active development: 12-14 hours across 5 sessions on 3 days (Sunday, Tuesday, and Wednesday; no work Monday)

The Code:

  • 100% AI-written (Claude Code by Anthropic)
  • 72 passing tests (14 repository + 47 service + 11 API)
  • 46 files changed in the final phase
  • 4,248 additions, 328 deletions in Phase 5 alone

Tech Stack:

  • Backend: Rust 1.90.0 + Actix-web
  • Frontend: Nuxt.js 4.0.3 (Vue 3 + SSR + TypeScript)
  • Database: PostgreSQL 17.0
  • Storage: AWS S3
  • Infrastructure: Docker Compose + Nginx + AWS EC2

You can see every commit in the public repository.

The Phases: What AI-Assisted Development Actually Looks Like

Phases 0-2: Infrastructure + Repository Layer (45 minutes)

Commit: e064049 (Nov 16, 7:12 PM)

This is where you set the foundation. The blog needs a database table, repository interfaces, and the basic data structures. Claude handled:

  • PostgreSQL migration with full-text search (tsvector indexes)
  • BlogPostBuilder test utility with fluent API
  • MockBlogRepository and MockImageStorage for unit tests
  • PostgresBlogRepository with 8 methods (create, read, update, delete, list, search, tags)
  • 14 comprehensive testcontainer-based tests (all passing)
  • Full-text search, tag aggregation, pagination, and filtering

What I did: Provided the architecture pattern (3-layer: API → Service → Repository) and reviewed the migration schema for performance indexes.
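
To make that concrete, here's a rough sketch of what the repository boundary can look like. This is my approximation, not the actual code: the async-trait usage, the helper types, and the subset of methods shown are illustrative.

```rust
// Hedged sketch of the repository trait boundary. Method names follow the
// bullet list above, but the signatures and helper types are approximations.
use async_trait::async_trait;

pub struct BlogPost {
    pub id: i64,
    pub slug: String,
    pub title: String,
    pub body_markdown: String,
}

#[derive(Debug)]
pub struct RepositoryError(pub String);

#[async_trait]
pub trait BlogRepository: Send + Sync {
    async fn create(&self, post: BlogPost) -> Result<BlogPost, RepositoryError>;
    async fn find_by_slug(&self, slug: &str) -> Result<Option<BlogPost>, RepositoryError>;
    async fn update(&self, post: BlogPost) -> Result<BlogPost, RepositoryError>;
    async fn delete(&self, id: i64) -> Result<(), RepositoryError>;
    async fn list(&self, limit: i64, offset: i64) -> Result<Vec<BlogPost>, RepositoryError>;
    async fn search(&self, query: &str) -> Result<Vec<BlogPost>, RepositoryError>;
    async fn list_tags(&self) -> Result<Vec<String>, RepositoryError>;
}
```

The point of the interface: PostgresBlogRepository and MockBlogRepository can both implement it, so everything above the repository layer is testable without a real database.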

Phase 3: Service Layer (45 minutes)

Commit: 9f68160 (Nov 16, 8:01 PM)

The business logic layer. This is where posts get created, updated, searched, and validated. Claude implemented:

  • BlogService with builder pattern and dependency injection
  • Full-text search with PostgreSQL's tsvector
  • Slug generation with collision handling (appends "-2", "-3" suffixes)
  • Automatic excerpt generation from the first 160 characters of markdown
  • published_at timestamp set when status changes to "published"
  • Utility functions: slugify, strip_markdown, truncate_text
  • 47 passing service tests (100% coverage)
    • 6 create tests, 6 read tests, 4 update tests, 4 delete tests
    • 10 utility tests, 17 builder validation tests
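
To make the collision handling concrete, here's a minimal sketch of the approach; the real slugify and uniqueness logic lives in the service utilities, so treat the details here as illustrative.

```rust
// Hedged sketch of slug generation with collision handling, as described
// above. The actual utility functions in the service layer may differ.
fn slugify(title: &str) -> String {
    title
        .to_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
        .collect::<String>()
        .split('-')
        .filter(|s| !s.is_empty())
        .collect::<Vec<_>>()
        .join("-")
}

// Appends "-2", "-3", ... until the slug no longer collides with an
// existing post. `exists` stands in for a repository lookup.
fn unique_slug(title: &str, exists: impl Fn(&str) -> bool) -> String {
    let base = slugify(title);
    if !exists(&base) {
        return base;
    }
    let mut n = 2;
    loop {
        let candidate = format!("{base}-{n}");
        if !exists(&candidate) {
            return candidate;
        }
        n += 1;
    }
}
```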

What I did: Course-corrected when Claude tried to reinvent markdown parsing instead of using existing utilities. This is a pattern I see constantly: AI ignoring established patterns in favor of "building it fresh."

Phase 4: API Routes (30 minutes)

Commit: a3276eb (Nov 16, 8:29 PM)

HTTP endpoints with authentication, validation, and error handling. Claude added:

  • Public endpoints: GET /backend/public/blog/posts, /backend/public/blog/posts/{slug}, /backend/public/blog/tags, /backend/public/blog/search
  • Admin endpoints: POST/PUT/DELETE /backend/protected/admin/blog/posts
  • JWT authentication + admin role authorization for mutations
  • Service registration with MockImageStorage
  • 11 passing HTTP integration tests with testcontainers
    • Public access, authentication errors (401), authorization errors (403)
    • CRUD operations, pagination, search functionality
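
Here's roughly how that public/protected split can be wired in Actix-web. The scope paths match the endpoints above, but the handlers and the omitted JWT middleware are placeholders, not the project's actual code.

```rust
// Hedged sketch of the route layout: public reads vs. admin mutations.
use actix_web::{web, App, HttpResponse, HttpServer, Responder};

async fn list_posts() -> impl Responder {
    HttpResponse::Ok().json(serde_json::json!({ "posts": [] }))
}

async fn create_post() -> impl Responder {
    HttpResponse::Created().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // Public, unauthenticated blog reads
            .service(
                web::scope("/backend/public/blog")
                    .route("/posts", web::get().to(list_posts)),
            )
            // Admin mutations; JWT auth + admin role middleware would wrap this scope
            .service(
                web::scope("/backend/protected/admin/blog")
                    .route("/posts", web::post().to(create_post)),
            )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```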

What I did: Verified authentication middleware was correctly applied. AI sometimes forgets security boundaries.

Phase 5: Full UI, Refactoring, and Polish (3-4 hours)

Commit: 223f34b (Nov 18, 11:54 AM)

This is the big one. Claude built:

  • Admin Interface: BlogEditorForm (400 lines), BlogImageUpload (175 lines), BlogListTab (205 lines)
  • Public Pages: Blog list (/blog), individual post pages (/blog/[slug])
  • Blog Components: BlogPostCard, BlogPostContent (md-editor-v3 preview), BlogSearchBar, BlogPagination, BlogShareButtons
  • Backend: S3ImageStorage implementation (129 lines), image upload API route
  • State Management: Blog Pinia store (156 lines), blogService (108 lines)
  • Form Refactoring: Eliminated duplicate state anti-pattern across 6 forms (register, login, reset-password, forgot-password, SecurityForm, AccountInformationForm)
  • SEO: Meta tags and Open Graph integration for social sharing
  • Infrastructure: Nginx routing, Docker configuration, bot protection design doc
  • 46 files changed: 4,248 additions, 328 deletions

What I did: This is where I spent the most time course-correcting. Claude kept falling into anti-patterns on the frontend: creating duplicate state management, not using our established composables, reinventing patterns we already had.

The hardest part of AI-assisted development isn't getting code written. It's keeping the AI aligned with your existing architecture as the codebase grows.

Post-Phase 5: Debugging and Fixes (6-8 hours across 3 days)

The feature worked, but reality hit when deploying and testing in production:

CI Fixes (Nov 18, 6:09 PM) Commit: 9d312c6

  • Fixed clippy warnings (conditional compilation for S3ImageStorage, async test signatures)
  • Fixed npm security vulnerability (glob command injection)
  • Disabled unused image crate features to remove unmaintained paste dependency
  • All 493 backend tests passing, no security vulnerabilities

CI Validation Script (Nov 18, 6:09 PM) Commit: 42702e8

  • Created ./scripts/ci-check.sh to run exact CI checks locally
  • Prevents "fix CI" commit chains by catching issues before push
  • Runs clippy with --all-targets (catches test file lints that basic cargo clippy misses)

S3 Environment Variables (Nov 19, 5:30 AM - 8:16 AM) Commits: 3867147, e2fc50f

  • Added missing AWS_S3_BUCKET_BLOG_IMAGES to docker-compose files
  • Updated all project documentation for microblog feature
  • Common deployment mistake: environment variables added to code but not to deployment configuration

Word Wrapping Bug (Nov 19, 9:58 AM) Commit: 60f7d05

  • Blog component directory renamed (blog → Blog for consistency)
  • Attempted CSS fix for word-break: break-all causing mid-word breaks ("Wee\nkend")
  • This was treating the symptom, not the root cause

Markdown Renderer Pivot (Nov 19, 11:46 AM) Commit: 64c2fe0

  • Replaced md-editor-v3 (500KB+ editor library) with markdown-it (90% smaller renderer)
  • Added markdown-it-prism (syntax highlighting), @markslides/markdown-it-mermaid (diagrams)
  • Added DOMPurify for XSS protection
  • Created custom Nuxt plugin for safe markdown rendering
  • Net result: 811 deletions vs 208 insertions (a net 603 lines removed!)
  • Bonus fix: Resolved session role duplication bug (setUserSession → replaceUserSession)

What People Don't Tell You About AI-Assisted Development

The Good:

When AI handles implementation, you can focus entirely on design and user experience. No mental bandwidth wasted on syntax, boilerplate, or "how do I do this in Vue again?" Just "what should this do?" and then reviewing the implementation.

I could think at the architecture level the entire time. "This needs a 3-layer pattern with repository traits for testability" became working code in minutes.

The Hard:

You spend more time on architecture and design, not less.

Claude wrote 100% of the code, but I had to:

  • Constantly course-correct when it forgot our patterns
  • Review every change for anti-patterns
  • Verify security boundaries
  • Ensure consistency across 46 files

The analytics feature? We tried to add Umami (self-hosted analytics) but it wasn't working correctly, so we deferred it. With traditional development, I might have spent hours debugging. With AI assistance, I said "this isn't worth the time, let's move on."

The Surprising:

The frontend was harder to keep on track than the backend. Rust's type system and our established repository pattern kept Claude constrained. Vue's flexibility gave it more room to drift into anti-patterns.

Why Build This When Medium and LinkedIn Exist?

Simple: I want to own my audience.

LinkedIn is rented land. If you subscribe to my blog, I can reach you directly. If LinkedIn changes their algorithm tomorrow, I don't lose that connection.

Also, I wanted something fun to build this weekend. Building things is how I learn, how I think, and honestly, how I relax.

The Tech Stack: Why These Choices?

Rust Backend: Type safety catches errors at compile time. When AI generates code, those compile-time guarantees mean fewer bugs make it to runtime. The 3-layer architecture (API → Service → Repository) makes every layer independently testable.
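
A minimal sketch of why that layering pays off: the service depends on a trait, so a unit test can hand it an in-memory fake instead of Postgres. Every name below is illustrative, not the project's actual code.

```rust
// Hedged sketch: dependency injection via a trait object makes the service
// testable without a database.
use std::sync::Arc;

trait BlogRepository: Send + Sync {
    fn count_posts(&self) -> usize;
}

struct InMemoryBlogRepository {
    posts: Vec<String>,
}

impl BlogRepository for InMemoryBlogRepository {
    fn count_posts(&self) -> usize {
        self.posts.len()
    }
}

struct BlogService {
    repository: Arc<dyn BlogRepository>,
}

impl BlogService {
    fn post_count(&self) -> usize {
        self.repository.count_posts()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn counts_posts_without_a_database() {
        let repo = Arc::new(InMemoryBlogRepository {
            posts: vec!["first-post".into()],
        });
        let service = BlogService { repository: repo };
        assert_eq!(service.post_count(), 1);
    }
}
```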

Nuxt.js Frontend: SSR for SEO. Search engines need server-rendered HTML to index blog posts. Client-side SPAs don't cut it for public content.

PostgreSQL: Full-text search out of the box. JSONB for flexible tag storage. Proven reliability. The migration includes 5 performance indexes. This blog is built to scale.
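
For the curious, the search endpoint runs roughly this kind of query. This sketch uses sqlx for illustration (whether the project uses sqlx or another driver, the query shape is the point), and the table and column names are assumptions, not the real schema.

```rust
// Hedged sketch of a PostgreSQL full-text search query.
use sqlx::PgPool;

#[derive(sqlx::FromRow)]
struct PostSummary {
    slug: String,
    title: String,
    excerpt: String,
}

async fn search_posts(pool: &PgPool, query: &str) -> Result<Vec<PostSummary>, sqlx::Error> {
    sqlx::query_as::<_, PostSummary>(
        r#"
        SELECT slug, title, excerpt
        FROM blog_posts
        WHERE search_vector @@ websearch_to_tsquery('english', $1)
        ORDER BY ts_rank(search_vector, websearch_to_tsquery('english', $1)) DESC
        LIMIT 20
        "#,
    )
    .bind(query)
    .fetch_all(pool)
    .await
}
```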

S3 Storage: Blog images live in S3. The EC2 instance uses IAM roles for authentication (no credentials in environment variables). Featured images are public, uploaded via authenticated admin endpoints.
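
A hedged sketch of what that upload path can look like with the AWS SDK for Rust: the default credential chain resolves the EC2 instance's IAM role, so no keys live in the environment. The bucket lookup and content type below are illustrative, not the actual S3ImageStorage implementation.

```rust
// Hedged sketch of an S3 upload; on EC2 the default provider chain picks up
// the instance's IAM role, so no access keys are configured explicitly.
use aws_sdk_s3::{primitives::ByteStream, Client};

async fn upload_image(bytes: Vec<u8>, key: &str) -> Result<(), aws_sdk_s3::Error> {
    let config = aws_config::load_from_env().await; // IAM role on EC2, env/profile locally
    let client = Client::new(&config);
    client
        .put_object()
        .bucket(std::env::var("AWS_S3_BUCKET_BLOG_IMAGES").unwrap_or_default())
        .key(key)
        .body(ByteStream::from(bytes))
        .content_type("image/webp") // content type shown for illustration only
        .send()
        .await?;
    Ok(())
}
```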

The Markdown Renderer Pivot: When I Didn't Verify AI's Research

After the microblog launched, I wanted to publish this post. That's when reality hit.

What Actually Happened: During Phase 5, I asked Claude to add markdown rendering. Claude did do a web search and recommended md-editor-v3; 17M weekly downloads seemed legitimate. I approved it without verification. The problem? It's a full editor library (500KB+), and we were only using its preview component. Worse, we weren't even using the actual editor for writing posts, just a plain <textarea>.

The Debugging Spiral: Word wrapping was breaking mid-word. "Weekend" became "Wee\nkend" in titles. We spent hours:

  • Fighting CSS specificity with !important overrides
  • Trying to override word-break: break-all from the library
  • Discovering the library's opinionated styles were incompatible with our design

The Real Problem: I didn't give Claude a decision matrix for library selection. No criteria like:

  • "Show me bundle size comparison"
  • "Is this an editor or a renderer?"
  • "Does it have TypeScript support?"
  • "Compare top 3 options by weekly downloads"

The Second Research: When I did require Claude to show npm stats, GitHub stars, and use-case fit, the truth emerged immediately: markdown-it (11M downloads, 20.6K stars) was the right choice. Secure by default, CommonMark compliant, 90% smaller, and built for display, not editing.

The Pivot: Another 4 hours to:

  1. Uninstall md-editor-v3 (removed 64 packages!)
  2. Research markdown-it plugins (avoiding ones with security vulnerabilities)
  3. Install markdown-it, @markslides/markdown-it-mermaid, markdown-it-prism, DOMPurify
  4. Create a custom Nuxt plugin with proper XSS sanitization
  5. Rewrite BlogPostContent component
  6. Create TypeScript declarations for packages without types
  7. Debug container volume mounts for type definitions
  8. Test everything again

The Cost: What was supposed to be a weekend project turned into 10-12 hours because I didn't verify AI's architectural choice with a proper decision framework.

Lessons for Other Developers

If you're considering AI-assisted development:

1. Architecture matters more, not less. AI can write thousands of lines of code. But if your architecture is unclear, those thousands of lines will be inconsistent chaos.

2. Established patterns are your guardrails. Document your patterns. Create examples. The more consistency you have, the easier it is to course-correct when AI drifts.

3. Test-driven development is essential. 72 tests meant I could verify changes quickly. When AI refactored 6 forms in Phase 5, the tests caught regressions immediately.

4. Review everything. AI-written code still needs human judgment. Security, performance, and maintainability all require experience AI doesn't have.

5. Give AI a decision matrix for architectural choices. When AI needs to pick a library, provide criteria upfront: "Compare top 3 by weekly downloads, bundle size, maintenance status, and use case fit. Show me the trade-offs." Without this framework, AI will pick something that seems reasonable but might not fit your actual needs.

6. Verify AI's research, especially for big decisions. AI did research and recommended md-editor-v3 with seemingly good stats (17M downloads). But I didn't verify:

  • Is this the right type of library? (Editor vs. renderer)
  • What's the bundle size impact?
  • Are there lighter alternatives?

The 4-hour pivot could have been avoided with 5 minutes of verification when I saw "markdown editor" in the package name.

7. Know when to defer. Analytics wasn't working? Move on. Don't get stuck debugging AI-generated code that's not core to your goal. This is the same insight from my AI philosophy page. AI is a tool for exploration and velocity, not magic.

8. The human is the architect, AI is the builder. You're not just reviewing code for bugs. You're verifying every architectural decision: library choices, design patterns, security boundaries. The moment you stop being the architect, the codebase becomes AI-generated chaos.

What's Next

This blog is live. You're reading the first post on it.

I'll be writing about:

  • Christian Voluntarism and faith
  • Software architecture and AI-assisted development
  • Building in public and learning openly
  • Personal reflections on fatherhood, faith, and technology

Want to follow along? Subscribe at kennwilliamson.org for email notifications of new posts. I'm building an audience outside of LinkedIn's algorithm.

Want to see the code? The repository is public: github.com/kenn-williamson/kennwilliamsondotorg

Want to build something similar? The commit history shows every phase. Use it as a blueprint.

The Receipts

Here are the actual commits (all clickable links to GitHub):

Core Feature Implementation (Sunday, Nov 16):

  • e064049 - feat: implement microblog Phase 0-2 (infrastructure + repository layer)
  • 9f68160 - feat: implement microblog Phase 3 (service layer with 47 passing tests)
  • a3276eb - feat: implement microblog Phase 4 (API routes with 11 passing HTTP tests)

UI Implementation (Tuesday, Nov 18):

  • 223f34b - feat: implement microblog Phase 5 (UI, refactoring, and polish)
  • 9d312c6 - fix: resolve CI errors (clippy, tests, cargo audit, npm audit)
  • 42702e8 - feat: add local CI validation script to prevent remote failures

Deployment Fixes (Wednesday, Nov 19):

  • 3867147 - fix: add AWS S3 bucket environment variable for blog images
  • e2fc50f - fix: add blog S3 env var and update docs for microblog deployment

Markdown Renderer Pivot (Wednesday, Nov 19):

  • 60f7d05 - fix: rename blog components directory and fix word wrapping
  • 64c2fe0 - fix: resolve session role duplication and improve markdown rendering

No smoke and mirrors. No "I used AI" without showing the work. Just public commits you can inspect, test coverage you can verify, and a production app you can use.

That's what AI-assisted development looks like when you show the receipts.


About the Author: Kenn Williamson is an Enterprise Architect, single dad, and follower of Christ exploring the intersection of faith, technology, and Christian Voluntarism. He builds software in public at kennwilliamson.org and shares his journey on LinkedIn.
