Building KennWilliamson.org

Intelligence Augmentation in Practice

Two months part-time, working with Claude (Anthropic) to learn Rust and Nuxt simultaneously. This proves that IA enables rapid learning when paired with strong fundamentals and active validation.

2 Months (Part-Time Dev) · 600+ Tests (All Layers) · Zero→Prod (Rust/Nuxt) · Live AWS (Production)

Project Overview

KennWilliamson.org serves multiple strategic goals as both a portfolio demonstration and personal platform. This full-stack application showcases professional deployment skills while serving as a learning environment for AI-assisted development in an unfamiliar technology stack.

Portfolio Demonstration

Production-ready full-stack application demonstrating professional engineering practices, deployment expertise, and architectural decision-making.

Learning Platform

Exploring AI-assisted development in Rust and Nuxt.js (unfamiliar technologies) while applying strong architectural knowledge from years of enterprise experience.

Live Production

Deployed at kennwilliamson.org on AWS t3.small instance with SSL, demonstrating real-world deployment capabilities.

Personal Platform

Public testimony platform with community features (incident timers, motivational phrases) and steampunk aesthetic reflecting personal creative vision.

The IA Laboratory

This site isn't just a portfolio. It's a proof-of-concept for the Intelligence Augmentation framework I evangelize professionally. In my talks to manufacturing, oil & gas, and healthcare companies, I explain that GMs (generative models) amplify skilled humans rather than replace them. This project demonstrates that thesis in action.

The Challenge: Learn Rust and Nuxt.js simultaneously (two technologies I'd never used professionally) and build a production full-stack application with proper architecture, comprehensive testing, and real deployment infrastructure.

The Timeline: Two months, part-time (evenings and weekends while juggling work, single dad responsibilities, and three kids' schedules).

Working with Claude (Anthropic) throughout this project, I applied the IA methodology I recommend to clients: human judgment drives decisions, GMs accelerate execution, active validation catches mistakes. The result? A learning curve that would have taken 6+ months in the pre-GM era compressed into 8 weeks part-time.

What IA Enabled

Rapid prototyping, pattern exploration, boilerplate generation, and learning acceleration. Claude helped me explore Rust idioms, Nuxt SSR patterns, and PostgreSQL optimization techniques I'd never encountered.

What Required Human Judgment

Every architectural decision (3-layer pattern, hybrid API design, normalized auth schema). Every security choice (PKCE OAuth, JWT strategy). Every course correction when Claude's suggestions didn't fit requirements.

The IA Pattern in Practice

Claude generates → I validate against requirements → Test reveals gaps → Iterate. For complex features, I used TDD: write tests first, Claude implements, I validate behavior matches specification.

Learning, Not Autopilot

This wasn't Claude writing code while I watched. I learned Rust ownership, lifetimes, and traits. I understand Nuxt SSR hydration, server-side routing, and composition API patterns. IA accelerated learning, it didn't replace it.

For the technical audience, this page documents the full journey: what worked, what didn't, the course corrections we made, and the lessons learned about practicing IA in a complex learning environment. If you're evaluating IA for your team or exploring GM-assisted development, the details below show what the process actually looks like.

The IA Collaboration Cycle


The Human-AI Feedback Loop: Human judgment drives decisions and validates output. Claude accelerates execution and explores patterns. Tests provide fast feedback. Iteration happens continuously. This cycle ran hundreds of times over 2 months.

What the Site Does

This isn't just a technical demonstration. Real users interact with real features requiring real architecture decisions.

Incident Timers

Users track recovery milestones (days since last incident). Each user controls their timer's visibility through two settings: "Show timer at my personal slug?" and "Show timer in the public directory?" Simple feature on the surface.

What it drove: Feature-based role system (email_verified to create/edit timers, preventing spam accounts), user preference architecture, public/protected API routing patterns.

Motivational Phrases

Community members suggest inspirational quotes. Admin reviews and approves via moderation panel. Approved phrases appear in daily rotation for all users. Straightforward workflow.

What it drove: Domain event system with observer pattern, AWS SES integration for approval notifications, email template architecture (Askama compile-time verification), admin panel with moderation queue.

Personal Testimony & Access Requests

Public platform for professional work and faith journey. Personal content (Origins, The Wilderness, Finding Faith, etc.) requires trusted-contact role. Users request access, I approve manually.

What it drove: SEO requirements (Nuxt SSR for search indexing), role-based content gating, access request workflow with email notifications, data privacy architecture (GDPR/CCPA self-service deletion and export).

These straightforward user features required sophisticated technical architecture. The sections below explain how we built them and what we learned along the way.

Architecture

3-Layer Backend Architecture

Evolution: We didn't start here. Initial architecture used fat repositories mixing service logic with data access. Straightforward but not testable.

Problem: Couldn't test business logic without spinning up a database. Every test was an integration test, slow and harder to debug.

Solution: Refactored to 3-layer architecture with dependency injection. TDD revealed the problem early, and the refactor proved its value immediately. Unit tests ran in seconds instead of minutes.

Clean separation of concerns with testability via dependency injection. This pattern made working with Claude dramatically more efficient. We could validate generated code quickly through fast unit tests, catching issues before they became integration problems.

API Layer (routes/)
  • HTTP request/response handling
  • Middleware application (auth, RBAC)
  • Route scoping: public / protected / admin
  • Minimal business logic

Service Layer (services/)
  • Business logic and orchestration
  • Depends on repository traits (not concrete implementations)
  • Modular design: split large services into focused modules
  • Error handling and validation

Repository Layer (repositories/)
  • Data access abstraction
  • Trait-based design for testability
  • PostgreSQL implementations for production
  • Mock implementations for unit testing

Why this matters: Services testable with mock repositories (no database needed). Changes to data layer don't cascade through the system. Each layer has single responsibility. Can swap implementations easily.
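The dependency-injection seam can be sketched in a few lines of Rust. This is a minimal illustration, not the site's actual code: `User`, `UserRepository`, `UserService`, and `MockUserRepository` are hypothetical stand-ins for the real traits and types.

```rust
use std::collections::HashMap;

// Hypothetical domain type, for illustration only.
#[derive(Clone, Debug, PartialEq)]
pub struct User {
    pub id: u64,
    pub email: String,
}

// The service depends on this trait, never on a concrete database type.
pub trait UserRepository {
    fn find_by_email(&self, email: &str) -> Option<User>;
}

// Service layer: business logic only, generic over the repository.
pub struct UserService<R: UserRepository> {
    repo: R,
}

impl<R: UserRepository> UserService<R> {
    pub fn new(repo: R) -> Self {
        Self { repo }
    }

    // An example business rule: treat emails case-insensitively.
    pub fn lookup(&self, email: &str) -> Option<User> {
        self.repo.find_by_email(&email.to_lowercase())
    }
}

// In unit tests, an in-memory mock stands in for PostgreSQL.
pub struct MockUserRepository {
    users: HashMap<String, User>,
}

impl MockUserRepository {
    pub fn with_user(user: User) -> Self {
        let mut users = HashMap::new();
        users.insert(user.email.clone(), user);
        Self { users }
    }
}

impl UserRepository for MockUserRepository {
    fn find_by_email(&self, email: &str) -> Option<User> {
        self.users.get(email).cloned()
    }
}
```

Because `UserService` only sees the trait, the same business logic runs against a mock in unit tests and a PostgreSQL implementation in production.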

Hybrid API Patterns

Two distinct API patterns optimized for different use cases:

/api/* (SSR Proxy)

  • Server-side data fetching for initial page loads
  • Session-based authentication handling
  • Nuxt server acts as proxy to backend
  • Why: SEO, security (httpOnly cookies), UX

/backend/* (Direct Rust)

  • Client-side mutations and API calls
  • Stateless Rust API for maximum throughput
  • Direct backend communication
  • Why: Performance, real-time updates

Request Flow Architecture


Four distinct pathways: ① Built-in Session (Green) - nuxt-auth-utils automatic route reads cookie, no backend/DB | ② Custom Auth Routes (Steel) - Login/register/profile hit Rust backend and update session | ③ SSR (Blue) - Gets session + fetches page data from backend | ④ Direct Backend (Cyan dashed) - Client-side data operations bypass Nuxt.
* All HTTP traffic reverse-proxied through Nginx (SSL termination, routing). Pathways ②③④ share single Rust → PostgreSQL connection.

Service Container Pattern

Centralized dependency injection managing service lifecycle and configuration. Supports environment-specific setup (development, testing, production) with appropriate repository implementations and background task configuration.

Technology Stack

Backend Stack

Rust · Actix-Web · PostgreSQL · SQLx
  • Rust 1.90.0 - Type safety, performance, fearless concurrency. Compile-time guarantees catch errors early, valuable when learning new patterns with Claude.
  • Actix-web 4.x - High-performance async framework handling high throughput with minimal resources, perfect for 2GB RAM deployment constraint.
  • PostgreSQL 17 + SQLx - Compile-time query verification prevents SQL errors. Using pre-built image (ghcr.io/fboulnois/pg_uuidv7) with UUIDv7 extension. Full-text search, JSONB flexibility, ACID compliance.
  • JWT + bcrypt - Stateless authentication with bcrypt password hashing (cost 12) and refresh token rotation.

Frontend Stack

Nuxt.js · Vue.js · TypeScript · TailwindCSS
  • Nuxt.js 4.0.3 - Vue 3 with SSR for SEO. Search engines receive server-rendered HTML for portfolio content indexing.
  • TypeScript - Type safety across full stack. Compile-time error catching and better IDE support.
  • TailwindCSS - Utility-first CSS with custom nautical theme and responsive design system.
  • Pinia + Action Composables - State management with action composable pattern. Pure stores, testable composables, separation of concerns.

Infrastructure

Docker · Nginx · AWS
  • Docker Compose - Container orchestration for development and production. Consistent environments, easy deployment, service isolation.
  • Nginx - Reverse proxy with SSL termination. Static file serving, request routing to Nuxt/Rust, performance optimization.
  • AWS t3.small (2GB RAM) - Production deployment with resource constraints driving optimization decisions. SQLx connection pooling with sensible defaults, PostgreSQL tuned for 2GB environment, stateless API design.

Key Features

Authentication & Security

User Need: Account creation with email/password or Google, email verification to unlock features, password reset when forgotten, self-service account deletion.

Technical Implementation:

  • Google OAuth with PKCE: Proof Key for Code Exchange for enhanced security (architected for multi-provider support)
  • Email verification: Secure token-based verification (24-hour single-use tokens) required for trusted operations
  • Password reset flow: AWS SES integration with secure tokens and email delivery tracking
  • Account management: Self-service deletion with immediate permanent removal (GDPR/CCPA compliant)

Role-Based Access Control (RBAC)

User Need: Control access to features and content. Email verification prevents spam accounts from creating timers/phrases. Trusted contacts can view personal testimony content. Admin moderates phrases and manages users.

Technical Implementation:

  • Feature-based role system: user (read-only base), email-verified (create timers/suggestions), trusted-contact (view personal content), admin (full system control)
  • Middleware-based authorization: Applied at route scope level for clear security boundaries
  • Route scoping: Public, protected, and admin routes with automatic permission enforcement
  • Flexible role management: Admin can grant/revoke manageable roles (preserves immutable base role)
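A feature-based role check of this shape can be sketched as follows. The role and permission names mirror the description above, but the code itself is a hypothetical illustration, not the production middleware.

```rust
// Illustrative feature-based RBAC check; names are stand-ins.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Role {
    User,           // immutable base role: read-only
    EmailVerified,  // may create timers and phrase suggestions
    TrustedContact, // may view personal testimony content
    Admin,          // full system control
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Permission {
    ReadPublic,
    CreateTimer,
    ViewPersonalContent,
    ModeratePhrases,
}

// Middleware-style check: does any held role grant the permission?
pub fn is_allowed(roles: &[Role], perm: Permission) -> bool {
    roles.iter().any(|role| match (role, perm) {
        (_, Permission::ReadPublic) => true,
        (Role::Admin, _) => true,
        (Role::EmailVerified, Permission::CreateTimer) => true,
        (Role::TrustedContact, Permission::ViewPersonalContent) => true,
        _ => false,
    })
}
```

Applying a check like this at route-scope level keeps the security boundary in one place instead of scattering role logic through handlers.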

Domain Events & Notifications

User Flow: User suggests phrase → System fires domain event → Admin receives email notification → Admin approves/rejects → System fires approval event → User receives confirmation email. Also handles security alerts and access request notifications.

Technical Implementation:

  • Event bus with observer pattern: Type-safe event routing via TypeId, fire-and-forget async execution
  • Email notification system: Security alerts, phrase suggestion approvals, access request notifications
  • AWS SES integration: HTML email templates with bounce/complaint tracking and suppression list
  • Concurrent handler execution: Semaphore-based backpressure, error isolation (one handler failure doesn't affect others)
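The TypeId-based routing can be illustrated with a minimal synchronous sketch. The production version described above is async with semaphore backpressure and error isolation, all of which is omitted here; this just shows how events find their handlers by concrete type.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// A handler receives the event as &dyn Any and downcasts internally.
type Handler = Box<dyn Fn(&dyn Any)>;

#[derive(Default)]
pub struct EventBus {
    handlers: HashMap<TypeId, Vec<Handler>>,
}

impl EventBus {
    // Register a handler for one concrete event type.
    pub fn subscribe<E: Any>(&mut self, f: impl Fn(&E) + 'static) {
        let handler: Handler = Box::new(move |any| {
            if let Some(event) = any.downcast_ref::<E>() {
                f(event);
            }
        });
        self.handlers
            .entry(TypeId::of::<E>())
            .or_default()
            .push(handler);
    }

    // Route the event to every handler registered for its TypeId.
    pub fn publish<E: Any>(&self, event: &E) {
        if let Some(list) = self.handlers.get(&TypeId::of::<E>()) {
            for h in list {
                h(event);
            }
        }
    }
}
```

A "phrase suggested" event and its notification handler would simply be one `subscribe` call keyed by the event's type.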

Admin Panel

User Need: Admin (me) needs to moderate phrase suggestions, grant access requests from trusted contacts, manage user accounts, view system statistics, and handle email deliverability issues.

Technical Implementation:

  • User management: Account activation/deactivation, role assignment, password reset, promote to admin
  • Phrase moderation: Approval workflow for user-submitted motivational phrases with admin review
  • Access request handling: Review and approve/reject trusted-contact role requests with email notifications
  • System dashboard: Statistics, user activity, content moderation queue
  • Email suppression: Bounce and complaint handling for deliverability compliance

Data Privacy & Compliance

User Need: Users need control over their data. Self-service account deletion when leaving. Data export for portability. Privacy-respecting design without invasive tracking.

Technical Implementation:

  • GDPR/CCPA compliance: Self-service account deletion (immediate permanent removal), data export in JSON format
  • Normalized auth schema: 5-table architecture (users, credentials, external_logins, profiles, preferences) with clear data boundaries
  • Email suppression list: AWS SES compliance for bounce and complaint handling
  • Privacy-first design: No analytics tracking, no third-party cookies, minimal data collection

Security & Compliance

JWT + Refresh Token Hybrid

Short-lived access tokens (1 hour) with long-lived refresh tokens (1 week rolling, 6 month hard limit).

  • Stateless JWT for performance (no DB lookup per request)
  • Revocable refresh tokens (database-backed)
  • Token rotation on every refresh (detects theft)
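Rotation-based theft detection works roughly like the following sketch, which uses in-memory sets as stand-ins for the database-backed refresh token table. The token values and store type are illustrative, not the site's actual schema.

```rust
use std::collections::HashSet;

// Illustrative refresh-token store: one active token per session,
// with retired tokens remembered so replays can be detected.
pub struct RefreshTokenStore {
    active: HashSet<String>,
    retired: HashSet<String>,
    counter: u64,
}

pub enum RefreshOutcome {
    Rotated(String), // old token retired, new token issued
    TheftSuspected,  // a retired token was replayed: revoke the session
    Unknown,         // token never issued by this store
}

impl RefreshTokenStore {
    pub fn new(initial: &str) -> Self {
        let mut active = HashSet::new();
        active.insert(initial.to_string());
        Self { active, retired: HashSet::new(), counter: 0 }
    }

    pub fn refresh(&mut self, presented: &str) -> RefreshOutcome {
        if self.active.remove(presented) {
            // Normal path: rotate. The old token can never be used again.
            self.retired.insert(presented.to_string());
            self.counter += 1;
            let next = format!("token-{}", self.counter);
            self.active.insert(next.clone());
            RefreshOutcome::Rotated(next)
        } else if self.retired.contains(presented) {
            // A previously rotated token came back: likely stolen.
            RefreshOutcome::TheftSuspected
        } else {
            RefreshOutcome::Unknown
        }
    }
}
```

Because every legitimate refresh retires its token, a stolen token eventually collides with a retired one, signalling that the whole session should be revoked.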

bcrypt Password Hashing

Industry-standard password hashing with cost factor 12.

  • Slow by design (brute force resistant)
  • Adaptive cost (can increase over time)
  • Battle-tested, widely supported

OAuth with PKCE

Authorization Code flow with PKCE (Proof Key for Code Exchange).

  • Prevents authorization code interception
  • Required for public clients (SPAs)
  • More secure than implicit flow

Email Verification

Token-based verification required for trusted operations.

  • Secure random tokens (24-hour expiration)
  • Single-use tokens (prevents replay attacks)
  • Prevents spam accounts, confirms identity
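The single-use, time-limited behavior can be sketched as below. This is an illustration only: the real tokens are cryptographically random and persisted, whereas here the token value is a plain string and state lives in a struct.

```rust
use std::time::{Duration, SystemTime};

// Illustrative single-use verification token with a 24-hour lifetime.
pub struct VerificationToken {
    value: String,
    issued_at: SystemTime,
    used: bool,
}

impl VerificationToken {
    pub fn new(value: &str) -> Self {
        Self {
            value: value.to_string(),
            issued_at: SystemTime::now(),
            used: false,
        }
    }

    // Consuming verification: succeeds at most once, only within 24 hours.
    pub fn verify(&mut self, presented: &str, now: SystemTime) -> bool {
        let fresh = now
            .duration_since(self.issued_at)
            .map(|age| age <= Duration::from_secs(24 * 60 * 60))
            .unwrap_or(false);
        if !self.used && fresh && presented == self.value {
            self.used = true; // single use: replay attempts fail
            return true;
        }
        false
    }
}
```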

HTTPS/TLS Encryption

All connections encrypted with SSL termination at Nginx.

  • SSL certificates for production domain
  • Self-signed certs for local development
  • HTTP-only secure cookies for auth tokens

GDPR/CCPA Compliance

Data privacy compliance with user rights enforcement.

  • Immediate account deletion (no grace period)
  • Data export in machine-readable JSON format
  • Clear data boundaries in normalized schema

Testing Strategy

Comprehensive test coverage (~600 tests total) across all layers with paradigm-based testing approach. TDD workflow with Claude: write tests defining behavior, Claude implements, validate against specification. Fast feedback loop crucial for learning new patterns.

Testing Pyramid: ~600 Tests Across All Layers


Paradigm-Based Testing: Each layer tested at the appropriate level. Repository tests verify database operations with real PostgreSQL. Service tests verify business logic with mocked data. API tests verify HTTP behavior and integration. Frontend tests verify components, composables, and utilities.

Repository Layer

Integration tests with testcontainers

  • Real PostgreSQL database in Docker
  • Tests actual SQL queries and transactions
  • Verifies database constraints and indexes

Service Layer

Unit tests with mock repositories

  • Fast execution, no database required
  • Tests business logic in isolation
  • Mockall for trait-based mocking

API Layer

End-to-end tests with actix-test

  • Full HTTP request/response cycle
  • Tests middleware and route handlers
  • Verifies authentication and authorization

Frontend Testing: Component tests, action composable tests, pure service tests, and utilities. 175+ tests achieving 100% success rate with comprehensive coverage of action composables, pure stores, and services.

Testing philosophy: Each layer tested at appropriate level. Repository tests verify database operations. Service tests verify business logic with mocked data. API tests verify HTTP behavior and integration.

Infrastructure & Deployment

Production Deployment

Live at kennwilliamson.org on AWS t3.small instance (2GB RAM). All resource allocation decisions driven by memory constraints.

Docker Compose

Container orchestration for 4 services: Nginx, Nuxt.js, Rust backend, PostgreSQL. Persistent volumes for database, automated health checks, environment-specific configurations.

Nginx Reverse Proxy

SSL termination, static file serving, request routing. Serves frontend assets, proxies API requests to Nuxt/Rust, gzip compression for performance.

Resource Optimization

SQLx connection pooling with sensible defaults, PostgreSQL tuned for 2GB RAM, stateless Rust API design enables horizontal scaling without memory penalties.

AWS SES Integration

Email delivery with bounce/complaint tracking, HTML email templates, suppression list management for deliverability compliance.

Development Workflow

Automated setup scripts for consistent development environment with hot reload for both frontend and backend.

./scripts/dev-start.sh    # Start all services with hot reload
./scripts/health-check.sh # Verify service health
./scripts/setup-db.sh     # Run database migrations
./scripts/prepare-sqlx.sh # SQLx compile-time verification

CI/CD Pipeline

Automated build, test, and deployment pipeline using GitHub Actions. Every push triggers tests. Every tag triggers production deployment to AWS.

GitHub Actions Workflows

Continuous Integration (CI)

Runs on every push and pull request. Validates code quality before merge.

  • Backend: Rust compilation, Clippy linting (all warnings denied), cargo test suite (600+ tests)
  • Frontend: TypeScript compilation, Vitest suite (175+ tests), build verification
  • Parallel execution: Backend and frontend jobs run concurrently for speed
  • Fast feedback: Catches issues within minutes of push

Continuous Deployment (CD)

Triggers on semantic version tags (v1.0.0, v1.2.3, etc.). Fully automated production deployment.

  • Build & Push: Docker images built for backend, frontend, and migrations. Pushed to GitHub Container Registry (GHCR) with version tags
  • Git LFS: Large assets (images, videos) checked out during build for inclusion in frontend image
  • Deployment: SSH into AWS EC2, pull images from GHCR, run database migrations, restart services with new version
  • Health Checks: Robust 3-tier verification (nginx proxy → direct access → inside container) with verbose logging on failure
  • Automatic Rollback: On health check failure, automatically rolls back to previous version tag

Automated Releases (release-please)

Conventional commits drive automatic versioning and changelog generation.

  • Semantic versioning: feat: bumps minor, fix: bumps patch, BREAKING CHANGE: bumps major
  • Automatic changelog: Generated from conventional commits with links to PRs and commits
  • Release PR workflow: Bot creates PR with version bump and changelog updates, merge triggers release
  • GitHub releases: Automatic creation with release notes, triggering CD pipeline

Deployment Security

Dynamic IP Whitelisting

GitHub Actions runner IP dynamically added to EC2 security group for deployment, then automatically removed. No permanent SSH access from internet.

Secrets Management

GitHub Secrets for AWS credentials, SSH keys, and EC2 configuration. No credentials in repository. Production secrets separate from development.

Container Registry Authentication

GitHub Container Registry with automatic authentication. Public images for easy deployment. Version tagging for rollback capability.

Health-Checked Deployments

Roughly 30 seconds of downtime during container restart. Robust health checks verify services before declaring success, and automatic rollback on failure minimizes prolonged outages.

Course Corrections & What We Learned

Building a production application while learning two new technologies with IA meant iteration, mistakes, and course corrections. Here are the most instructive examples of what went wrong, how we fixed it, and what it taught us about practicing Intelligence Augmentation.

The Great Template Recursion Rabbit Hole

The Problem: Email templates built with Askama (compile-time templating) threw infinite recursion errors. After a dozen+ debugging attempts and web searches, we concluded Askama wasn't production-ready and switched the entire implementation to minijinja (runtime templating).

The Plot Twist: Minijinja gave us the exact same infinite recursion error, but at runtime during tests, not compile time. That's when we finally spotted it: one template was importing itself. Nothing wrong with Askama. Just a bug Claude missed despite reviewing the template multiple times.

Lesson: GMs can confidently miss obvious bugs even after multiple reviews. Better error messages (runtime vs compile-time) sometimes reveal what repeated analysis doesn't. IA requires human validation even when the GM seems confident. We switched back to Askama.

OOP Instincts Meet Rust Idioms

The Problem: Initial implementation used multiple constructors: new(), new_with_oauth(), new_with_xyz(). Coming from C# where multiple constructors are normal, this felt natural. Claude didn't push back. It seemed reasonable.

The Discovery: Learning Rust idioms, we discovered builder pattern is heavily favored. Refactored everything to builders. Cleaner, more flexible, more idiomatic Rust.

Lesson: GMs can follow your lead even when you're wrong. Need to validate against language/framework best practices independently. Your OOP background doesn't automatically translate to every paradigm.

OAuth Architecture: SPA → Redis-Backed PKCE

Initial Approach: Started with SPA-style OAuth flow. Seemed straightforward, matched common patterns.

The Realization: Our architecture required auth to come from the Rust server, not the Nuxt server or client. Couldn't "trust" the frontend with OAuth secrets. SPA pattern wouldn't work.

The Solution: Added Redis for stateful PKCE token management (we already had it for rate limiting). Refactored entire OAuth flow for server-side security.

Lesson: Security requirements can force architecture changes. Early prototypes don't always reveal security constraints. Sometimes you need to build it to understand what's required.

Session Management: The Mental Model Gap

The Challenge: Updating user profiles, display names, and preferences required keeping session state synchronized. I wanted passthrough routes where updates go through Nuxt server to update the session, making session the source of truth.

What Claude Kept Suggesting: Parallel session + store architecture. Session has auth, store has profile, sync them when either changes. Multiple prompt iterations couldn't get past this pattern.

The Breakthrough: Eventually got to the simpler approach: session as store, passthrough routes, single source of truth. No sync complexity.

Lesson: Sometimes the simpler solution requires patient iteration. GMs can get stuck on a pattern even when you're describing something different. This is where variability in Claude's performance shows: autonomous on large features, stuck on conceptual pivots.

PostgreSQL UUIDv7: When to Choose Pragmatism

The Goal: Use UUIDv7 primary keys (time-ordered, better indexing than random UUIDs).

The Struggle: Couldn't get pg_uuidv7 extension installed on vanilla PostgreSQL 17 Docker image. Tried multiple approaches with Claude, researched Docker multi-stage builds, fought with PostgreSQL extension compilation.

The Pragmatic Choice: Used pre-built image (ghcr.io/fboulnois/pg_uuidv7:1.6.0) with extension already compiled. Moved on.

Lesson: Sometimes pragmatic choice beats perfect solution. Don't let the perfect become the enemy of the shipped. The pre-built image works perfectly. Solving the Docker compilation rabbit hole wouldn't have added value.
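The indexing argument for UUIDv7 comes down to its leading 48-bit timestamp. The toy builder below is not spec-compliant UUIDv7 (the random tail is replaced by a simple sequence number), but it shows why time-prefixed ids insert into a B-tree in near-sorted order while fully random UUIDs scatter across it.

```rust
// Toy id with UUIDv7's key property: millisecond timestamp in the
// high bits, so later ids compare greater. Not a real UUIDv7.
pub fn time_prefixed_id(unix_ms: u64, seq: u16) -> u128 {
    // 48-bit timestamp shifted into the top of a 128-bit value,
    // with a sequence number standing in for the random bits.
    ((unix_ms as u128) << 80) | (seq as u128)
}
```

Ids generated a millisecond apart land next to each other in the index, which keeps inserts append-mostly instead of causing random page splits.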

Confidently Investigating the Wrong Problem

The Bug: Public timer list wasn't loading for authenticated users. Simple SSR data loading issue.

Claude's Investigation: Spent 20 minutes confidently debugging JWT tokens, SSR hydration patterns, and Pinia state management. Even started editing the wrong component (user timer history instead of public timer list).

The Fix: One line: await useAsyncData('public-timers-list', ...). Authenticated users just needed to load the public timer data. 30 seconds after human redirection.

Lesson: GMs can be confidently wrong about simple problems, investigating complex solutions when the answer is obvious. Human oversight essential to redirect from rabbit holes. Sometimes "just load the data" beats 20 minutes of SSR hydration analysis.

Resource Targeting: 1GB → 2GB

Initial Plan: Target AWS micro instance (1GB RAM, free tier).

The Discovery: AWS free tier is time-limited, not permanent. Micro instance too constrained for full stack (Nginx, Nuxt, Rust, PostgreSQL).

The Adjustment: Landed on t3.small (2GB RAM) as cost-effective tier. Reasonable price, next tier up is a big jump in cost. All optimization decisions driven by this 2GB constraint.

Lesson: Constraints drive optimization. The 2GB limit forced good architectural decisions: stateless API design, efficient connection pooling, PostgreSQL tuning. Constraints can improve design by preventing over-engineering.

What This Demonstrates About IA

This project validates the Intelligence Augmentation framework I evangelize professionally. It's not just a portfolio piece. It's proof that the IA approach works for complex learning in production environments.

The Proof Points

Speed

Two months part-time from zero Rust/Nuxt knowledge to production deployment. Would have taken 6+ months in the pre-GM era.

Quality

600+ tests across all layers. Clean 3-layer architecture. Live at kennwilliamson.org with real infrastructure, not toy project.

Learning

Genuine understanding of Rust (ownership, lifetimes, traits) and Nuxt (SSR hydration, composition API). IA accelerated learning, didn't replace it.

Why This Is Different from "AI Will Replace Developers"

  • I made every architectural decision: 3-layer pattern, hybrid API design, normalized auth schema, security choices. All human judgment calls.
  • I caught Claude's mistakes: Template recursion bug, session management complexity, non-idiomatic Rust patterns. Validation was critical.
  • I validated against best practices: Builder pattern over constructors, TDD for confidence, security requirements driving OAuth changes.
  • Claude accelerated, didn't replace: Generated boilerplate, explored patterns, helped me learn. But I understand what I built.

For Companies Evaluating IA

This project demonstrates what your team can accomplish with IA:

  • Dramatically faster learning curves - New technologies, frameworks, languages accessible in weeks instead of months
  • But requires strong fundamentals - Architecture knowledge, testing discipline, security awareness all critical
  • Not autopilot - Active collaboration with constant validation, course corrections, human judgment
  • Productivity multiplier - When paired with skilled engineers, IA enables what would've seemed impossible on compressed timelines

The Bottom Line: This site proves the thesis from my IA framework. Generative models amplify skilled humans, they don't replace human judgment.

Two months. Part-time. Zero to production. That's what Intelligence Augmentation looks like when practiced effectively.

Interested in working together? Let's connect.