Reader Copilot: From Idea to Full PWA with Remote Claude Code
The Idea
It started in a book club. There are 3 to 8 of us, and we were reading The Da Vinci Code. Every two pages, there was a reference to a painting, a sculpture, a historical site, or a real person. Someone would ask “what does the Virgin of the Rocks actually look like?”, another would google it, someone else would drop a Wikipedia link, and the conversation would stall.
Reader Copilot was born from that friction: an app where you upload an ePub or PDF, the AI analyzes each chapter, extracts all cultural references (artworks, sculptures, music, places, historical figures, texts), and presents them as interactive cards with real images. Not AI-generated images --- real photographs of the actual works, places, and people.
The concept expanded from there:
- Anti-spoiler system: a “While Reading” mode that only shows references up to your current chapter
- Per-chapter chat: ask the AI questions about what you just read
- Quiz/Trivia: for book club meetups
- Club system: with invite codes to share books and reading progress
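At its core, the anti-spoiler mode is just a visibility filter over references. A minimal sketch of the idea, with a hypothetical `Reference` shape and field names that are illustrative rather than the app's actual schema:

```typescript
// Hypothetical shape of a stored reference; field names are illustrative.
interface Reference {
  title: string;
  chapterIndex: number; // chapter in which the reference first appears
}

/**
 * "While Reading" mode: only surface references from chapters the reader
 * has already reached, so nothing from later chapters leaks through.
 */
export function visibleReferences(
  refs: Reference[],
  currentChapter: number,
): Reference[] {
  return refs.filter((r) => r.chapterIndex <= currentChapter);
}
```

The real implementation presumably filters at the database query level, but the contract is the same: the current chapter is the hard ceiling on what the reader can see.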
The tagline says it all: “Every page has a story behind the story”.
The Execution: Claude Code on a Mac Mini
This is the part I find most worth talking about, because the entire project was built using Claude Code running remotely on a Mac Mini.
The Setup
A Mac Mini running 24/7 as a remote development server. Claude Code as the primary development tool. Me, from wherever --- laptop, phone, anywhere --- giving it instructions and reviewing its output.
Development was organized in phases (0 through 6.5), each with tasks managed in an Obsidian backlog synced to Notion. Claude Code didn’t just write code --- it read the PRD, created tasks in the backlog, picked them up, implemented them, ran tests, and moved on to the next one.
What Claude Code Built
Literally everything:
- Scaffolding the Next.js 16 project with React 19
- Database schema: 14 tables in PostgreSQL (Neon) with Drizzle ORM
- Complete authentication system with better-auth
- Book parsing pipeline (ePub and PDF)
- OpenAI API integration for chapter analysis
- 30+ API endpoints
- All UI components with Tailwind CSS 4 and shadcn/ui
The Image Resolution Chain
One of the most interesting parts of the project is how we resolve images for cultural references. We don’t use generative AI --- we need real photos of the Mona Lisa, not a DALL-E interpretation.
The solution is a multi-layer resolution chain with multiple sources:
```typescript
/**
 * Resolve a real image for a cultural reference.
 * Chain: Wikipedia REST -> Wikidata P18 -> Wikimedia Commons -> Museum APIs
 */
export async function fetchImage(
  searchTerm: string,
  type: string = "other",
): Promise<ImageResult> {
  // Wikipedia REST — fastest, 1 fetch
  const wikipedia = await searchWikipedia(searchTerm);
  if (wikipedia?.imageUrl) return wikipedia;

  // Wikidata P18 — structured, reliable
  const wikidata = await searchWikidata(searchTerm);
  if (wikidata?.imageUrl) return wikidata;

  // Wikimedia Commons — broadest
  const wikimedia = await searchWikimediaCommons(searchTerm);
  if (wikimedia?.imageUrl) return wikimedia;

  // Museum APIs — artwork/sculpture only
  const museum = await searchMuseumApis(searchTerm, type);
  if (museum?.imageUrl) return museum;

  return { imageUrl: null, attribution: null, wikipediaUrl: null, source: null };
}
```

Wikipedia REST is the fastest (single fetch), Wikidata P18 is the most structured, Wikimedia Commons has the broadest catalog, and the museum APIs (Met Museum, Art Institute of Chicago) are the most precise for artworks. If one source fails, it falls through to the next. Simple, effective, and no dependency on paid services.
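To make the first link in the chain concrete, here is a hedged sketch of what a `searchWikipedia` source could look like against Wikipedia's public REST summary endpoint. The `ImageResult` shape mirrors the chain above; the function body is my illustration, not the app's actual code:

```typescript
interface ImageResult {
  imageUrl: string | null;
  attribution: string | null;
  wikipediaUrl: string | null;
  source: string | null;
}

// Build the REST summary URL for a search term (page titles use underscores).
export function summaryUrl(searchTerm: string): string {
  const title = encodeURIComponent(searchTerm.trim().replace(/ /g, "_"));
  return `https://en.wikipedia.org/api/rest_v1/page/summary/${title}`;
}

/**
 * Illustrative Wikipedia source: one GET to the summary endpoint, which
 * returns a thumbnail (and often a full-resolution image) for most topics.
 */
export async function searchWikipedia(
  searchTerm: string,
): Promise<ImageResult | null> {
  const res = await fetch(summaryUrl(searchTerm));
  if (!res.ok) return null; // missing page: fall through to the next source
  const data = await res.json();
  if (!data.thumbnail?.source) return null;
  return {
    imageUrl: data.originalimage?.source ?? data.thumbnail.source,
    attribution: "Wikipedia",
    wikipediaUrl: data.content_urls?.desktop?.page ?? null,
    source: "wikipedia",
  };
}
```

Returning `null` instead of throwing is what makes the fall-through chain cheap: each source failing quietly hands control to the next one.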
Reference Types
The schema supports nine types of cultural references, each with its own visual treatment:
```typescript
export const referenceTypeEnum = pgEnum("reference_type", [
  "artwork",      // Paintings, drawings, prints
  "sculpture",    // Sculptures and reliefs
  "architecture", // Buildings, churches, monuments
  "music",        // Musical compositions
  "symbol",       // Symbols, codes, cryptography
  "place",        // Geographic or historical locations
  "person",       // Historical figures
  "text",         // Texts, manuscripts, books
  "other",        // Everything else
]);
```

The Migration Claude Code Handled on Its Own
Midway through development, we made a significant architectural decision: migrate from Cloudflare (D1/R2/Queues) + Anthropic Claude to Vercel + Neon + OpenAI. The reasons were practical --- Vercel simplified Next.js deployment, Neon provided a real Postgres instead of SQLite, and OpenAI GPT-5 Mini/Nano offered better cost-to-quality ratio for chapter analysis.
Claude Code handled the entire migration. It understood the full architecture because it had built it, so swapping providers was clean.
Code Review with Codex CLI
Code reviews were delegated to a Codex CLI subagent (@codex-reviewer) that acted as a second pair of eyes. The agent caught critical issues that could have slipped through manual review:
- Vercel timeouts on long chapter analysis
- Race conditions in parallel reference processing
- Missing security checks on API endpoints
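The timeout issue, for instance, has a well-known fix on Vercel: Next.js route segment config. A sketch of what the reviewer's finding would translate to, with a hypothetical file path and an illustrative value (the actual ceiling depends on your Vercel plan):

```typescript
// app/api/analyze/route.ts (hypothetical path)
// Raise the serverless function timeout for long-running chapter analysis.
// Value is in seconds; the maximum allowed depends on the Vercel plan.
export const maxDuration = 300;
```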
The Final Stack
- Framework: Next.js 16, React 19
- Styling: Tailwind CSS 4, shadcn/ui
- Database: Neon Postgres, Drizzle ORM
- Auth: better-auth
- Storage: Vercel Blob
- AI: OpenAI GPT-5 Mini/Nano
- Deployment: Vercel
The shift in the developer’s role is real. You go from writing code to designing the architecture, defining requirements, and reviewing what the agent produces. It’s a significant mindset shift.
The Redesign via Stitch MCP
Once the app was functionally complete, the design needed a serious upgrade. The UI was functional but generic --- unstyled shadcn/ui components don’t exactly convey personality.
That’s where Google Stitch came in through MCP (Model Context Protocol), integrated directly with Claude Code.
The Process
1. I created a design brief describing the app, its 10 main screens, and the design system I wanted
2. Stitch generated Material Design 3-inspired design tokens in OKLCh color space
3. The redesign was executed in 7 phases, each as a separate commit:
   - Design tokens
   - Navigation
   - Landing and authentication
   - Library
   - Book detail
   - Chapter and references
   - Clubs
Everything was merged in a single PR.
Design Decisions
- Color scale: zinc as the neutral base
- Per-reference-type accent colors: purple for artwork, orange for sculpture, blue for architecture, teal for music, amber for symbols, emerald for places, rose for people, cyan for texts
- Typography: Geist as the primary font
- Dark mode: full support from day one
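Those per-type accents map naturally onto the nine-value `reference_type` enum as a simple lookup table. A sketch of the idea, where the exact Tailwind classes are illustrative rather than the app's real design tokens:

```typescript
// One accent per reference type; shades are illustrative, not the real tokens.
type ReferenceType =
  | "artwork" | "sculpture" | "architecture" | "music" | "symbol"
  | "place" | "person" | "text" | "other";

export const accentClass: Record<ReferenceType, string> = {
  artwork: "text-purple-600",
  sculpture: "text-orange-600",
  architecture: "text-blue-600",
  music: "text-teal-600",
  symbol: "text-amber-600",
  place: "text-emerald-600",
  person: "text-rose-600",
  text: "text-cyan-600",
  other: "text-zinc-500", // neutral fallback on the zinc base scale
};
```

Keying the map on the enum means adding a tenth reference type is a compile-time error until it also gets an accent color.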
The interesting thing about the Stitch MCP integration is that the AI could generate designs AND apply them to code in the same workflow. There was no handoff between design and development --- it was a continuous process.
One minor but annoying detail: after the redesign, we discovered that font-mono had been accidentally applied to UI labels that should have been font-sans. These things happen when you automate --- you always need to review the details.
Lessons Learned
Remote Claude Code on a Mac Mini works for real full-stack projects. This isn’t an experiment --- it’s a productive workflow. The combination of Claude Code for development, Codex CLI for reviews, and Stitch MCP for design creates an AI-assisted development pipeline that covers the entire cycle.
Organizing work in phases with an Obsidian backlog kept the project on track. Without structure, an AI agent can drift. With clear, prioritized tasks organized by phase, development was linear and predictable.
Mid-project migrations are manageable when the AI knows the entire architecture. Claude Code had built every line of code, so migrating from one provider to another didn’t require explaining context --- it already had it.
The cost of running the app is nearly zero. Everything runs on free tiers: Vercel, Neon, Vercel Blob. For a book club of 3-8 people, the monthly cost is under $1. The real expense is OpenAI API calls during chapter analysis, but with GPT-5 Nano it’s negligible.
The developer’s role changes, it doesn’t disappear. You spend less time writing code and more time thinking about architecture, reviewing output, and making design decisions. It’s a shift that requires adaptation, but the result is that one person can build what previously required a team.
Reader Copilot started as a personal need in a book club. It ended up as a proof of concept of what you can build when you give an AI agent the right tools and a clear work structure. Every page has a story behind the story --- and every project has a story of how it was built.