The Pulse is a full-stack event discovery platform — a mobile app, REST API, admin panel, and marketing website — built from zero to production-ready in approximately 60 days. This white paper documents how a solo developer, working in partnership with advanced AI coding agents (Claude Opus 4.5 and 4.6, via the Antigravity IDE and Google AI Ultra plan), designed and implemented a system comprising four production platforms, 2,100+ automated tests, hardware-backed security, AI-powered content moderation pipelines, a cross-platform push notification system, and a cloud-native deployment infrastructure across three environments.
1. Introduction
The Problem
Discovering local live music, nightlife, and community events remains a fragmented experience. Event information is scattered across venue websites, social media posts, printed flyers, and PDF event guides. No single platform aggregates this information in a way that is comprehensive, timely, and community-scoped.
The Vision
The Pulse aims to be the definitive local event discovery platform — a curated, AI-enhanced experience that surfaces live music and events through a polished mobile app, powered by automated content acquisition and human-in-the-loop moderation.
The Challenge
Building such a platform traditionally requires a team of engineers working across mobile, backend, web, and infrastructure disciplines over many months. The question this project set out to answer was:
Can a single developer, augmented by frontier AI coding agents, ship a production-grade full-stack platform in 60 days?
The answer, documented in this paper, is yes — and the platform has continued to evolve rapidly since initial launch, with 45+ API releases and 28+ mobile releases shipped in the same period.
2. The AI-Augmented Development Model
The Tools
The project was built using Claude Opus 4.5 and later Opus 4.6, accessed through two primary interfaces:
- Antigravity IDE — A dedicated agentic coding environment that provides the AI agent with direct access to the file system, terminal, browser, and code analysis tools. Unlike traditional chat-based AI assistants, Antigravity enables the agent to autonomously explore, edit, build, test, and debug code without requiring the developer to manually copy-paste context.
- Google AI Ultra Plan — Providing access to Claude's most capable models with extended context windows, enabling the agent to hold entire subsystems in working memory simultaneously.
The Developer-Agent Partnership
The development workflow was not one of "AI writes everything." Rather, it was a structured partnership with clearly defined roles:
| Responsibility | Developer | AI Agent |
|---|---|---|
| Product vision & requirements | ✅ Primary | Advisory |
| Architecture decisions | Collaborative | Proposes, implements |
| Implementation | Reviews, directs | ✅ Primary |
| Testing | Reviews results | ✅ Primary (TDD-first) |
| Debugging | Identifies symptoms | ✅ Primary |
| Code review | ✅ Primary | Self-checks |
| Security posture | Defines requirements | Implements, audits |
| Deployment | Approves, triggers | Configures, scripts |
The Knowledge System
A critical enabler of sustained velocity was the persistent knowledge system built into the Antigravity IDE. Over the course of 60 days:
- 21+ Knowledge Items (KIs) were automatically distilled from conversation history, covering architecture patterns, security strategies, deployment procedures, and troubleshooting guides.
- Agent rules codified critical policies — protected branch workflows, TDD mandates, version synchronization, and coding standards — ensuring consistent enforcement across conversation boundaries.
- Workflows automated repetitive multi-step procedures — deploying to Azure, publishing to beta channels, committing and merging — reducing ceremony to a single slash command.
This knowledge architecture meant that on Day 60, the agent had the same contextual understanding as on Day 1, or better — a stark contrast to human developers, who experience context decay between sessions.
Velocity Metrics
3. Platform Overview
Technology Stack
| Layer | Technologies |
|---|---|
| Mobile | Flutter 3.x, Dart 3.10+, Riverpod, GoRouter |
| Backend | .NET 9, EF Core, SQL Server, Hangfire |
| Admin Panel | Next.js 16, React 19, TailwindCSS 4 |
| Website | Astro (static, zero-JS by default) |
| Auth | Firebase Auth with App Check attestation |
| Infrastructure | Azure Container Apps, SQL Serverless, BunnyCDN |
Data Flow
4. Mobile Application Architecture
State Management: Riverpod
The mobile app uses Riverpod for type-safe, testable state management. Providers expose data to the widget tree without tight coupling, AsyncNotifiers handle paginated data loading with built-in loading/error/data states, and provider overrides enable complete dependency substitution during testing.
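The app itself is Dart, but the AsyncNotifier idea — every piece of remote data is always in exactly one of the loading, error, or data states — can be sketched language-agnostically. Here is an illustrative TypeScript model (names are mine, not from The Pulse codebase) showing why the pattern makes missing-state bugs a compile-time error:

```typescript
// Hypothetical sketch of the loading/error/data states an AsyncNotifier
// exposes, modeled as a TypeScript discriminated union.
type AsyncValue<T> =
  | { state: "loading" }
  | { state: "error"; message: string }
  | { state: "data"; value: T };

// UI code switches exhaustively on the state, so an unhandled
// case fails type-checking instead of failing at runtime.
function render(events: AsyncValue<string[]>): string {
  switch (events.state) {
    case "loading":
      return "spinner";
    case "error":
      return `error: ${events.message}`;
    case "data":
      return `list of ${events.value.length} events`;
  }
}
```

Provider overrides in tests amount to substituting the function that produces these values, which is why the widget tree never needs to know whether data came from the network or a fixture.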
Design Philosophy
Dark, charcoal-based themes emphasize high-quality photography and create an atmosphere consistent with nightlife discovery.
A frictionless "Browsing First" model allows anonymous discovery; auth is only required for personalization features.
Content-driven sizing maximizes information density in scrollable lists.
Predictable control placement builds muscle memory across sessions.
Networking Layer
A Dio-based networking layer with a layered interceptor chain provides resilient, secure API communication:
- API Key Interceptor — Traffic gating via `X-Api-Key` header
- App Check Interceptor — Hardware-backed attestation token with exponential backoff retry
- Auth Token Interceptor — Firebase JWT via just-in-time `getIdToken()`
- Auth Retry Interceptor — Transparent 401 recovery with token refresh
- Version Interceptor — Force-update detection via response headers
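The auth retry step is the subtlest link in the chain. The following is an illustrative TypeScript sketch (not the actual Dio interceptor) of "transparent 401 recovery": attach the current token, and on an unauthorized response refresh the token once and replay the request.

```typescript
// Minimal model of a request pipeline; types are illustrative.
type Request = { path: string; headers: Record<string, string> };
type Response = { status: number; body?: string };
type Send = (req: Request) => Response;

// Wraps a transport with the 401-recovery behavior described above.
function withAuthRetry(
  send: Send,
  getToken: () => string,     // just-in-time token lookup
  refreshToken: () => string  // forces a fresh token
): Send {
  return (req) => {
    const first = send({
      ...req,
      headers: { ...req.headers, Authorization: `Bearer ${getToken()}` },
    });
    if (first.status !== 401) return first;
    // Token was stale: refresh and retry exactly once.
    const fresh = refreshToken();
    return send({
      ...req,
      headers: { ...req.headers, Authorization: `Bearer ${fresh}` },
    });
  };
}
```

Retrying exactly once is the key design choice: a second 401 means the session is genuinely invalid, and looping would hammer the auth backend.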
5. Backend API Architecture
Clean Architecture
| Project | Responsibility |
|---|---|
| Pulse.Api | Controllers, middleware, configuration, DI |
| Pulse.Core | Domain entities, DTOs, service interfaces, business rules |
| Pulse.Infrastructure | EF Core DbContext, repositories, external integrations |
| Pulse.Shared | Cross-cutting utilities, constants, extensions |
Key Patterns
- Global Query Filters enforce soft-delete and publication state at the query level, preventing accidental data exposure.
- Field Override Pattern — Venue metadata from Google Places can be overridden by curators via a strict resolution chain: Override → Edit → Original.
- URL-Segment API Versioning (`/api/v1/...`) with header-based force-update signaling and a formal deprecation lifecycle.
- Hangfire for persistent, reliable background job processing with configurable schedules and retry logic.
- Hybrid Deduplication Strategy (v1.3) — Combines deterministic pre-filtering with Gemini LLM-based comparison for maximum accuracy while minimizing AI costs.
- Event Merge & Enrichment — Side-by-side comparison with Custom > Source > Target field precedence, enabling granular curator control over merged content.
- Cross-Platform Push Notifications — Preference-aware FCM/APNs delivery with curator digests, local reminders, and custom broadcasting.
- Tag Normalization — Platform-wide lowercase enforcement with DRY validation services synchronized across backend, mobile, and admin.
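The Field Override Pattern's resolution chain (Override → Edit → Original) reduces to a simple null-coalescing rule. A hedged sketch, with hypothetical field names, since the real implementation lives in the C# backend:

```typescript
// Illustrative model of a venue field with three layers of provenance.
interface VenueField {
  original: string;   // value ingested from Google Places
  edit?: string;      // routine correction applied by staff
  override?: string;  // explicit curator override; always wins
}

// Resolution chain: Override → Edit → Original.
function resolveField(f: VenueField): string {
  return f.override ?? f.edit ?? f.original;
}
```

The point of keeping all three layers rather than mutating in place is that a later re-sync from Google Places can update `original` without clobbering curator intent.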
6. Content Acquisition Pipeline
The content pipeline is the platform's most technically ambitious component — an AI-powered system that transforms unstructured content from diverse sources into normalized, curated event data.
Gemini 1.5 Flash: The Extraction Engine
| Source Type | Method | Example |
|---|---|---|
| HTML | DOM parsing → Gemini | Venue event calendars |
| Digital PDF | PdfPig → Gemini | Monthly event guides |
| Scanned PDF | Gemini Vision | Digitized print magazines |
| Images | Gemini Vision OCR | Social media flyers |
| Social Media | Image Extractor → Gemini Vision | Facebook event posts |
| ZIP Archives | Unzip → bulk ingestion | Bulk event spreadsheets |
Gemini 1.5 Flash was selected for its 17x cost advantage over Pro, 1M token context window, native multimodal support, and structured JSON output with token-level confidence scoring.
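The acquisition table above amounts to a dispatch on source type: each format gets a different pre-processing step before (or instead of) the text-only LLM call. A simplified TypeScript sketch of that routing, with illustrative stage names:

```typescript
// Source types from the acquisition table.
type SourceType = "html" | "digitalPdf" | "scannedPdf" | "image" | "social" | "zip";

// Maps each source type to its extraction pipeline. The arrow strings
// stand in for real pipeline stages; they are not actual module names.
function extractionRoute(source: SourceType): string {
  switch (source) {
    case "html":
      return "dom-parse -> llm";          // venue event calendars
    case "digitalPdf":
      return "pdf-text-extract -> llm";   // text layer exists, no vision needed
    case "scannedPdf":
      return "llm-vision";                // digitized print, image-only pages
    case "image":
      return "llm-vision-ocr";            // social media flyers
    case "social":
      return "image-extract -> llm-vision";
    case "zip":
      return "unzip -> bulk-ingest";      // spreadsheets, no LLM required
  }
}
```

Routing text-bearing formats through cheap deterministic extraction first, and reserving the multimodal model for genuinely visual inputs, is what keeps per-event extraction cost low.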
Confidence Scoring
A hybrid model combines three signals for moderation prioritization:
Moderation Workflow
Every event passes through a multi-stage pipeline: AI safety check (OpenAI), hybrid duplicate detection (deterministic + LLM comparison), curator review with side-by-side source comparison, and trust-based auto-approval for high-reputation users. The system follows a fail-open philosophy — infrastructure failures route to human review rather than blocking users.
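The fail-open philosophy can be made concrete with a short sketch. This is an illustrative TypeScript model of the pipeline's decision logic (stage and status names are mine, not The Pulse's), showing how an infrastructure failure lands in the curator queue rather than rejecting the submission:

```typescript
type Decision = "auto-approved" | "human-review" | "rejected";

// Each stage is injected as a function so failures can be simulated.
function moderate(
  safetyCheck: () => boolean,   // e.g. an AI safety classifier
  isDuplicate: () => boolean,   // deterministic pre-filter + LLM comparison
  trustedSubmitter: boolean     // reputation-based auto-approval
): Decision {
  try {
    if (!safetyCheck()) return "rejected";        // only explicit failures reject
    if (isDuplicate()) return "human-review";     // curator merges or discards
    return trustedSubmitter ? "auto-approved" : "human-review";
  } catch {
    // Infrastructure failure (API down, timeout): fail open to humans.
    return "human-review";
  }
}
```

Note the asymmetry: only a successful, explicit safety failure rejects; every ambiguous or broken path degrades to human review.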
7. Security Architecture
Authentication
Firebase Authentication provides multi-provider social login (Google, Apple, Facebook, Email). Token lifecycle is fully managed by the SDK — the application never manually handles or persists JWTs. A self-healing pattern detects expired tokens and forces local sign-out to clear stale state.
Device Attestation: Firebase App Check
| Platform | Provider | Security |
|---|---|---|
| iOS | App Attest (Secure Enclave) | Hardware-backed |
| Android | Play Integrity (TEE) | Hardware-backed |
| Development | Debug Provider | Registered tokens |
No embedded secrets — attestation is generated at runtime by device secure hardware with 1-hour TTL tokens and automatic background refresh.
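The 1-hour TTL with background refresh follows a standard token-cache shape. A sketch of that behavior (this is an illustration of the lifecycle, not Firebase SDK code; the 5-minute refresh margin is an assumption):

```typescript
const TTL_MS = 60 * 60 * 1000;            // 1-hour token lifetime
const REFRESH_MARGIN_MS = 5 * 60 * 1000;  // assumed early-refresh window

class AttestationCache {
  private token?: { value: string; issuedAt: number };

  // `attest` stands in for the hardware attestation call;
  // `now` is injected so expiry can be tested deterministically.
  constructor(private attest: () => string, private now: () => number) {}

  get(): string {
    const age = this.token ? this.now() - this.token.issuedAt : Infinity;
    // Re-attest when missing or approaching expiry, so callers
    // never see a token inside its final minutes of validity.
    if (age > TTL_MS - REFRESH_MARGIN_MS) {
      this.token = { value: this.attest(), issuedAt: this.now() };
    }
    return this.token!.value;
  }
}
```

Refreshing before expiry rather than at it means a request never races the TTL boundary with an about-to-die token.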
Authorization (RBAC)
A capability-based model governs access: Anonymous (read-only discovery), User (save/submit), Curator (moderate content), Venue Manager (manage claimed venues), and Admin (full access). All 12 admin controllers were verified via a comprehensive security audit.
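A capability-based model is easiest to see as a role-to-capability map checked at each endpoint. A sketch of the five roles above (the capability names are illustrative; the real check runs server-side in the .NET API):

```typescript
type Role = "anonymous" | "user" | "curator" | "venueManager" | "admin";
type Capability = "read" | "save" | "submit" | "moderate" | "manageVenue" | "adminAll";

// Each role's grant set, mirroring the tiers described above.
const grants: Record<Role, Capability[]> = {
  anonymous:    ["read"],
  user:         ["read", "save", "submit"],
  curator:      ["read", "save", "submit", "moderate"],
  venueManager: ["read", "save", "submit", "manageVenue"],
  admin:        ["read", "save", "submit", "moderate", "manageVenue", "adminAll"],
};

function can(role: Role, cap: Capability): boolean {
  return grants[role].includes(cap);
}
```

Checking capabilities rather than role names means a new role (or a re-tiered one) only changes the map, never the endpoint guards.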
Web Security: Cloudflare Turnstile
Unauthenticated web requests (contact forms, public API access) are protected by Cloudflare Turnstile — a silent, privacy-respecting CAPTCHA alternative that requires no user interaction. Mobile clients bypass Turnstile via Firebase App Check attestation, creating a tiered security model: hardware attestation for mobile, invisible challenge for web.
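The tiered gate reduces to: accept a hardware attestation if present, otherwise accept a Turnstile challenge response, otherwise reject. A server-side sketch, assuming the conventional header/field names `X-Firebase-AppCheck` and `cf-turnstile-response` (the verifier functions are placeholders for real validation against Firebase and Cloudflare):

```typescript
// Returns true if the request passed either security tier.
function isVerified(
  headers: Record<string, string>,
  verifyAppCheck: (token: string) => boolean,   // placeholder: Firebase check
  verifyTurnstile: (token: string) => boolean   // placeholder: Cloudflare siteverify
): boolean {
  const appCheck = headers["X-Firebase-AppCheck"];
  if (appCheck) return verifyAppCheck(appCheck);     // mobile tier: hardware-backed
  const turnstile = headers["cf-turnstile-response"];
  if (turnstile) return verifyTurnstile(turnstile);  // web tier: invisible challenge
  return false;                                      // no attestation of any kind
}
```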
Security Audit (February 2026)
8. Infrastructure & Deployment
Cloud Architecture
| Service | Purpose | Billing |
|---|---|---|
| Azure Container Apps | .NET 9 API hosting | Scale-to-zero |
| Azure SQL Serverless | Relational data | Auto-pause, auto-scale |
| Azure Static Web Apps | Landing page + admin panel | Free tier |
| BunnyCDN | Images & resizing | $1/mo + usage |
| Firebase | Auth, App Check, FCM, Crashlytics | Free tier |
| SendGrid | Transactional email | Free tier |
Tri-Tier Environment Architecture
The platform runs across three isolated environments — Dev, Stage, and Production — each with independent databases, CDN storage zones, and API configurations. Stage mirrors production data via automated sync procedures, enabling pre-release validation with realistic data.
CI/CD
- Xcode Cloud — iOS builds with managed signing → TestFlight
- GitHub Actions — Android builds → Firebase App Distribution
- Agent Workflows — Multi-step deployments via single slash commands
9. Testing & Quality Assurance
TDD-First Mandate
All code changes follow a strict Red-Green-Refactor cycle. This mandate applies to both Flutter and .NET with no exceptions.
Backend Testing Infrastructure
The backend uses Testcontainers with Azure SQL Edge to provide real SQL Server behavior in integration tests. A shared container fixture creates isolated databases per test class, and a custom PulseApiFactory provides configured HTTP clients with authentication. This catches visibility bugs that in-memory stores miss — particularly around EF Core Global Query Filters.
Cross-Platform Coverage
| Layer | Tools | Coverage |
|---|---|---|
| Backend Integration | xUnit, Testcontainers | 1,769 tests |
| Mobile Widget/Integration | Flutter test framework | 412 tests |
| Admin Panel E2E | Playwright | Moderation workflows |
10. Performance Optimizations
- CDN environment isolation — Separate storage zones per environment prevent dev assets from polluting production
- SQL Serverless auto-pause — Database hibernates after idle period, resumes in seconds on first request
- Google Places 30-day cache — SQL cache for Place IDs minimizes repeat API calls
- Session token rotation — Consolidates per-request charges into per-session billing
- Scale-to-zero — Dev/staging environments pause compute when idle
- CDN image resizing — BunnyCDN serves device-appropriate image sizes
- Virtualized lists — All scrollable lists use `ListView.builder`
- Aggressive `const` — Minimizes unnecessary widget rebuilds
- Gemini Flash over Pro — 17x cost reduction with comparable extraction quality
- GPT-4o-mini for moderation — Orders of magnitude cheaper for safety classification
11. Cost Analysis
By leveraging serverless infrastructure and free tiers aggressively, the monthly operating cost is remarkably low:
| Stage | MAU* | Monthly Cost |
|---|---|---|
| Seed (Beta) | < 1,000 | ~$100 |
| Growth | ~10,000 | $200 – $400 |
| Scale | ~100,000 | $1,500+ |
*MAU = Monthly Active Users
The architecture ensures cost scales proportionally with usage, with no large fixed-cost commitments until the scale stage.
12. Lessons Learned
What Worked
Agent rules are force multipliers
Codifying conventions into agent-readable rules eliminated entire categories of mistakes. The agent never forgot to create a feature branch, never skipped a test, and never committed API key mismatches.
Persistent knowledge compounds
Automatically distilled knowledge items meant the agent's understanding deepened over time. By Day 60, it could reason about cross-cutting concerns with more precision than on Day 1.
TDD with an AI agent is transformative
With an AI agent, tests are written as naturally as implementation code — there's no psychological resistance. The 2,100+ test suite was built incrementally and caught genuine bugs.
The monorepo paid dividends
A single repository allowed cross-cutting changes atomically. A schema change could include the DTO, migration, test, and mobile model update in one interaction.
What Was Challenging
- Context window management — Complex debugging sessions occasionally exceeded working memory. The solution was breaking problems into smaller investigations.
- Platform-specific gotchas — iOS simulator state, Android signing keys, and Xcode Cloud entitlements required manual intervention the agent could document but not resolve.
- Visual design iteration — The iterative "does this feel right?" polish cycle was slower than with a dedicated designer.
13. Conclusion
The Pulse demonstrates that a single developer, working in purposeful partnership with AI coding agents, can build and ship a production-grade full-stack platform — and continue iterating at startup velocity — in a timeframe that would traditionally require a small team working for months.
The future of software engineering is not human or AI — it's human with AI, each amplifying the other's strengths.
This is v2.0 of this white paper (March 2026). The original version is available at v1.0 — Building The Pulse in 40 Days.