This is a submission for the GitHub Copilot CLI Challenge
From SQLite CLI to Cloud Platform: How I Became an AI Architect
What I Built
TL;DR: I transformed a local SQLite-based CLI tool into a production-grade analytics platform in 30 hours of actual work, without writing a single line of code by hand.
The Starting Point
Every developer has one: a useful local script that becomes your "technical memory." Mine was a DEV.to analytics tracker—a Python CLI tool backed by SQLite—that helped me understand my content performance beyond basic stats. It tracked follower attribution using 7-day windows with 6-hour tolerance, calculated quality scores with weighted formulas, and performed sentiment analysis on comments using VADER. You can read about it in my article "When DEV.to Stats Aren't Enough: Building My Own Memory".
This work is not a port of a third-party project. It is an evolution of a codebase I originally created and maintain, and the repository is publicly available on GitHub: https://github.com/pcescato/devto_stats
But it lived in isolation on my machine.
The Vision
I wanted to transform this personal tool into a secure, scalable web platform accessible from anywhere. My non-negotiable constraints:
- PostgreSQL 18 (not 16, not 17—I wanted latest JSONB features and pgvector compatibility for tomorrow)
- SQLAlchemy Core (NOT ORM—I refused to hide my procedural SQL logic behind ORM magic)
- Authentik (self-hosted IAM with granular groups, not just a basic OAuth proxy)
- Caddy outside Docker (bare metal reverse proxy for performance)
- Apache Superset (initially... more on that pivot later)
The Final Stack
After strategically pivoting away from Superset (1GB RAM was too heavy for my 4GB VPS), the production stack became:
| Component | Technology | Purpose |
|---|---|---|
| Backend | FastAPI (async) | High-performance REST API |
| Database | PostgreSQL 18 | Partitioned tables, JSONB, arrays, pgvector-ready |
| Cache | Valkey 8.0 | Redis-compatible in-memory store |
| Frontend | Streamlit | Interactive data visualization (replaced Superset) |
| Security | Authentik + Caddy | Self-hosted IAM with proxy auth |
| Infrastructure | Docker Compose | Containerized deployment |
Key Features
1. The "Sismograph"
Unlike traditional analytics showing cumulative totals, my Sismograph visualizes real-time activity pulses. It calculates deltas between data snapshots to reveal when traffic actually spikes, not just how many views you have total.
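The delta logic behind the Sismograph can be sketched in a few lines of Python. The snapshot data and function name here are my own illustration, not the platform's actual code, which stores snapshots in PostgreSQL:

```python
from datetime import datetime

# Hypothetical snapshot rows: (captured_at, cumulative_views) for one article.
snapshots = [
    (datetime(2026, 1, 10, 8, 0), 1200),
    (datetime(2026, 1, 10, 14, 0), 1215),
    (datetime(2026, 1, 10, 20, 0), 1290),
]

def activity_pulses(snapshots):
    """Turn cumulative totals into per-interval deltas (the 'pulse' signal)."""
    pulses = []
    for (t0, v0), (t1, v1) in zip(snapshots, snapshots[1:]):
        # views gained since the previous snapshot, stamped at the newer snapshot
        pulses.append((t1, v1 - v0))
    return pulses

print(activity_pulses(snapshots))
# each pulse is (snapshot time, views gained); spikes show up as large deltas
```

The key point is that the chart plots the deltas, not the cumulative curve, so a quiet article and a spiking article look radically different even at similar totals.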
2. Author DNA
Automatic thematic classification of content:
- "Expertise Tech" (SQL, PostgreSQL, Docker)
- "Human & Career" (feedback, learning, growth)
- "Culture & Agile" (management, performance)
The system analyzes titles and tags, counting keyword matches to determine dominant themes.
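A minimal sketch of that keyword-counting classification, with hypothetical keyword lists (the real ones live in the platform's configuration):

```python
# Hypothetical theme keyword sets; illustrative, not the production lists.
THEMES = {
    "Expertise Tech": {"sql", "postgresql", "docker"},
    "Human & Career": {"feedback", "learning", "growth"},
    "Culture & Agile": {"management", "performance", "agile"},
}

def dominant_theme(title, tags):
    """Count keyword hits across title words and tags; return the best theme."""
    words = set(title.lower().split()) | {t.lower() for t in tags}
    scores = {theme: len(words & keywords) for theme, keywords in THEMES.items()}
    return max(scores, key=scores.get)

print(dominant_theme("Tuning PostgreSQL with Docker", ["sql", "docker"]))
# → "Expertise Tech"
```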
3. Real-Time Activity Monitor (The "Wake-up" Call)
While the Sismograph shows pulses, I needed a way to spot "sleeping" articles that suddenly regain traction months later.
For instance, my article "From Pocket to Wallabag", published 4 months ago, suddenly saw a spike of 10 views in a single morning. This monitoring view, implemented via a targeted prompt to GitHub Copilot CLI, aggregates current activity across the entire library. It transforms the platform from a simple archive into an active monitoring tool, sparing me manual, article-by-article checks.
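The "wake-up" filter boils down to: old enough to be considered asleep, with a fresh burst of activity. A self-contained sketch, with hypothetical data and thresholds (the real deltas come from the snapshot table):

```python
from datetime import date

# Hypothetical per-article rows: (title, published_on, views_gained_this_morning).
library = [
    ("From Pocket to Wallabag", date(2025, 9, 1), 10),
    ("Fresh launch post", date(2026, 1, 13), 40),
    ("Old quiet article", date(2025, 6, 1), 0),
]

def waking_articles(library, today, min_age_days=90, min_delta=5):
    """Old articles showing fresh activity: the 'wake-up' signal."""
    return [
        title
        for title, published_on, delta in library
        if (today - published_on).days >= min_age_days and delta >= min_delta
    ]

print(waking_articles(library, today=date(2026, 1, 15)))
# the 4-month-old article surfaces; the 2-day-old launch post does not
```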
4. Strategic Pivot: Performance Over Weight
Initially, I aimed for Apache Superset for the visualization layer. However, the reality of the field—a 4GB VPS—quickly imposed its limits: Superset alone consumed 1GB of RAM, leaving too little room for the rest of the stack.
Thanks to AI, I was able to perform an immediate architectural pivot:
- Zero emotional attachment: Since the code wasn't "hand-written" over several days, I had no hesitation in discarding Superset in favor of Streamlit (512MB) to regain system fluidity.
- Reduced cost of change: What would normally have taken days of manual reconfiguration and dashboard rebuilding was resolved in just a few hours of prompt-driven steering.
- Future-proofing while lean: I used this reclaimed agility to integrate Vector(1536) columns via pgvector into my PostgreSQL 18 schema. Even though I am not using embeddings yet, the structure is ready for tomorrow without the cost of a complex migration today.
AI doesn't just generate code; it makes architecture malleable. It allowed me to meet strict hardware constraints without sacrificing my long-term technical vision.
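As a sketch of what "pgvector-ready" means in practice (the table and column names here are my own illustration, not the actual schema):

```sql
-- Hypothetical: reserve an embedding column now, populate it later.
CREATE EXTENSION IF NOT EXISTS vector;

ALTER TABLE articles
    ADD COLUMN embedding vector(1536);  -- unused today; ready for future embeddings
```

The column costs almost nothing while NULL, but avoids a disruptive schema migration once embeddings arrive.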
Demo
🔗 Live Platform:
- API Documentation: analytics.weeklydigest.me/docs
- Dashboard: streamlit.weeklydigest.me (requires Authentik authentication: login: judge, password: Github~Challenge/2k26)
- Source Code: GitHub Repository
Architecture Overview
The platform implements a proxy-based forward authentication model:
User Request
↓
Caddy Reverse Proxy (bare metal)
↓
Authentik Verification (SSO, groups: Admin/Judge)
↓
Protected Service (Streamlit/API)
Security Benefits:
✅ Applications remain "auth-agnostic" (zero authentication code in app)
✅ Centralized identity management with granular RBAC
✅ Single Sign-On across all subdomains
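This flow maps onto Caddy's forward_auth directive, following Authentik's standard Caddy integration pattern. A minimal sketch, with hostnames and ports purely illustrative:

```caddyfile
streamlit.example.com {
    # Ask the Authentik outpost to authorize the request before proxying it.
    forward_auth localhost:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-Authentik-Username X-Authentik-Groups
    }
    # Only authenticated requests reach the app; it stays auth-agnostic.
    reverse_proxy localhost:8501
}
```

The application never sees a login form; it only receives identity headers from the proxy.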
Resource Optimization Story
Initial deployment included Apache Superset (1GB RAM), which proved too heavy. I made a strategic architectural pivot:
❌ Removed: Apache Superset (1GB)
✅ Added: Authentik IAM (600MB) + Custom Streamlit dashboard (512MB)
💡 Result: 10% memory footprint reduction + better security + custom UX
Because the code was AI-generated, pivoting took hours, not days. No sunk cost fallacy—just constraint optimization.
My Experience with GitHub Copilot CLI
No, I Didn't Code for 10 Days Straight
I worked mostly in the evenings (2–3 hours per session), plus one Saturday afternoon and evening, and one Sunday morning — roughly 30 hours total to migrate from a SQLite-based CLI to a production-grade cloud platform with SSO.
My Secret Workflow: Three-Stage Delegation
I didn't talk directly to Copilot CLI. I used a cascade of intelligence:
1. Claude/Gemini (The Architect): Brainstorming and constraint definition. I discussed requirements ("PostgreSQL 18 mandatory", "Core not ORM", "API not CLI"). It structured my fuzzy ideas into precise technical prompts.
2. GitHub Copilot CLI (The Implementer): I fed it the optimized prompts plus source files (@devto_tracker.py, @content_collector.py, ...). It generated 57-page technical documentation, the PostgreSQL schema, FastAPI endpoints, and Docker configs.
3. Me (The Guardian): I validated business logic preservation and enforced technical constraints.
Working with Copilot as a layered system
I didn’t “chat” with GitHub Copilot CLI. I treated it as an execution layer inside a broader workflow.
Before Copilot ever saw the code, I clarified non-negotiable constraints using a general-purpose model (Claude, sometimes Gemini or ChatGPT): PostgreSQL 18 (not 16 or 17), SQLAlchemy Core instead of an ORM, Authentik and Caddy outside Docker, Streamlit replacing Superset once memory pressure became an issue. These decisions were made upfront and never negotiated later.
Only once the intent was explicit did I involve Copilot CLI. I pointed it at the real codebase and asked it to extract documentation, schemas, and implementation details. In one pass, it produced a 57-page technical document describing architecture, data flows, algorithms, and business rules — without me writing a single line of code.
The final step was purely human: enforcing invariants. Whenever Copilot proposed a local optimization that conflicted with system-level intent — such as dropping reaction-level history in favor of aggregates — the answer was simply no. Aggregation was allowed only on top of preserved raw data, never instead of it.
What mattered here wasn’t prompt cleverness. It was clarity of constraints. Once those were explicit, Copilot became extremely effective — not as a decision-maker, but as an execution engine.
The quality of the outcome didn’t come from better prompts, but from better invariants.
That structure worked well — until one architectural decision made it clear where responsibility really sits.
The Moment I Had to Remind AI Who's Boss
There was one moment where the limits of delegation became very clear.
At one point, Copilot suggested simplifying the database schema by collapsing the detailed reaction breakdown (likes, unicorns, reading lists) into a single total_reactions integer column. From a purely technical standpoint, the suggestion made sense: fewer columns, simpler queries.
But that optimization would have broken something fundamental. Without this granularity, my weighted follower attribution algorithm collapses. These extra fields weren’t accidental complexity — they were deliberate architectural choices, preserved to ensure the system remains analytical, not just descriptive.
I didn’t “argue” with the AI or try to outsmart it. I simply restated the constraint: the schema was not up for simplification. This wasn’t a performance issue, it was an architectural one. Copilot adjusted immediately and moved on.
The lesson wasn’t that the AI was wrong. It was that local optimization without systemic intent is just guesswork. AI optimizes syntax; humans guard semantics.
Zero Lines Written, 100% Generated, 100% Controlled
- Technical Documentation: 57 pages in one pass (2 hours vs 2-3 days)
- SQL Schema: 26KB, 18 tables, partitioning, JSONB, arrays, pgvector-ready
- FastAPI Endpoints: 14 routes, async, SQLAlchemy Core
- Authentik Integration: Complete Docker Compose setup
- Tests: pytest suite with 82% coverage
I wrote zero lines of Python code. I wrote prompts, I validated architectures, I corrected trajectories.
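To make the schema bullet concrete, here is a hypothetical excerpt of what such a PostgreSQL 18 table can look like, combining range partitioning, a JSONB reaction breakdown, and a text array for tags (all names are illustrative, not the generated 26KB schema):

```sql
-- Hypothetical sketch: time-partitioned snapshots with rich column types.
CREATE TABLE article_snapshots (
    article_id   BIGINT      NOT NULL,
    captured_at  TIMESTAMPTZ NOT NULL,
    views        INTEGER     NOT NULL,
    reactions    JSONB       NOT NULL,  -- per-type breakdown: likes, unicorns, ...
    tags         TEXT[]      NOT NULL,
    PRIMARY KEY (article_id, captured_at)
) PARTITION BY RANGE (captured_at);

CREATE TABLE article_snapshots_2026_01
    PARTITION OF article_snapshots
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
```

Keeping reactions as a JSONB breakdown rather than a single integer is exactly the granularity invariant discussed below.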
The Non-Negotiable Invariants
When I said "PostgreSQL 18," it was non-negotiable. Not on a whim, but because I wanted:
- Improved JSONB performance
- Future pgvector compatibility
When I demanded "SQLAlchemy Core," it was to preserve exact existing SQL patterns. I also enforced the preservation of my 'proximity search' logic — a complex SQL pattern that finds the closest snapshot within a 6-hour tolerance window. While not yet fully exploited in the Streamlit dashboard, keeping this precision infrastructure allows for high-accuracy time-series analysis later on.
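One plausible rendering of that proximity-search pattern in PostgreSQL (the table and parameter names are assumptions, not the project's actual SQL):

```sql
-- Hypothetical: the snapshot closest to a target timestamp,
-- but only within the ±6-hour tolerance window.
SELECT *
FROM article_snapshots
WHERE article_id = :article_id
  AND captured_at BETWEEN :target - INTERVAL '6 hours'
                      AND :target + INTERVAL '6 hours'
ORDER BY ABS(EXTRACT(EPOCH FROM (captured_at - :target)))
LIMIT 1;
```

With SQLAlchemy Core this stays a literal, inspectable query rather than being hidden behind ORM relationship loading.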
AI generates the "how." You must keep ownership of the "why" and the "what."
Impact Metrics
| Task | Traditional Estimate | Actual (AI-assisted) | Time Saved |
|---|---|---|---|
| Technical Documentation | 2-3 days | 2 hours | ~90% |
| SQLite → PostgreSQL Migration | 3-4 days | 4 hours | ~85% |
| FastAPI Development | 5-7 days | 6 hours | ~80% |
| IAM Configuration (Authentik) | 2 days | 3 hours | ~75% |
| Total | 12-16 days | ~30 hours | ~80% |
But the real gain isn't time—it's optionality. When I realized Superset consumed too much RAM, I pivoted to Streamlit in hours, not days. Because I hadn't "written" the code, I had no emotional attachment to what had to be discarded.
Beyond Code: The Infrastructure Blueprints
One of the most revealing moments of this challenge wasn’t about writing better prompts or cleaner Python. It was realizing that code alone was not the artifact worth sharing.
Using GitHub Copilot CLI, I deliberately piloted the AI to export and document the production chassis itself—not just the application logic, but the architectural constraints that make the system actually run on a 4GB VPS.
What I Exported
Instead of pushing isolated source files, I created an anonymized Deployment Blueprint in /deploy/production/, capturing the real operating context:
- docker-compose.yml — Service orchestration with explicit memory ceilings (FastAPI, Streamlit, Valkey)
- Caddyfile — Reverse proxy configuration encoding the SSO flow (Caddy → Authentik → applications)
- deploy_analytics.sh — A zero-downtime deployment script with validation steps
- .env.example — A complete environment template, with every secret replaced by {{CHANGE_ME}}
This wasn’t about reproducibility for its own sake—it was about making constraints visible.
Defensive Documentation
I also required Copilot to generate what I call defensive documentation: a README that clearly defines boundaries, not just capabilities.
What this directory is NOT:
- Not a backup — This is an architectural snapshot, not a recovery plan
- Not plug-and-play — Domains, networks, and volumes must be adapted per environment
- Not containing secrets — All sensitive values have been intentionally scrubbed
This distinction matters. AI didn’t “decide” what was safe to publish, deployable, or acceptable.
It followed instructions.
That, to me, is the real lesson: a modern architect doesn’t delegate responsibility to AI—they use it to enforce clarity, reproducibility, and accountability across the entire system.
Conclusion: From Developer to Prompt Architect
Being an AI architect isn’t about delegating thinking — it’s about orchestrating it.
What this experience taught me is that I’m no longer just a developer who writes code. I’ve become an architect who writes constraints, curates business logic, and decides where complexity is acceptable — and where it isn’t.
GitHub Copilot CLI isn't my coding assistant. It's an execution engine — and I'm not a "prompt engineer." I'm an architect who writes constraints, not code.
It implements what I specify, quickly and relentlessly, but it doesn’t own the intent. When architectural decisions mattered — like preserving reaction-level granularity in my data model — the responsibility stayed firmly with me.
The real shift isn’t that AI writes code for us. It’s that it forces us to be explicit about our decisions. The clearer the intent, the less the AI needs to “think” — and the more effective it becomes.
As this system scales from thousands to tens of thousands of records, I’m less worried about the code I wrote and more confident in the constraints I defined. Those constraints can evolve. Prompts can be rewritten. Architecture can be re-expressed without dragging years of accidental technical debt behind it.
The future of development isn’t AI replacing developers. It’s developers moving upstream — from implementation to strategic architecture — with AI handling the translation.
GitHub Copilot CLI Challenge — January 2026. 40 commits, 0 lines written by hand, 30 hours of actual work, 7,078+ records migrated, production-grade security deployed.





Top comments (8)
Looks great — congratulations on both the idea and the execution! 👏
You’ve shown really well that AI can massively accelerate the implementation side, but the human still defines the intent, the constraints, and what “correct” actually means. AI may write more and more code for us, but (at least for now 😉) it’s still people who decide what should be built and how it should behave.
Really impressive work and a thoughtful reflection on where our role as developers is heading.
Thanks, Sylwia — that really means a lot.
What really surprised me in this project is how responsibility didn't fade with AI — it actually became sharper. Once implementation is largely delegated, intent and constraints stop being "background knowledge" and turn into first-class concerns.
I love how you framed it: defining what "correct" means is still very much a human job — and ironically, AI makes that job harder, not easier, because now there's nowhere to hide behind implementation details.
I have the feeling that, as AI gets better at execution, that part of our role will only become more visible — not less.
beautiful design, Pascal! 💯
I logged into the site... it's elegant! 🏆
Thanks, Aaron! I’m really glad you logged in and tried it out — keeping the experience clean and unobtrusive was very much part of the goal.
Absolutely inspiring work, Pascal! 👏 I love how you framed AI as an execution engine rather than a decision-maker — it really highlights the evolving role of developers as architects of intent and constraints. The way you pivoted from Superset to Streamlit under hardware limits is a perfect example of practical agility, and your focus on preserving semantic integrity over shortcuts really resonates.
It makes me wonder: as more developers adopt AI-assisted workflows, how do you see the balance between human oversight and AI execution evolving in larger, collaborative projects?
Thanks a lot, Azhar — I really appreciate that.
In larger, collaborative projects, I think the balance shifts less around how much AI is used, and more around where intent is anchored. AI scales execution extremely well, but intent doesn’t scale automatically — it has to be shared, documented, and defended.
My intuition is that human oversight won’t disappear; it will become more explicit and more collective. Constraints, invariants, and architectural decisions will need to be made visible and agreed upon upfront, otherwise AI just accelerates divergence instead of alignment.
In that sense, AI doesn’t reduce the need for collaboration — it actually raises the bar. Teams won’t be coordinating around code as much as around intent, trade-offs, and what must never be optimized away. Execution can be delegated. Responsibility, I think, cannot.
Great explanation!
Thanks! Appreciate you reading through it. 👍