Caching is the most talked-about performance strategy in backend engineering, and the most poorly understood. "Just add Redis" is advice you'll hear everywhere — and it's almost always incomplete. Caching decisions are layered: database query caching, HTTP response caching, client-side caching, server-side fragment caching. Each one solves a different problem. Each one breaks differently when it's done wrong. Stale data, cache stampedes, inconsistent states — these are the real challenges, and no tutorial skips to them fast enough. This series does. We build a Django and Next.js application from scratch — not because the app itself is interesting, but because it gives us a concrete, realistic surface to apply every caching strategy against. The app is a demo. The caching is the point.
In this series, we are building a high-traffic Housing Portal. Before we introduce any complexity, we need a reproducible, production-grade baseline. Today, that means setting up a containerized monorepo using Django (DRF), Next.js, and PostgreSQL — from a completely empty directory to a running stack, with every gotcha documented as we hit it.
Why a Monorepo? The Architectural Decision
Before we write a single line of code, let's talk about why we're structuring things this way. This isn't arbitrary.
A monorepo keeps your API contracts and frontend types in the same version-controlled repository. When your Django serializer changes a field name, your Next.js developer sees it in the same pull request. No crossed wires. No "it worked in staging" mysteries.
By wrapping the entire stack in Docker Compose, we achieve something even more valuable: environmental parity. The database version, the Python runtime, the Node version — they are identical on your laptop, on your teammate's machine, and eventually in production. This is the single most important thing you can do before you start optimizing.
The Target Structure
By the end of this post, your filesystem will look exactly like this:
housing-caching-demo/
├── backend/
│ ├── core/ # Django root config (settings, urls, wsgi)
│ ├── housing/ # Our domain app (models, views, serializers)
│ ├── requirements.txt # Pinned Python dependencies
│ ├── manage.py # Django's command-line utility
│ └── Dockerfile # The backend container recipe
├── frontend/
│ ├── app/ # Next.js App Router pages & components
│ ├── package.json # Node dependencies
│ └── Dockerfile # The frontend container recipe
├── .gitignore # Covers Python, Node, and Docker
└── docker-compose.yml # The orchestrator — ties everything together
Every directory and file has a purpose. We will build this structure step by step, and I will explain why each piece exists before we create it.
Part A: Project Scaffolding
This section is about creating the skeleton. We are not writing application logic yet — we are creating the empty rooms before we furnish them.
Step 1: Initialize the Workspace and Git
We start with a single command to create our project root, then immediately initialize Git. Version control from day one is non-negotiable.
mkdir housing-caching-demo && cd housing-caching-demo
git init
Why `&&`? It chains commands — the second only runs if the first succeeds. If `mkdir` fails (e.g., the directory already exists), you won't accidentally `cd` into the wrong place.
Step 2: Create a Multi-Stack .gitignore
A .gitignore is boring but critical. We need to ignore generated files from three different ecosystems: Python, Node.js, and eventually Docker. GitHub maintains community-vetted templates for exactly this.
# Download the Python template
curl -o .gitignore https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore
# Append the Node template (>> appends to the file instead of overwriting it)
curl https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore >> .gitignore
Beginner tip: The `>>` redirect on the second command is the critical detail. With `-o .gitignore` again (or a single `>`), the Node template would replace the Python one instead of adding to it. Always double-check your `.gitignore` with `cat .gitignore` after this step.
Intermediate note: If `curl` is not available (common on some Windows setups), you can create this file manually. At minimum, add `venv/`, `__pycache__/`, `*.pyc`, `node_modules/`, `.next/`, and `.env` to an empty file.
Part B: The Backend — Django + DRF
The backend is the data engine of our portal. We use Django because it gives us a powerful ORM, a built-in admin panel, and — via Django REST Framework — a clean API layer, all out of the box.
Step 3: Create the Backend Directory and Virtual Environment
mkdir backend && cd backend
python -m venv venv
Why a virtual environment? It isolates this project's Python packages from your system Python. If you install Django 5.x here, it doesn't conflict with a different project that needs Django 3.x. This is standard Python hygiene.
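If you're ever unsure whether an environment is active, Python itself can tell you: inside a venv, `sys.prefix` points at the venv directory while `sys.base_prefix` still points at the system installation. A quick sketch:

```python
import sys

def in_virtualenv() -> bool:
    """True when the interpreter is running inside a venv.

    A venv redirects sys.prefix to the venv directory; sys.base_prefix
    keeps pointing at the underlying system installation. Equal prefixes
    mean no venv is active.
    """
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

Run it with `python -c "import sys; print(sys.prefix != sys.base_prefix)"` before and after activation to see the difference.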
Activate the environment. The command differs by OS:
| Operating System | Activation Command |
|---|---|
| Linux / macOS | source venv/bin/activate |
| Windows (PowerShell) | .\venv\Scripts\Activate.ps1 |
| Windows (cmd.exe) | venv\Scripts\activate |
You'll know it worked when you see (venv) appear at the start of your terminal prompt.
Advanced note: Once everything is containerized with Docker, you won't need the venv to run the app locally. We create it now so you can install packages and generate `requirements.txt` without polluting your system. The `venv` directory itself is already covered by your `.gitignore`.
Step 4: Install Dependencies
These are the three pillars of our backend:
pip install django djangorestframework django-cors-headers psycopg2-binary
Let's break down why each package is here:
django — The core framework. It handles routing, the ORM, the admin interface, and the request/response lifecycle. We are using Django 5.x.
djangorestframework (DRF) — Turns Django into an API platform. It gives us serializers (for data validation and transformation), browsable APIs for debugging, and clean permission handling. This is how our Next.js frontend will talk to the backend.
django-cors-headers — Cross-Origin Resource Sharing. When your frontend runs on localhost:3000 and your API runs on localhost:8000, the browser will block API calls by default as a security measure. This package tells Django which origins are allowed to make requests. Without it, your frontend will hit a wall immediately.
psycopg2-binary — The Python driver that lets Django actually talk to PostgreSQL. Django's ORM is database-agnostic in code, but it needs an adapter under the hood. The -binary variant includes pre-compiled C extensions, so you don't need a C compiler on your machine.
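To make the role of django-cors-headers concrete, here is roughly what the `settings.py` wiring will look like when we configure it in Part 2. This is a sketch using the package's documented setting names (`corsheaders` app, `CorsMiddleware`, `CORS_ALLOWED_ORIGINS`), not our final settings file:

```python
# Sketch of the settings.py additions django-cors-headers needs.
# Shown as a preview only; we wire this in for real in Part 2.

INSTALLED_APPS = [
    # ... Django's defaults ...
    "rest_framework",
    "corsheaders",
    "housing",
]

MIDDLEWARE = [
    # CorsMiddleware must sit as high as possible, before any middleware
    # that can generate responses (such as CommonMiddleware).
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
    # ... the rest of Django's default middleware ...
]

# Only these origins may call the API from a browser.
CORS_ALLOWED_ORIGINS = [
    "http://localhost:3000",  # the Next.js dev server
]
```

Without that last setting, every `fetch()` from the browser to `localhost:8000` dies with a CORS error in the console.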
Step 5: Pin Your Dependencies
pip freeze > requirements.txt
Why pin dependencies? `pip freeze` records the exact version of every installed package (e.g., `Django==5.1.3`, not just `Django`). When someone else — or your Docker container — runs `pip install -r requirements.txt`, they get the identical dependency tree. This is how you prevent the "it works on my machine" problem at the Python level.
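A pinned file is also easy to sanity-check mechanically. This small helper is our own illustration (not part of pip, and it ignores extras like `-e` lines and environment markers); it flags any requirement line that lacks an exact `==` pin:

```python
def unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned with '=='."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            bad.append(line)
    return bad

# Illustrative contents; the version numbers are examples, not prescriptions.
pinned = "Django==5.1.3\ndjangorestframework==3.15.2\n"
loose = "Django>=5.0\npsycopg2-binary\n"
print(unpinned(pinned))  # []
print(unpinned(loose))   # ['Django>=5.0', 'psycopg2-binary']
```

Run it against your `requirements.txt` in CI if you want pinning enforced rather than merely encouraged.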
Step 6: Scaffold the Django Project and App
django-admin startproject core .
python manage.py startapp housing
Two commands, two very different things happening:
startproject core . creates the project configuration. The . at the end is important — it tells Django to put the config files inside the current directory (backend/) rather than creating a nested backend/core/ folder. The core directory that appears contains settings.py, urls.py, and wsgi.py — the "nervous system" of your Django application.
startapp housing creates our first domain application. In Django, an "app" is a self-contained module of functionality. housing will eventually own our property models, our API views, and our serializers. Django projects are built by composing multiple apps together.
cd .. # Return to the monorepo root
Part C: The Frontend — Next.js
Next.js is our frontend framework. We chose it because it supports both server-side rendering (SSR) and client-side rendering in the same app, which is essential for a housing portal — listing pages need to be indexed by search engines (SSR), while interactive filters need to feel instant (client-side).
Step 7: Create the Next.js Application
npx create-next-app@latest frontend
This will launch an interactive prompt. Here are the exact choices and why each one matters:
| Prompt | Answer | Why |
|---|---|---|
| Would you like to use TypeScript? | Yes | Catches type errors at build time. When your Django API changes a field, TypeScript will flag it in the frontend before it reaches production. |
| Would you like to use ESLint? | Yes | Enforces consistent code style and catches common bugs automatically. |
| Would you like to use Tailwind CSS? | Yes | Utility-first CSS framework. Speeds up styling enormously and keeps CSS co-located with components. |
| Would you like to use src/ directory? | No | Keeps the structure flat and simpler. The app/ directory sits directly in frontend/. |
| Would you like to use App Router? | Yes | Next.js 14+'s default routing system. It supports React Server Components, which is the future of Next.js performance. |
| Would you like to customize the default import alias? | No | The default @/ alias is fine. Customizing it adds complexity with no benefit at this stage. |
After this completes, your frontend/ directory is fully scaffolded with a working Next.js application.
Part D: Containerization — Docker
This is where the magic happens. We are going to wrap our backend and frontend in Docker containers, then orchestrate them together with Docker Compose. After this section, a single command will boot your entire stack — database, API, and frontend — identically on any machine.
Step 8: The Backend Dockerfile
# backend/Dockerfile
FROM python:3.11-slim
# --- Environment Configuration ---
# Prevent Python from writing .pyc bytecode files into the container.
# These are a performance optimization for repeated runs, but inside a container
# the image is rebuilt anyway, so they just waste space.
ENV PYTHONDONTWRITEBYTECODE=1
# Force Python's stdout/stderr to flush immediately.
# Without this, print() statements and log messages can get "swallowed"
# and appear all at once when the container stops — making debugging a nightmare.
ENV PYTHONUNBUFFERED=1
# Suppress the warning "Running pip as root is discouraged".
# Inside a container, we ARE root, and that's fine. This silences noise in build logs.
ENV PIP_ROOT_USER_ACTION=ignore
# Set the working directory inside the container.
# All subsequent COPY and RUN commands are relative to this path.
WORKDIR /app
# --- Dependency Installation (Layered for Cache Efficiency) ---
# Copy ONLY requirements.txt first, then install.
# Docker builds in layers. If requirements.txt hasn't changed, Docker reuses the
# cached layer and skips the pip install entirely — even if your source code changed.
# This makes rebuilds dramatically faster during development.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code.
# This layer invalidates (and rebuilds) only when your actual code changes.
COPY . .
# The command that runs when the container starts.
# 0.0.0.0 is critical — it tells Django to listen on ALL network interfaces,
# not just localhost. Inside a container, "localhost" means the container itself.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Advanced — Why `python:3.11-slim` and not `python:3.11`? The `slim` variant is based on Debian Bookworm Slim instead of full Debian. It's roughly 60% smaller. For a production image, we'd go even further with `python:3.11-alpine`, but that requires musl-compatible wheels and adds complexity. `slim` is the right balance for this series.
Step 9: The Docker Compose Orchestrator
This is the single most important file in the project. It defines all of your services and how they connect.
# docker-compose.yml
services:
# --- The Database ---
db:
image: postgres:15
environment:
POSTGRES_DB: housing_db
POSTGRES_USER: user
POSTGRES_PASSWORD: password
ports:
- "5432:5432" # Maps host port 5432 -> container port 5432
volumes:
- pgdata:/var/lib/postgresql/data # Persist data across container restarts
# --- The Cache (we'll use this heavily in Part 2+) ---
redis:
image: redis:7-alpine
# --- The API Server ---
backend:
build: ./backend # Build from the Dockerfile in ./backend/
volumes:
- ./backend:/app # Mount local code into the container for live-reload
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgres://user:password@db:5432/housing_db
depends_on:
- db # Don't start backend until db container is up
# --- The Frontend ---
frontend:
build: ./frontend
volumes:
- ./frontend:/app
- /app/node_modules # Prevent host node_modules from overwriting container's
ports:
- "3000:3000"
depends_on:
- backend # Don't start frontend until backend is up
# Named volume for persistent database storage
volumes:
pgdata:
Let's unpack the decisions here:
The volumes on backend and frontend mount your local source code into the container. This means when you edit a file on your machine, the change appears inside the container instantly — no rebuild required. This is what makes the development loop fast.
The /app/node_modules anonymous volume on frontend is a subtle but important trick. Without it, the host's node_modules (which might be compiled for a different OS) would shadow the container's correctly-compiled node_modules. This line says: "use the container's own node_modules, don't let the host override it."
The pgdata named volume on db ensures your database survives a docker compose down. Without it, every time you stop the containers, all your data is gone.
The depends_on directives control startup order. But note — depends_on only means "wait for the container to start", not "wait for the service inside to be ready". We will fix this race condition shortly.
Advanced — Network resolution: Docker Compose automatically creates a bridge network. Inside this network, every service is reachable by its name. So `frontend` can reach `backend` at `http://backend:8000`, and `backend` can reach `db` at `db:5432`. This is not the same as `localhost` — see the troubleshooting section below.
Here is a block diagram to build a mental model before we move on to troubleshooting and verification.
Part E: Troubleshooting — Merged with Setup
This is where most tutorials skip to "and now it works." It doesn't. Here are the real problems you will hit, in the order you will hit them, and how to fix each one. Each fix is integrated into the setup flow so you can resolve it immediately and keep moving.
Problem 1: "Port 5432 is already in use"
When you'll hit this: During docker compose up, if you have PostgreSQL installed and running locally on your machine.
Why it happens: Docker is trying to map container port 5432 to host port 5432 (via the ports directive), but your local PostgreSQL is already occupying that host port.
The fix — stop your local PostgreSQL:
| OS | Command |
|---|---|
| Linux | sudo systemctl stop postgresql |
| macOS (Homebrew) | brew services stop postgresql |
| Windows | Open services.msc, find "postgresql", right-click → Stop |
The better fix — if you want both running: Change the host port in docker-compose.yml:
db:
ports:
- "5433:5432" # Host port 5433 maps to container port 5432
The container still uses 5432 internally. Only the port exposed to your host machine changes. Your DATABASE_URL inside Docker does not need to change — it talks to the container's port, not the host's.
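You can also check whether a host port is free before choosing one. This is our own little helper, not a Docker feature; it simply attempts a bind and reports whether the OS refuses:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; if the bind fails, something already owns it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR avoids false positives from sockets in TIME_WAIT,
        # but still fails against an active listener.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False  # bind succeeded: the port is free
        except OSError:
            return True   # EADDRINUSE (or similar): the port is taken

print(port_in_use(5432))
```

If it prints `True` before you've started Docker, that's your local PostgreSQL holding the port.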
Problem 2: "Django can't connect to the database"
When you'll hit this: Almost immediately after the first docker compose up. The backend container starts, tries to connect to PostgreSQL, and crashes.
Why it happens: This is a classic race condition. depends_on: db tells Docker to start the db container before the backend container. But PostgreSQL takes 3–5 seconds to initialize and start accepting connections after the container is running. Django starts instantly and tries to connect before PostgreSQL is ready.
The immediate fix:
docker compose restart backend
By the time you run this, PostgreSQL has had enough time to finish initializing. The backend connects successfully.
The permanent fix (we'll implement this in Part 2): We will add an entrypoint.sh script that uses netcat to poll the database port in a loop before running manage.py. This is the production-grade pattern:
# Preview of Part 2's entrypoint.sh logic (don't add this yet)
until nc -z db 5432; do
echo "Waiting for database..."
sleep 1
done
python manage.py migrate
exec "$@"
For now, the manual restart is sufficient. We'll automate it soon.
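The polling idea translates directly to Python, if you'd rather keep the wait logic in Python tooling than in shell. This is a sketch of the same loop the future `entrypoint.sh` will run, written as our own helper function:

```python
import socket
import time

def wait_for(host: str, port: int, timeout: float = 30.0,
             interval: float = 1.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, like `nc -z` in a loop.

    Returns True once the port accepts a connection, False if the
    overall timeout expires first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(interval)  # not up yet; wait and retry
    return False

# In entrypoint terms: wait_for("db", 5432) before running migrations.
```

Same contract as the shell version: block until PostgreSQL answers, then proceed.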
Problem 3: "Next.js SSR fails to fetch from the backend"
When you'll hit this: Once you start writing server-side data fetching in Next.js (Part 2+), your API calls will fail with a connection error — even though everything looks correct.
Why it happens: Inside the frontend container, localhost:8000 refers to the frontend container itself, not the backend. Containers are isolated processes. There is no shared localhost.
The fix — always use the service name for server-side requests:
// ❌ WRONG — this looks for port 8000 on the frontend container itself
const res = await fetch("http://localhost:8000/api/properties/");
// ✅ CORRECT — Docker's internal network resolves "backend" to the right container
const res = await fetch("http://backend:8000/api/properties/");
Important nuance: This only applies to requests made on the server (in Next.js Server Components or `getServerSideProps`). Browser-based requests (from Client Components running in the user's browser) still need `localhost:8000`, because the browser has no access to Docker's internal network. We will handle this distinction cleanly in Part 2 with environment variables.
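The logic of that distinction is worth spelling out. Here it is as a Python sketch (the real helper will live in the Next.js codebase in TypeScript; the env var names `API_INTERNAL_URL` and `NEXT_PUBLIC_API_URL` are our own convention, though the `NEXT_PUBLIC_` prefix follows Next.js's rule for browser-exposed variables):

```python
import os

def api_base(server_side: bool) -> str:
    """Pick the right API origin depending on where the code runs.

    Server-side code executes inside the frontend container, so it must
    use Docker's service name. Browser code executes on the user's
    machine, so it must use the host-mapped port.
    """
    if server_side:
        return os.environ.get("API_INTERNAL_URL", "http://backend:8000")
    return os.environ.get("NEXT_PUBLIC_API_URL", "http://localhost:8000")

print(api_base(server_side=True))   # used by Server Components
print(api_base(server_side=False))  # used by the browser
```

One function, two audiences: the container network and the user's browser never share an address space.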
Problem 4: "I edited a file but nothing changed"
When you'll hit this: During development, after modifying a Python or Next.js file.
Why it happens: Either the volume mount isn't working, or the development server's file-watcher didn't pick up the change.
The fix:
# For Django — the dev server should auto-reload, but if it doesn't:
docker compose restart backend
# For Next.js — if hot-reload isn't working:
docker compose restart frontend
# Nuclear option — rebuild everything from scratch:
docker compose down
docker compose up --build
Advanced: If you're seeing stale code consistently, check that your volume path in `docker-compose.yml` matches your actual directory structure. A single typo (e.g., `./backend` vs `./backend/`) can cause the mount to silently fail.
Part F: Git Strategy — Version Control as a Time Machine
We are not just dumping code into main. This entire series uses a branch-per-part strategy. Each part of the tutorial lives on its own branch, which means you can jump to any point in the optimization journey at any time.
Step 10: Commit and Branch
# Make sure you're in the project root
cd housing-caching-demo
# Stage and commit everything
git add .
git commit -m "chore: initial monorepo structure with Django, Next.js, and Docker"
# Push to main (assuming you've created a repo on GitHub)
git remote add origin https://github.com/your-username/housing-caching-demo.git
git push -u origin main
# Create the branch that represents "Part 1 complete"
git checkout -b part-1-setup
git push origin part-1-setup
Why branch-per-part? In Part 3, when we're deep in Redis configuration, you might want to see exactly what changed between Part 1 and Part 2. Git makes this trivial:
git diff part-1-setup..part-2-drf-caching
This shows you a precise, reviewable diff of every change across the entire repo between two points in the series. It's one of the most underrated development workflows.
Part G: Launch and Verify
Everything is in place. Let's boot the stack and confirm it's alive.
Step 11: Build and Start
# From the project root (housing-caching-demo/)
docker compose up --build
The --build flag forces Docker to (re)build the images before starting. On first run, this will take a minute or two as it pulls base images and installs dependencies. Subsequent runs will be faster thanks to Docker's layer caching.
What you should see in the terminal:
db | ... ready to accept connections
redis | ... Ready to accept connections tcp
backend | ... Starting development server at http://0.0.0.0:8000/
frontend | ... ready on http://localhost:3000
If backend crashes and restarts, that's the race condition from Problem 2. Give it 5 seconds, then:
docker compose restart backend
Step 12: Verify Each Service — The Right Way
Here is where most tutorials say "open localhost:3000 and you're done." That's not verification — that's hope. A production mindset means proving that each layer of the stack is healthy independently, before you trust the layers above it. We verify bottom-up: infrastructure first, then the application on top.
Layer 1: Raw Infrastructure — Prove the Containers Are Alive
We use docker compose exec to run commands inside a running container. This lets us talk directly to PostgreSQL and Redis using their native CLI tools — no application code involved. If these pass, the foundation is solid regardless of what Django or Next.js does.
Redis — The Ping Test
The simplest possible health check. If Redis responds, it's running and accepting connections.
docker compose exec redis redis-cli ping
Expected output:
PONG
Redis — The Memory Report
This tells you how much RAM your cache is currently consuming. Right now it'll be nearly zero — we haven't stored anything yet. But this command becomes critical later in the series when we're actually caching property listings. Bookmark it.
docker compose exec redis redis-cli info memory
You'll see a block of stats. The line to watch is used_memory_human — that's your cache footprint in human-readable form (e.g., used_memory_human:1.05M).
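Since `info` output is just `key:value` lines, it's trivial to pull out the fields you care about programmatically, which becomes handy once we start graphing cache memory later in the series. A small sketch (our own parser; the sample string is illustrative, not real output from our stack):

```python
def parse_info(raw: str) -> dict[str, str]:
    """Parse `redis-cli info` output ('key:value' lines) into a dict."""
    stats = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and section headers like "# Memory"
        key, _, value = line.partition(":")
        stats[key] = value
    return stats

sample = "# Memory\nused_memory:1100456\nused_memory_human:1.05M\n"
print(parse_info(sample)["used_memory_human"])  # 1.05M
```

Pipe the real thing in with `docker compose exec redis redis-cli info memory` and you have a one-liner cache monitor.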
Redis — The Manual Write/Read Test
Proves the full cycle: connect, write data, read it back. If get returns what set stored, Redis is not just alive — it's functional.
docker compose exec redis redis-cli set test_key "housing portal works"
docker compose exec redis redis-cli get test_key
Expected output on the second command:
"housing portal works"
Clean up after yourself:
docker compose exec redis redis-cli del test_key
PostgreSQL — List All Databases
This connects to the PostgreSQL server using psql and lists every database. You should see housing_db — the one we defined in docker-compose.yml.
docker compose exec db psql -U user -l
Expected output:
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+----------------------
housing_db | user | UTF8 | en_US.utf8 | en_US.utf8 |
...
PostgreSQL — Connect and Inspect Tables
Connect directly to housing_db and list its tables.
docker compose exec db psql -U user -d housing_db
Once inside the psql prompt, run these commands one at a time:
\dt
This lists all tables. Right now, the output will be:
Did not find any relations.
That is correct and expected. We haven't run migrate yet — Django hasn't created any tables. This is not a failure. It's confirmation that the database is empty and ready to be populated. We will run migrations in Part 2.
Exit the prompt when you're done:
\q
A quick one-liner alternative — if you just want to prove the database accepts queries without entering the interactive prompt:
docker compose exec db psql -U user -d housing_db -c "SELECT 1;"
Expected output:
?column?
----------
1
(1 row)
If you see (1 row), PostgreSQL is alive, accepting connections, and executing queries. Done.
Layer 2: The Application — Prove Django and Next.js Are Serving
Now we move up the stack. The containers are healthy. Are the applications inside them actually running and reachable?
The API — Browser Check
http://localhost:8000/
http://localhost:8000/admin/
If you see the Django home or admin login page, the backend container is running, Django has started its development server, and HTTP requests are reaching it. You do not need to log in — the login page appearing is the proof.
The Frontend — Browser Check
http://localhost:3000/
The default Next.js welcome page. It has no connection to our API yet — that's Part 2. But seeing this page means the frontend container is running, Next.js compiled successfully, and it's serving pages.
The API — curl Check (from your terminal, not the browser)
This is more useful than a browser for automation and CI pipelines. It also shows you the raw HTTP response, which is what you actually care about.
curl -I http://localhost:8000/admin/
Expected output:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
...
The Frontend — curl Check
curl -I http://localhost:3000
Expected output:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
...
Layer 3: The Integration — Django Talking to DB and Redis
This is the layer that will fail — and that failure is the lesson.
We have proven that PostgreSQL is running. We have proven that Redis is running. We have proven that Django is serving pages. But Django does not yet know how to find PostgreSQL or Redis. Why? Because we haven't updated settings.py.
Django's startproject command generates a settings.py with SQLite as the default database and no cache backend configured at all. The DATABASE_URL environment variable we set in docker-compose.yml is sitting there, unused. Django doesn't read it automatically — it needs to be wired in.
Let's prove this is the case. Run these commands and watch them fail. Understanding why they fail is more valuable than any success message.
Attempt 1: Django's database shell
docker compose exec backend python manage.py dbshell
What you'll see:
CommandError: You appear not to have the 'sqlite3' program installed or on your path.
This error is the smoking gun — and it tells you two things at once. First, Django is trying to open a SQLite shell, not PostgreSQL. It ignored the DATABASE_URL we set in docker-compose.yml entirely, because settings.py still has the default SQLite configuration from startproject. Second, the sqlite3 command-line client isn't even installed inside our python:3.11-slim container — there was no reason to include it, because we never intended to use SQLite. The error isn't about a missing tool. It's proof that Django is looking in the completely wrong direction.
We have two paths forward from here. The quick one: run migrations against the default SQLite backend just to see dbshell open a prompt, which proves the command itself works but doesn't solve anything real. The correct one: update settings.py to read DATABASE_URL and point at PostgreSQL. We are taking the correct one. That's Part 2's first task — and when we come back and re-run this exact command, it will open a psql prompt instead.
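What does "read DATABASE_URL" actually involve? Libraries like dj-database-url do this translation for you, and that's likely what we'll use in Part 2. But the core of it fits in a few lines, so here is a hand-rolled sketch to make the gap between the env var and settings.py concrete (a simplified illustration, not our final settings code):

```python
from urllib.parse import urlparse

def database_config(url: str) -> dict:
    """Translate a postgres:// URL into Django's DATABASES['default'] shape."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),   # path component = database name
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,           # "db" inside the Compose network
        "PORT": parts.port or 5432,
    }

cfg = database_config("postgres://user:password@db:5432/housing_db")
print(cfg["HOST"], cfg["NAME"])  # db housing_db
```

Once settings.py builds its `DATABASES` dict this way from `os.environ["DATABASE_URL"]`, the dbshell command above opens psql instead of erroring.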
Attempt 2: Django's cache via the Python shell
docker compose exec backend python manage.py shell
Once inside, run:
from django.core.cache import cache
cache.set('blog_test', 'success')
print(cache.get('blog_test'))
What you'll see:
success
This looks like it worked — but it didn't work the way we want. Django's default cache backend is LocMemCache: an in-process, in-memory cache that lives inside the Django process itself. It has nothing to do with Redis. It doesn't survive a restart. It isn't shared between processes. It's a placeholder that Django provides so the cache API doesn't crash when no backend is configured.
Exit the shell:
exit()
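The behavior we just saw — a cache that "works" but lives only inside one process — can be modeled in a few lines. A toy sketch (not Django's actual LocMemCache implementation, which also handles TTLs, pickling, and culling):

```python
class LocalMemoryCache:
    """A toy version of Django's LocMemCache: just a dict in this process."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)

# Two worker processes (modeled here as two instances) each get their
# own private store. A value cached by one is invisible to the other.
worker_a = LocalMemoryCache()
worker_b = LocalMemoryCache()
worker_a.set("blog_test", "success")
print(worker_a.get("blog_test"))  # success
print(worker_b.get("blog_test"))  # None — never shared
```

Under Gunicorn with four workers, that's four separate caches with four separate hit rates. That's the problem Redis solves: one shared store that every process talks to.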
Why show the failures? This is the entire point of Part 1. We have a running PostgreSQL container and a running Redis container, but Django is talking to neither of them. In Part 2, we will update
settings.pyto readDATABASE_URL, point the cache backend at Redis, and then re-run these exact same commands. You will see the difference immediately. The infrastructure is ready. The wiring is next.
The Verification Summary
Here is the complete status board. Every green tick is something you can reproduce right now, exactly as shown above.
| Service | Test Command | What Success Looks Like | Status |
|---|---|---|---|
| Redis | `docker compose exec redis redis-cli ping` | `PONG` | ✅ Infrastructure |
| PostgreSQL | `docker compose exec db psql -U user -d housing_db -c "SELECT 1;"` | `(1 row)` | ✅ Infrastructure |
| Backend (HTTP) | `curl -I http://localhost:8000/admin/` | `HTTP/1.1 200 OK` | ✅ Application |
| Frontend (HTTP) | `curl -I http://localhost:3000` | `HTTP/1.1 200 OK` | ✅ Application |
| Django → PostgreSQL | `docker compose exec backend python manage.py dbshell` | Opens `psql` prompt | ⏳ Part 2 |
| Django → Redis | `cache.set()` / `cache.get()` in `manage.py shell` | Reads from Redis | ⏳ Part 2 |
The two ⏳ items are not bugs. They are the agenda for Part 2. The infrastructure is bulletproof. The application wiring is next.
What We Built — And What Comes Next
Let's take stock. In a single session, we went from an empty directory to a fully containerized, multi-service application:
The backend is a Django 5.x project with DRF installed, isolated in a Python 3.11 container, with its dependencies pinned and its environment configured for production-style logging.
The frontend is a Next.js 14+ application with TypeScript, Tailwind CSS, and the App Router, running in its own container with hot-reload working.
The database is PostgreSQL 15, running in a container with persistent storage, accepting queries, and waiting to be used.
The cache is Redis 7, running in its own container, responding to commands, and completely idle — because nothing is talking to it yet.
The orchestration is handled by a single docker-compose.yml that wires everything together, and a branch-per-part Git strategy that lets you track the evolution of the system.
Here is the honest state of things: the infrastructure is bulletproof. PostgreSQL is healthy. Redis is healthy. Django is serving pages. Next.js is rendering. But Django is still talking to SQLite and caching in local memory. The DATABASE_URL we set in docker-compose.yml is sitting there, unused. That gap — the wiring between a running application and a running infrastructure — is exactly what Part 2 closes.
What's in Part 2
In Part 2, we do three things. First, we update settings.py to read DATABASE_URL and point the cache backend at Redis — and we re-run the exact verification commands from this post to prove the connection. Second, we design a database schema for the housing portal. Third, we seed it with realistic data. We will design that schema to be intentionally naive — one that will perform terribly under load. That's the setup for everything that comes after: fixing it with caching.
Stay tuned.