Load Balancing a Simple Static Site with Docker and Nginx: A Beginner's Journey

Introduction

Ever wondered how websites handle thousands of requests simultaneously without crashing? The answer is load balancing: distributing incoming traffic across multiple servers. In this article, I'll walk you through my journey of building a load-balanced static website using Docker and Nginx, including the real problems I faced and how I solved them.

By the end of this tutorial, you'll understand:

  • How to containerize a simple Node.js application with Docker
  • How to run multiple instances of the same app using Docker Compose
  • How to set up Nginx as a reverse proxy and load balancer
  • How to troubleshoot common issues like SELinux permission errors
  • How to access your application from other devices on your network

This guide is perfect for beginners who want to understand the fundamentals of containerization and load balancing.


What We're Building

We're going to create:

  1. A simple static website with vanilla HTML, CSS, and JavaScript
  2. An Express.js server to serve our static files
  3. Three Docker containers running identical instances of our app
  4. An Nginx load balancer to distribute traffic between the three instances

Architecture Overview:

[Architecture diagram: Nginx set up as a load balancer in front of three instances of the same container]


Step 1: Creating the Static Site with Express

First, let's create a simple Node.js application that serves static HTML, CSS, and JavaScript files.

Project Structure

simple-static-site/
├── public/
│   ├── index.html
│   ├── style.css
│   └── script.js
├── server.js
├── package.json
├── Dockerfile
└── docker-compose.yml

The Express Server (server.js)

const express = require('express');
const path = require('path');

const app = express();
const PORT = process.env.PORT || 3001;
const APP_NAME = process.env.APP_NAME || 'Default App';

// Serve static files from the 'public' directory
app.use(express.static(path.join(__dirname, 'public')));

// Add a simple API endpoint to identify which instance is responding
app.get('/api/info', (req, res) => {
  res.json({
    appName: APP_NAME,
    port: PORT,
    timestamp: new Date().toISOString()
  });
});

app.listen(PORT, () => {
  console.log(`${APP_NAME} listening on port ${PORT}`);
});

package.json

{
  "name": "simple-static-site",
  "version": "1.0.0",
  "description": "A simple static site with Express server",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}

Key Points:

  • We use express.static() to serve files from the public directory
  • The APP_NAME environment variable helps us identify which container is responding
  • The /api/info endpoint is useful for verifying load balancing is working

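The contents of public/ aren't shown in full here. As a minimal sketch (the actual files in the repo may differ), the front end only needs to fetch /api/info and display which instance responded:

<!-- public/index.html: minimal sketch, not necessarily the repo's version -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Simple Static Site</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <h1>Simple Static Site</h1>
  <p>Served by: <span id="instance">loading...</span></p>
  <script src="script.js"></script>
</body>
</html>

// public/script.js: ask the server which instance handled the request
fetch('/api/info')
  .then((res) => res.json())
  .then((info) => {
    document.getElementById('instance').textContent =
      `${info.appName} (port ${info.port})`;
  });
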
GitHub repository for the project: https://github.com/NirajMaharjan/Static_Site.git


Step 2: Dockerizing the Application

Now let's containerize our application so we can run multiple instances easily.

Creating the Dockerfile

# Use official Node.js runtime as base image
FROM node:18-alpine

# Set working directory inside container
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install --production

# Copy application code
COPY . .

# Expose port 3001
# Note: This is just documentation - it doesn't actually publish the port
EXPOSE 3001

# Start the application
CMD ["npm", "start"]

Important Note about EXPOSE:
The EXPOSE instruction in the Dockerfile is purely documentation. It tells other developers which port the application uses, but it doesn't actually make the port accessible from outside the container. We'll handle actual port publishing in our docker-compose.yml file.

Building the Docker Image

# Build the image
docker build -t simple-static-site .

# Verify the image was created
docker images

You should see your simple-static-site image in the list.
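
Before moving on to Compose, you can sanity-check the image by running a single throwaway container and publishing the port manually (host port 3001 here is arbitrary; adjust it if it's already taken):

# Run one container, publishing container port 3001 on host port 3001
docker run --rm -p 3001:3001 -e APP_NAME=TestApp simple-static-site

# In another terminal, confirm it responds
curl http://localhost:3001/api/info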


Step 3: Running Multiple Instances with Docker Compose

Docker Compose allows us to define and run multiple containers with a single configuration file. We'll create three instances of our application, each with a unique name and port.

docker-compose.yml

version: '3.8'

services:
  app1:
    build: .
    container_name: app1
    environment:
      - APP_NAME=App1
      - PORT=3001
    ports:
      - "3002:3001"
    restart: unless-stopped

  app2:
    build: .
    container_name: app2
    environment:
      - APP_NAME=App2
      - PORT=3001
    ports:
      - "3003:3001"
    restart: unless-stopped

  app3:
    build: .
    container_name: app3
    environment:
      - APP_NAME=App3
      - PORT=3001
    ports:
      - "3004:3001"
    restart: unless-stopped

Breaking Down the Configuration:

  • build: . - Builds the Docker image from the Dockerfile in the current directory
  • container_name - Gives each container a friendly name
  • environment - Sets environment variables inside each container
    • APP_NAME - Used to identify which instance is responding
    • PORT - The internal port the app listens on (always 3001 inside the container)
  • ports - Maps host ports to container ports
    • Format: "HOST_PORT:CONTAINER_PORT"
    • "3002:3001" means: "Map host port 3002 to container port 3001"
  • restart: unless-stopped - Automatically restarts the container if it crashes (but not if you stop it manually)

Starting the Containers

# Start all services
docker-compose up -d

# Verify all containers are running
docker-compose ps

# Check logs
docker-compose logs -f

Testing Individual Instances

Test each instance directly to ensure they're working:

# Test App1
curl http://localhost:3002/api/info

# Test App2
curl http://localhost:3003/api/info

# Test App3
curl http://localhost:3004/api/info

Each should return different app names, confirming all three instances are running independently.
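
For example, App1's response should look like this (the timestamp will differ; port is the internal container port, reported as a string because environment variables are strings):

{"appName":"App1","port":"3001","timestamp":"..."}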


Step 4: Setting Up Nginx as a Load Balancer

Now comes the exciting part—configuring Nginx to distribute traffic across our three application instances.

Two Deployment Approaches

There are two common ways to deploy Nginx with Docker:

  1. Nginx on the host VM (what we'll use)

    • Nginx runs directly on your virtual machine
    • Docker containers run your application
    • Best for learning and understanding how reverse proxies work
  2. Nginx in a Docker container

    • Everything runs in Docker
    • Easier to manage and deploy
    • Better for production environments

We'll focus on approach #1 because it helps you understand the fundamentals better.

Installing Nginx on Your VM

# Update package list
sudo apt update

# Install Nginx
sudo apt install nginx -y

# Start Nginx
sudo systemctl start nginx

# Enable Nginx to start on boot
sudo systemctl enable nginx

# Check status
sudo systemctl status nginx

Configuring Nginx for Load Balancing

Create a new Nginx configuration file:

sudo nano /etc/nginx/sites-available/loadbalancer

Add this configuration:

# Define upstream servers (your Docker containers)
upstream app_backend {
    # Round-robin load balancing (default)
    server localhost:3002;
    server localhost:3003;
    server localhost:3004;
}

server {
    listen 8080;
    server_name localhost;

    location / {
        # Proxy requests to the upstream backend
        proxy_pass http://app_backend;

        # Important proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Understanding the Configuration:

  • upstream app_backend - Defines a group of backend servers

    • Nginx will distribute requests among these servers using round-robin by default
  • server localhost:3002/3003/3004 - Points to our Docker containers

    • Remember: Docker published these ports on the host
  • listen 8080 - Nginx listens on port 8080 for incoming requests

  • proxy_pass http://app_backend - Forwards requests to one of the upstream servers

  • proxy_set_header directives - Preserve important request information

    • The backend servers can see the original client IP, not just Nginx's IP

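Round-robin is just the default. The same upstream block accepts other strategies; here's a sketch of two common nginx options, in case you need them later:

upstream app_backend {
    least_conn;                      # Pick the backend with the fewest active connections
    server localhost:3002 weight=2;  # Optional: receives roughly twice the traffic
    server localhost:3003;
    server localhost:3004;
}
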
Activating the Configuration

# Create symbolic link to enable the site
sudo ln -s /etc/nginx/sites-available/loadbalancer /etc/nginx/sites-enabled/

# Test configuration for syntax errors
sudo nginx -t

# Reload Nginx to apply changes
sudo systemctl reload nginx

Step 5: The Problem—"Bad Gateway" Error

When I first tried to access http://localhost:8080, I got a 502 Bad Gateway error. This is a common issue, so let's troubleshoot it step-by-step.

Debugging Process

1. Check if Nginx is running:

sudo systemctl status nginx

✅ Nginx was running fine.

2. Check if Docker containers are accessible:

curl -v http://localhost:3002
curl -v http://localhost:3003
curl -v http://localhost:3004

✅ All containers responded correctly.

3. Check Nginx error logs:

sudo tail -f /var/log/nginx/error.log

Found the problem!

2026/01/12 14:11:15 [crit] 35238#35238: *16 connect() to 127.0.0.1:3004 
failed (13: Permission denied) while connecting to upstream, 
client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", 
upstream: "http://127.0.0.1:3004/", host: "localhost:8080"

The key phrase here is "Permission denied (13)". This is a SELinux security issue.


Step 6: Understanding and Fixing SELinux

What is SELinux?

SELinux (Security-Enhanced Linux) is a security module that enforces mandatory access controls on Linux systems; it's enabled by default on RHEL-family distributions such as Fedora, CentOS, and RHEL. Out of the box, SELinux prevents Nginx from making outbound network connections to other services—even on localhost!

This is a security feature, not a bug. SELinux operates on the principle of least privilege: programs should only have the permissions they absolutely need.

The Solution

We need to allow Nginx to make network connections:

# Check current SELinux status
sudo getsebool -a | grep httpd_can_network_connect

# Allow Nginx to make network connections
sudo setsebool -P httpd_can_network_connect 1

# Verify the change
sudo getsebool httpd_can_network_connect

What this command does:

  • setsebool - Set SELinux boolean (a true/false configuration)
  • -P - Make the change persistent (survives reboots)
  • httpd_can_network_connect - The specific permission we're enabling
  • 1 - Enable (use 0 to disable)

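If you'd rather not allow Nginx to connect anywhere, a narrower alternative is to permit relaying only to web ports and label the backend ports accordingly. This is a sketch, and it needs the semanage tool from the policycoreutils utilities:

# Allow httpd processes to relay to web ports only
sudo setsebool -P httpd_can_network_relay 1

# Label the backend ports as web ports (use -m instead of -a if a port is already labeled)
sudo semanage port -a -t http_port_t -p tcp 3002
sudo semanage port -a -t http_port_t -p tcp 3003
sudo semanage port -a -t http_port_t -p tcp 3004
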
Testing the Fix

Now try accessing your site again:

curl http://localhost:8080

🎉 Success! You should now see your static site being served through Nginx.

Verifying Load Balancing

Let's verify that Nginx is actually distributing requests across all three containers:

# Make multiple requests to the /api/info endpoint
for i in {1..9}; do
  curl -s http://localhost:8080/api/info | grep appName
done

You should see responses rotating between App1, App2, and App3:

{"appName":"App1",...}
{"appName":"App2",...}
{"appName":"App3",...}
{"appName":"App1",...}
{"appName":"App2",...}
{"appName":"App3",...}
...

This confirms that Nginx is using round-robin load balancing—distributing requests evenly across all three backend servers.


Step 7: Accessing from Other Devices on Your Network

Want to access your site from your phone or another computer on the same network? There's one more configuration needed.

The Problem: NAT Mode

If you're running this in a virtual machine (like VirtualBox or VMware), your VM's network adapter is likely in NAT mode by default.

What is NAT mode?

  • NAT (Network Address Translation) allows your VM to access the internet
  • But other devices on your network can't directly access your VM
  • The VM gets a private IP that's only accessible from the host machine

The Solution: Bridged Mode

Switch your VM's network adapter to Bridged mode:

In VirtualBox:

  1. Power off your VM
  2. Go to VM Settings → Network → Adapter 1
  3. Change "Attached to:" from "NAT" to "Bridged Adapter"
  4. Select your physical network adapter (usually your WiFi or Ethernet card)
  5. Start your VM

After switching to Bridged mode:

Your VM will get its own IP address on your local network, just like any other device.

Finding Your VM's IP Address

# Find your IP address
ip addr show

# Or use
hostname -I

Look for an IP address like 192.168.1.x or 10.0.0.x.
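
If other devices still can't reach the site after this, check the VM's firewall: port 8080 may need to be opened. Here's a sketch for the two common firewall frontends (which one applies depends on your distro):

# Ubuntu/Debian with ufw
sudo ufw allow 8080/tcp

# RHEL/Fedora with firewalld
sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --reload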

Accessing from Other Devices

Now you can access your load-balanced site from any device on your network:

http://YOUR_VM_IP:8080

For example:

http://192.168.1.100:8080

Testing from your phone:

  1. Connect your phone to the same WiFi network
  2. Open a browser and go to http://YOUR_VM_IP:8080
  3. Refresh the page multiple times and check the /api/info endpoint
  4. You'll see different app names, proving load balancing works across devices!

Understanding the Complete Flow

Let's trace what happens when you access http://YOUR_VM_IP:8080:

  1. Client Request → Your browser sends a request to port 8080
  2. Nginx Receives → Nginx running on the VM accepts the request
  3. Load Balancing → Nginx selects one of the three upstream servers (round-robin)
  4. Proxy to Docker → Nginx forwards the request to localhost:3002, 3003, or 3004
  5. Docker Routes → Docker routes the request to the corresponding container
  6. Express Responds → The Express server in the container processes and responds
  7. Nginx Returns → Nginx sends the response back to your browser

All of this happens in milliseconds!


Alternative Approach: Nginx in Docker

Earlier, I mentioned there are two ways to deploy Nginx. Let's briefly explore the second approach.

Docker Compose with Nginx

You can run Nginx inside a Docker container too:

version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro  # mount into conf.d; the snippet below isn't a full nginx.conf
    depends_on:
      - app1
      - app2
      - app3

  app1:
    build: .
    environment:
      - APP_NAME=App1
    # No ports mapping needed - Nginx accesses via Docker network

  app2:
    build: .
    environment:
      - APP_NAME=App2

  app3:
    build: .
    environment:
      - APP_NAME=App3

Key Differences:

  • No port mapping on app containers—they're only accessible within the Docker network
  • Service names as hostnames—Nginx references app1, app2, app3 directly
  • Everything in Docker—easier to deploy and tear down

Nginx Configuration for Docker (nginx.conf)

This snippet contains only upstream and server blocks, which is why the compose file mounts it into /etc/nginx/conf.d/ instead of replacing the main /etc/nginx/nginx.conf (a complete nginx.conf also needs top-level events and http blocks):

upstream app_backend {
    server app1:3001;  # Use service names, not localhost
    server app2:3001;
    server app3:3001;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Benefits of this approach:

  • Everything is containerized and portable
  • Docker handles networking automatically
  • Easier to deploy to production (just copy docker-compose.yml)
  • No SELinux issues since everything is in Docker

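With the compose file and nginx.conf in place, the whole stack comes up with one command, and the same rotation test from earlier applies:

# Build and start Nginx plus all three app instances
docker-compose up -d --build

# Responses should rotate between App1, App2, and App3
curl http://localhost:8080/api/info
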
Common Issues and Solutions

Issue 1: "Cannot connect to Docker daemon"

Problem: Docker service isn't running.

Solution:

sudo systemctl start docker
sudo systemctl enable docker

Issue 2: Port Already in Use

Problem: Another service is using ports 3002, 3003, 3004, or 8080.

Solution:

# Find what's using the port
sudo lsof -i :8080

# Kill the process or change your port in docker-compose.yml

Issue 3: Containers Start Then Immediately Stop

Problem: Application crashes on startup.

Solution:

# Check container logs
docker-compose logs app1

# Common causes:
# - Missing dependencies in package.json
# - Syntax errors in server.js
# - Wrong working directory in Dockerfile

Issue 4: Changes Not Reflected

Problem: Modified code but container still runs old version.

Solution:

# Rebuild containers
docker-compose up -d --build

# Or completely rebuild
docker-compose down
docker-compose build --no-cache
docker-compose up -d

Issue 5: SELinux Blocks Connections (Even After Fix)

Problem: Still getting permission denied errors.

Solution:

# Check SELinux logs for specific denials
sudo ausearch -m avc -ts recent

# Temporarily disable SELinux for testing (NOT for production)
sudo setenforce 0

# Re-enable when done
sudo setenforce 1

Best Practices and Production Considerations

Security

  1. Never disable SELinux in production—configure it properly instead
  2. Use environment variables for sensitive data—never hardcode credentials
  3. Limit exposed ports—only expose what's necessary
  4. Use HTTPS in production—set up SSL/TLS certificates with Let's Encrypt
  5. Run containers as non-root users—add a USER directive in your Dockerfile

Performance

  1. Use health checks: with max_fails and fail_timeout, Nginx passively detects failing backends and temporarily stops routing traffic to them
   upstream app_backend {
       server app1:3001 max_fails=3 fail_timeout=30s;
       server app2:3001 max_fails=3 fail_timeout=30s;
       server app3:3001 max_fails=3 fail_timeout=30s;
   }
  2. Configure connection pooling—keep persistent connections to backends (see the caveat after this list)
   upstream app_backend {
       server app1:3001;
       keepalive 32;  # Cache up to 32 idle upstream connections per worker
   }
  3. Enable caching—reduce load on backend servers
   proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m;

   location / {
       proxy_cache my_cache;
       proxy_pass http://app_backend;
   }
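
A caveat on the keepalive directive above: it only takes effect when the proxied connection uses HTTP/1.1 with the Connection header cleared, so the location block needs two extra directives:

   location / {
       proxy_pass http://app_backend;
       proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
       proxy_set_header Connection "";  # clear Connection so it isn't "close"
   }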

Monitoring

  1. Add health check endpoints to your Express app (a matching Docker Compose healthcheck sketch follows this list):
   app.get('/health', (req, res) => {
     res.status(200).json({ status: 'healthy' });
   });
  2. Monitor Nginx status:
   location /nginx_status {
       stub_status;
       allow 127.0.0.1;
       deny all;
   }
  3. Collect logs centrally—use tools like ELK stack or Grafana Loki

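To act on the /health endpoint from item 1, you can let Docker itself track container health. Here's a sketch for one service in docker-compose.yml (assuming node:18-alpine, whose BusyBox wget is available inside the container):

services:
  app1:
    build: .
    environment:
      - APP_NAME=App1
      - PORT=3001
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3001/health"]
      interval: 30s
      timeout: 3s
      retries: 3
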
Scaling

  1. Horizontal scaling—add more containers in docker-compose.yml (or scale one service, as sketched after this list)
  2. Use Docker Swarm or Kubernetes—for automatic scaling and orchestration
  3. Implement session persistence—if your app needs sticky sessions
   upstream app_backend {
       ip_hash;  # Route same IP to same backend
       server app1:3001;
       server app2:3001;
       server app3:3001;
   }
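
For item 1, instead of copy-pasting app2/app3 blocks, a single service can be replicated—assuming you drop container_name and the fixed host-port mappings (as in the Nginx-in-Docker approach) and name the service app:

# Scale one service to five replicas; Docker assigns container names automatically
docker-compose up -d --scale app=5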

Conclusion

Congratulations! You've successfully:

✅ Created a containerized Node.js application

✅ Run multiple instances using Docker Compose

✅ Set up Nginx as a reverse proxy and load balancer

✅ Debugged SELinux permission issues

✅ Made your application accessible across your network

✅ Learned two different deployment approaches

Key Takeaways

  1. Docker simplifies deployment—build once, run anywhere
  2. Load balancing improves reliability—if one container fails, others handle the traffic
  3. SELinux is your friend—configure it properly rather than disabling it
  4. Nginx is powerful—reverse proxy, load balancer, cache, and more
  5. Debugging is systematic—check logs, test components individually, isolate issues

Next Steps

Ready to level up? Try these challenges:

  1. Add SSL/TLS encryption—secure your site with HTTPS
  2. Implement session management—use Redis for shared sessions across containers
  3. Add a database—connect your app to PostgreSQL or MongoDB
  4. Deploy to the cloud—try AWS, DigitalOcean, or Heroku
  5. Set up CI/CD—automate deployment with GitHub Actions
  6. Monitor performance—add Prometheus and Grafana
  7. Try Kubernetes—graduate to container orchestration


Questions or Feedback?

Did you encounter different issues? Have suggestions for improving this guide? Drop a comment below! I'd love to hear about your experience and help troubleshoot any problems you face.

Happy coding! 🚀


This article is based on my real experience building and troubleshooting a load-balanced application. All errors and solutions are documented as they happened—because learning from mistakes is the best way to truly understand technology.
