
Reverse Proxies

Reverse proxy concepts and implementation

A reverse proxy sits in front of backend servers and forwards client requests to them.

What is a Reverse Proxy?

A reverse proxy:

  • Receives requests from clients
  • Forwards requests to backend servers
  • Returns responses to clients
  • Hides from clients which backend server handled their request
  • Acts as a single gateway to the backend infrastructure

Benefits

  1. Load Balancing - Distribute traffic across servers
  2. SSL Termination - Decrypt HTTPS once at the proxy (see the sketch after this list)
  3. Caching - Cache responses from backends
  4. Security - Hide backend infrastructure
  5. Compression - Compress responses
  6. Rate Limiting - Control request rate
  7. Authentication - Centralized auth layer
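
For example, SSL termination and rate limiting can both be handled at the proxy. The following is a minimal sketch; the certificate paths, zone name, and rate values are illustrative placeholders, and the backend upstream is assumed to be defined as in the configuration below.

# Terminate TLS and rate-limit clients at the proxy (illustrative values)
limit_req_zone $binary_remote_addr zone=perclient:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate paths
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Allow short bursts, reject sustained overload
        limit_req zone=perclient burst=20 nodelay;
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}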

Nginx Reverse Proxy Configuration

Basic Setup

upstream backend {
    server 192.168.1.10:8000;
    server 192.168.1.11:8000;
    server 192.168.1.12:8000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
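
In practice the basic setup is usually extended with the other commonly forwarded headers and with explicit timeouts so that a slow backend fails fast. A sketch of such a location block, with illustrative timeout values:

location / {
    proxy_pass http://backend;

    # Preserve information about the original client request
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Timeouts toward the backend (illustrative values)
    proxy_connect_timeout 5s;
    proxy_send_timeout 30s;
    proxy_read_timeout 60s;
}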

Load Balancing Methods

# Round-robin (default)
upstream backend {
    server server1.com;
    server server2.com;
    server server3.com;
}

# Least connections
upstream backend {
    least_conn;
    server server1.com;
    server server2.com;
}

# IP hash (sticky sessions)
upstream backend {
    ip_hash;
    server server1.com;
    server server2.com;
}

# Weighted
upstream backend {
    server server1.com weight=3;
    server server2.com weight=1;
}

Health Checks

upstream backend {
    server backend1.com max_fails=3 fail_timeout=30s;
    server backend2.com max_fails=3 fail_timeout=30s;
    server backend3.com backup;
}
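
The max_fails and fail_timeout settings above are passive health checks: a server is marked unavailable only after real client requests to it fail, and the backup server is used when the others are down (active health checks are a commercial NGINX Plus feature). They are typically paired with retry behaviour in the proxied location, sketched below with illustrative values:

location / {
    proxy_pass http://backend;

    # If one server fails, retry the request on the next one
    proxy_next_upstream error timeout http_502 http_503 http_504;
    proxy_next_upstream_tries 2;
}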

URL Path Routing

server {
    listen 80;
    server_name example.com;

    location /api/ {
        proxy_pass http://api-backend;
    }

    location /static/ {
        proxy_pass http://cdn-backend;
    }

    location / {
        proxy_pass http://web-backend;
    }
}
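
One detail worth noting with path routing: whether proxy_pass ends in a URI changes what path the backend receives. A small illustration, reusing the upstream names above:

location /api/ {
    # No URI on proxy_pass: the backend receives the original
    # request path, e.g. /api/users
    proxy_pass http://api-backend;
}

location /static/ {
    # Trailing slash: the matched /static/ prefix is replaced,
    # so /static/images/logo.png becomes /images/logo.png upstream
    proxy_pass http://cdn-backend/;
}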

Best Practices

  • Use upstream groups for load balancing
  • Set appropriate timeouts
  • Configure health checks
  • Use connection reuse (keepalive) toward upstream servers - see the sketch after this list
  • Implement proper header forwarding
  • Monitor upstream servers
  • Cache appropriate responses
  • Implement SSL/TLS termination
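
A minimal sketch combining two of these practices, response caching and upstream connection reuse; the cache path, zone names, and sizes are illustrative:

# Cache storage on disk (http context)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

upstream backend {
    server 192.168.1.10:8000;
    server 192.168.1.11:8000;

    # Keep idle connections to the backends open for reuse
    keepalive 32;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Serve cacheable responses from the proxy's cache
        proxy_cache app_cache;
        proxy_cache_valid 200 301 10m;

        proxy_pass http://backend;

        # Required for upstream keepalive to take effect
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}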
