NGINX Notes

What is NGINX?

NGINX is a high-performance web server that is also used as:

  • Reverse Proxy Server
  • Load Balancer
  • HTTP Cache
  • API Gateway
  • Static File Server

It is designed to handle very high traffic efficiently.

Key Features of NGINX

  • High Concurrency: handles 10,000+ concurrent connections with minimal resource usage.
  • HTTP Caching: caches HTTP responses for faster response times.
  • Reverse Proxy: forwards client requests to one or more backend servers.
  • Load Balancer: distributes incoming traffic across multiple application nodes.
  • API Gateway: acts as a single entry point for managing API traffic.
  • Static File Serving: serves static files such as HTML, images, videos, CSS, and JS.

NGINX Use Cases Summary

Feature          Purpose
Reverse Proxy    Forward requests to backend servers
Load Balancer    Distribute traffic across multiple servers
Caching          Faster responses by serving cached content
Static Hosting   Serve HTML, images, CSS, and JS files directly
API Gateway      Manage and route API traffic
                ┌──────────────────────┐
                │      Internet        │
                └─────────┬────────────┘
                          │
                  Incoming Requests
                          │
                ┌─────────▼───────────┐
                │   NGINX Server      │
                │ (Reverse Proxy)     │
                └─────────┬───────────┘
        ┌─────────────────┼──────────────────┐
        │                 │                  │
┌───────▼───────┐ ┌───────▼───────┐ ┌──────▼────────┐
│ Web Server 1  │ │ Web Server 2  │ │ Web Server 3  │
│ (App Node)    │ │ (App Node)    │ │ (App Node)    │
└───────────────┘ └───────────────┘ └───────────────┘
        │                 │                  │
        └────────── Response Back ──────────┘

How It Works

  • Client sends a request.
  • NGINX receives it.
  • NGINX decides: serve a static file or forward the request to a backend server.
  • The response is returned to the client.
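The static-or-forward decision in the steps above can be sketched with try_files and a named location. This is a minimal sketch; the static root /var/www/static and the backend address 127.0.0.1:3000 are assumptions, not from the notes above:

```nginx
server {
    listen 80;

    # assumed directory holding the static files
    root /var/www/static;

    location / {
        # serve the matching file from disk if it exists,
        # otherwise hand the request to the named backend location
        try_files $uri $uri/ @backend;
    }

    location @backend {
        # assumed application server address
        proxy_pass http://127.0.0.1:3000;
    }
}
```

With this config, a request for /logo.png is answered directly from disk when the file exists, and anything else falls through to the backend.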

Final Architecture Summary

Client  -->  NGINX  -->  (Cache check)
                            |
                   ┌────────┴────────┐
                   │                 │
             Static Files      Backend Servers
           (HTML/CSS/Images)   (Node.js, APIs)

A forward proxy sits between clients and the internet. The client sends its request to the proxy, and the proxy forwards it to the target server on the client's behalf.

Forward Proxy Diagram
Key Point: In a forward proxy setup, the server does not know which client originally made the request. The server only sees the proxy's IP address. This is commonly used for anonymity, content filtering, and bypassing geo-restrictions.
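NGINX is not a full forward proxy (it has no CONNECT support, so it cannot tunnel HTTPS), but a plain-HTTP forward proxy can be sketched as below. The listen port 8888 and the resolver address are assumptions; a resolver is required because the target hostname is only known at request time:

```nginx
server {
    listen 8888;                # clients point their HTTP proxy setting here

    resolver 8.8.8.8;           # resolves arbitrary target hostnames

    location / {
        # forward the request to whatever host the client asked for
        proxy_pass http://$host$request_uri;
        proxy_set_header Host $host;
    }
}
```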

A reverse proxy sits in front of one or more backend servers. The client sends its request to the reverse proxy, and the proxy decides which backend server should handle it.

Reverse Proxy Diagram
Key Point: In a reverse proxy setup, the client does not know which backend server actually resolved the request and sent the response. The client only communicates with the proxy. This is used for load balancing, SSL termination, caching, and security.
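A minimal reverse proxy looks like this. The backend address 127.0.0.1:3000 is an assumption; the point is that the client only ever sees example.com:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # the backend address below stays hidden behind the proxy;
        # the client only communicates with this server block
        proxy_pass http://127.0.0.1:3000;
    }
}
```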

In production, a reverse proxy rarely talks to just one backend. NGINX can distribute incoming requests across multiple application servers (often called upstream servers). Here's the full picture:

Multi-Server Architecture

         ┌──────────────┐
         │   Clients    │
         │ (Browsers /  │
         │  Mobile Apps)│
         └──────┬───────┘
                │  HTTPS (port 443)
                ▼
        ┌───────────────────┐
        │      NGINX        │
        │  (Reverse Proxy)  │
        │                   │
        │  ┌─────────────┐  │
        │  │ SSL Termin. │  │   ← decrypts HTTPS
        │  └─────────────┘  │
        │  ┌─────────────┐  │
        │  │ Routing     │  │   ← matches location rules
        │  └─────────────┘  │
        │  ┌─────────────┐  │
        │  │ Load Bal.   │  │   ← picks a backend
        │  └─────────────┘  │
        └──┬──────┬──────┬──┘
           │      │      │
     :3001 │ :3002 │ :3003 │   (plain HTTP)
           ▼      ▼      ▼
     ┌─────────┐ ┌─────────┐ ┌─────────┐
     │ App #1  │ │ App #2  │ │ App #3  │
     │ Node.js │ │ Node.js │ │ Node.js │
     └─────────┘ └─────────┘ └─────────┘

How It Works — Step by Step

  1. Client connects to https://example.com.
  2. NGINX terminates SSL — decrypts the request so backends don't need to handle HTTPS.
  3. Location matching — NGINX checks the URL path against its location blocks to decide which upstream group handles the request.
  4. Load balancing — NGINX picks one server from the upstream group using the configured algorithm (round-robin, least connections, etc.).
  5. Proxying — The request is forwarded over plain HTTP to the chosen backend.
  6. Response — The backend replies to NGINX, which sends the response back to the client over HTTPS.

1. Defining an Upstream Group

The upstream block tells NGINX the addresses of your backend servers:

upstream app_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}
What happens here? NGINX distributes every new request to the next server in the list using round-robin by default. If one server is down, NGINX automatically skips it.

2. Connecting location → upstream

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_servers;

        # Forward useful headers to the backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
proxy_set_header — Why? Without these headers the backend only sees NGINX's own IP. By forwarding X-Real-IP and X-Forwarded-For, your app can log the real client address, apply rate-limits per user, etc.

3. Routing Different Paths to Different Backends

A single NGINX instance can proxy multiple services at once based on the URL path:

upstream frontend {
    server 127.0.0.1:3000;
}

upstream api_backend {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
}

upstream admin_panel {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name example.com;

    # React / Next.js frontend
    location / {
        proxy_pass http://frontend;
    }

    # REST API — load-balanced across 2 nodes
    location /api/ {
        proxy_pass http://api_backend;
    }

    # Admin dashboard — single server
    location /admin/ {
        proxy_pass http://admin_panel;
    }
}
# Request routing examples:
#
# GET  /                → frontend  (127.0.0.1:3000)
# POST /api/users       → api_backend (5000 or 5001)
# GET  /admin/dashboard → admin_panel (127.0.0.1:8080)

4. Load-Balancing Strategies

Strategy           Directive               How It Works
Round Robin        (default)               Requests are distributed sequentially across servers
Least Connections  least_conn;             Sends the request to the server with the fewest active connections
IP Hash            ip_hash;                Same client IP always goes to the same server (sticky sessions)
Weighted           server ... weight=3;    Higher-weight servers receive proportionally more traffic
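IP Hash Example

Sticky sessions matter when session state lives in a single app server's memory. A sketch using ip_hash, reusing the same hypothetical ports as the earlier examples:

```nginx
upstream app_servers {
    ip_hash;                 # a hash of the client IP picks the server,
                             # so one user keeps hitting the same node

    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}
```

Note that backup servers are not permitted inside an ip_hash upstream; NGINX rejects that combination at config test time.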

Weighted + Least Connections Example

upstream app_servers {
    least_conn;

    server 127.0.0.1:3001 weight=5;   # powerful machine — gets 5× traffic
    server 127.0.0.1:3002 weight=1;   # smaller instance
    server 127.0.0.1:3003 weight=1 backup;  # only used if others are down
}

5. Passive Health Checks

NGINX automatically marks a server as unavailable if it fails repeatedly:

upstream app_servers {
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3003 backup;
}
What this means: If a server fails 3 times within 30 seconds, NGINX stops sending traffic to it for the next 30 seconds. The backup server only receives traffic when all primary servers are down.

6. Full Production-Ready Config

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # ── Upstream groups ──
    upstream frontend {
        server 127.0.0.1:3000;
    }

    upstream api {
        least_conn;
        server 127.0.0.1:5000 weight=3;
        server 127.0.0.1:5001 weight=2;
        server 127.0.0.1:5002 backup;
    }

    # ── HTTPS Server ──
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        # Frontend
        location / {
            proxy_pass http://frontend;
            proxy_set_header Host $host;
        }

        # API
        location /api/ {
            proxy_pass http://api;
            proxy_set_header Host              $host;
            proxy_set_header X-Real-IP         $remote_addr;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

    # ── HTTP → HTTPS redirect ──
    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }
}

Using Docker (Ubuntu Container)

docker run -it -p 8080:80 ubuntu

Install NGINX

apt-get update
apt-get install nginx

Or:

apt update
apt install nginx

Check Version

nginx -v

Start NGINX Service

service nginx start

Configuration Directory

cd /etc/nginx
ls -lh

Main Config File

All configuration is inside:

/etc/nginx/nginx.conf

Install Editor and Edit Config

apt install vim
vim nginx.conf

Test Configuration

nginx -t

Reload NGINX

service nginx reload

Or:

nginx -s reload

Install Utilities (Optional)

apt install coreutils

Static Website Setup

mkdir /etc/nginx/websites

Place your HTML files here.

events {
}

http {

    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name _;

        root /etc/nginx/websites;

        index index.html;

    }
}

Config Breakdown

  • events { }: Handles connection processing. Configures how NGINX deals with incoming network connections at a low level.
  • http { }: The main HTTP block that wraps all web-related configuration including servers, upstreams, and global settings.
  • include /etc/nginx/mime.types: Tells NGINX about file types so it can serve them with the correct Content-Type header. Covers HTML, CSS, JS, images, and more.
  • server { }: Defines a virtual server. Each server block can listen on a specific port and respond to specific domain names.
  • listen 80: Tells the server to listen for HTTP requests on port 80 (the default HTTP port).
  • server_name _: The underscore (_) is a catch-all that means "accept requests for all domain names." In production, replace this with your actual domain.
  • root: Specifies the filesystem path where website files are located. For example: /etc/nginx/websites