Nginx powers over a third of all websites on the internet, and for good reason. It handles thousands of concurrent connections with minimal memory usage, serves static files faster than almost any alternative, and doubles as a reverse proxy, load balancer, and SSL termination point. Whether you are deploying a static site, a Node.js application, or a complex microservices architecture, understanding Nginx configuration is a skill that will serve you throughout your career.
This guide takes you from the basics of how Nginx processes requests through writing production-ready configurations with SSL, caching, gzip compression, security headers, and reverse proxy setups. Every example is practical and ready to adapt for your own projects.
How Nginx Processes Requests
Before editing configuration files, you need to understand how Nginx decides what to do with an incoming request. Unlike Apache's traditional MPMs, which dedicate a process or thread to each connection, Nginx uses an event-driven, asynchronous architecture. A small number of worker processes handle thousands of connections simultaneously using non-blocking I/O. This is why Nginx excels at serving static content and proxying requests to backend applications.
When a request arrives, Nginx follows a specific decision path:
- Listen directive matching: Nginx checks which server block matches the IP address and port of the incoming connection.
- Server name matching: Among server blocks that match the port, Nginx compares the Host header against server_name directives to find the correct virtual host.
- Location matching: Within the selected server block, Nginx evaluates location directives to determine which block handles the specific URI.
- Directive execution: Nginx applies the directives inside the matched location block: serving a file, proxying to a backend, returning a redirect, or performing another action.
Understanding this flow is essential because misunderstanding location matching is the single most common source of Nginx configuration bugs. If you are building applications with server-side rendering or client-side rendering, knowing exactly how Nginx routes each request becomes even more important.
Installation and File Structure
On most Linux distributions, Nginx installs with a single command. On Ubuntu or Debian, use apt install nginx. On CentOS or RHEL, use yum install nginx or dnf install nginx. If you are using Docker for your development environment, the official nginx image works out of the box.
After installation, the file structure follows a predictable pattern:
- /etc/nginx/nginx.conf: The main configuration file. This sets global settings like the number of worker processes, logging paths, and includes for additional configuration files.
- /etc/nginx/conf.d/: Directory for site-specific configuration files. Any .conf file placed here gets automatically included by the main config.
- /etc/nginx/sites-available/ and /etc/nginx/sites-enabled/: Debian/Ubuntu convention for managing virtual hosts. Configuration files live in sites-available, and symlinks in sites-enabled activate them.
- /var/log/nginx/: Access and error logs. Essential for debugging configuration issues.
- /usr/share/nginx/html/: Default web root directory.
Always test your configuration with nginx -t before reloading Nginx. This validates the syntax and catches errors before they take your site offline.
Understanding Server Blocks
Server blocks (analogous to Apache’s virtual hosts) define how Nginx handles requests for different domains. Each server block listens on a port and responds to one or more domain names.
A minimal server block looks like this:
server {
listen 80;
server_name example.com www.example.com;
root /var/www/example.com/public;
index index.html;
}
The listen directive tells Nginx which port to bind. The server_name directive specifies which domain names this block handles. You can use exact names, wildcards (*.example.com), or regular expressions. The root directive sets the base directory for serving files, and index specifies the default file when a directory is requested.
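All three server_name forms can be mixed in a single directive. A minimal sketch (the hostnames are hypothetical):

```nginx
server {
    listen 80;
    # Exact name, wildcard, and regex (regex names start with ~)
    server_name example.com *.example.com ~^shop\d+\.example\.com$;
    root /var/www/example.com/public;
    index index.html;
}
```

When several server blocks could match, Nginx checks exact names first, then wildcards beginning with an asterisk, then wildcards ending with one, and finally regex names in order of appearance.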
Location Blocks and Matching Rules
Location blocks are where most of the routing logic lives. Nginx supports several types of location matching, evaluated in a specific priority order:
- Exact match (=): Highest priority. location = /favicon.ico matches only that exact URI.
- Preferential prefix (^~): If this is the longest matching prefix, Nginx stops looking for regex matches. Useful for static asset directories.
- Regular expression (~ and ~*): Case-sensitive and case-insensitive regex matches. Evaluated in order of appearance in the configuration file; the first match wins.
- Standard prefix: Regular prefix match. If no regex matches, the longest prefix match wins.
Here is a practical example combining multiple location types:
server {
listen 80;
server_name example.com;
root /var/www/example.com/public;
# Exact match for the homepage
location = / {
try_files /index.html =404;
}
# Static assets — stop searching after this match
location ^~ /static/ {
expires 30d;
add_header Cache-Control "public, immutable";
}
# PHP files
location ~ \.php$ {
fastcgi_pass unix:/run/php/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# Everything else
location / {
try_files $uri $uri/ =404;
}
}
The try_files directive is particularly important. It tells Nginx to check for files in order and return a fallback if none exist. For single-page applications built with frameworks that use client-side rendering, you will commonly see try_files $uri $uri/ /index.html to route all requests back to the app’s entry point.
Production-Ready Configuration with SSL, Caching, and Security
A development configuration rarely survives contact with production. Real-world deployments need SSL/TLS encryption, response compression, proper caching headers, security headers to prevent common attacks, and rate limiting to protect against abuse. The following configuration brings all of these elements together.
Before implementing SSL, ensure you understand the OWASP Top 10 security vulnerabilities so you can layer Nginx security on top of secure application code. For web performance optimization, the caching and compression settings below make a significant difference in page load times.
# /etc/nginx/conf.d/example.com.conf
# Production-ready Nginx configuration with SSL, caching, gzip, and security
# Rate limiting zone — 10 requests per second per IP
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
# Redirect HTTP to HTTPS
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://example.com$request_uri;
}
# Redirect www to non-www (HTTPS)
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;  # requires nginx 1.25.1+; older versions use "listen 443 ssl http2;"
server_name www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
return 301 https://example.com$request_uri;
}
# Main server block
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name example.com;
# Document root
root /var/www/example.com/public;
index index.html index.htm;
# SSL Configuration
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
# Security Headers
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
# Gzip Compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types
text/plain
text/css
text/javascript
application/javascript
application/json
application/xml
application/rss+xml
image/svg+xml;
# font/woff2 is omitted: the format is already compressed
# Static Assets with Long Cache
location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Rate limiting on general pages
location / {
limit_req zone=general burst=20 nodelay;
try_files $uri $uri/ /index.html;
}
# Deny access to hidden files (keep /.well-known reachable for ACME renewals)
location ~ /\.(?!well-known) {
deny all;
access_log off;
log_not_found off;
}
# Custom error pages
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# Logging
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log warn;
}
This configuration covers the essentials that every production site needs. The SSL settings follow current best practices: TLS 1.2 and 1.3 only, strong cipher suites, OCSP stapling for faster certificate verification, and HSTS headers to prevent protocol downgrade attacks. The gzip settings compress text-based responses without wasting CPU on already-compressed formats like JPEG or PNG. Rate limiting protects against brute-force attacks and abusive crawlers.
If you manage infrastructure with tools like Terraform, you can template this configuration and deploy it consistently across multiple servers. Teams managing complex deployments with Kubernetes often use the Nginx Ingress Controller, which translates Kubernetes Ingress resources into Nginx configuration automatically.
Reverse Proxy Configuration for Node.js Applications
One of the most common uses of Nginx in modern web development is as a reverse proxy in front of Node.js, Python, Go, or other application servers. Nginx handles SSL termination, static file serving, compression, and load balancing, while your application server focuses on processing business logic. This separation is particularly important in microservices architectures where multiple backend services need a single entry point.
# /etc/nginx/conf.d/app.example.com.conf
# Reverse proxy configuration for a Node.js application
upstream node_backend {
# Load balancing across multiple Node.js instances
server 127.0.0.1:3000 weight=3;
server 127.0.0.1:3001 weight=3;
server 127.0.0.1:3002 weight=3;
# Connection pooling
keepalive 32;
# Active health checks (health_check) require Nginx Plus;
# open-source Nginx relies on passive checks via the
# max_fails and fail_timeout parameters on each server line.
# health_check interval=10 fails=3 passes=2;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name app.example.com;
ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# Security headers
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Strict-Transport-Security "max-age=63072000" always;
# Client upload size limit
client_max_body_size 50M;
# Timeouts for long-running requests
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 120s;
# Gzip compression
gzip on;
gzip_vary on;
gzip_types text/plain application/json application/javascript text/css;
# Serve static assets directly from Nginx
location /static/ {
alias /var/www/app.example.com/static/;
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# API endpoints — proxy to Node.js
location /api/ {
# The "api" zone is declared in example.com.conf; limit_req_zone
# definitions in conf.d files share the same http context
limit_req zone=api burst=50 nodelay;
proxy_pass http://node_backend;
proxy_http_version 1.1;
# Headers for proper proxying
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Keep-alive connections to upstream. Do not also set
# Connection "upgrade" in this block; WebSocket traffic is
# handled by the dedicated /ws/ location below.
proxy_set_header Connection "";
# Buffering settings
proxy_buffering on;
proxy_buffer_size 16k;
proxy_buffers 4 32k;
# Do not pass along server identity
proxy_hide_header X-Powered-By;
}
# WebSocket endpoint
location /ws/ {
proxy_pass http://node_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
# All other requests — proxy to Node.js for SSR
location / {
proxy_pass http://node_backend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
proxy_hide_header X-Powered-By;
}
# Health check endpoint
location = /health {
access_log off;
proxy_pass http://node_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
access_log /var/log/nginx/app.example.com.access.log;
error_log /var/log/nginx/app.example.com.error.log warn;
}
This configuration demonstrates several production patterns. The upstream block defines a pool of three Node.js instances with round-robin load balancing, and keepalive 32 maintains persistent connections to avoid the overhead of TCP handshakes on every request. Static files are served directly by Nginx using the alias directive, which is dramatically faster than routing them through Node.js.
The proxy_set_header directives ensure your application receives the correct client IP address and protocol information even though the connection passes through a proxy. Without X-Forwarded-For and X-Real-IP, your application logs would show all traffic coming from 127.0.0.1. The WebSocket configuration in /ws/ sets extended timeouts because WebSocket connections are long-lived by design.
Nginx as a Load Balancer
As your application grows, a single server will not handle all the traffic. Nginx supports several load balancing algorithms that you configure in the upstream block:
- Round Robin (default): Distributes requests evenly across all servers. Simple and effective when all backends have similar capacity.
- Least Connections (least_conn): Sends each new request to the server with the fewest active connections. Better when request processing times vary significantly.
- IP Hash (ip_hash): Routes requests from the same client IP to the same backend server. Useful for applications that store session data in memory rather than a shared store like Redis.
- Weighted: Assign weight values to send more traffic to more powerful servers. A server with weight=5 receives five times as many requests as a server with weight=1.
You can also mark servers as backup (used only when all primary servers are down) or down (temporarily removed from the pool). Combined with the max_fails and fail_timeout parameters, Nginx automatically removes unresponsive backends and retries them after a cooldown period.
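Putting these parameters together, a sketch of a failover-aware pool (the addresses are hypothetical):

```nginx
upstream app_pool {
    least_conn;
    # Remove a server after 3 failures, retry it after 30 seconds
    server 10.0.0.1:3000 weight=5 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    # Receives traffic only when all primary servers are down
    server 10.0.0.3:3000 backup;
}
```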
Performance Tuning
Default Nginx settings work for moderate traffic, but high-traffic sites benefit from tuning several parameters in nginx.conf:
- worker_processes: Set to auto to match the number of CPU cores. Each worker process runs independently and handles connections using its own event loop.
- worker_connections: The maximum number of simultaneous connections each worker can handle. A value of 2048 or 4096 is common. The total connection capacity is worker_processes * worker_connections.
- sendfile: Enable this to use the kernel's sendfile() system call for serving static files. This avoids copying data between kernel space and user space, reducing CPU usage and latency.
- tcp_nopush and tcp_nodelay: Enable both. tcp_nopush optimizes the number of packets sent by waiting until a full packet is ready. tcp_nodelay disables Nagle's algorithm for keep-alive connections, reducing latency for small responses.
- keepalive_timeout: How long idle keep-alive connections stay open. The default of 75 seconds is reasonable; lower it if you need to free connections faster under heavy load.
- open_file_cache: Caches file descriptors, metadata, and existence checks. This eliminates repeated filesystem calls for frequently accessed static files.
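The tuning directives above live at the top level of nginx.conf and in its events and http blocks. A sketch with commonly used values (adjust for your hardware):

```nginx
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # per worker; total = workers x connections
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65s;

    # Cache descriptors for up to 1000 files, dropped after 20s idle
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
```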
For sites expecting serious traffic, consider splitting your infrastructure. Use Nginx for SSL termination and static content, a dedicated application tier for business logic, and a caching layer like Varnish or Redis for dynamic content. If you are evaluating managed hosting alternatives, platforms reviewed in our comparison of Vercel, Netlify, and Cloudflare Pages handle much of this complexity for you, though with less control over the underlying server configuration.
Common Configuration Patterns
Single-Page Application with HTML5 History Mode
When deploying React, Vue, or Angular applications that use client-side routing, Nginx must return index.html for any route that does not match an actual file:
location / {
try_files $uri $uri/ /index.html;
}
Without this, refreshing the page on a route like /dashboard/settings returns a 404 because Nginx looks for a file at that path.
WordPress or PHP-FPM
WordPress requires Nginx to pass PHP files to PHP-FPM and handle pretty permalinks:
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
fastcgi_pass unix:/run/php/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_intercept_errors on;
}
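A common hardening tweak for the PHP location is to refuse URIs that do not map to a real file, which keeps certain path-info tricks from ever reaching PHP-FPM:

```nginx
location ~ \.php$ {
    # Return 404 instead of passing nonexistent paths to PHP-FPM
    try_files $uri =404;
    fastcgi_pass unix:/run/php/php-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
    fastcgi_intercept_errors on;
}
```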
CORS Headers for API Servers
When your API serves requests from a different domain, the responses need CORS headers, and preflight OPTIONS requests need a short-circuit response:
location /api/ {
if ($request_method = 'OPTIONS') {
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
add_header Access-Control-Allow-Headers "Authorization, Content-Type";
add_header Access-Control-Max-Age 86400;
return 204;
}
add_header Access-Control-Allow-Origin $http_origin always;
proxy_pass http://backend;
}
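Echoing $http_origin back, as above, effectively allows every origin. A safer variant reflects only allowlisted origins via a map in the http context (the origins shown are hypothetical):

```nginx
map $http_origin $cors_origin {
    default                     "";
    "https://app.example.com"   $http_origin;
    "https://admin.example.com" $http_origin;
}
```

Then use add_header Access-Control-Allow-Origin $cors_origin always; in the location block; requests from any other origin receive an empty header and are blocked by the browser.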
Debugging Nginx Configuration
When things go wrong, these techniques help you find the problem quickly:
- Test before reload: Always run nginx -t before applying changes. This catches syntax errors without affecting running traffic.
- Increase error log verbosity: Change the error_log level from warn to debug temporarily: error_log /var/log/nginx/error.log debug;. This generates enormous output, so use it only for short debugging sessions.
- Check which server block handles a request: Add a custom header like add_header X-Debug-Server "block-name"; to each server block. Inspect response headers with curl -I to see which block responded.
- Watch logs in real time: Use tail -f /var/log/nginx/error.log while making test requests to see errors as they happen.
- Verify upstream connectivity: If proxying fails, test the backend directly with curl http://127.0.0.1:3000 from the server to confirm the backend is responding.
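The debug-header technique looks like this in practice (the header name and value are arbitrary labels you choose):

```nginx
server {
    listen 80;
    server_name example.com;
    # Reveals which block answered: inspect with curl -I http://example.com/
    add_header X-Debug-Server "example-com-http" always;
    root /var/www/example.com/public;
}
```

Remember to remove debug headers before shipping the configuration to production.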
Project management during complex infrastructure changes is critical. Tools like Taskee help development teams track configuration changes, assign deployment tasks, and ensure nothing falls through the cracks when multiple team members work on server infrastructure simultaneously.
Security Hardening Beyond the Basics
The production configuration above includes essential security headers, but hardening Nginx further involves several additional steps:
- Disable server tokens: Add server_tokens off; in the http block to prevent Nginx from revealing its version number in error pages and response headers.
- Restrict HTTP methods: If your site only needs GET and POST, reject other methods: if ($request_method !~ ^(GET|HEAD|POST)$) { return 405; }
- Block suspicious user agents: Bots that do not identify themselves or use known malicious user-agent strings can be blocked at the Nginx level before they reach your application.
- Limit request body size: Set client_max_body_size to the minimum your application needs. A blog might need 10MB for image uploads, while an API that only accepts JSON might need just 1MB.
- Content Security Policy: Add a CSP header to control which resources the browser can load: add_header Content-Security-Policy "default-src 'self'; script-src 'self';" always;
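The user-agent blocking mentioned above can be sketched with a map in the http context (the patterns are illustrative, not a vetted blocklist):

```nginx
map $http_user_agent $blocked_agent {
    default            0;
    ""                 1;   # empty user agent
    ~*(sqlmap|nikto)   1;   # known scanner signatures
}

server {
    listen 80;
    server_name example.com;
    if ($blocked_agent) { return 403; }
    root /var/www/example.com/public;
}
```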
For organizations planning their web infrastructure, consulting with experienced web development agencies can help you design an Nginx configuration that meets your specific security and performance requirements from the start.
Nginx vs. Other Web Servers
Understanding when Nginx is the right choice helps you make better architecture decisions:
- Nginx vs. Apache: Nginx handles static content and concurrent connections more efficiently. Apache has more flexible per-directory configuration via .htaccess files. For most modern deployments, Nginx is the better choice unless you specifically need .htaccess support.
- Nginx vs. Caddy: Caddy provides automatic HTTPS with zero configuration and a simpler config syntax. Nginx offers more granular control, better performance at scale, and a larger ecosystem of modules. Choose Caddy for simplicity, Nginx for control.
- Nginx vs. Traefik: Traefik integrates natively with container orchestrators and provides automatic service discovery. If you are running services on Kubernetes or Docker Swarm, Traefik may require less manual configuration. Nginx remains faster for raw HTTP serving.
Monitoring and Log Analysis
Running Nginx in production without monitoring is flying blind. At minimum, you should track:
- Request rate: How many requests per second Nginx handles. Sudden spikes may indicate a DDoS attack or a viral page.
- Error rates: Track 4xx and 5xx responses separately. A spike in 502 errors means your backend is failing. A spike in 404s may indicate broken links or a scanning bot.
- Response times: Measure both Nginx processing time and upstream response time. Nginx provides the $request_time and $upstream_response_time variables that you can include in your log format.
- Connection counts: Monitor active connections, reading connections, writing connections, and waiting connections. The stub_status module provides these metrics at a dedicated endpoint.
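The stub_status endpoint takes only a few lines; restrict it to internal access:

```nginx
server {
    # Only reachable from the server itself
    listen 127.0.0.1:8080;
    location = /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
```

Querying it with curl http://127.0.0.1:8080/nginx_status returns the active, reading, writing, and waiting connection counts.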
A custom log format that includes timing information makes log analysis significantly easier:
log_format detailed '$remote_addr - [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct=$upstream_connect_time '
'urt=$upstream_response_time';
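Once defined in the http context, the format is activated by referencing its name in an access_log directive:

```nginx
access_log /var/log/nginx/example.com.access.log detailed;
```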
Feed these logs into a monitoring stack like Prometheus with the Nginx Exporter, or use ELK (Elasticsearch, Logstash, Kibana) for full-text search across your logs.
Frequently Asked Questions
What is the difference between Nginx and Apache, and which should I use?
Nginx uses an event-driven architecture that handles thousands of concurrent connections with minimal memory, making it superior for serving static content and acting as a reverse proxy. Apache's traditional prefork MPM dedicates a process to each connection and consumes more resources under load (its newer event MPM narrows the gap), but Apache offers per-directory configuration through .htaccess files. For most modern web applications, Nginx is the recommended choice due to its performance advantages and lower resource consumption. Apache may be preferable if you rely heavily on .htaccess for distributed configuration or need specific Apache modules.
How do I set up SSL/HTTPS with Nginx using Let’s Encrypt?
Install Certbot (the Let’s Encrypt client) with apt install certbot python3-certbot-nginx on Debian/Ubuntu. Run certbot --nginx -d yourdomain.com -d www.yourdomain.com and Certbot will automatically obtain a certificate, modify your Nginx configuration to use it, and set up auto-renewal. Certificates renew automatically via a systemd timer or cron job. For manual configuration, obtain the certificate with certbot certonly --webroot -w /var/www/html -d yourdomain.com and reference the certificate files in your Nginx server block as shown in the production configuration example in this article.
How do I configure Nginx as a reverse proxy for a Node.js or Python application?
Create an upstream block defining your backend server (for example, upstream backend { server 127.0.0.1:3000; }) and use proxy_pass http://backend; inside a location block. Set essential proxy headers: proxy_set_header Host $host, proxy_set_header X-Real-IP $remote_addr, and proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for. These headers ensure your application receives the correct client information. For WebSocket support, add proxy_set_header Upgrade $http_upgrade and proxy_set_header Connection "upgrade". Let Nginx handle SSL termination and static file serving to reduce the load on your application server.
Why am I getting a 502 Bad Gateway error in Nginx?
A 502 Bad Gateway error means Nginx successfully received the client request but could not get a valid response from the upstream (backend) server. The most common causes are: the backend application is not running or has crashed, the upstream address or port in the Nginx configuration is wrong, the backend is overloaded and timing out, or a firewall is blocking the connection between Nginx and the backend. Check the Nginx error log (/var/log/nginx/error.log) for specific details, verify the backend is running with curl http://127.0.0.1:3000 from the server, and ensure the proxy_pass address matches where your application is actually listening.
How can I improve Nginx performance for high-traffic websites?
Start with these optimizations: set worker_processes auto to match your CPU cores, increase worker_connections to 4096 or higher, enable sendfile on and tcp_nopush on for efficient file delivery, and activate gzip compression for text-based content. Enable the open_file_cache directive to avoid repeated filesystem lookups for popular files. For static assets, set long expires headers so browsers cache them locally. Use keepalive connections to upstream backends to eliminate TCP handshake overhead. If a single server is not enough, configure upstream load balancing across multiple backend instances using the least_conn algorithm for the most even distribution.