Nginx: The Lightning-Fast Web Server You Should Be Using

dianfajar
7 January 2024

Introduction to nginx

nginx (pronounced “engine-x”) is a free, open-source, high-performance HTTP server, reverse proxy, and IMAP/POP3 proxy server. Igor Sysoev began developing it in 2002 (the first public release came in 2004) to address the C10K problem: handling 10,000 concurrent connections on a single web server. Since then, nginx has grown to become one of the most widely used web servers in the world.

Some key features and advantages of nginx include:

  • High performance – nginx uses an asynchronous event-driven approach which provides high concurrency and throughput with low memory usage. This makes it well suited to handle modern web applications and APIs with a large number of concurrent connections.
  • Easy scalability – nginx makes it easy to scale horizontally; additional application servers can be placed behind an nginx load balancer as traffic grows.
  • Advanced load balancing – nginx includes a powerful load balancing engine to distribute traffic across multiple application servers and optimize resource utilization.
  • Reverse proxy and caching – nginx can function as a reverse proxy to cache static content, minimize requests to the backend application servers, and provide SSL/TLS termination.
  • Web server capabilities – In addition to reverse proxying and load balancing, nginx can also function as a high-performance web server by directly serving static files, dynamic content, and SSL encrypted traffic.
  • Low resource consumption – nginx has a very small memory footprint compared to other web servers and can handle tens of thousands of concurrent connections with minimal RAM usage.
  • Configuration flexibility – nginx provides a simple, declarative configuration syntax allowing extensive customization to adapt to any use case. Modules further extend nginx capabilities.

With its ability to handle high traffic loads with optimal resource usage, rich feature set, and active open source community, nginx has become an integral part of many modern web application deployments.

Architecture and Performance

Nginx utilizes an event-driven, non-blocking, asynchronous architecture that enables it to handle tens of thousands of concurrent connections with minimal resource usage. This makes nginx highly performant and scalable compared to traditional web servers like Apache that use a synchronous, thread-based model.

The core of nginx consists of a master process and a small set of worker processes that handle requests asynchronously. When a request comes in, it is passed to one of the worker processes as an event using the most efficient notification mechanism available on the operating system (such as epoll on Linux or kqueue on FreeBSD). The worker process handles the request asynchronously, communicating with upstream components like proxied servers or the filesystem using non-blocking I/O operations. It does not wait for responses before moving on to handle other incoming requests.

This event-driven approach minimizes CPU usage even with a large number of concurrent connections, as time-consuming operations do not block the worker process. The asynchronous, non-blocking nature of nginx allows it to handle thousands of connections using very little memory. A single nginx process can support tens of thousands of concurrent connections from clients.

The event-driven architecture makes nginx very good at IO-bound tasks like serving static files, proxying requests or handling slow clients. It can deliver maximum performance with minimal hardware resources required. Sites like Netflix use nginx to handle massive amounts of traffic and requests. Overall, nginx is optimized for high concurrency, high performance and low resource usage.

Load Balancing

Nginx can be used to distribute incoming requests across multiple backend servers, a technique known as load balancing. This improves performance by spreading the workload and prevents overloading any single server.

Nginx supports various load balancing algorithms and techniques:

  • Round Robin: Requests are distributed evenly across the backend servers in a circular order. This is the default method.
  • Weighted Round Robin: Servers can be assigned a weight (e.g. based on performance), and Nginx will send more requests to servers with higher weights.
  • IP Hash: Requests from the same IP get sent to the same backend server. Useful for applications that need session persistence.
  • Least Connections: Directs requests to the server with the fewest active connections, ideal for long sessions.
  • Generic Hash: Requests are distributed based on a user-defined key in the request. Useful for sticking users to specific servers.

Nginx also supports passive health checks to automatically detect unhealthy backends and stop sending requests to them until they recover. Active health checks, which proactively probe backends to detect failures before real traffic reaches them, are available in the commercial NGINX Plus.
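A minimal upstream configuration sketch illustrating these options (the server names, weights, and thresholds are placeholders, not from the article):

```nginx
http {
    upstream backend {
        least_conn;                        # use Least Connections; omit for default Round Robin
        server app1.example.com weight=3;  # Weighted: receives ~3x the traffic of weight=1 peers
        server app2.example.com;
        server app3.example.com max_fails=3 fail_timeout=30s;  # passive health check thresholds
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;     # distribute requests across the upstream group
        }
    }
}
```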

With its flexible load balancing capabilities, Nginx is well-suited for building scalable and reliable web applications and services. The load balancing helps avoid downtime and improves performance for users.

Reverse Proxy: Nginx can be configured as a reverse proxy to cache static content, compress responses, and terminate SSL/TLS connections before passing requests to backend servers. This improves performance, security, and reliability.
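A minimal reverse-proxy sketch, assuming a single backend listening on 127.0.0.1:8080 (the address and header choices are illustrative):

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;            # hand the request to the backend
        proxy_set_header Host $host;                 # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;     # pass the client's real IP along
        proxy_set_header X-Forwarded-Proto $scheme;  # tell the backend http vs https
    }
}
```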

Caching Static Content: As a reverse proxy, nginx can cache responses from backend servers, including static files like images, CSS, and JavaScript. This speeds up load times, since repeated requests are served from nginx’s cache instead of hitting the backend every time. Cached responses are stored on disk, with cache keys kept in a shared memory zone, and entries can be configured to expire after a set time period or to be revalidated against the origin.
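A sketch of caching proxied static assets with proxy_cache (the paths, zone name, and timings are illustrative assumptions):

```nginx
# Cache storage on disk; keys kept in a 10MB shared memory zone
proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=1g inactive=60m;

server {
    listen 80;

    location ~* \.(?:css|js|png|jpe?g|gif)$ {
        proxy_cache static_cache;
        proxy_cache_valid 200 301 10m;     # keep successful responses for 10 minutes
        proxy_pass http://127.0.0.1:8080;  # assumed backend address
    }
}
```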

Compression: Nginx can compress server responses with gzip (built in) or Brotli (via the third-party ngx_brotli module) before sending them to clients. This saves bandwidth and speeds up transfers. Compression is typically enabled for text-based formats like HTML, JSON, XML, and plain text, where it gives the largest savings; note that gzip is off by default and must be switched on in the configuration.
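A typical gzip configuration sketch (the compression level, size threshold, and type list are illustrative; text/html is always compressed once gzip is on):

```nginx
gzip on;             # gzip is off by default and must be enabled explicitly
gzip_comp_level 5;   # balance CPU cost against compression ratio
gzip_min_length 256; # skip very small responses where gzip adds overhead
gzip_types text/plain text/css application/json application/javascript application/xml;
```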

SSL/TLS Termination: Nginx decrypts HTTPS traffic at the edge, then passes unencrypted HTTP to the backend servers. This offloads the CPU-intensive SSL/TLS encryption and decryption to nginx and lets backend services focus on application logic. Nginx can also manage TLS certificates and key pairs centrally, instead of each backend server needing its own certificates.

Virtual Hosting

One of nginx’s most useful features is its ability to host multiple websites on a single server. This is accomplished through virtual hosting, which allows nginx to serve different domains from the same IP address and TCP port.

There are two main approaches to virtual hosting in nginx:

Name-Based Virtual Hosting

With name-based virtual hosting, nginx selects the correct website to show based on the domain name requested. The server looks at the Host header in the HTTP request to determine which site to serve.
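A sketch of two name-based virtual hosts sharing one IP and port (the domain names and document roots are placeholders):

```nginx
server {
    listen 80;
    server_name example.com www.example.com;  # matched against the request's Host header
    root /var/www/example.com;
}

server {
    listen 80;
    server_name blog.example.org;
    root /var/www/blog.example.org;
}
```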

Name-based virtual hosting is convenient as it allows hosting multiple websites without needing dedicated IP addresses for each one. However, it does have some limitations:

  • Historically, SSL/TLS required a dedicated IP address per certificate, because the server had to choose a certificate before it could read the encrypted Host header.
  • Server Name Indication (SNI) solves this by including the requested hostname in the TLS handshake itself. All modern browsers support SNI, but very old clients may not.

IP-Based Virtual Hosting

IP-based virtual hosting uses a different IP address for each website hosted. The server determines the correct site to show based on the destination IP.

With IP-based hosting, each site can use SSL/TLS without any issues. There is no need for SNI support. However, dedicated IPs are required for each website hosted.
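An IP-based equivalent, binding each server block to its own address (the addresses are drawn from the TEST-NET documentation range):

```nginx
server {
    listen 203.0.113.10:80;  # site A answers only on this address
    root /var/www/site-a;
}

server {
    listen 203.0.113.11:80;  # site B has its own dedicated address
    root /var/www/site-b;
}
```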

Overall, name-based virtual hosting is more common today due to the scarcity of available IPv4 addresses, and SNI resolves most of the SSL/TLS issues it once had. IP-based hosting is simpler but requires more IP resources.

Nginx is flexible and supports both name-based and IP-based virtual hosting approaches. It can serve multiple websites from a single server very efficiently.

Static file serving

Nginx is highly optimized for serving static files and can handle thousands of simultaneous connections with low memory usage. This makes it well-suited for serving images, CSS, JavaScript, and other static assets for high traffic websites.

A key advantage of Nginx for static files is its simple and efficient configuration. The root directive is used to define the document root where the files are stored. For example:

root /var/www/html;

The index directive specifies the default file to serve for directories. Typically this is index.html:

index index.html; 

With just these two directives, Nginx will directly serve files from the defined document root. It also caches file metadata and handles partial requests for optimizing large file downloads.

When the open_file_cache directive is enabled (it is off by default), nginx examines a file only on the initial request and caches its descriptor and metadata. Subsequent requests reuse the cached entry rather than re-opening the file on disk. This avoids unnecessary disk I/O and significantly improves performance.
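This metadata caching is controlled by the open_file_cache directives, which are off by default; a sketch of typical settings (the numbers are illustrative):

```nginx
open_file_cache max=1000 inactive=20s;  # cache up to 1000 descriptors; drop entries idle 20s
open_file_cache_valid 30s;              # revalidate cached entries every 30 seconds
open_file_cache_min_uses 2;             # cache only files requested at least twice
open_file_cache_errors on;              # also remember lookup failures (e.g. 404s)
```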

The sendfile directive enables zero-copy transmission of files from Nginx to the client for maximum throughput. Data is copied directly in kernel space from the file to the network socket, bypassing userspace buffers:

sendfile on;

In summary, Nginx provides a very fast and lightweight way to serve static content. With simple configuration of the root and index, it can efficiently handle high volumes of traffic and throughput for static websites and assets.

Rate Limiting

Nginx has built-in rate limiting capabilities that allow restricting the number of connections or requests per client IP address. This helps prevent brute force attacks, DDoS attacks, and overall system overload from excessive requests.

To limit connections per IP address, the limit_conn directive can be used along with the limit_conn_zone directive to create a shared memory zone for storing state. For example:

limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

server {
    ...
    limit_conn conn_limit_per_ip 30;
}

This limits each IP to 30 concurrent connections. The limit_conn_zone directive allocates 10MB of shared memory to store the state.

To limit request rate per IP address, the limit_req directive can be used along with limit_req_zone. For example:

limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=10r/s;

server {
    ...
    limit_req zone=req_limit_per_ip burst=20 nodelay;
}

This limits each IP to 10 requests per second, with a burst capacity of 20 requests to accommodate occasional spikes in traffic. The limit_req_zone directive allocates 10MB of shared memory to store state, and the nodelay parameter serves burst requests immediately rather than spacing them out at the configured rate.

Nginx rate limiting provides a powerful way to protect against abusive clients and traffic surges. Careful configuration of the rate limiting rules allows optimizing for legitimate traffic while limiting malicious actors.

Access Control

Nginx provides several methods for controlling access to your web server and applications. Two commonly used methods are IP-based access rules and password authentication.

IP-based Allow/Deny

Nginx allows administrators to control access to resources based on a client’s IP address or subnet. The allow directive specifies which IP addresses are permitted access, while deny restricts access.

For example, you may want to only allow requests from your office’s IP range:

allow 192.168.1.0/24;
deny all;

This allows the 192.168.1.x subnet and denies everything else. The allow/deny rules are evaluated in order, with the first match applied.

Password Authentication

Nginx can prompt clients for a username and password before allowing access to resources. This is configured using the auth_basic directive.

For example:

auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;

This prompts the client for credentials defined in the .htpasswd file. The prompt text “Restricted” is customizable.

Password authentication allows basic access restriction without the need for separate IP-based rules. It is useful for protecting admin interfaces, APIs, or other resources.
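The two methods can also be combined. A sketch using the satisfy directive so that trusted IPs skip the password prompt (the subnet and paths are placeholders):

```nginx
location /admin/ {
    satisfy any;                                # pass if EITHER the IP rule OR auth succeeds
    allow 192.168.1.0/24;                       # office subnet gets in without a password
    deny all;
    auth_basic "Restricted";                    # everyone else must authenticate
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```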

SSL/TLS Support

Nginx has built-in support for SSL/TLS encryption to secure website connections. This allows websites to serve content over HTTPS, providing encrypted connections between the server and clients.

Nginx is able to terminate SSL/TLS connections and handle the encryption/decryption involved. The web server acts as the endpoint for the encrypted connection, decrypting requests before passing them to the application server and encrypting responses.

This removes the need for each application server to handle SSL separately. Nginx handles SSL connections very efficiently, with minimal overhead.

An important aspect of SSL/TLS configuration is choosing secure cipher suites and settings. Nginx allows fine-grained control over the cipher suites used for encryption. Strong cipher suites like AES can be prioritized, while weak and obsolete ciphers are disabled.

Cipher settings in Nginx are optimized for performance and security out of the box. But the configurations can also be customized as needed for specific security policies and hardware. Key exchange algorithms, encryption ciphers, MAC digests and more can all be configured.

Nginx supports advanced TLS protocols including TLS 1.3, providing fast and secure HTTPS connectivity. TLS session caching and session tickets help reduce SSL handshake overhead.
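A sketch of an HTTPS server block tying these settings together (the certificate paths and cipher string are illustrative, not a vetted security policy):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    ssl_protocols TLSv1.2 TLSv1.3;     # drop obsolete protocol versions
    ssl_ciphers HIGH:!aNULL:!MD5;      # prefer strong suites, exclude weak ones
    ssl_session_cache shared:SSL:10m;  # reuse sessions to cut handshake overhead
}
```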

Overall, the flexible SSL/TLS termination and optimized cipher configurations make Nginx an efficient choice for handling HTTPS traffic securely.

Customization

One of the strengths of nginx is how customizable it is through modules, extensions, scripting, and configuration. This makes nginx flexible enough to handle many different use cases.

Modules and Extensions

Nginx has a modular architecture that allows many features to be added through modules. There are official nginx modules as well as third party modules. Some examples include:

  • ngx_http_gzip_module – for gzip compression of responses
  • ngx_http_geoip_module – for looking up visitor country by IP
  • ngx_http_image_filter_module – for image manipulation and resizing
  • ngx_http_postgres_module – for accessing PostgreSQL databases

Modules allow nginx to be extended for needs like security, monitoring, localization, uploads, APIs, and more.

Custom Headers and Rewrites

Nginx can be used to add, modify, or remove any part of HTTP requests and responses. This includes adding custom headers, changing the user-agent, removing default headers, and rewriting URLs.

For example, the proxy_set_header directive can set custom headers to better identify client requests. Rewrite rules can redirect or modify URIs to handle custom URL structures.
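A sketch of both techniques (the X-Request-Start header name and the URL paths are hypothetical examples, not from the article):

```nginx
location /api/ {
    proxy_set_header X-Request-Start $msec;  # hypothetical custom header: request start time
    proxy_pass http://127.0.0.1:8080;        # assumed backend address
}

# Rewrite legacy URLs to the new structure with a permanent (301) redirect
rewrite ^/old-blog/(.*)$ /blog/$1 permanent;
```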

Lua Scripting

Nginx supports embedded Lua scripts to implement advanced web application logic. Lua code can be run during various phases of request processing to customize how nginx handles requests and responses.

Some examples of what Lua scripting enables:

  • Dynamic control of nginx configuration
  • Access and modify request/response headers
  • Make backend API calls
  • Run custom authentication logic
  • Implement rate limiting policies
  • Parse JSON, process images, and more

Lua support allows implementing practically any use case without having to touch nginx core code. It brings the programmability and flexibility of a web framework to nginx.
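As a sketch, a hypothetical token check run in the access phase (this assumes nginx is built with the third-party lua-nginx-module, as shipped with OpenResty; the header name and secret are placeholders):

```nginx
location /api/ {
    access_by_lua_block {
        -- reject requests missing the expected token header
        local token = ngx.var.http_x_api_token
        if token ~= "expected-secret" then
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
    }
    proxy_pass http://127.0.0.1:8080;  # assumed backend address
}
```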