Scaling Beyond a Single Server: A Guide to Nginx Load Balancing

Arafat Islam
April 28, 2026
4 min read

Introduction

Building an application is only half the battle. As your user base grows, a single virtual machine (VM) eventually hits its performance ceiling. Whether the bottleneck is CPU exhaustion or memory saturation, at some point you will need to scale horizontally by adding more servers. But how do you distribute traffic across multiple instances seamlessly?

The answer is load balancing, and Nginx is one of the most popular, reliable, and efficient tools for the job. In this guide, we will explore how to configure Nginx as a load balancer for your Depnix-managed VMs, ensuring high availability and improved performance.

Why Use Nginx for Load Balancing?

Nginx began as a web server, but its asynchronous, event-driven architecture makes it an exceptional load balancer. It can handle tens of thousands of concurrent connections with a very low memory footprint. By placing Nginx in front of your application servers, you gain:

  • Scalability: Easily add or remove backend servers as demand fluctuates.
  • Redundancy: If one server fails, Nginx redirects traffic to healthy ones.
  • SSL Termination: Handle HTTPS encryption at the load balancer level to reduce the load on your app servers.
  • Flexibility: Use various algorithms to decide how traffic is distributed.

The Core Concept: The Upstream Module

To configure Nginx as a load balancer, you define an upstream block. This directive declares a named group of servers that Nginx can proxy requests to, and it lives inside the http context of your Nginx configuration.

Basic Configuration

Here is a simple example where we balance traffic between two application servers:

http {
    upstream my_app_servers {
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://my_app_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

In this setup, Nginx listens on port 80 and passes all incoming traffic to the my_app_servers group.
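
Before putting this into production, it is worth validating the syntax and then reloading Nginx without dropping connections. The commands below are a minimal sketch, assuming Nginx was installed from your distribution's packages and runs under systemd:

# Check the configuration for syntax errors
sudo nginx -t

# Apply the new configuration gracefully, without dropping active connections
# (assumes a systemd-based distribution; `sudo nginx -s reload` also works)
sudo systemctl reload nginx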

Load Balancing Algorithms

Nginx offers several methods to distribute traffic. Choosing the right one depends on your application's needs.

1. Round Robin (Default)

Requests are distributed across the servers in sequence, so each one receives roughly the same share of traffic. This method works best when all backend servers have similar hardware specifications. No extra directive is needed, as shown in the sketch below (the IP addresses are placeholders):
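
upstream my_app {
    # No algorithm directive means round robin is used
    server 10.0.0.1;
    server 10.0.0.2;
}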

2. Least Connections

Nginx tracks how many active connections each server has and sends the next request to the server with the fewest active sessions. This is ideal for requests that take a varying amount of time to process.

upstream my_app {
    least_conn;
    server 10.0.0.1;
    server 10.0.0.2;
}

3. IP Hash

This method uses the client's IP address to determine which server should handle the request. This ensures that a specific user always reaches the same backend server, which is helpful for applications that store session data locally (sticky sessions).

upstream my_app {
    ip_hash;
    server 10.0.0.1;
    server 10.0.0.2;
}

Health Checks and Reliability

One of the primary benefits of a load balancer is its ability to handle server failures. Nginx performs "passive" health checks by default. If a server fails to respond or returns an error, Nginx will mark it as unavailable and try another server in the group.

Parameters for Control

  • max_fails: The number of failed attempts within the fail_timeout window after which Nginx marks the server unavailable (default is 1).
  • fail_timeout: Both the window in which those failures are counted and the length of time the server is then considered unavailable (default is 10 seconds).

upstream backend {
    server 10.0.0.1 max_fails=3 fail_timeout=30s;
    server 10.0.0.2;
}

Advanced Tips for Production

SSL Termination

Instead of managing SSL certificates on every single app server, you can install your certificates on the Nginx load balancer. Nginx handles the heavy lifting of decryption and communicates with your backend servers over a secure private network.
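
As a rough sketch, the server block below terminates TLS at the load balancer and forwards plain HTTP to the upstream group defined earlier. The domain name, certificate paths, and the X-Forwarded-Proto header are assumptions you would adapt to your own setup:

server {
    listen 443 ssl;
    server_name example.com;                           # placeholder domain

    ssl_certificate     /etc/nginx/ssl/example.crt;    # assumed certificate path
    ssl_certificate_key /etc/nginx/ssl/example.key;    # assumed key path

    location / {
        proxy_pass http://my_app_servers;              # plain HTTP over the private network
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;    # tells the app the original scheme was HTTPS
    }
}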

Weighting Servers

If you have one server that is significantly more powerful than the others, you can assign it a higher weight:

upstream backend {
    server 10.0.0.1 weight=3; # Receives three requests for every one sent to 10.0.0.2
    server 10.0.0.2 weight=1;
}

Conclusion

Implementing Nginx as a load balancer is a foundational step in building a resilient, scalable infrastructure. It allows you to grow your application across multiple Depnix VMs without changing how your users access your service. By mastering upstream configurations, choosing the right balancing algorithm, and implementing health checks, you ensure your application remains fast and available even under heavy load.

Ready to scale? Deploy your next set of application nodes on Depnix and place an Nginx load balancer in front today!