Nginx: Setting Up Nginx with Load Balancing
This guide explains how to configure Nginx as a load balancer for multiple backend applications running on different ports. The setup includes three application servers running on ports 8001, 8002, and 8003, with Nginx balancing requests among them.
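Before configuring Nginx, each backend needs its own document root containing a page that identifies it. A minimal sketch (the paths match the `root` directives in the server blocks below; the page text is an assumption chosen so the load-balancing test output is easy to read):

```shell
# Create a document root and an identifying index page for each backend.
# Run as root (or with sudo); paths match the root directives used below.
for n in 1 2 3; do
    mkdir -p /usr/share/nginx/server$n
    echo "Application $n" > /usr/share/nginx/server$n/index.html
done
```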
Nginx Configuration Details
Individual Server Blocks
Each backend application is configured as an independent Nginx server block:
Server 1 Configuration (server1.conf)
[root@server1 conf.d]# cat /etc/nginx/conf.d/server1.conf | grep -v "#"
server {
listen 8001;
server_tokens off;
server_name _;
client_max_body_size 35M;
charset utf-8;
large_client_header_buffers 4 16k;
root /usr/share/nginx/server1;
index index.html index.php index.cgi;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
}
Server 2 Configuration (server2.conf)
[root@server1 conf.d]# cat /etc/nginx/conf.d/server2.conf | grep -v "#"
server {
listen 8002;
server_tokens off;
server_name _;
client_max_body_size 35M;
charset utf-8;
large_client_header_buffers 4 16k;
root /usr/share/nginx/server2;
index index.html index.php index.cgi;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
}
Server 3 Configuration (server3.conf)
[root@server1 conf.d]# cat /etc/nginx/conf.d/server3.conf | grep -v "#"
server {
listen 8003;
server_tokens off;
server_name _;
client_max_body_size 35M;
charset utf-8;
large_client_header_buffers 4 16k;
root /usr/share/nginx/server3;
index index.html index.php index.cgi;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
}
Load Balancer Configuration
The main Nginx configuration (nginx.conf) includes an upstream block to balance traffic across the three servers:
[root@server1 conf.d]# cat /etc/nginx/nginx.conf | grep -v "#"
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
upstream appstack {
server 127.0.0.1:8001 weight=1;
server 127.0.0.1:8002 weight=1;
server 127.0.0.1:8003 weight=1;
}
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
location / {
proxy_pass http://appstack;
}
}
}
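The upstream block above uses the default round-robin method with equal weights, so each backend receives one request in turn. Other distribution strategies can be swapped in; a sketch of common variants (the directives are standard nginx, but the weights shown here are illustrative):

```nginx
upstream appstack {
    # least_conn;                     # alternative: pick the backend with the fewest active connections
    server 127.0.0.1:8001 weight=2;   # receives twice as many requests as the others
    server 127.0.0.1:8002 weight=1;
    server 127.0.0.1:8003 backup;     # only used when the primary servers are unavailable
}
```

After any configuration change, validate and reload with `nginx -t && systemctl reload nginx`.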
Testing the Load Balancer
To verify that the load balancer is distributing requests evenly among the backend servers, the following command can be used:
while true; do elinks --dump http://localhost; sleep 1; done
Expected Output (Rotating Applications)
Application 1
Application 2
Application 3
Application 1
Application 2
Application 3
Application 1
Application 2
Conclusion
This configuration ensures that incoming requests are evenly distributed among the three backend application servers, improving performance and reliability. Additional configurations such as session persistence, SSL termination, or rate limiting can be added based on requirements.
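As one example of the extensions mentioned above, session persistence can be added with the standard ip_hash directive, which pins each client IP to the same backend (a sketch only; the plain server lines from the original upstream block are kept):

```nginx
upstream appstack {
    ip_hash;                  # hash the client IP so each client sticks to one backend
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}
```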
=======================================================================
Thank you for reading! Check out more blogs for similar information.
For any queries, feel free to reach out to me at shubhammore07007@gmail.com