worker_connections are not enough - Nginx, Docker
On my production server we have several upstreams, which are Docker containers running behind an Nginx reverse proxy. One of these containers is an MQTT broker (Mosquitto) that we connect to over WebSockets. This is our nginx.conf file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream br-frontend {
        server br-frontend:3000;
    }

    upstream br-backend {
        server br-backend:5000;
    }

    upstream mosquitto {
        server mosquitto:9001;
    }

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    server {
        listen 443 ssl default_server;
        server_name _;

        location / {
            proxy_pass http://br-frontend/;
        }

        location /api {
            proxy_pass http://br-backend;
        }

        location /swagger.json {
            proxy_pass http://br-backend/swagger.json;
        }

        location /swaggerui {
            proxy_pass http://br-backend/swaggerui;
        }

        location /mosquitto-ws {
            proxy_pass http://mosquitto;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }

    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
Yesterday our production server crashed with the following error. I've read that I could increase the number of worker_connections, but I don't think that is the best solution. I've also read in other questions that I might have an infinite proxy loop in my nginx.conf, but I'm not able to see one.
2018/12/14 00:23:12 [alert] 6#6: 1024 worker_connections are not enough
2018/12/14 00:23:13 [alert] 6#6: *14666 1024 worker_connections are not enough while connecting to upstream, client: *.*.*.*, server: _, request: "GET /mosquitto-ws HTTP/1.1", upstream: "http://172.21.0.5:9001/mosquitto-ws", host: "****"
Update: docker-compose.yml
version: '3'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - './data/db:/data/db'
      - './data/configdb:/data/configdb'
    ports:
      - 27017:27017
  br-backend-express:
    working_dir: /app
    command: npm run execute-prod
    image: ${ACR}/br-backend-express:${tag}
    ports:
      - "5000:5000"
    depends_on:
      - mongodb
  mosquitto:
    image: ${ACR}/mosquitto:${tag}
    depends_on:
      - br-backend-express
  br-bridge:
    working_dir: /app
    image: ${ACR}/br-bridge:${tag}
    command: npm run execute-prod
    depends_on:
      - mosquitto
      - mongodb
  br-frontend:
    image: ${ACR}/br-frontend:${tag}
  nginx:
    image: ${ACR}/nginx:${tag}
    ports:
      - 443:443
      - 80:80
    depends_on:
      - br-frontend
      - br-backend-express
Any help would be appreciated. Thanks.
According to the Nginx docs on worker_connections:
"… Sets the maximum number of simultaneous connections that can be opened by a worker process. It should be kept in mind that this number includes all connections (e.g. connections with proxied servers, among others), not only connections with clients. …"
Given that your config proxies every request to an upstream, it shouldn't be surprising that under heavy enough traffic Nginx starts to run out of free connections. Each proxied WebSocket client holds two connections open for its whole lifetime (one to the client, one to the Mosquitto upstream), so 1024 worker_connections can be exhausted by roughly 512 concurrent /mosquitto-ws clients.
The docs also mention worker_rlimit_nofile, which should be adjusted accordingly (each connection needs a file descriptor), so pay attention to that as well.
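For illustration, a minimal sketch of what raising both limits could look like; the 4096 and 8192 values are placeholders I'm assuming here, not numbers tuned for your workload:

# main (top-level) context of nginx.conf
worker_rlimit_nofile 8192;       # per-worker open-file limit; keep it at least worker_connections, plus headroom

events {
    worker_connections 4096;     # counts upstream connections too, so each proxied WebSocket client uses two
}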
And finally, although it's not directly related to this issue, I'd recommend using worker_processes auto anyway, so that Nginx spawns one worker per available core and spreads the load across them.
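Since worker_connections is a per-worker limit, running more workers also multiplies the total connection capacity. A sketch of the combined top of nginx.conf, again with assumed values:

worker_processes auto;           # one worker per available CPU core
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;     # total capacity is roughly worker_processes * worker_connections
}

http {
    # ... the rest of your existing config stays the same ...
}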