The purpose of this article is to explain the default configurations at the Node.js, Apache, and OS levels that can cause bottlenecks, and how to change them. Any of the configurations discussed below can affect client request processing time.
In the example below, the Node.js server runs behind a reverse proxy server such as Apache, and the PM2 process manager is used to load-balance the Node.js application.

Node.js
Concurrent Requests
By default, how many concurrent requests can Node.js handle?
JavaScript execution in Node.js is single-threaded, so only one request's code runs at any instant; but because I/O is asynchronous and driven by the event loop, Node.js can have many requests in flight at any given point in time.
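As a minimal sketch of that model: synchronous code always runs to completion before any queued callback gets a turn on the event loop.

```javascript
const order = [];

// Queued on the event loop; runs only after the current synchronous code finishes.
setTimeout(() => order.push('callback'), 0);

order.push('sync');

// At this point only 'sync' has run; 'callback' is still waiting in the event loop.
```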
Every connection to/from the node server needs a socket created by the kernel. In Unix-based systems, any I/O operation (socket, accept, read, write, etc.) done by a process needs a file descriptor, and by default each process is limited in how many file descriptors it can create. Below are the limits for a Node.js server process.
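The per-process limits can be read from /proc; a sketch, where the PID is a placeholder for your node process's actual PID:

```shell
# Find the node process ID(s), then dump the resource limits of one of them.
pgrep -f node
cat /proc/<pid>/limits | grep "open files"
```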
Per the limits above, each node process can open up to 16384 file descriptors. It is worth checking how many file descriptors each PM2-managed Node.js process has actually created.
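One way to count them, again with a placeholder PID, is to list the process's fd directory under /proc:

```shell
# Count open file descriptors for one node process (<pid> is a placeholder):
ls /proc/<pid>/fd | wc -l

# Or across all node processes managed by PM2:
for pid in $(pgrep -f node); do echo "$pid: $(ls /proc/$pid/fd | wc -l)"; done
```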
If a Node.js process reaches this limit (16384) and asks the kernel to create a new file descriptor, the kernel returns EMFILE ("The per-process limit on the number of open file descriptors has been reached") and Node.js will throw an EMFILE error.
To fix this, we can increase the limit by adding a line at the end of the file /etc/security/limits.conf.
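A sketch of such an entry; the value 65535 is illustrative, and `*` applies the limit to all users:

```shell
# /etc/security/limits.conf
# <domain>  <type>  <item>   <value>
*           soft    nofile   65535
*           hard    nofile   65535
```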
Note: if you scope the limit to a specific user rather than to all users, make sure the PM2/node process is actually started by that user.
There is also a system-wide limit on the number of file descriptors that can be created, along with a counter of how many are currently allocated.
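Both can be read from /proc:

```shell
# System-wide maximum number of file descriptors:
cat /proc/sys/fs/file-max

# Allocated / unused / maximum file handles right now:
cat /proc/sys/fs/file-nr
```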
To test the various configurations on Node.js and Apache, a blocking request handler and a non-blocking request handler are used.
Requests are routed to Node.js through an Apache reverse-proxy configuration in httpd.conf.
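A sketch of that httpd.conf section, assuming node listens on 8443 locally (module paths and the timeout value are illustrative):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    ProxyPreserveHost On
    # Wait up to 600 s (10 min) for node before giving up on the request.
    ProxyPass        / http://127.0.0.1:8443/ timeout=600
    ProxyPassReverse / http://127.0.0.1:8443/
</VirtualHost>
```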
As for the OS, the experiments below were run on CentOS.
Node.js Backlog
What is the default backlog of the node process?
If a blocking request is executing in Node.js, how many further connections can be kept in the backlog?
In Unix-based systems, the maximum number of connections that can be queued in a socket's backlog is capped by the kernel setting net.core.somaxconn.
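You can read the current value with sysctl:

```shell
# Default on many distributions is 128:
sysctl net.core.somaxconn
```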
For example, suppose 131 blocking requests are sent to Node.js concurrently.
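The exact load-generation command isn't shown here; an equivalent run with ApacheBench might look like this (the URL and /block route are assumptions):

```shell
# 131 concurrent requests against the blocking endpoint.
ab -n 131 -c 131 http://localhost:8443/block
```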
Using the netstat command, we can see that 131 connections arrived on port 8443.
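Hypothetical inspection commands; adjust the port to match your server:

```shell
netstat -ant | grep 8443 | wc -l              # total connections on port 8443
netstat -ant | grep 8443 | grep ESTABLISHED   # connections accepted by node
ss -lnt 'sport = :8443'                       # Recv-Q shows the accept-queue depth
```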
Of those, one connection is established with the node process, 129 connections are queued on port 8443 waiting to be accepted, and 1 is in the SYN state.
How long a connection can stay in the SYN state is governed by the kernel's SYN-retry settings.
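The relevant knobs can be read with sysctl; retransmissions back off exponentially, so the retry count determines the total timeout:

```shell
# Retries for client-side SYNs and for server-side SYN-ACKs:
sysctl net.ipv4.tcp_syn_retries
sysctl net.ipv4.tcp_synack_retries
```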
With the default retry counts, a SYN connection times out after roughly 100 seconds; once the retries are exhausted, the server sends a connection reset.
The 131st request received a TCP RST, which surfaced to the client as "Connection reset by peer" after 102 seconds.
To fix this, we can increase the backlog allowed on a socket by raising net.core.somaxconn. To make the setting permanent, add it to /etc/sysctl.conf, then run sysctl -p (or reboot) for it to take effect.
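A sketch of both steps; the value 1024 is illustrative:

```shell
# Raise the maximum socket backlog immediately:
sysctl -w net.core.somaxconn=1024

# Make it permanent by appending this line to /etc/sysctl.conf:
#   net.core.somaxconn = 1024
# then reload:
sysctl -p
```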
After raising the kernel limit, if 1000 blocking requests are sent to the node server, only 512 requests end up in the backlog, and after 105 seconds the 513th request gets "Connection reset by peer". Only 512 are queued because the default backlog of the node server itself is 511.
Node's default backlog can be increased by passing a backlog argument to the listen method.
After the above setting, when 1000 blocking requests are made, 999 requests are kept in the backlog.
Connection timeout
If 131 non-blocking requests are sent to node, all 131 connections arriving on port 8443 become established with the node process: 0 connections wait in the SYN backlog and 0 wait on port 8443 to be accepted.
After 2 minutes, the first request is disconnected, because Node.js has a default socket timeout of 2 minutes, after which an idle connection is terminated.
This timeout can be increased using the server's setTimeout method.
Note: the timeout only applies while a request is waiting in the event loop; it has no effect on a (blocking) request that is currently executing in Node.js.
Apache
Proxying to node
Requests are proxied from Apache to node using mod_proxy. Note the timeout setting: when Apache forwards a request to node, it waits 600 seconds (10 minutes) for a response before terminating the request.
Having fine-tuned Node.js, let's look at the Apache configuration.
502 Proxy error
By default, net.core.somaxconn is set to 128. Suppose 131 blocking requests are sent through httpd to Node.js, the same way as before but against port 80.
All 131 requests establish with httpd on port 80, and httpd forwards all 131 to Node.js on port 8443. On port 8443, 1 connection sits in the SYN backlog, 129 are queued waiting to be accepted, and 1 request is being served by the node process. The 131st request gets a TCP RST from Node.js, and Apache sends a 502 Proxy Error to the client.
Note: Apache will also send a 502 Proxy Error when the Node.js process is killed with SIGTERM (kill -15) or SIGKILL (kill -9), or is restarted.
Server limit
When 512 non-blocking requests are sent to Apache, only 256 requests are forwarded to Node.js.
Note: the SYN connection timeout is again ~100 seconds; after the retries are exhausted, the connection reset is sent by the Apache server.
Why are only 256 requests sent to Node.js? Because, by default, ServerLimit is 256, so Apache can create only 256 processes (one process for each request).
This can be increased by adding configuration to Apache's httpd.conf.
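A sketch of such tuning for the prefork MPM; the values are illustrative, and on Apache versions before 2.4 the MaxRequestWorkers directive is called MaxClients:

```apache
<IfModule mpm_prefork_module>
    ServerLimit          512
    MaxRequestWorkers    512
</IfModule>
```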
Apache Backlog
Even after increasing the socket backlog by raising net.core.somaxconn to 1024, when 1000 requests are sent to Apache, Apache still only queues 512 of them.
Apache, by default, has a backlog of 511 requests. This can be increased with a setting in the httpd.conf configuration.
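The directive for this is ListenBacklog; the value here is illustrative, and the effective backlog is still capped by net.core.somaxconn:

```apache
ListenBacklog 1024
```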
I hope this helps you find bottlenecks at the Node.js, Apache, and OS levels. – RC