This article is reproduced from stackoverflow.com.
fastcgi nginx

How can I make nginx handle fastcgi requests concurrently?

Published on 2020-05-25 12:50:17

Using a minimal fastcgi/nginx configuration on ubuntu 18.04, it looks like nginx only handles one fastcgi request at a time.

# nginx configuration
location ~ \.cgi$ {
    # Fastcgi socket
    fastcgi_pass  unix:/var/run/fcgiwrap.socket;

    # Fastcgi parameters, include the standard ones
    include /etc/nginx/fastcgi_params;
}
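
For context, a fuller minimal configuration might look like the sketch below. The socket path matches the question; the listen port and document root are assumptions to adjust for your installation.

```
# Hypothetical minimal server block -- port and root are assumptions
server {
    listen 80;
    root /var/www/html;

    location ~ \.cgi$ {
        # Fastcgi socket
        fastcgi_pass  unix:/var/run/fcgiwrap.socket;

        # Fastcgi parameters, include the standard ones
        include       /etc/nginx/fastcgi_params;
    }
}
```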

I demonstrate this by using a cgi script like this:

#!/bin/bash

echo "Content-Type: text/plain"
echo
sleep 5
echo "Hello world"

Use curl to access the script from two side-by-side command prompts, and you will see that the server handles the requests sequentially.
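
One way to script that check, as a sketch: time two requests launched in parallel. The `sleep 1` commands below stand in for the curl calls so the snippet runs without a server; the URL in the comments is an assumption to adapt to your setup.

```shell
#!/bin/bash
# Time two slow requests launched in parallel. Against a live server,
# replace each `sleep 1` with e.g. `curl -s http://localhost/test.cgi`
# (that URL is an assumption); the sleeps let the snippet run standalone.
start=$(date +%s%N)        # nanoseconds since epoch (GNU date)
sleep 1 &                  # stand-in for the first request
sleep 1 &                  # stand-in for the second request
wait                       # block until both background jobs finish
elapsed_ms=$(( ( $(date +%s%N) - start ) / 1000000 ))
echo "elapsed: ${elapsed_ms} ms"
# Sequential handling would take ~2000 ms; concurrent handling ~1000 ms.
```

If the two 5-second CGI requests return after roughly 10 seconds total, the server is serializing them.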

How can I ensure nginx handles fastcgi requests in parallel?

Questioner: Steve Hanov
Viewed: 67

Answered by Soleil - Mathieu Prévot on 2020-03-12 06:55

In order to have Nginx handle fastcgi requests in parallel, you'll need several things:

  1. Nginx >= 1.7.11 for thread pools, and this configuration:
worker_processes N;  # N as an integer, or auto

where N is the number of worker processes; with auto, the number of processes equals the number of cores. If you have a lot of I/O, you might want to go beyond this number (having as many processes/threads as cores is no guarantee that the CPU will be saturated).

In terms of NGINX, the thread pool is performing the functions of the delivery service. It consists of a task queue and a number of threads that handle the queue. When a worker process needs to do a potentially long operation, instead of processing the operation by itself it puts a task in the pool’s queue, from which it can be taken and processed by any free thread.

Consequently, you want to choose N bigger than the maximum number of parallel requests you expect. You can pick, say, 1000 even if you have only 4 cores; for I/O-bound work, threads mostly cost memory, not much CPU.
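
As a sketch, the worker-count directive lives at the top level of nginx.conf; the events block and its worker_connections value below are illustrative additions, not part of the original question's configuration.

```
# nginx.conf -- main context
worker_processes auto;        # or an explicit integer N

events {
    worker_connections 1024;  # illustrative; max connections per worker
}
```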

  2. When you have many I/O requests with large latencies, you'll also need aio threads in the 'http', 'server', or 'location' context, which is short for:
# in the 'main' context
thread_pool default threads=32 max_queue=65536;

# in the 'http', 'server', or 'location' context
aio threads=default;

Switching from Linux to FreeBSD can also be an alternative when dealing with slow I/O. See the referenced blog post for a deeper understanding.

Thread Pools in NGINX Boost Performance 9x! (www.nginx.com/blog)