Question

I have Nginx 1.4.4 and PHP 5.5.6. I'm making long-polling requests. The problem is that if I cancel the HTTP request sent via Ajax, the requests keep processing (they don't stop). I tested it with the PHP mail() function at the end of the file, and the mail still arrives (the script didn't stop).

I'm worried because I think it might crash the server due to the high load of unclosed requests. Yes, I tried ignore_user_abort(false); but nothing changed. Is it possible that I should change something in Nginx?

  location ~ \.php$ {    
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;  
  }

Solution

The bad news is you are almost certainly not going to be able to solve your problem the way you want to solve it. The FastCGI record a web server sends when a client closes its connection before receiving the response is FCGI_ABORT_REQUEST:

A Web server aborts a FastCGI request when an HTTP client closes its transport connection while the FastCGI request is running on behalf of that client. The situation may seem unlikely; most FastCGI requests will have short response times, with the Web server providing output buffering if the client is slow. But the FastCGI application may be delayed communicating with another system, or performing a server push.

Unfortunately, it looks like neither the original FastCGI implementation nor PHP-FPM supports the FCGI_ABORT_REQUEST record, so the running script can't be interrupted.

The good news is that there are better ways to solve this problem. Basically, you should never have requests that take a long time to process. Instead, when a request needs lengthy processing, you should (see the sketch below the lists):

  • Push it to a queue of tasks that need to be processed.
  • Return the 'task ID' to the client.
  • Have the client poll periodically to see if that 'task' is completed and, once it is, display the results.

In addition to those 3 basic things, if you're concerned about wasting system resources when a client is no longer interested in the results of a request, you should add:

  • Break tasks into small pieces of work, and only move a task from one work 'state' to the next if the client is still asking for the result.
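
As a rough sketch of the submit-and-return-an-ID step (the endpoint name, the enqueue_task() helper and the field names are all made up; the queue itself is discussed further down), the initial request handler might look like:

<?php
// create_task.php - hypothetical "submit" endpoint: queue the work, return an ID.
$taskId = uniqid('task_', true);

// enqueue_task() is a made-up helper: push the job onto whatever queue you use.
enqueue_task(array(
    'id'    => $taskId,
    'state' => 'TASK_STATE_QUEUED',
    'url'   => $_POST['image_url'],
));

// The client keeps this ID and polls a status endpoint with it.
header('Content-Type: application/json');
echo json_encode(array('taskId' => $taskId));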

You don't say what your long-running task is, so let's pretend it's to download a large image file from another server, manipulate that image, and then store it in S3. The states for this task would then be something like:

TASK_STATE_QUEUED
TASK_STATE_DOWNLOADING //Moves to next state when finished download
TASK_STATE_DOWNLOADED
TASK_STATE_PROCESSING  //Moves to next state when processing finished
TASK_STATE_PROCESSED
TASK_STATE_UPLOADING_TO_S3 //Moves to next state when uploaded
TASK_STATE_FINISHED

So when the client sends the initial request, it gets back a taskID and then when it queries the state of that task, either:

  • The server reports that the task is still being worked on

or

  • If it's in one of the following states, the client request bumps it to the next status.

i.e.

TASK_STATE_QUEUED => TASK_STATE_DOWNLOADING
TASK_STATE_DOWNLOADED => TASK_STATE_PROCESSING
TASK_STATE_PROCESSED => TASK_STATE_UPLOADING_TO_S3

So only requests that the client is interested in continue to be processed.
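
As a rough illustration of that bumping logic (load_task() and save_task() are made-up storage helpers; the state names are the ones from the list above), the polling endpoint might look something like this:

<?php
// poll_task.php - hypothetical status endpoint: report the state, and bump
// "resting" states to the next working state only while the client still asks.
$task = load_task($_GET['taskId']);   // made-up helper: fetch the task record

$bump = array(
    'TASK_STATE_QUEUED'     => 'TASK_STATE_DOWNLOADING',
    'TASK_STATE_DOWNLOADED' => 'TASK_STATE_PROCESSING',
    'TASK_STATE_PROCESSED'  => 'TASK_STATE_UPLOADING_TO_S3',
);

if (isset($bump[$task['state']])) {
    $task['state'] = $bump[$task['state']];
    save_task($task);                 // made-up helper: persist; a worker picks it up
}

header('Content-Type: application/json');
echo json_encode(array('state' => $task['state']));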

btw I'd strongly recommend using something that is designed to perform well as a queue for holding the tasks (e.g. RabbitMQ, Redis or Gearman) rather than just using MySQL or any other database. Basically, SQL just isn't that great at acting as a queue, and you're better off using the appropriate technology from the start rather than starting with the wrong tech and then having to swap it out in an emergency when your database becomes overloaded from doing hundreds of inserts and updates per second just to manage the tasks.
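
For example, a minimal enqueue/dequeue sketch with Redis (this assumes the phpredis extension; the queue and field names are made up, and the same idea maps onto RabbitMQ or Gearman):

<?php
// Producer side (e.g. inside create_task.php): push the task onto a Redis list.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$taskId = uniqid('task_', true);
$redis->lPush('image_tasks', json_encode(array(
    'id'  => $taskId,
    'url' => 'http://example.com/large-image.jpg',
)));

// Worker side (a separate long-running CLI script): block until a task arrives.
// brPop() returns array(queueName, payload), or an empty array on timeout.
while (true) {
    $item = $redis->brPop(array('image_tasks'), 30);
    if ($item) {
        process_task(json_decode($item[1], true));   // made-up worker function
    }
}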

As a side benefit, by breaking the long-running process up into tasks, it becomes really easy to:

  1. See where the processing time is being spent.
  2. See and detect fluctuations in processing time (e.g. if CPUs reach 100% utilization, then the image resize will suddenly take much longer).
  3. Throw more resources at the steps that are slow.
  4. Give status update messages to the client so they can see the task's progress, which gives a better UX than it just sitting there 'doing nothing'.

OTHER TIPS

What exactly are you doing in those long running requests? If whatever you are doing is causing the FastCGI process to wait for some system call like waiting for a database to return a result, the aborted HTTP client connection will not cause this call to be interrupted. If I recall correctly, the effect of ignore_user_abort(false) is merely that the PHP script is aborted as soon as it tries to output something to the (now lost) connection. The script will not write any output while it is waiting for a system call.

If possible, you should split the task the long-running script is performing into smaller chunks and check the connection status in between processing them. Ensure that the script stops if the connection has been closed:

while (!$done_yet) {
    // Note: connection_status() only changes once PHP has tried to send
    // output to the (now closed) connection (see the caveat below).
    if (connection_status() != CONNECTION_NORMAL) {
        break;
    }
    do_more_work();
}

In the PHP documentation you'll find more information on connection handling if you like.
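
One caveat worth spelling out: PHP only notices the lost connection when it actually tries to send output, so a loop that does pure computation will never see the abort. Below is a sketch of a chunked loop that forces that check; do_more_work() is a hypothetical helper, and this assumes nginx's fastcgi_ignore_client_abort is left at its default of off so nginx actually closes the FastCGI connection when the browser disconnects:

<?php
ignore_user_abort(true);        // keep running after the abort so we can exit cleanly

$done_yet = false;
while (!$done_yet) {
    echo " ";                   // heartbeat byte: gives PHP something to send
    flush();                    // push it out (add ob_flush() if output buffering is on)

    if (connection_aborted()) { // now reflects the lost connection
        // client is gone: release locks / log, then stop working
        break;
    }

    $done_yet = do_more_work(); // hypothetical helper: process one small chunk
}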

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow