limit_req_zone to sleep instead of an error 503 or 404
CM Fields
cmfileds at gmail.com
Thu Jun 21 21:55:48 UTC 2012
I would like to propose replacing the error code limit_req_zone sends back
to a user with a simple sleep instead.
I am using limit_req_zone on some production servers to limit the number of
downloads each IP can execute over time. fastcgi_pass hands requests to an
external FastCGI script that delivers the data to the client from a
back-end cluster.
Users access the nginx front end with automated scripts and retrieve data
without problems as long as they stay below our limit_req_zone value of 200
requests per 60 seconds.
When they hit the limit_req_zone value the clients are limited correctly
and delayed until their requests per minute drop below the
specified value.
My issue is that when a user hits the limit, an error 503 (for local files) or
an error 404 (for FastCGI scripts) is sent back to the user. The error 503 is
documented on the nginx site and expected as "service unavailable". The error
404 for FastCGI is not mentioned anywhere, and it took some trial and error to
figure out why it was showing up in the logs.
For those interested, here is a sanitized copy of the error.log. The client
does a HEAD request and has exceeded 200 req/min by 0.094 req/min. At this
point I would imagine a 503 would be sent back to the client. Instead, the
second error line shows nginx trying to look for the URI on the local file
system. The problem is that file.bz2 is the file the FastCGI script would
serve out and is _not_ a local file in htdocs. The script is really located
at /disk/web/cgi/getter .
Here is the nginx.conf for getting a file through the FastCGI method:

limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;
root /disk/web/htdocs;
limit_req zone=one burst=100 nodelay;

location ~* ^/script/[\w.-]+$ {
    gzip off;
    fastcgi_max_temp_file_size 0;
    fastcgi_split_path_info ^((?U).+script)(/?.+)$;
    fastcgi_intercept_errors on; #<--------
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    include /usr/local/nginx/conf/fastcgi_params;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param SCRIPT_NAME /cgi/getter;
    fastcgi_param SCRIPT_FILENAME /disk/web/cgi/getter;
}
2012/06/21 15:04:43 [error] 19685#0: *11354 limiting requests, excess: 200.094 by zone "one", client: 128.x.x.x, server: ourserver, request: "HEAD /cgi/getter/file.bz2 HTTP/1.1", host: "ourserver"
2012/06/21 15:04:53 [error] 19685#0: *11354 open() "/disk/web/htdocs/cgi/getter/file.bz2" failed (2: No such file or directory), client: 128.x.x.x, server: ourserver, request: "HEAD /cgi/getter/file.bz2 HTTP/1.1", host: "ourserver"
Sending back an error 404 (File not found) for a file that does exist, just
served through an external script, causes a lot of confusion for the user.
This may be just an unforeseen coding issue though.
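For what it's worth, something like the following (an untested sketch; the
/rate_limited.html name is just a placeholder) would at least map the 503 that
limit_req produces onto an explicit page, so the client sees a clear rejection
rather than a puzzling 404:

# untested sketch: turn the limit_req 503 into an explicit error page
error_page 503 /rate_limited.html;

location = /rate_limited.html {
    root /disk/web/htdocs;   # assumes the page lives in the existing docroot
    internal;
}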
I propose a better approach: keep limiting users, but when they hit the
predefined limit_req_zone value have nginx sleep for a few seconds, check
whether they are back under the request limit, and then complete the original
request. A simple delay in responding to the client, similar to Maxim's
ngx_http_delay, would be invaluable.
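As a side note, if I read the docs right, dropping "nodelay" from the
limit_req line already gives a partial version of this: excess requests up to
the burst value are delayed so they conform to the configured rate instead of
being rejected.

# sketch: without "nodelay", excess requests (up to burst) are delayed
# to the configured rate rather than answered with an error
limit_req zone=one burst=100;

Requests beyond the burst are still rejected outright, so this only goes part
of the way toward what I am proposing.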
Perhaps adding an option to the limit_req_zone directive would be good:

limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m error=2sec;

(or, if you want to send an error)

limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m error=504;
Another idea is for limit_req_zone to trigger an otherwise unused error code
(12345) and allow the nginx config to trap it. This way the admin could delay
the response or even send back an error.

error_page 12345 @abuse;

location @abuse {
    delay 5s;
}
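The closest approximation I can get with current syntax (untested, and without
an actual delay, since "delay" is not a stock directive) would be to trap the
503 that limit_req already emits in a named location:

# untested sketch: hand the limit_req 503 to a named location and return an
# explicit message instead of the default error page
error_page 503 @abuse;

location @abuse {
    return 503 "Request limit exceeded, please slow down and retry.\n";
}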
I am going to look in the nginx source to find where the 503 for
limit_req_zone is sent from. If anyone has any other ideas I would be happy to
hear them and am very open to suggestions.