Weird timeouts, not sure if I've set the right thresholds
Igor Sysoev is at rambler-co.ru
Thu May 1 08:54:37 MSD 2008
On Wed, Apr 30, 2008 at 09:37:09PM -0700, mike wrote:
> It is proxying (basically just load balancing) to 3 upstream nginx
> webservers which do FastCGI/PHP/normal static files.
>
> I get a lot of "upstream timed out" errors and I can't determine why
> exactly. Perhaps my buffers are too high or too low, or my timeouts
> are too high or too low? I bumped the timeouts on my proxy machine
> higher than I normally would so it wouldn't time out as often, but it
> still does. All the machines are quad-core Xeons with 4GB RAM, SATA2
> disks, dedicated to web/fastcgi/php, all connected via a private
> gigabit VLAN... the files are hosted on NFS, but I'm not seeing any
> errors related to that in the logs either.
The timeout errors include a "while ..." string that describes the
conditions under which the error happened.
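For example, a line like this one (a made-up illustration, addresses
invented) shows a timeout while reading the response header from a
backend:

    2008/05/01 08:50:12 [error] 1234#0: *56 upstream timed out
    (110: Connection timed out) while reading response header from
    upstream, client: 10.13.5.20, server: michaelshadle.com,
    request: "GET / HTTP/1.1", upstream: "http://10.13.5.17:80/"

The "while reading response header from upstream" part tells you the
connection was established but the backend did not answer within
proxy_read_timeout.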
> nginx PROXY:
>
> user www-data www-data;
> worker_processes 4;
> worker_cpu_affinity 0001 0010 0100 1000;
> working_directory /var/run;
> error_log /var/log/nginx/error.log error;
> pid /var/run/nginx.pid;
>
> events {
> worker_connections 1024;
> }
>
> http {
> upstream webservers {
> server web01:80;
> server web02:80;
> server web03:80;
> }
> include /etc/nginx/mime.types;
> default_type application/octet-stream;
> sendfile on;
> tcp_nopush on;
> tcp_nodelay on;
> client_max_body_size 100m;
> client_header_buffer_size 8k;
> large_client_header_buffers 12 6k;
> keepalive_timeout 5;
> gzip on;
> gzip_static on;
> gzip_proxied any;
> gzip_min_length 1100;
> #gzip_http_version 1.0;
> gzip_comp_level 2;
> gzip_types text/plain text/html text/css application/x-javascript
> text/xml application/xml application/xml+rss;
> gzip_disable "MSIE [1-6]\.";
> gzip_vary on;
> server_names_hash_max_size 4096;
> server_names_hash_bucket_size 128;
> server {
> listen 80;
> access_log off;
> location / {
> proxy_pass http://webservers;
> proxy_next_upstream error timeout http_500 http_503 invalid_header;
> proxy_max_temp_file_size 0;
> proxy_read_timeout 50;
> proxy_connect_timeout 30;
> proxy_set_header Host $host;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_ignore_client_abort on;
> }
> }
> }
>
>
> nginx WEBSERVERS:
>
> user www-data www-data;
> worker_processes 4;
> worker_cpu_affinity 0001 0010 0100 1000;
> working_directory /var/run;
> error_log /var/log/nginx/error.log debug;
> pid /var/run/nginx.pid;
>
>
>
> events {
> worker_connections 1024;
> }
>
> http {
> include /etc/nginx/mime.types;
> default_type application/octet-stream;
>
> set_real_ip_from 10.13.5.16;
> access_log off;
> sendfile on;
> tcp_nopush on;
> tcp_nodelay on;
> client_max_body_size 100m;
> client_header_buffer_size 8k;
> large_client_header_buffers 12 6k;
> keepalive_timeout 5;
> server_tokens off;
> gzip off;
> gzip_static off;
> server_names_hash_max_size 4096;
> server_names_hash_bucket_size 128;
>
> # then an example vhost block
> server {
> listen 80;
> server_name michaelshadle.com www.michaelshadle.com;
> index index.php;
> root /home/mike/web/michaelshadle.com/;
> location ~ \.php$ {
> include /etc/nginx/fastcgi.conf;
> fastcgi_pass 127.0.0.1:11000;
> fastcgi_index index.php;
> }
> if (!-e $request_filename) {
> rewrite ^(.+)$ /wordpress/index.php?q=$1 last;
> }
Instead of this "if" it's better to use:

location / {
    error_page 404 = /wordpress/index.php?q=$uri;
}
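A sketch of the whole vhost with that change (hostnames and paths
copied from your config):

server {
    listen 80;
    server_name michaelshadle.com www.michaelshadle.com;
    index index.php;
    root /home/mike/web/michaelshadle.com/;

    location / {
        # if the file exists it is served as usual; on a miss the
        # 404 is turned into an internal redirect to WordPress
        error_page 404 = /wordpress/index.php?q=$uri;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi.conf;
        fastcgi_pass 127.0.0.1:11000;
        fastcgi_index index.php;
    }
}

Unlike the server-level "if", which tests every request against the
filesystem and then rewrites, the error_page fallback reuses the 404
that static file handling produces anyway.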
> }
> }
>
>
> Any thoughts? Would turning off some buffers on the proxy make it
> better, or would turning off buffering on the webservers make it
> better? I'm not quite sure where the best place to make a change
> would be (if anywhere...)
The timeout errors have no relation to buffers. These errors usually
mean that the backends are too slow.
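To confirm that, you can log how long the backends take on the proxy;
a minimal sketch (the format name and log path are just examples,
everything else matches your config):

http {
    # $upstream_response_time is the time the backend took;
    # compare it against your proxy_read_timeout of 50s
    log_format timing '$remote_addr "$request" status=$status '
                      'upstream=$upstream_addr '
                      'upstream_time=$upstream_response_time '
                      'request_time=$request_time';

    upstream webservers {
        server web01:80;
        server web02:80;
        server web03:80;
    }

    server {
        listen 80;
        access_log /var/log/nginx/timing.log timing;
        location / {
            proxy_pass http://webservers;
        }
    }
}

Requests whose upstream_time sits near 50s are the ones hitting
proxy_read_timeout; chase those on the backends (FastCGI/PHP and NFS
are the usual suspects).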
--
Igor Sysoev
http://sysoev.ru/en/