ngx_http_upstream_round_robin.c
Adam Horvath
Adam.Horvath at edgeware.tv
Wed May 16 18:13:12 UTC 2018
Hi Max!
Yes, our proxy_next_upstream is set. Below is our config.
From the logging at:
https://github.com/nginx/nginx/blob/master/src/http/ngx_http_upstream_round_robin.c#L594
We got maybe 50 of these per second in our loop:
2018/05/16 17:26:34 [debug] 23317#0: *452 free rr peer 1 0
2018/05/16 17:26:34 [debug] 23317#0: *452 free rr peer 2 2
2018/05/16 17:26:34 [debug] 23317#0: *452 free rr peer 1 0
2018/05/16 17:26:34 [debug] 23317#0: *452 free rr peer 2 2
2018/05/16 17:26:34 [debug] 23317#0: *452 free rr peer 1 0
2018/05/16 17:26:34 [debug] 23317#0: *452 free rr peer 2 2
Etc...
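If I read the source correctly, that debug line is emitted from ngx_http_upstream_free_round_robin_peer() in ngx_http_upstream_round_robin.c, roughly like this (the exact line number may have moved between versions):

    ngx_log_debug2(NGX_LOG_DEBUG_HTTP, pc->log, 0,
                   "free rr peer %ui %ui", pc->tries, state);

So the two trailing numbers in each log line should be pc->tries and the state argument passed to the free handler.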
Which led me to speculate that state == 2 corresponds to NGX_PEER_NEXT.
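For reference, the peer state flags (from src/event/ngx_event_connect.h, if I remember them right):

    #define NGX_PEER_KEEPALIVE  1
    #define NGX_PEER_NEXT       2
    #define NGX_PEER_FAILED     4

so state == 2 in the "free rr peer 2 2" lines would indeed match NGX_PEER_NEXT.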
I must also mention that our application is deeply entangled with callbacks into http_request, so it is possible that we have messed something up there, but I am following this lead for the moment. I am currently trying to reproduce the same error with our config on a vanilla nginx, but I am having build problems :(
And we are actually using version nginx-1.11.6... our build pipeline does not [yet] easily accommodate changing nginx versions, sorry.
Thank you,
Adam
worker_processes auto;
worker_cpu_affinity auto;
#user edgeware;
worker_rlimit_core 500M;
working_directory /opt/cores/;
error_log /var/log/edgeware/ew-repackager//error.log debug;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/edgeware/ew-repackager//access.log combined;

    upstream live {
        server 10.16.48.227:8090;
        server 10.16.48.227:8090 backup;
    }

    server {
        listen 80;
        server_name 0.0.0.0;

        #aio on;
        directio off;

        open_file_cache max=1000 inactive=5m;
        open_file_cache_valid 2m;
        open_file_cache_min_uses 1;
        open_file_cache_errors on;

        location / {
            root /home/adam/;
        }

        location /live/ {
            internal;
            proxy_pass http://live/;
            proxy_next_upstream error timeout invalid_header http_404 http_500 http_502 http_503 http_504;
            proxy_read_timeout 4s;
        }
    }
}