upstream max_fails/fail_timeout logic?
Thomas Nyberg
tomuxiong at gmail.com
Sat Jan 30 16:31:14 UTC 2016
Hello, I've set up an HTTP proxy to a couple of other servers and am
using max_fails and fail_timeout in addition to a proxy_read_timeout
to force failover in case of a read timeout. It seems to work fine, but
I have a few questions.
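To make the setup concrete, here is a stripped-down sketch of the kind of
configuration I mean (the server addresses, ports and timeout values are
placeholders, not my real ones):

    upstream backend {
        # A server is taken out of rotation for fail_timeout (10s here)
        # once it has failed max_fails (3 here) times.
        server 10.0.0.1:8080 max_fails=3 fail_timeout=10s;
        server 10.0.0.2:8080 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            # A read timeout counts as a failure and makes nginx try
            # the next upstream server.
            proxy_read_timeout 5s;
            proxy_next_upstream error timeout;
        }
    }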
1) I'm not totally understanding the logic. I can tell that if the
timeout hits max_fails times, the server sits out for the rest of the
fail_timeout window and then seems to come back into rotation at the end
of it. But it also seems like it then only needs to fail once (i.e. not
a full set of max_fails) to be removed from consideration again. On the
other hand, if it doesn't fail again for a long time, it seems to need
to fail max_fails times again before being taken out. How does this
logic work exactly?
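To illustrate with made-up numbers (say max_fails=3 and fail_timeout=30s),
the behaviour I think I'm seeing is roughly:

    t=0..     three read timeouts in a row  -> server removed from rotation
    t=30s     fail_timeout expires          -> server is tried again
    t=31s     one further failure           -> removed again immediately
    (but after a long quiet period with no failures, it seems to take a
     full three failures again before it is removed)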
2) Is the fact that an upstream server is taken down (in this temporary
fashion) logged somewhere? I.e. is there some file where it just says
"server hit max fails" or something?
3) Extending 2), is there any way to "hook" into that server failure?
I.e. if the server fails, is there a way with nginx to execute some sort
of program (either internal or external)?
Thanks for any help! I've been reading the documentation, but I get lost
at times, so if it's written there and I'm just being an idiot, please
tell me to RTFM (with a link if possible please :) ).
Also, I forgot to mention: I'm using the community version on Linux Mint:
$ nginx -v
nginx version: nginx/1.4.6 (Ubuntu)
$ uname -a
Linux mint-carbon 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8
09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Cheers,
Thomas