A problem with the keepalive module and the directive proxy_next_upstream
姚伟斌
yaoweibin at gmail.com
Mon Jan 14 08:11:20 UTC 2013
Hi, folks,
We have found a bug in the keepalive module. When the keepalive module is
in use, the directive proxy_next_upstream seems to be disabled.
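For reference, here is a minimal sketch of the kind of configuration we are
talking about (the upstream name, addresses and ports are made up for
illustration):

    # Sketch only: an upstream pool with keepalive connections,
    # proxied with retries disabled via proxy_next_upstream off.
    upstream backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
        keepalive 32;
    }

    server {
        listen 80;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # needed for upstream keepalive
            proxy_next_upstream off;          # do not try the next server
            proxy_pass http://backend;
        }
    }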
We use Nginx as a reverse proxy server. Our backend servers simply close the
connection when they read abnormal packets. Nginx then calls the function
ngx_http_upstream_next() and tries the next server; the ft_type
is NGX_HTTP_UPSTREAM_FT_ERROR. We want to turn off the retry mechanism for
such packets, because otherwise Nginx will try every server each time. We
therefore use the directive proxy_next_upstream off. If the connection is
not a keepalive connection, everything is fine. If it is a keepalive
connection, the following code runs:
2858     if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) {
2859         status = 0;
2860
2861         /* TODO: inform balancer instead */
2862
2863         u->peer.tries++;
2864
The status is cleared to 0 here, so the code below is never reached:
2896     if (status) {
2897         u->state->status = status;
2898
2899         if (u->peer.tries == 0 || !(u->conf->next_upstream & ft_type)) {
As a result, the variables u->peer.tries and u->conf->next_upstream become
useless.
I don't know why a cached connection should clear the status. Can we just
remove the code from lines 2858 to 2864? Would there be any side effects?
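As an alternative to removing the block outright, here is a rough, untested
sketch of the idea (an illustration only, not a committed patch): keep the
special retry for cached connections only when the configured next_upstream
mask actually allows retrying this failure type.

    /* Untested sketch, not a committed patch: keep the special retry
     * for cached (keepalive) connections only when the configured
     * next_upstream mask allows retrying this failure type.  With
     * proxy_next_upstream off, this block is skipped, status stays
     * non-zero, and the "if (status)" check below can finalize the
     * request instead of trying every server.
     */
    if (u->peer.cached
        && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR
        && (u->conf->next_upstream & ft_type))
    {
        status = 0;

        /* TODO: inform balancer instead */

        u->peer.tries++;
    }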
--
Weibin Yao
Developer @ Server Platform Team of Taobao