<div dir="ltr">Hi Ruslan,<div style>I agree, I think it should be in the request too.</div><div style>There's only 1 "request retry" per request, there's no point in putting it anywhere else.</div></div><div class="gmail_extra">
<br><br><div class="gmail_quote">On Wed, Apr 24, 2013 at 5:57 PM, Ruslan Ermilov <span dir="ltr"><<a href="mailto:ru@nginx.com" target="_blank">ru@nginx.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Wed, Apr 24, 2013 at 04:32:20PM +0300, Shai Duvdevani wrote:<br>
> >> diff -ur /old/src/http/ngx_http_upstream.c /new/src/http/ngx_http_upstream.c<br>
> >> --- /old/src/http/ngx_http_upstream.c 2013-04-21 18:25:09.619437856 +0000<br>
> >> +++ /new/src/http/ngx_http_upstream.c 2013-04-23 21:29:06.106568703 +0000<br>
> >> @@ -2904,6 +2904,11 @@<br>
> >> if (status) {<br>
> >> u->state->status = status;<br>
> >><br>
> >> + if (u->conf->next_upstream_tries != NGX_CONF_UNSET_UINT && ++r->us_tries >= u->conf->next_upstream_tries) {<br>
> >> + ngx_http_upstream_finalize_request(r, u, status);<br>
> >> + return;<br>
> >> + }<br>
> >> +<br>
> >> if (u->peer.tries == 0 || !(u->conf->next_upstream & ft_type)) {<br>
> >><br>
> >> #if (NGX_HTTP_CACHE)<br>
> ><br>
> >Introducing r->us_tries for this looks wrong; there is no need for<br>
> >such a counter at the request level. Instead, probably u->peer.tries<br>
> >should be set accordingly.<br>
> ><br>
> >The test against NGX_CONF_UNSET_UINT looks wrong, too, and<br>
> >suggests that configuration inheritance isn't handled properly.<br>
><br>
> [Gist: <a href="https://gist.github.com/shai-d/5446961" target="_blank">https://gist.github.com/shai-d/5446961</a> ]<br>
><br>
> Maxim, thank you for your review! :)<br>
><br>
> I agree about comparing to NGX_CONF_UNSET_UINT. It should be set to 0<br>
> (endless tries) by default.<br>
><br>
> I avoided u->peer.tries because we wanted N retries per request and not N<br>
> retries per upstream.<br>
<br>
You store the limit per upstream{}, but want it to affect tries<br>
per request? That's kinda strange.<br>
<br>
> As I understand it, all requests share the same instance of peers.<br>
> If this is the case, in a high-concurrency system with some percentage of<br>
> errors, peers will statistically always have tries > N and many requests<br>
> will be lost.<br>
> Am I wrong?<br>
<br>
_______________________________________________<br>
nginx-devel mailing list<br>
<a href="mailto:nginx-devel@nginx.org">nginx-devel@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx-devel" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx-devel</a><br>
</blockquote></div><br></div>
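<div dir="ltr"><div>P.S. As I read the suggestion to "set u->peer.tries accordingly", the alternative would be something along these lines (sketch only, untested; again with 0 meaning "unlimited"):</div>
<div><br></div>
<div>    /* cap the try budget, e.g. right after the peer is initialised,<br>
     * where u->peer.tries gets its initial value */<br>
    if (u->conf->next_upstream_tries<br>
        && u->peer.tries > u->conf->next_upstream_tries)<br>
    {<br>
        u->peer.tries = u->conf->next_upstream_tries;<br>
    }</div>
<div><br></div>
<div>The existing "u->peer.tries == 0" check in ngx_http_upstream_next() would then stop further retries without an extra counter, assuming tries isn't shared between requests, which is exactly what I'm asking about above.</div></div>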