<div dir="ltr">Hi,<div><br></div><div> Sorry got back to this thread after long time. First of all, thanks to all for suggestions. Alright, i have also checked with rate_limit module, should this work as well or it should be only limit_conn (to parse error_log and constructing redirect URL).<br><br>P.S : Actuall looks like limit_conn needs to recompile nginx as it is not included in default yum install nginx repo. So i tried with rate_limit which is built-in within nginx repo.<br><br><a href="http://greenroom.com.my/blog/2014/10/rate_limit-with-nginx-on-ubuntu/">http://greenroom.com.my/blog/2014/10/rate_limit-with-nginx-on-ubuntu/</a><br><br>Regards.</div><div>Shahzaib</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 17, 2015 at 2:30 AM, Francis Daly <span dir="ltr"><<a href="mailto:francis@daoine.org" target="_blank">francis@daoine.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Mon, Jun 15, 2015 at 01:45:42PM +0300, Valentin V. Bartenev wrote:<br>
> On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote:

Hi there,

> > If there are exceeding 1K requests for http://storage.domain.com/test.mp4 ,
> > nginx should construct a Redirect URL for rest of the requests related to
> > test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest of
> > requests for test.mp4 from Caching Node while long tail would still be
> > served from storage.

> You can use limit_conn and limit_req modules to set limits:
> http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
> http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
>
> and the error_page directive to construct the redirect.

limit_conn and limit_req are the right answer if you care about concurrent
requests.

(For example: rate=1r/m with burst=1000 might do most of what you want,
without too much work on your part.)

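For illustration, a minimal sketch of that limit_req + error_page combination
(the zone name, the $uri key, the nodelay flag and the paths here are
assumptions, not tested config):

    # http{} context: one counter per URI, refilled at 1 request/minute
    limit_req_zone $uri zone=perfile:10m rate=1r/m;

    server {
        server_name storage.domain.com;
        root /data/storage;

        location / {
            # let roughly the first 1000 requests per file through at once,
            # then reject the excess (limit_req answers 503 by default)
            limit_req zone=perfile burst=1000 nodelay;
        }

        # turn that 503 into a redirect to the caching node
        error_page 503 = @popular;

        location @popular {
            return 302 http://cache.domain.com$request_uri;
        }
    }

With rate=1r/m the per-file counter refills very slowly, so once a file has
used up its burst, almost every further request for it ends up redirected to
cache.domain.com.
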
I think you might care about historical requests, instead -- so if a
url is ever accessed 1K times, then it is "popular" and future requests
should be redirected.

To do that, you probably will find it simpler to do it outside of nginx,
at least initially.

Have something read the recent-enough log files[*], and whenever there are
more than 1K requests for the same resource, add a fragment like

location = /test.mp4 { return 301 http://cache.domain.com/test.mp4; }

to nginx.conf (and remove similar fragments that are no longer currently
popular-enough, if appropriate), and do a no-downtime config reload.

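One way to wire that up (the paths below are only examples): have the
log-watching script write all of the currently-popular fragments into a
single file that the storage server{} block pulls in with an include, and
run "nginx -s reload" after each rewrite:

    server {
        server_name storage.domain.com;
        root /data/storage;

        # fragments like the /test.mp4 one above are (re)generated from the
        # access-log counts and land in this file; the path is arbitrary,
        # but create it empty up front since nginx needs it to exist
        include /etc/nginx/popular-redirects.conf;
    }

The reload is the no-downtime part: nginx starts fresh workers with the new
config and lets the old workers finish their in-flight requests.
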
You can probably come up with a module or a code config that does the
same thing, but I think it would take me longer to do that.


[*] or accesses the statistics by a method of your choice

f
--
Francis Daly        francis@daoine.org