Redirect on specific threshold !!

shahzaib shahzaib shahzaib.cb at gmail.com
Sat Aug 29 11:57:19 UTC 2015


Hi,

  Sorry for getting back to this thread after such a long time. First of
all, thanks to everyone for the suggestions. I have also checked the
rate_limit approach -- should that work as well, or does it have to be
limit_conn only (parsing the error_log and constructing the redirect URL)?

P.S.: It actually looks like limit_conn would require recompiling nginx, as
it is not included in the default "yum install nginx" package from the repo.
So I tried rate_limit instead, which is built in, following this guide:

http://greenroom.com.my/blog/2014/10/rate_limit-with-nginx-on-ubuntu/
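
For reference, here is roughly what I have in mind; the zone name, sizes
and paths are only placeholders, and I have not confirmed this works yet:

    http {
        # one request counter per URI; with burst=1000 nodelay below,
        # roughly the first 1000 requests for a URI pass and later
        # ones are rejected with 503
        limit_req_zone $uri zone=peruri:10m rate=1r/m;

        server {
            listen 80;
            server_name storage.domain.com;

            location ~ \.mp4$ {
                root /data/storage;          # placeholder path
                limit_req zone=peruri burst=1000 nodelay;
                # turn the 503 produced by limit_req into a redirect
                error_page 503 = @tocache;
            }

            location @tocache {
                return 302 http://cache.domain.com$request_uri;
            }
        }
    }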

Regards.
Shahzaib

On Wed, Jun 17, 2015 at 2:30 AM, Francis Daly <francis at daoine.org> wrote:

> On Mon, Jun 15, 2015 at 01:45:42PM +0300, Valentin V. Bartenev wrote:
> > On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote:
>
> Hi there,
>
> > > If there are more than 1K requests for http://storage.domain.com/test.mp4,
> > > nginx should construct a redirect URL for the rest of the requests for
> > > test.mp4, i.e. http://cache.domain.com/test.mp4, and serve those requests
> > > from the caching node, while the long tail would still be served from
> > > storage.
>
> > You can use limit_conn and limit_req modules to set limits:
> > http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
> > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
> >
> > and the error_page directive to construct the redirect.
>
> limit_conn and limit_req are the right answer if you care about concurrent
> requests.
>
> (For example: rate=1r/m with burst=1000 might do most of what you want,
> without too much work on your part.)
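>
> Untested, and the zone name/size and paths below are only placeholders,
> but the limit_conn version might look something like:
>
>   http {
>       # one connection counter per URI
>       limit_conn_zone $uri zone=peruri_conn:10m;
>
>       server {
>           listen 80;
>           server_name storage.domain.com;
>
>           location ~ \.mp4$ {
>               root /data/storage;
>               # at most 1000 simultaneous connections per URI;
>               # connection 1001 gets a 503 ...
>               limit_conn peruri_conn 1000;
>               # ... which error_page turns into a redirect
>               error_page 503 = @tocache;
>           }
>
>           location @tocache {
>               return 302 http://cache.domain.com$request_uri;
>           }
>       }
>   }
>
> The limit_req version with rate=1r/m and burst=1000 would go in the same
> location block, with error_page handling the 503 in the same way.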
>
> I think you might care about historical requests, instead -- so if a
> url is ever accessed 1K times, then it is "popular" and future requests
> should be redirected.
>
> To do that, you probably will find it simpler to do it outside of nginx,
> at least initially.
>
> Have something read the recent-enough log files[*], and whenever there are
> more than 1K requests for the same resource, add a fragment like
>
>   location = /test.mp4 { return 301 http://cache.domain.com/test.mp4; }
>
> to nginx.conf (and remove similar fragments that are no longer currently
> popular-enough, if appropriate), and do a no-downtime config reload.
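>
> As a sketch -- file names and paths here are only an example -- the
> generated file might contain just
>
>   # /etc/nginx/popular_redirects.conf, rewritten by the log-watching job
>   location = /test.mp4  { return 301 http://cache.domain.com/test.mp4; }
>   location = /other.mp4 { return 301 http://cache.domain.com/other.mp4; }
>
> and the server block only needs an include of it:
>
>   server {
>       listen 80;
>       server_name storage.domain.com;
>
>       # redirects for currently-popular files, regenerated externally
>       include /etc/nginx/popular_redirects.conf;
>
>       location / {
>           root /data/storage;
>       }
>   }
>
> followed by "nginx -s reload" (or "service nginx reload") each time the
> file is rewritten.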
>
> You can probably come up with a module or a code config that does the
> same thing, but I think it would take me longer to do that.
>
>
> [*] or accesses the statistics by a method of your choice
>
>         f
> --
> Francis Daly        francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>