Forward single request to upstream server via proxy_store !!

shahzaib shahzaib shahzaib.cb at gmail.com
Thu Sep 25 18:42:52 UTC 2014


@RR, thanks a lot for the explanation and examples. It really helped me :)

>>set req.url = regsub(req.url, "\?.*", "");

Won't that also prevent users from seeking within the video, since the
arguments after "?" will be removed whenever a user tries to seek the
stream?

>>unset req.http.Cookie;
unset req.http.Accept-Encoding;
unset req.http.Cache-Control;

I'll apply it right at the top of vcl_recv.
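
Something along these lines at the very top, just collecting your
suggestions in one place (Varnish 3 syntax assumed):

sub vcl_recv {
    # normalise the request before any caching decisions
    unset req.http.Cookie;
    unset req.http.Accept-Encoding;
    unset req.http.Cache-Control;

    # strip query arguments (if seeking doesn't depend on them)
    # so all variants map to one object
    set req.url = regsub(req.url, "\?.*", "");
}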

>>If you insist on using proxy_store I would probably also add
proxy_ignore_client_abort on;

Well, only proxy_store is able to fulfill my requirements, which is why
I'll have to stick with it.
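
For reference, the relevant nginx bits would look roughly like this
(paths, port and location names here are placeholders, not the real
config):

location /files/ {
    root /var/www/cache;
    # file not found locally -> fetch it via the named location
    error_page 404 = @fetch;
}

location @fetch {
    proxy_pass http://127.0.0.1:6081;   # varnish assumed to listen locally
    proxy_store /var/www/cache$uri;
    proxy_store_access user:rw group:rw all:r;
    # keep fetching the whole file even if the client disconnects,
    # so the stored copy is complete for later requests
    proxy_ignore_client_abort on;
}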

I am a bit confused about Varnish. Actually, I don't need any caching
within Varnish, as nginx is already doing it via proxy_store. I just need
Varnish to merge subsequent requests for the same file into one and
forward it to nginx, and I think Varnish is doing that pretty well.
Nevertheless, I am unsure whether malloc caching will have any odd effect
on the stream behaviour. Below is a curl request for a video file on the
caching server; the Age header is also present:

 curl -I http://edge.files.com/files/videos/2014/09/23/1411461292920e4-720.mp4
HTTP/1.1 200 OK
Date: Thu, 25 Sep 2014 18:26:24 GMT
Content-Type: video/mp4
Last-Modified: Tue, 23 Sep 2014 08:36:11 GMT
ETag: "542130fb-5cd4456"
Age: 5
Content-Length: 97338454
Connection: keep-alive
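
Since I only want the request coalescing and not long-term caching, maybe
something like this would keep the malloc footprint small (a rough sketch,
Varnish 3 syntax assumed):

sub vcl_fetch {
    if (req.url ~ "\.mp4$") {
        # cache only briefly; concurrent requests for the same object
        # are still coalesced into a single backend fetch
        set beresp.ttl = 60s;
    }
}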

Thanks !!
Shahzaib

On Thu, Sep 25, 2014 at 7:39 PM, Reinis Rozitis <r at roze.lv> wrote:

>> 3 clients requested test.mp4 (file size 4 MB) --> nginx --> file does not
>> exist (proxy_store) --> varnish --> backend (fetch the file from origin).
>> When nginx proxied these three requests on to varnish, the tmp dir was
>> filled with 12 MB instead of 4 MB, which means nginx is proxying all three
>> requests to the varnish server and creating tmp files until the file has
>> finished downloading. (The method failed.)
>>
>
> That is expected; this setup only “guards” the content server.
>
>
>
>>  Now varnish also has a flaw: it sends subsequent requests for the same
>> file towards nginx, i.e.
>>
>
> It's not really a flaw but the default behaviour (different URLs mean
> different content/cacheable objects), but of course you can implement your
> own scenario:
>
>
> Adding:
>
> sub vcl_recv {
>    set req.url = regsub(req.url, "\?.*", "");
> }
>
> will remove all the arguments after "?" from the URI when forwarding to
> the content backend.
>
>
> For static content I usually also add something like:
>
> unset req.http.Cookie;
> unset req.http.Accept-Encoding;
> unset req.http.Cache-Control;
>
> to normalise the request so varnish doesn't try to cache different
> versions of the same object.
>
>
> If you insist on using proxy_store I would probably also add
> proxy_ignore_client_abort on;
> ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort )
> to the nginx configuration, so the requests don't get repeated if the
> client closes/aborts the request early etc.
>
>
> rr
>