<div dir="ltr">So it looks like you're saying the best way to do it is to create a separate location, duplicate the fastcgi setup in that location, and add the fastcgi_cache directives there. <div><br></div><div>I can work with that; however, I came across this example while googling (<a href="https://gist.github.com/magnetikonline/10450786">https://gist.github.com/magnetikonline/10450786</a>) that uses "if" to set a variable, which I could use to match on the URL and trigger fastcgi_cache_bypass for everything not matching. Is "if" so toxic that I shouldn't consider doing it this way?</div><div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 23, 2015 at 6:07 PM, Francis Daly <span dir="ltr"><<a href="mailto:francis@daoine.org" target="_blank">francis@daoine.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Tue, Jun 23, 2015 at 04:19:48PM -0400, CJ Ess wrote:<br>
<br>
Hi there,<br>
<span class=""><br>
> - Would I need to define a separate location stanza for the URL I want to<br>
> cache and duplicate all of the fastcgi configuration that is normally<br>
> required? Or is there a way to indicate that of all the fastcgi requests<br>
> only the one matching /xyz is to be cached?<br>
<br>
</span>fastcgi caching is handled by the fastcgi_cache directive, documented<br>
at <a href="http://nginx.org/r/fastcgi_cache" rel="noreferrer" target="_blank">http://nginx.org/r/fastcgi_cache</a><br>
<br>
It is set per-location.<br>
<br>
See also directives like fastcgi_cache_bypass and fastcgi_no_cache.<br>
<br>
It is probably simplest to have one exact-match location for this url<br>
and not worry about the no_cache side of things.<br>
<br>
"all the configuration that is normally required" is typically four lines<br>
-- one "include" of common stuff; one or two extra fastcgi_param values,<br>
and a fastcgi_pass.<br>
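For example, something like this (untested sketch; the /xyz path, zone name, script filename, and fastcgi_pass address are illustrative, not taken from your config):<br>

```nginx
# In http{}: define the cache zone (name and sizes are illustrative).
fastcgi_cache_path /var/cache/nginx/fcgi keys_zone=FCGI:10m inactive=10m;

# Exact-match location: only requests for exactly /xyz are cached.
location = /xyz {
    include        fastcgi_params;                       # the common stuff
    fastcgi_param  SCRIPT_FILENAME $document_root/xyz.php;
    fastcgi_pass   127.0.0.1:9000;

    fastcgi_cache       FCGI;
    fastcgi_cache_key   $scheme$request_method$host$request_uri;
    fastcgi_cache_valid 200 1m;                          # cache 200s for one minute
}
```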
<span class=""><br>
> - If multiple requests for the same URL arrive at around the same time, and<br>
> the cache is stale, they will all wait on the one request that is<br>
> refreshing the cache, correct? So I should only see one request for the<br>
> cached location per worker per minute on the backend?<br>
<br>
</span>If that's what you want, you can probably configure it.<br>
<br>
<a href="http://nginx.org/r/fastcgi_cache_use_stale" rel="noreferrer" target="_blank">http://nginx.org/r/fastcgi_cache_use_stale</a><br>
<a href="http://nginx.org/r/fastcgi_cache_lock" rel="noreferrer" target="_blank">http://nginx.org/r/fastcgi_cache_lock</a><br>
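For example (untested; the timeout value is illustrative):<br>

```nginx
# Serve a stale copy while one request refreshes the cache entry,
# and make concurrent cache MISSes wait on a single upstream request.
fastcgi_cache_use_stale    updating error timeout;
fastcgi_cache_lock         on;
fastcgi_cache_lock_timeout 5s;
```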
<span class=""><br>
> - Since my one URI is fairly small, can I indicate that no file backing is<br>
> needed?<br>
<br>
</span>I don't think so. But you can have fastcgi_cache_path set to a ramdisk,<br>
I think.<br>
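Something like (in http{}; path, zone name, and sizes are illustrative -- /dev/shm is tmpfs on most Linux systems, so the "file backing" lives in RAM):<br>

```nginx
fastcgi_cache_path /dev/shm/nginx-cache keys_zone=FCGI:10m max_size=64m inactive=10m;
```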
<span class="HOEnZb"><font color="#888888"><br>
f<br>
--<br>
Francis Daly <a href="mailto:francis@daoine.org">francis@daoine.org</a><br>
<br>
_______________________________________________<br>
nginx mailing list<br>
<a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx" rel="noreferrer" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
</font></span></blockquote></div><br></div>
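For reference, the variable-based bypass from that gist can also be written with "map" instead of "if", which is the approach the nginx documentation recommends for setting variables (untested sketch; the /xyz URL and variable name are illustrative):

```nginx
# In http{}: $skip_cache is 0 only for the one URL we want cached;
# everything else bypasses the cache and is not stored in it.
map $request_uri $skip_cache {
    default 1;
    /xyz    0;
}

# In the fastcgi location:
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache     $skip_cache;
```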