Migrating from Varnish
lagged at gmail.com
Thu Nov 23 15:23:26 UTC 2017
To follow up on the purge implementation: I would like to avoid walking
the entire cache directory for a wildcard request, as the sites I host
stack up over 200k cached objects. I'm wondering if there's a clean way
of taking a passive route, in which the cache is
invalidated/"refreshed" by subsequent requests. That is, I send a purge
request for https://domain.com/.*, and subsequent requests for cached items
would then fetch a fresh response from the backend and update the cache. If
that makes any sense..
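One way to sketch that passive approach, assuming OpenResty (nginx built with lua-nginx-module): keep a per-host generation counter in a shared dict and mix it into the cache key. A "wildcard purge" just bumps the counter, so old entries stop matching and age out on their own via the proxy_cache_path "inactive" timer; nothing walks the cache dir. The names here (purge_epochs, /purge-all, my_zone, backend) are placeholders, not anything standard:

```nginx
# Hypothetical sketch, OpenResty only. Not a drop-in config.
lua_shared_dict purge_epochs 10m;

server {
    listen 443 ssl;

    # A "purge" just bumps this host's generation counter;
    # no cache files are touched.
    location = /purge-all {
        allow 127.0.0.1;
        deny all;
        content_by_lua_block {
            ngx.shared.purge_epochs:incr(ngx.var.host, 1, 0)
            ngx.say("purged")
        }
    }

    location / {
        set $cache_epoch 0;
        rewrite_by_lua_block {
            -- Entries keyed under older epochs are never matched
            -- again and expire via "inactive" on proxy_cache_path.
            ngx.var.cache_epoch =
                ngx.shared.purge_epochs:get(ngx.var.host) or 0
        }
        proxy_cache my_zone;
        proxy_cache_key "$cache_epoch$scheme$host$request_uri";
        proxy_pass http://backend;
    }
}
```

The trade-off is disk: stale generations linger until "inactive" (or max_size pressure) evicts them, so size the cache accordingly.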
On Nov 23, 2017 17:00, "Andrei" <lagged at gmail.com> wrote:
> Hi all,
> I've been using Varnish for 4 years now, but quite frankly I'm tired of
> using it for HTTP traffic and Nginx for SSL offloading when Nginx can just
> handle it all. One of the main issues I'm running into with the transition
> is cache purging, along with setting custom expiry TTLs per
> zone/domain. My questions are:
> - Does anyone have recent, working documentation on supported
> modules/Lua scripts that can achieve wildcard purges as well as
> specific-URL purges?
> - How should I go about defining custom cache TTLs for frontpage,
> dynamic, and static content requests? Currently I have Varnish configured
> to set the TTLs based on request headers, which are added in the config
> via regex matches against the host being accessed.
> Any other caveats or suggestions I should know about?
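For specific-URL purges: stock open source nginx has no purge directive (the commercial version adds proxy_cache_purge), but the third-party ngx_cache_purge module provides one. A rough sketch, assuming that module is compiled in and a zone named my_zone whose key is "$scheme$host$request_uri":

```nginx
# Sketch only; requires the third-party ngx_cache_purge module.
# The constructed key must match proxy_cache_key exactly.
location ~ ^/purge(/.*) {
    allow 127.0.0.1;
    deny all;
    proxy_cache_purge my_zone "$scheme$host$1";
}
```

So `curl http://127.0.0.1/purge/some/page` would drop the cached copy of /some/page for that host.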
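On per-type TTLs: proxy_cache_valid does not accept variables, so the usual route is separate locations (and separate server blocks per domain) rather than regex-driven headers as in Varnish. A minimal sketch, with my_zone and backend as placeholder names:

```nginx
# Sketch: one TTL per location, since proxy_cache_valid
# cannot be set from a variable.
location = / {
    proxy_cache my_zone;
    proxy_cache_valid 200 5m;      # frontpage
    proxy_pass http://backend;
}

location ~* \.(css|js|png|jpe?g|gif|svg|woff2?)$ {
    proxy_cache my_zone;
    proxy_cache_valid 200 7d;      # static assets
    proxy_pass http://backend;
}

location / {
    proxy_cache my_zone;
    proxy_cache_valid 200 1m;      # dynamic pages
    proxy_pass http://backend;
}
```

Alternatively, if you control the backends, having them emit X-Accel-Expires (or Cache-Control) lets nginx pick up the TTL per response, which is closer to your header-based Varnish setup.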