<div dir="auto">To follow up on the purge implementation, I would like to avoid walking the entire cache dir for a wildcard request, as the sites I run accumulate over 200k cached objects. I'm wondering if there would be a clean way of taking a passive route, through which the cache would be invalidated/"refreshed" by subsequent requests. As in, I send a purge request for <a href="https://domain.com/.*">https://domain.com/.*</a>, and subsequent requests for cached items would then fetch the content from the backend and update the cache. I hope that makes sense.</div><div class="gmail_extra"><br><div class="gmail_quote">On Nov 23, 2017 17:00, "Andrei" &lt;<a href="mailto:lagged@gmail.com">lagged@gmail.com</a>&gt; wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi all,<div><br></div><div>I've been using Varnish for 4 years now, but quite frankly I'm tired of using it for HTTP traffic and Nginx for SSL offloading when Nginx can just handle it all. One of the main issues I'm running into with the transition is cache purging, along with setting custom expiry TTLs per zone/domain. My questions are:</div><div><br></div><div>- Does anyone have any recent, working documentation on supported modules/Lua scripts that can achieve wildcard purges as well as specific URL purges?</div><div><br></div><div>- How should I go about defining custom cache TTLs for frontpage, dynamic, and static content requests? Currently I have Varnish configured to set the TTLs based on request headers that are added in the config with regex matches against the host being accessed.</div><div><br></div><div>Any other caveats or suggestions I should know of?</div><div><br></div><div>--Andrei</div><div><br></div></div>
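On the per-content-type TTLs: nginx's `proxy_cache_valid` does not accept variables, so the usual way to express frontpage/dynamic/static TTLs is separate `location` blocks (or having the backend send `X-Accel-Expires` per response). A minimal sketch, assuming a `proxy_cache_path` zone named `my_cache` and an upstream named `backend` are already defined; the TTL values are placeholders:

```nginx
# Frontpage only: exact-match location, short TTL.
location = / {
    proxy_cache       my_cache;
    proxy_cache_valid 200 5m;
    proxy_pass        http://backend;
}

# Static assets: long TTL.
location ~* \.(css|js|png|jpe?g|gif|ico|svg|woff2?)$ {
    proxy_cache       my_cache;
    proxy_cache_valid 200 7d;
    proxy_pass        http://backend;
}

# Everything else (dynamic pages): short TTL.
location / {
    proxy_cache       my_cache;
    proxy_cache_valid 200 1m;
    proxy_pass        http://backend;
}
```

Per-domain variation then falls out naturally from putting these in each `server` block, which replaces the Varnish approach of regex-matching the host to set header-driven TTLs.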
</blockquote></div></div>
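The passive approach described above (invalidate everything, let subsequent requests repopulate the cache) can be approximated without walking the cache dir by folding a per-host generation number into the cache key: a "wildcard purge" is then just a counter bump, every existing entry becomes an automatic miss, and stale files age out on their own via `inactive=` on `proxy_cache_path`. A sketch, assuming OpenResty (ngx_http_lua_module); the `cache_gen` dict, `/purge-all` location, `my_cache` zone and `backend` upstream are all illustrative names, not standard nginx:

```nginx
lua_shared_dict cache_gen 1m;

server {
    listen 80;

    location / {
        # Read the current generation for this host (defaults to 0).
        set_by_lua_block $cache_generation {
            return ngx.shared.cache_gen:get(ngx.var.host) or 0
        }
        proxy_cache my_cache;
        # Folding the generation into the key means bumping it
        # invalidates every entry for this host at once.
        proxy_cache_key "$cache_generation$scheme$host$request_uri";
        proxy_pass http://backend;
    }

    # "Wildcard purge": bump the generation instead of deleting files.
    location /purge-all {
        allow 127.0.0.1;
        deny  all;
        content_by_lua_block {
            local gen = ngx.shared.cache_gen:incr(ngx.var.host, 1, 0)
            ngx.say("generation is now ", gen)
        }
    }
}
```

One trade-off to be aware of: the orphaned entries still occupy disk until `inactive=` (or `max_size=` LRU eviction) reclaims them, so this trades purge latency for temporary disk usage, which may suit a 200k-object cache better than a synchronous directory walk. The shared dict is also per-worker-shared but not persisted, so a restart resets all generations to 0.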