<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class=""><i class="">stale-while-revalidate</i> is awesome, but it might not be the optimal tool here. It came out of Yahoo!, </div><div class=""> the sixth largest website in the world, which used a small number of caching proxies. In their context, </div><div class="">most content is served hot from cache. A cloud deployment typically means a larger number of VMs </div><div class="">that are each a fraction of a physical server. That is great for fine-grained control but a problem for cache hit rates. </div><div class="">So if you have much less traffic than Yahoo, spread across a larger number of web servers, then your hit rates</div><div class="">will suffer. What hit rates do you see today?</div><div class=""><br class=""></div><div class="">Dynamic scale-out isn’t very compatible with caching reverse proxies. </div><div class="">Can you separate the caching reverse proxy functionality from the other functionality</div><div class="">and keep the number of caches constant, whilst scaling out the web servers?</div><div class=""><br class=""></div><div class="">You give the example of a few-hours-old page being served because of a scale-out event. Is that the most</div><div class=""> common case of cache misses in your context, or is it unpopular pages and quiet times of the day?</div><div class="">Are these also served stale even when your server count is static?</div><div class=""><br class=""></div><div class="">Finally, if the root of the problem is serving very stale content, could you simply delete that content </div><div class="">throughout the day? 
A script that finds and removes all cached files older than five minutes wouldn’t</div><div class=""> take long to run.</div><div class=""><br class=""></div><div class="">Peter</div><div class=""><br class=""></div><div class=""><div class=""><br class=""><div><blockquote type="cite" class=""><div class="">On 8 Jul 2017, at 4:28 PM, Joan Tomàs i Buliart <<a href="mailto:joan.tomas@marfeel.com" class="">joan.tomas@marfeel.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class="">
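Peter's cleanup suggestion above could look roughly like this; it is only a sketch, and the default cache directory is an assumption (match it to your actual proxy_cache_path):

```shell
#!/bin/sh
# Sketch of the cleanup script suggested above: remove cached files
# older than five minutes. The default cache directory is an assumed
# path -- pass your real proxy_cache_path location as the first argument.
purge_stale_cache() {
    cache_dir="${1:-/var/cache/nginx}"
    # -mmin +5 matches files last modified more than five minutes ago
    find "$cache_dir" -type f -mmin +5 -delete
}
```

You could run this from cron every few minutes. Note that nginx keeps cache metadata in shared memory, so removing files behind its back may produce log noise until the cache manager catches up; test before relying on it.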
  
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" class="">
  
  <div text="#000000" bgcolor="#FFFFFF" class=""><p class="">Hi Peter,</p><p class=""><br class="">
    </p><p class="">Yes, it's true. I will try to explain our problem better.</p><p class="">We provide a mobile solution for newspapers and media groups. With
      this kind of partner, it is easy to get a traffic peak. We
      prefer to serve stale content (1 or 2 minutes stale, not
      more) rather than block the request for several seconds (the time that
      our Tomcat back-end may spend crawling our customer's desktop
      site and generating the new content). As I tried to explain in my
      first e-mail, <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_background_update" rel="nofollow noreferrer" class="">proxy_cache_background_update</a>
      works fine while the number of servers is fixed and the LB in front of
      them balances by URI.</p><p class="">The major problem appears when the servers have to scale up and
      down. Imagine that URL1 is cached by Server 1, and all
      requests for URL1 are routed to Server 1 by the LB. Suddenly,
      traffic rises and a new server is added. The LB remaps
      requests in order to send some URLs to Server 2, and URL1 is
      one of the URLs that now goes to Server 2. Some hours later,
      traffic drops and Server 2 is removed. In this
      situation, a new request that arrives at Server 1 asking for URL1
      will receive a version that is some hours old (not some minutes).
      This is what we are trying to avoid.</p><p class="">Many thanks for all your feedback and suggestions,</p><p class=""><br class="">
    </p><p class="">Joan<br class="">
    </p>
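For reference, the setup described above might look roughly like the following sketch; the zone name, path, timings, and upstream address are illustrative, not taken from the real configuration:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pages:10m inactive=10m;

upstream tomcat_backend {
    server 127.0.0.1:8080;   # placeholder address
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_backend;
        proxy_cache pages;
        proxy_cache_valid 200 1m;               # fresh for one minute
        proxy_cache_use_stale updating error timeout;
        proxy_cache_background_update on;       # serve stale, refresh in background
    }
}
```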
    <div class="moz-signature">
      <meta http-equiv="content-type" content="text/html; charset=utf-8" class="">
      <title class="">Joan Tomàs-Buliart</title>
      <div class="moz-cite-prefix">On 08/07/17 15:30, Peter Booth wrote:<br class="">
      </div>
    </div>
    <blockquote type="cite" cite="mid:D0B2CE77-DD4E-4D0E-BE48-5AF0EBF8B4C2@me.com" class="">
      <meta http-equiv="content-type" content="text/html; charset=utf-8" class="">
      <div class="">Perhaps it would help if, rather than focus on the specific
        solution that you are wanting, you instead explained your
        specific problem and business context?</div>
      <div class=""><br class="">
      </div>
      <div class="">What is driving your architecture? Is
        it about protecting a backend that doesn't scale or more about
        reducing latencies?</div>
      <div class=""><br class="">
      </div>
      <div class="">How many different requests are there
        that might be cached? What are the backend calls doing? How do
        cached objects expire? How long does a call to the backend
        take? </div>
      <div class="">Why is it OK to return a stale
        version of X to the first client but not OK to return a stale
        version to a second requester?</div>
      <div class=""><br class="">
      </div>
      <div class="">Imagine a scenario where two
        identical requests arrive from different clients and hit
        different web servers. Is it OK for both requests to be
        satisfied with a stale resource?</div>
      <div class=""><br class="">
      </div>
      <div class="">It's very easy for us to make
        incorrect assumptions about all of these questions because of
        our own experiences.</div>
      <div class=""><br class="">
      </div>
      <div class="">Peter<br class="">
        <br class="">
        Sent from my iPhone</div>
      <div class=""><br class="">
        On Jul 8, 2017, at 9:00 AM, Joan Tomàs i Buliart <<a href="mailto:joan.tomas@marfeel.com" moz-do-not-send="true" class="">joan.tomas@marfeel.com</a>>
        wrote:<br class="">
        <br class="">
      </div>
      <blockquote type="cite" class="">
        <div class="">
          <meta http-equiv="Content-Type" content="text/html;
            charset=utf-8" class="">
          Thanks Owen!<br class="">
          <br class="">
          We considered all the options in these two documents but, in our
          environment, where it is important to use
          stale-while-revalidate, each of them has at least one of
          these drawbacks: either it adds a layer in the fast path to the
          content, or it can't guarantee that one request for a stale
          object will force the invalidation of all copies of that
          object.<br class="">
          <br class="">
          That is the reason for which we are looking for a "background"
          alternative to update the content.<br class="">
          <br class="">
          Many thanks in any case,<br class="">
          <br class="">
          Joan<br class="">
          <br class="">
          On 07/07/17 16:04, Owen Garrett wrote:<br class="">
          <blockquote type="cite" cite="mid:2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com" class="">
            <meta http-equiv="Content-Type" content="text/html;
              charset=utf-8" class="">
            There are a couple of options described here that you could
            consider if you want to share your cache between NGINX
            instances:
            <div class=""><br class="">
            </div>
            <div class=""><a href="https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/" class="" moz-do-not-send="true">https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/</a> describes
              a sharded cache approach, where you load-balance by URI
              across the NGINX cache servers.  You can combine your
              front-end load balancers and back-end caches onto one tier
              to reduce your footprint if you wish.</div>
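The sharded approach in that first article can be sketched with stock nginx directives; the addresses and upstream name below are placeholders, not the article's exact configuration:

```nginx
# Front-end tier: route each URI to a fixed cache node.
upstream cache_nodes {
    # Consistent hashing keeps most URIs on the same node when
    # servers are added or removed -- which also limits the stale
    # remapping problem described earlier in the thread.
    hash $request_uri consistent;
    server 10.0.0.11;   # placeholder addresses
    server 10.0.0.12;
}

server {
    listen 80;
    location / {
        proxy_pass http://cache_nodes;
    }
}
```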
            <div class=""><br class="">
            </div>
            <div class=""><a href="https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/" class="" moz-do-not-send="true">https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/</a> describes
              an alternative HA (shared) approach that replicates the
              cache so that there’s no increased load on the origin
              server if one cache server fails.<br class="">
              <div class=""><br class="webkit-block-placeholder">
              </div>
              <div class="">It’s not possible to share a cache across
                instances by using a shared filesystem (e.g. nfs).</div>
              <div class="">
                <div style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant-ligatures: normal; font-variant-position: normal; font-variant-caps: normal; font-variant-numeric: normal; font-variant-alternates: normal; font-variant-east-asian: normal; font-weight: normal; letter-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; line-height: normal;" class=""><br class="Apple-interchange-newline">
                  ---<br class="Apple-interchange-newline">
                  <a href="mailto:owen@nginx.com" class="" moz-do-not-send="true">owen@nginx.com</a></div>
                <div style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant-ligatures: normal; font-variant-position: normal; font-variant-caps: normal; font-variant-numeric: normal; font-variant-alternates: normal; font-variant-east-asian: normal; font-weight: normal; letter-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; line-height: normal;" class="">Skype:
                  owen.garrett</div>
                <div style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant-ligatures: normal; font-variant-position: normal; font-variant-caps: normal; font-variant-numeric: normal; font-variant-alternates: normal; font-variant-east-asian: normal; font-weight: normal; letter-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; line-height: normal;" class="">Cell: +44 7764
                  344779</div>
              </div>
              <br class="">
              <div class="">
                <blockquote type="cite" class="">
                  <div class="">On 7 Jul 2017, at 14:39, Peter Booth
                    <<a href="mailto:peter_booth@me.com" class="" moz-do-not-send="true">peter_booth@me.com</a>>
                    wrote:</div>
                  <br class="Apple-interchange-newline">
                  <div class="">
                    <meta http-equiv="content-type" content="text/html;
                      charset=utf-8" class="">
                    <div dir="auto" class="">
                      <div class="">You could do that, but it would be
                        bad. Nginx's great performance is based on
                        serving files from a local disk and the behavior
                        of the Linux page cache. If you serve from a
                        shared (NFS) filesystem then every request is
                        slower. You shouldn't slow down the common case
                        just to increase the cache hit rate.<br class="">
                        <br class="">
                        Sent from my iPhone</div>
                      <div class=""><br class="">
                        On Jul 7, 2017, at 9:24 AM, Frank Dias <<a href="mailto:frank.dias@prodea.com" class="" moz-do-not-send="true">frank.dias@prodea.com</a>>
                        wrote:<br class="">
                        <br class="">
                      </div>
                      <blockquote type="cite" class="">
                        <div class="">
                          <meta http-equiv="Content-Type" content="text/html; charset=utf-8" class="">
                          <div dir="auto" class="">Have you thought
                            about using a shared file system for the
                            cache? This way all the nginx instances are looking
                            at the same cached content.</div>
                          <div class="gmail_extra"><br class="">
                            <div class="gmail_quote">On Jul 7, 2017 5:30
                              AM, Joan Tomàs i Buliart <<a href="mailto:joan.tomas@marfeel.com" class="" moz-do-not-send="true">joan.tomas@marfeel.com</a>>
                              wrote:<br type="attribution" class="">
                              <blockquote class="quote" style="margin:0
                                0 0 .8ex;border-left:1px #ccc
                                solid;padding-left:1ex">
                                <div class=""><font class="" size="2"><span style="font-size:10pt" class="">
                                      <div class="">Hi Lucas<br class="">
                                        <br class="">
                                        On 07/07/17 12:12, Lucas Rolff
                                        wrote:<br class="">
                                        > Instead of doing round
                                        robin load balancing why not do
                                        a URI based <br class="">
                                        > load balancing? Then you
                                        ensure your cached file is only
                                        present on a <br class="">
                                        > single machine behind the
                                        load balancer.<br class="">
                                        <br class="">
                                        Yes, we considered this option,
                                        but it forces us to deploy and
                                        maintain <br class="">
                                        another layer (LB+NG+AppServer).
                                        All cloud providers offer round
                                        robin <br class="">
                                        load balancers out of the box,
                                        but none provides a URI-based
                                        load <br class="">
                                        balancer. Moreover, in our
                                        scenario, our webserver layer
                                        is quite <br class="">
                                        dynamic due to scaling up/down.<br class="">
                                        <br class="">
                                        Best,<br class="">
                                        <br class="">
                                        Joan<br class="">
_______________________________________________<br class="">
                                        nginx mailing list<br class="">
                                        <a href="mailto:nginx@nginx.org" class="" moz-do-not-send="true">nginx@nginx.org</a><br class="">
                                        <a href="http://mailman.nginx.org/mailman/listinfo/nginx" class="" moz-do-not-send="true">http://mailman.nginx.org/mailman/listinfo/nginx</a><br class="">
                                      </div>
                                    </span></font></div>
                              </blockquote>
                            </div>
                            <br class="">
                          </div>
</div>
                      </blockquote>
                    </div>
</div>
                </blockquote>
              </div>
              <br class="">
            </div>
            <br class="">
          </blockquote>
          <br class="">
        </div>
      </blockquote>
      <br class="">
    </blockquote>
    <br class="">
  </div>

</div></blockquote></div><br class=""></div></div></body></html>