Need SSL state to be visible behind a double nginx proxy

Nick Pearson nick.pearson at
Thu Nov 6 17:26:01 MSK 2008

I just wanted to let everyone know that I tried the method Rob Shultz
suggested, and it works.  In case it helps someone in the future, I'll sum
up the problem and solution here.

My setup and requirements were as follows:

   - custom CMS implemented with Rails
   - single app instance serves many websites
   - each website uses http and https
   - each website has an SSL certificate for its own domain, so each needs
   its own dedicated IP
   - hosting company policy limits me to five IP addresses per slice
   - each site has its own static file directory
   - CMS makes heavy use of caching (each site has its own cache directory)

The problem was that in order to realize decent economies of scale, I needed
to be able to host far more than five websites on my slice.  In addition,
for reasons of cost and convenience, I only wanted to maintain a single
instance of the Rails app rather than running a copy of it on multiple
servers.  The IP address limit of five per slice was a killer.

The solution to the problem was to set up nginx on multiple front-end slices
which would each proxy requests to nginx on a single back-end slice, which
would itself then proxy those requests to the Rails app.  I got this set up,
and it worked great, except for one thing.  Because the requests to the
back-end nginx are always received from the front-end nginx over http (even
when the client request to the front-end nginx is over https), the Rails app
was unable to tell whether the client's original request was over http or
https.  (I couldn't simply proxy from the front-end nginx directly to the
back-end Rails app due to the app's caching requirements.)

Per Rob's suggestion, I configured two server directives per site on the
back-end nginx -- one for http and one for https.  However, since the
back-end nginx wouldn't be talking to clients directly, it does not need to
listen on ports 80 or 443.  Instead, it can listen on arbitrary port numbers
as long as the front-end nginx instances know what those port numbers are.
As such, the two server directives for each website on the back-end
nginx can listen on ports 4000 (for http) and 4001 (for https).  The server
directives on the front-end nginx will then proxy http requests (port 80) to
the back-end nginx on port 4000, and proxy https requests (port 443) to the
back-end nginx on port 4001.
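To make that concrete, the front-end server directives might look roughly like
this.  (The domain name, certificate paths, and back-end IP address are
placeholders rather than my actual values, and "ssl on;" is the syntax of the
nginx versions of this era.)

```nginx
# Front-end nginx -- one site shown; names and addresses are examples only.
# Plain http traffic on port 80 goes to the back-end's port-4000 listener.
server {
    listen       80;
    server_name  example.com;                      # site-specific
    location / {
        proxy_pass        http://10.0.0.2:4000;    # back-end slice (example IP)
        proxy_set_header  Host $host;
    }
}

# https traffic is decrypted here, then proxied to the port-4001 listener.
server {
    listen       443;
    ssl          on;
    ssl_certificate      /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key  /etc/nginx/ssl/example.com.key;
    server_name  example.com;
    location / {
        proxy_pass        http://10.0.0.2:4001;    # back-end slice (example IP)
        proxy_set_header  Host $host;
    }
}
```

Each additional site on the slice gets its own pair of server directives like
these, each with its own dedicated IP and certificate.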

The significance of listening on multiple ports is that the back-end nginx
can tell the Rails app that requests to port 4000 were originally
made over http and that requests to port 4001 were originally made over
https.  I'll attempt to illustrate here (this won't look right without a
fixed-width font).

                    front-end                     back-end
                    +----------+                  +-----------+
request ---http---> | port 80  | ---port:4000---> | port 4000 | ---proto:http--->
        \           |          |                  |           |                   Rails app
         \--https-> | port 443 | ---port:4001---> | port 4001 | ---proto:https-->
                    +----------+                  +-----------+

Only one front-end slice is shown, and it is shown only for one site, but
this should give you an idea of how this can be expanded.  The front-end
slice can (in my case) be expanded to host five sites, and more slices can
be added as needed.
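For completeness, here is a sketch of the two back-end server directives.  The
Rails listener address is an example, and the use of the X-Forwarded-Proto
header is my assumption about how to pass the protocol along -- Rails consults
that header when answering request.ssl?.

```nginx
# Back-end nginx -- two server directives per site, distinguished by port.
server {
    listen  4000;                                  # originally-http requests
    location / {
        proxy_pass        http://127.0.0.1:3000;   # Rails app (example address)
        proxy_set_header  Host               $host;
        proxy_set_header  X-Forwarded-Proto  http;
    }
}
server {
    listen  4001;                                  # originally-https requests
    location / {
        proxy_pass        http://127.0.0.1:3000;   # same Rails app
        proxy_set_header  Host               $host;
        proxy_set_header  X-Forwarded-Proto  https;
    }
}
```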

Of course, to ensure the back-end server can't be tricked, it's necessary to
make sure it accepts connections on ports 4000 and 4001 only from
the front-end servers.
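One way to sketch that restriction (all addresses are examples): bind the
listeners to the private interface so the ports never face the public side,
and use nginx's allow/deny directives to admit only the known front-end
slices.

```nginx
# Back-end nginx -- accept port 4000/4001 connections only from front-ends.
server {
    listen  10.0.0.2:4000;    # bind to the private interface only (example)
    allow   10.0.0.10;        # front-end slice #1 (example)
    allow   10.0.0.11;        # front-end slice #2 (example)
    deny    all;              # everyone else is refused with 403
    location / {
        proxy_pass        http://127.0.0.1:3000;   # Rails app (example)
        proxy_set_header  X-Forwarded-Proto  http;
    }
}
```

The port-4001 server directive gets the same allow/deny treatment.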

If anyone else runs into a similar IP limitation or has some other need to
proxy http and https traffic through two instances of nginx, I hope this
helps you out.


On Thu, Oct 30, 2008 at 4:05 PM, Nick Pearson <nick.pearson at> wrote:

> Yes, and that's my plan exactly.  The only reason I need to listen on two
> separate ports for each site is that each site caches its content
> independently, which means that nginx has to be able to look for the cached
> content and serve that up without ever touching Rails.  So, for two sites
> to be able to each have a cached index.html file (as well as static image
> files), I have to have a site-specific path in each server directive.
> For instance, consider the following:
>     server {
>       listen  80;
>       server_name  *;  # needs to be site-specific
>       root  /var/www/site-a;
>       location / {
>         # serve static files
>         if (-f $document_root$uri.html) {
>           rewrite (.*) $1.html break;
>         }
>         # serve cached pages directly
>         if (-f $document_root/../../../current/tmp/cache/content/site-a/$uri.html) {
>           rewrite (.*) $document_root/../../../current/tmp/cache/content/site-a/$1.html break;
>         }
>       }
>     }
> I realize I could set the root to "/var/www" (and drop the "/site-a"), then
> use the $host or $http_host variable in my static/cache paths, but my CMS
> supports * vhosts, which can't be represented on the file
> system.  If I drop the * vhost support, then I could have
> paths like /var/www/ with symlinks pointing to it (like
> /var/www/ -> /var/www/).
> Even if I could figure out a good way to represent this on the file system,
> the CMS (and my nginx config for serving static and cached content) supports
> serving different files for a request to the same site based on the
> requested host.  This is useful (and is actually being used) for a company
> with multiple locations that wants a site tailored to each location.  For
> instance, when you request, you see the home page with the
> address and phone number for the company's primary location in the header.
> Requesting shows the exact same home page except that the
> header now has the address and phone number for the company's secondary
> location.  Similarly, a slightly different logo image can be served for
>, even though both images are at /images/logo.gif.  As such,
> simply symlinking /var/www/ to point to /var/www/ would
> break this functionality.
> I still think the original solution will work -- I'll just have to have two
> server directives on the back-end nginx for each site (one for http, and one
> for https).  This isn't a problem, as this is how it works now -- only now,
> the backend nginx uses server_name to choose the proper server directive
> whereas with the new solution it will use an internal IP and port number to
> do the same thing.
> Nick