nginx and self-healing S3 static hosting

Jean-Philippe skateinmars at skateinmars.net
Sat Jul 26 14:05:15 MSD 2008


Ian M. Evans wrote:
> I've recently been thinking about hosting some or all of our static 
> files (especially images) on Amazon's S3. The recent multi-hour outage 
> has many asking how to create redundancy or self-healing static serving. 
>  On the nginx side my question is a two-parter:
> 
> 1) Let's say you created a CNAME so that media.example.com would point 
> to your S3 bucket. What would the location rewrite be so that a request 
> for any static file would be redirected to media.example.com?

You can use a regular expression (a location block instead of an if may be 
cleaner):

if ($request_filename !~ /(javascripts|css|images|robots\.txt|.*\.html|.*\.xml)) {
    # rewrite here
}
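A location-based sketch of the redirect, assuming media.example.com is the CNAME pointing at the bucket (the extension list and redirect type are illustrative, adjust to your site):

```nginx
# Hypothetical: send requests for common static extensions to the S3 CNAME
location ~* \.(jpe?g|png|gif|css|js)$ {
    rewrite ^(.*)$ http://media.example.com$1 permanent;
}
```

A regex location like this is matched after prefix locations, so your dynamic handlers keep working untouched.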

> 
> 2) Is it possible to wrap this in an IF wrapper? My thinking is this: 
> People write a php (python, whatever) script that checks for a 1 byte 
> file in S3. Have it run in cron, say, every 5-10 minutes. If it can't 
> grab the file (S3's down) it writes a file locally. If nginx detects 
> that file it serves the static files locally. If, 5 minutes later, the 
> script deletes the file, nginx goes back to serving from media.example.com
> 
> I know this isn't proper nginx syntax, but something like this:
> 
> if (-f /usr.local/nginx/htdocs/s3down) {
> //serve static files locally
> } else {
> //serve static files from media.example.com
> }
> 

There is no else statement in nginx configuration. You may want to include a 
file that your script rewrites depending on the state of the service.
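A sketch of that include approach, with hypothetical paths: the cron script rewrites a one-line state file, and the rewrite target is read from a variable set in it.

```nginx
# Hypothetical /etc/nginx/s3_state.conf, rewritten by the cron script:
#   set $static_host media.example.com;     (when S3 is up)
#   set $static_host static.example.com;    (when S3 is down, serve locally)
include /etc/nginx/s3_state.conf;

location ~* \.(jpe?g|png|gif|css|js)$ {
    rewrite ^(.*)$ http://$static_host$1 permanent;
}
```

Note the script must also reload nginx after swapping the file (e.g. kill -HUP on the master process), since includes are read at configuration load time.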

However, this rewrite system does not seem clean. I think it would be 
better to simply use DNS and subdomains for your static files, and 
change the subdomain used when S3 is down.
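The cron check described in the original question could be sketched like this (a modern Python sketch; the canary URL and flag path are assumptions, not from the original):

```python
# Hypothetical health-check cron script: fetch a 1-byte canary object from S3
# and toggle a local flag file that the nginx configuration watches.
import os
import urllib.request

CANARY_URL = "http://media.example.com/ping.txt"   # assumed canary object
FLAG_FILE = "/usr/local/nginx/htdocs/s3down"       # assumed flag path


def s3_is_up(url=CANARY_URL, timeout=5):
    """Return True if the canary object can be fetched over HTTP."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.getcode() == 200
    except OSError:
        return False


def update_flag(up, flag=FLAG_FILE):
    """Create the flag file when S3 is down; remove it when S3 recovers."""
    if up:
        if os.path.exists(flag):
            os.remove(flag)
    else:
        open(flag, "w").close()


if __name__ == "__main__":
    update_flag(s3_is_up())
```

Run it from cron every 5-10 minutes as the question suggests; whichever mechanism consumes the flag (an -f test, a config include, or a DNS change script) then picks up the new state.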

> Thanks for any ideas.
> 

More information about the nginx mailing list