Caching + Error_code inefficiencies

Maxim Dounin mdounin at mdounin.ru
Tue Aug 25 20:46:14 MSD 2009


Hello!

On Tue, Aug 25, 2009 at 12:19:03PM -0400, icqheretic wrote:

> Hi, Max! I can see that working. I'll see if the overhead incurred by the extra layer is less than the cost of calling the fastcgi backend over and over.  My guess is yes. Some questions come to mind about the extra layer:
> 
> 1) Is it better / faster to use a socket for the proxy_pass to itself? Can nginx be set up to listen to a unix socket?

No, nginx doesn't support listening on unix sockets.

You may optimize things a bit by using separate ip/port with 
explicit bind, e.g.

    server {
        listen 127.0.0.1:8081 default bind;

        fastcgi_pass ...
        fastcgi_cache ...
    }

to avoid the getsockname() call and virtual host resolution, see

http://wiki.nginx.org/NginxHttpCoreModule#listen

for details.
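Putting the two layers together, a minimal sketch might look like the 
following (the cache path, zone name "CACHE", backend address and 
timings are illustrative assumptions; the "default bind" syntax 
matches the 0.7-era example above):

    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m;

    server {
        listen 80;

        location / {
            # front end proxies to the internal caching server below
            proxy_pass http://127.0.0.1:8081;
        }
    }

    server {
        # separate ip/port with explicit bind, as suggested above
        listen 127.0.0.1:8081 default bind;

        location / {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_cache CACHE;
            fastcgi_cache_key "$request_uri";
            fastcgi_cache_valid 200 10m;
        }
    }

This way the front-end server only proxies, and all caching logic 
lives in the internal server.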

> 2) Would I gain much benefit from using an upstream keepalive module for managing the connections to the same server? Would your ngx_http_upstream_keepalive add-on do the trick? 

Upstream keepalive currently works only with memcached out of the 
box.  There are experimental patches floating around to support 
keepalive connections for fastcgi, but not (yet) for http. 
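For reference, the memcached case that does work out of the box might 
look like this (upstream name, port and connection count are 
assumptions; requires the ngx_http_upstream_keepalive module to be 
compiled in):

    upstream memcached_backend {
        server 127.0.0.1:11211;
        # keep up to 32 idle connections open to the backend
        keepalive 32;
    }

    server {
        location / {
            set $memcached_key $uri;
            memcached_pass memcached_backend;
        }
    }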

Maxim Dounin

> Perhaps in the long term, a clone of error_page that caches along the way would make sense. Maybe an add-on that I'll take up when I'm less busy.
> 
> Thanks for the reply. Nginx is a great piece of work!
> 
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,5143,5168#msg-5168