Forwarding Requests to Multiple Upstream Servers?
andan andan
armdan20 at gmail.com
Wed Jul 9 14:33:33 MSD 2008
2008/7/9 Igor Clark <igor at pokelondon.com>:
> Hi there,
>
> Somewhat off-topic for nginx, but I'm really interested to hear more about
> this. I recently tried this out by sharing a PHP code base between 2
> Do you (or anyone) have any thoughts on whether what I was doing just isn't
> well suited to NFS sharing, whether it was possibly related to the caching
> stuff, or whether if I'd been able to spend more time tuning the NFS
> configuration I might have been able to get lower CPU usage? I do seem to
> hear of people doing what I wanted to do (obviously it's better to have the
> code in one place and not have to update in multiple places if possible) so
> I'm sure there must be ways to get it to work; quite possibly my NFS
> configuration was naïve ...
We are sharing static files over NFS with three nginx servers. The server
side is already heavily tuned by default, because the files are served from
a NAS (EMC Celerra). On the client side it is very important to mount the
partitions in async mode and to use an MTU of 9000 (also known as "jumbo
frames"); also remember the other common tricks: noatime and nodiratime,
increasing rsize and wsize, and so on.
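As a rough illustration only (the export, mount point and rsize/wsize values
below are assumptions, not our actual settings), an /etc/fstab entry
combining those client-side options might look like:

    # /etc/fstab -- NFS client mount for the shared static files
    # hypothetical server/export; async, noatime/nodiratime and large
    # rsize/wsize are the options mentioned above
    nas01:/export/static  /var/www/static  nfs  async,noatime,nodiratime,rsize=32768,wsize=32768  0  0

The MTU of 9000 is set on the network interface rather than in fstab, e.g.
"ip link set eth0 mtu 9000" (interface name assumed), and the switch and NAS
ports must support jumbo frames as well.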
The performance is very impressive: each nginx server moves over 30 MB/s
with a load average below one. Each server has 16 GB of RAM and the OS
(Linux) uses all of it for caching. The OS is also tuned, especially several
TCP/IP kernel values and the maximum number of open file descriptors, but
those tweaks relate to nginx and its clients rather than to NFS.
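For what it's worth, the kind of OS tuning I mean looks roughly like the
sketch below; the specific values are illustrative assumptions, not the ones
on our boxes:

    # /etc/sysctl.conf -- illustrative TCP/IP and file-descriptor tuning
    # (example values only; tune for your own workload, then run: sysctl -p)
    fs.file-max = 200000                        # system-wide open file limit
    net.core.somaxconn = 4096                   # listen backlog for busy nginx listeners
    net.ipv4.tcp_fin_timeout = 30               # recycle FIN_WAIT sockets faster
    net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for outgoing connections

    # per-process descriptor limit for the nginx user, e.g. in /etc/security/limits.conf:
    #   nginx  soft  nofile  65536
    #   nginx  hard  nofile  65536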
Of course, note that working with PHP is very different from working with
plain static files. The "dynamic side" is served by Apache and PHP, but the
code is also shared over NFS. In the future we will test nginx + FastCGI.
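When we get to that test, the basic setup would be something like the
snippet below. This is only a minimal sketch: the document root and the
FastCGI address are assumptions, and it presumes a PHP FastCGI process
(e.g. started with spawn-fcgi) already listening there.

    # minimal nginx -> PHP FastCGI sketch (hypothetical paths and address)
    location ~ \.php$ {
        root           /var/www/shared;     # the NFS-shared code base
        fastcgi_pass   127.0.0.1:9000;      # PHP FastCGI listener
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /var/www/shared$fastcgi_script_name;
        include        fastcgi_params;
    }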
Hope this helps.
BR.