NFS Document root (was: Re: Forwarding Requests to Multiple Upstream Servers?)

Thanos Chatziathanassiou tchatzi at arx.net
Wed Jul 9 14:42:18 MSD 2008


Igor Clark wrote:
> Hi there,
> 
> Somewhat off-topic for nginx, but I'm really interested to hear more 
> about this. I recently tried this out by sharing a PHP code base between 
> 2 application servers over NFS, with the NFS server on app server 1 and 
> the client on app server 2. I found that as soon as I added app server 2 
> (the NFS client) into the nginx upstream list, the load on the app server 
> immediately and
> dramatically increased. I assumed it was something to do with 
> insufficiently aggressive NFS caching and tried various tweaks on the 
> mount and export options including the sync settings, but it didn't 
> really make any difference. When I switched the app server 2 to use 
> local PHP files instead, the load dropped immediately.
> 
> Our application was built on our PHP framework which uses a lot of 
> include files, hence we were using APC opcode cache to minimise 
> interpreter time. I guessed these factors might have been big 
> contributors to the load as PHP would have been checking modification 
> times on a lot of files, and then APC was probably doing more checks.
> 
> Do you (or anyone) have any thoughts on whether what I was doing just 
> isn't well suited to NFS sharing, whether it was possibly related to the 
> caching stuff, or whether, if I'd been able to spend more time tuning the 
> NFS configuration, I might have been able to get lower CPU usage? I do 
> seem to hear of people doing what I wanted to do (obviously it's better 
> to have the code in one place and not have to update in multiple places 
> if possible) so I'm sure there must be ways to get it to work; quite 
> possibly my NFS configuration was naïve ...
I'm not sure; we use an NFS-mounted directory shared between 4 web servers.
Our exported directory sits on hardware RAID10 (4x 15000 RPM Ultra320 
disks on an LSI MegaRAID with 512MB of battery-backed cache) with the 
following export options:
/opt/shared/htdocs x.x.x.x/255.255.255.0(rw,no_subtree_check,sync)
The clients mount it with the options
rw,hard,intr,udp,noatime,rsize=8192,wsize=8192,async,auto
(over bonded gigabit Ethernet).
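
For reference, the matching client-side /etc/fstab entry looks roughly like 
this (the ``nfsserver'' hostname is only a placeholder, the options are the 
ones above):
# hostname and mount point are examples
nfsserver:/opt/shared/htdocs /opt/shared/htdocs nfs rw,hard,intr,udp,noatime,rsize=8192,wsize=8192,async,auto 0 0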

Each of the 4 web servers uses it as the document root for apache+mod_perl, 
as well as for nginx (which serves the static files).
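
The nginx side is nothing special; roughly this sort of thing (the port and 
the regex are only illustrative, not our exact config):

# static files come straight off the NFS-mounted document root,
# everything else is proxied to apache+mod_perl on the same box
server {
    listen 80;
    root   /opt/shared/htdocs;

    location ~* \.(gif|jpg|jpeg|png|css|js)$ {
        # served directly by nginx from the shared docroot
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # apache+mod_perl (port is an example)
    }
}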

Latency is tolerable, though certainly higher than a local filesystem; read 
performance is OK, but write performance is lousy, even if (or perhaps 
because) you get NFS locking working properly.

Everything pretty much works, but directories with heavy read/write 
activity suffer (i.e. Apache::Session directories).
Some benchmarks (somewhat old - with plain apache+mod_perl only):
three runs of ``ab -n10000 -c10''

Test 1: no sessions, local file
Requests/sec:		480.34	478.35	481.22

Test 2: no sessions, file served on nfs
Requests/sec:		475.26	472.72	472.41

Test 3: local sessions using tmpfs (AKA shm fs)
Requests/sec:		122.68	120.15	112.74

Test 4: sessions on nfs mounted device (ext3)
Requests/sec:		21.87	22.32	21.57

Test 5: sessions on nfs mounted ramdisk (ext2)
Requests/sec:		94.96	88.75	108.80

We use sessions on only a very small subset of the files served, so that 
loss is acceptable.
YMMV, of course.
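
In case it helps, the ``local sessions using tmpfs'' setup in Test 3 is just 
a tmpfs mount for the session directory, something like this (path and size 
are only placeholders):
# /etc/fstab entry (or mount by hand with mount -t tmpfs)
tmpfs /var/sessions tmpfs size=128m,mode=0770 0 0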

> 
> Thanks,
> Igor
> 
> On 9 Jul 2008, at 10:08, Tit Petric wrote:
> 
>> Does nginx support forwarding specific request types (POST requests 
>> only, for example) to a specific backend?
>>
>> Handling file propagation from one "master" backend to the other nodes 
>> would be easier than having it come to a random backend.
>>
>> For the original poster, I would recommend using an NFS server to share 
>> files between the various backends. Keep in mind that NFS will be slower 
>> than a local file system, so I would advise keeping a file & directory 
>> index in the database to avoid some basic problems. As far as reading and 
>> writing files goes, I've had very few problems over the years with a 
>> setup that uses NFS extensively.
>>
>> Best regards,
>> Tit
>>
>> Igor Sysoev wrote:
>>> On Wed, Jul 09, 2008 at 09:15:25AM +0200, Sven C. Koehler wrote:
>>>
>>>
>>>> I am wondering whether it's possible from within an nginx module to
>>>> forward a request to multiple upstream servers... AFAIU nginx's
>>>> existing infrastructure normally supports sending requests to only one
>>>> upstream server. In my example I'd like to update data that resides on
>>>> multiple servers if it's a POST request, and I want to send back only
>>>> the response of the first upstream server.
>>>>
>>>
>>> No, nginx does not support it.