Load balancing PHP via Nginx...

Ilan Berkner iberkner at gmail.com
Thu Sep 3 07:40:53 MSD 2009


The problem we're experiencing is that our single web server is getting
"flooded" (not in a bad way) with a lot of incoming connections -- our site is
growing (yay).  So I'm trying to figure out the best way to accommodate the
growth.  In our case, nginx itself is humming along just fine, but PHP is
choking on requests, both because of the sheer quantity (it can't process them
fast enough, so there's a growing queue) and because of delays in the database
(I think).  So I'm upgrading the database and looking to add another web
server box, both to offload some of the load and to serve as a backup
box, etc.
What I don't know how to do "well" yet is how to keep the code base the
same across the two boxes (will using a shared directory work?).
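(For what it's worth, a shared directory such as an NFS export can work, but a simple push-based sync is another option; a minimal sketch, where the second box's hostname and the document root are made-up assumptions:

```shell
# Sketch only: "web2" and /var/www/site are assumed names, not from the thread.
# Push the code base from the primary box to the second one, deleting
# files on web2 that no longer exist locally so the trees stay identical.
rsync -az --delete /var/www/site/ web2:/var/www/site/
```

Run it from a deploy script or cron job after each release, so both boxes always serve the same tree.)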

Using nginx to do simple load balancing is, I think, the right way to go for
us for now.  Obviously the main issue I'll have to deal with is session
management, which can be handled either via a shared memcached configuration
(can you have 2 memcached servers running that share the same space? -- or on
a single server) or via the database.
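(On the memcached question: two memcached instances don't replicate each other; instead the PHP client hashes session IDs across the server list, so each session lives on exactly one of them and both web boxes see the same data. A minimal php.ini sketch, assuming the pecl memcache extension and made-up host names:

```ini
; Sketch only: host names are assumptions, not from the thread.
; The client distributes sessions across both instances by hashing the key.
session.save_handler = memcache
session.save_path = "tcp://web1:11211,tcp://web2:11211"
```

The caveat is that if one memcached instance dies, the sessions hashed to it are lost.)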

Any suggestions that anyone has to help things along would be greatly
appreciated.

I just love how easy nginx is to configure and how fast it is; really, that
hasn't been the issue.  I now have to get PHP to process things more
quickly :-).
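(Pulling the pieces from the thread below together: the whole setup is just an upstream block plus a proxy_pass -- or fastcgi_pass, if the backends are FastCGI listeners -- in the right location. A minimal sketch, where the backend addresses and document root are made-up assumptions:

```nginx
# Sketch only: addresses and paths are assumptions, not from the thread.
upstream phpproviders {
    server 10.0.0.1:9000;   # PHP FastCGI on the first box
    server 10.0.0.2:9000;   # PHP FastCGI on the second box
}

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/site$fastcgi_script_name;
        fastcgi_pass phpproviders;   # round-robin across the upstream by default
    }
}
```

With no other directives, nginx round-robins requests across the listed servers, which is exactly the simple setup discussed below.)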

On Wed, Sep 2, 2009 at 11:20 PM, Jeffrey 'jf' Lim <jfs.world at gmail.com> wrote:

> On Thu, Sep 3, 2009 at 11:10 AM, Ilan Berkner <iberkner at gmail.com> wrote:
>
>> Maybe I'm not explaining myself correctly, maybe your suggestions are the
>> right way to go, but I see a lot of nginx examples such as this:
>> upstream phpproviders {
>>     server 127.0.0.1:3000;
>>     server 127.0.0.1:3001;
>>     server 127.0.0.1:3002;
>> }
>>
>>
> :) yeah, that works fine. I just saw the phrase "additional (fastcgi)
> requests" - and immediately thought you meant to refer to a priority system...
> (i.e. where "all requests go to this box. Until it's loaded. Then send the
> additional requests to that other box!")
>
>
>
>> In this example, different port numbers are used, but you can use
>> different IP addresses instead.
>>
>> Inside the location / block you would specify:
>>
>> proxy_pass http://phpproviders;
>>
>> nginx in the simplest (default) mode would round-robin the requests.
>>
>> Is this not a good methodology?
>>
>>
> It depends, really, on what you want/need. If you want a simple setup, this
> could do. And if nothing requires you to stick each request to any
> particular server (since you have session management in memcache; assuming
> you have enough memory, and don't have to forcibly retire sessions ahead of
> their intended expiry time!!!), then this could very well work for you.
>
> -Jeff
>
>

