php-fastcgi and nginx problem

Ian Hobson ian at ianhobson.co.uk
Thu Dec 3 16:59:08 MSK 2009


Rob Schultz wrote:
> On Dec 1, 2009, at 7:00 AM, Ian Hobson wrote:
>
>   
>> My thinking.
>>
>> 1) Alter the server code so that it looks once, and replies. It will reply with many more "null" returns, but it will handle each request in a fraction of a second. The queue will disappear.
>>
>> 2) Alter the client code, so that it delays longer - say 3 seconds - after getting a heartbeat update before requesting the next.
>>
>> 3) Have a send shorten this delay - perhaps the reply will trigger the next heartbeat request - so that your posts come back quickly.
>>
>> 4) If a heartbeat is in progress when a send is requested, delay the send until the heartbeat's reply is received.
>>
>> Question - will this work? Why? Why not?
>>
>> Question - is point 4 necessary?
>>
>> Input and ideas gratefully received.
>>
>> Regards
>>
>> Ian
>>     
>
> I am not sure of the details, but I think you need to handle all the control and timing on the client side, and let the PHP processes do what they are intended to do: find new chat messages and return them immediately. Having the 4-second wait inside the PHP process locks that process up for the entire 4 seconds. So if you only have 3 processes, you can only serve 3 requests at any given time, each locked up for 4 seconds. I think it would be best to just return whatever is there, and if nothing is there, have the client expect that and handle it on the client side.
>
>   
Hi Rob,

Thanks for your input. I have actually made changes for points 1 to 3 
and we managed to test it with 7 users. It responded very well.
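For anyone following along, the client-side half of point 1 might look roughly like this. This is only a sketch, not our actual code; `handleHeartbeatReply` and the JSON message format are illustrative stand-ins for whatever the real client uses:

```javascript
// Sketch: with the server now replying at once (point 1), "no new
// messages" becomes the common case, so the client must treat a
// null/empty reply as normal rather than as an error.
function handleHeartbeatReply(body) {
  if (body == null || body.length === 0) {
    return [];              // nothing new; just poll again after the delay
  }
  return JSON.parse(body);  // otherwise, an array of new chat messages
}
```

The point is simply that the "null" returns the server now produces in quantity cost the client almost nothing to absorb.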

I don't think point 4 is necessary. If the browser loads a page with 
multiple images, the images come down in parallel, so parallel calls 
must work. The only problem may be parallel requests to PHP, but I can 
think of no reason that should fail.
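As a rough sketch of points 2 and 3 (names like `HeartbeatScheduler` are mine for illustration, not from the real system): the scheduler normally waits ~3 seconds between polls, but a send pulls the next poll forward to now, which is also why point 4's serialisation shouldn't be needed:

```javascript
// Sketch of the client-side polling cadence (points 2 and 3).
const BASE_DELAY_MS = 3000; // point 2: ~3 s between heartbeat requests

class HeartbeatScheduler {
  constructor(baseDelayMs = BASE_DELAY_MS) {
    this.baseDelayMs = baseDelayMs;
    this.sendPending = false;
  }

  // Called when the user posts a message; point 3: the next heartbeat
  // should fire immediately so the reply comes back quickly.
  noteSend() {
    this.sendPending = true;
  }

  // Delay (ms) before the next heartbeat request.
  nextDelay() {
    if (this.sendPending) {
      this.sendPending = false;
      return 0;               // poll right away after a send
    }
    return this.baseDelayMs;  // otherwise the relaxed cadence
  }
}
```

In practice you would feed `nextDelay()` into `setTimeout` around the heartbeat request.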

Anyway, my fingers are crossed as I type (what an image!). There is a 
sales demo of the system going on this afternoon. We shall see what 
happens!

Regards

Ian





More information about the nginx mailing list