E3-1240 with 32GB Ram - Unable to set the optimal value for the server

steve at greengecko.co.nz
Wed Oct 18 07:35:02 UTC 2017


You can't say that. Which FPM process manager are you using, dynamic or ondemand? It makes a huge difference.

If you have a memory leak, ensure your workers are killed on a regular basis. 
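Something like this (a rough sketch, assuming procps ps and a pool whose workers show up as "php-fpm: pool www") will tell you whether they are actually leaking before you bother:

  # workers sorted youngest to oldest, with elapsed seconds and resident memory (KB);
  # if the long-lived ones are consistently fatter than the young ones, recycle them
  ps -o pid,etimes,rss,args -C php-fpm | grep 'pool www' | sort -n -k2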

Don't want you near my servers for sure!

Steve

October 18, 2017 1:17 PM, "Peter Booth"  wrote:
Agree, 
I work as a performance architect, specializing in improving the performance 
of trading applications and high-traffic web sites. When I first began tuning 
Apache (and then nginx) I realized that the internet was full of “helpful suggestions” 
about why you should set configuration X to some particular number.  
What took me more than ten years to learn was that 95% of these tips are 
useless, because they are ideas that made sense in one setting 
at one time and then got copied and passed down year after year without people 
understanding them. So be skeptical about anything that anyone suggests, including me. 
Regarding one of these settings: max_requests. 
This is a very old setting inherited from Apache that originally allowed you to 
recycle Apache worker processes after they had handled N requests. 
Why would you do this? 
If your worker included a module that leaked memory, then over time worker processes 
would grow in size. This setting meant that each worker would eventually get killed, 
at different times, so a fresh process could be started to take its place, avoiding a 
catastrophe where all of your workers consume all the memory on the host. 
In 2017 you can absolutely set it to zero, provided you keep track of the size of your processes. 
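Keeping track can be as simple as logging per-worker RSS once a minute and watching for a trend. A minimal sketch, assuming procps ps; the log path is only an example:

  # append one summary line per run (e.g. from cron): worker count,
  # total RSS and the fattest worker, all in KB
  ps -o rss= -C php-fpm | awk -v ts="$(date '+%F %T')" \
      '{ sum += $1; if ($1 > max) max = $1 }
       END { printf "%s workers=%d total_kb=%d max_kb=%d\n", ts, NR, sum, max }' \
      >> /var/log/php-fpm-rss.log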
In fact we can confirm this idea just from your top output. 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
3135 nginx 20 0 393108 12112 3832 R 7.3 0.0 10:30.35 php-fpm: pool www 
6839 nginx 20 0 392804 11832 3828 R 7.3 0.0 10:37.57 php-fpm: pool www  
See how process 3135 has accumulated 10 minutes 30 seconds of CPU time whilst process 6839 has accumulated 10 minutes 37 seconds. 
But the longer-running process has a smaller resident set size (RSS = memory in use). 
If you look at all of the lines it's easier to see that there is no trend of memory increasing over time: 
NewiMac:Records peter$ cat phpOutput.txt | grep php-fpm | awk '{print $11,$6}' | head -15 | awk '{print $2}' | average -M
11852
NewiMac:Records peter$ cat phpOutput.txt | grep php-fpm | awk '{print $11,$6}' | tail -15 | awk '{print $2}' | average -M
11876
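(average here looks like a local helper rather than a standard tool; if you don't have it, plain awk gives a comparable mean of the RES column for the first and last 15 samples:

  grep php-fpm phpOutput.txt | head -15 | awk '{ s += $6 } END { print s/NR }'
  grep php-fpm phpOutput.txt | tail -15 | awk '{ s += $6 } END { print s/NR }'
)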
On Oct 17, 2017, at 12:39 AM, agriz  wrote: 

Sir, reading some info, I guess I can't pick any number blindly without testing.

I think I can first try these values:

max_children = 100
start_servers = 34
min/max spare servers = 20 & 50

We have around 20GB of free RAM all the time. Why can't we use it for php-fpm?
Are those values safe to try?
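Here is my rough arithmetic, just as a sanity check (the RAM budget is an assumption, and the per-worker size comes from the top output above):

  # bound max_children by the RAM given to php-fpm divided by the
  # worst-case worker size, rather than by total free RAM
  ram_for_php_mb=4096                                        # assumption, not measured
  worker_rss_kb=$(ps -o rss= -C php-fpm | sort -n | tail -1) # fattest worker, in KB
  echo "max_children upper bound: $(( ram_for_php_mb * 1024 / worker_rss_kb ))"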

But max_requests: why should we limit the number to 500 or 2500? Why can't
we have it unlimited? What is wrong with setting it to zero?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276907#msg-276907

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

