@jerome

> No, I mean memory used by the kernel to cache local files in order not
> to access files on the hard drive. See the output of free, the column "cached".
I already have that cache, it seems. The following is the output of free on my server:

root@rtblogs:~# free
             total       used       free     shared    buffers     cached
Mem:       1451740     483524     968216          0      61192     283196
-/+ buffers/cache:      139136    1312604
Swap:       262136          0     262136

I must thank Linode for this, but the cache always reduces my free memory by a huge amount.
I guess such situations force us to spend more time deciding what to cache and how much to cache.
We have so many levels at which we can cache, and the good thing about them is that they all happily live together.
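For anyone else squinting at that output: the "-/+ buffers/cache" row already subtracts the reclaimable buffers and page cache, so that is the figure applications can actually claim. A quick sketch, not anything from this thread; the awk field position is an assumption based on the classic procps layout shown above:

# Print the cache-adjusted free memory (free + buffers + cached), in kB.
# Assumes the old-style free output with a "-/+ buffers/cache:" row.
free | awk '/buffers\/cache/ {print "reclaim-adjusted free (kB):", $4}'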
@jabberuser

> I recommend having a few more workers than the number of cores. On one of my
> overloaded dual-core servers I changed 2 workers to 6; this reduced
> iowait a lot and fixed the latency problem when using the site that is on this
> server. Play with it a bit to find the number of workers that is best for you.

Thanks for the suggestions.
I may go your way if fastcgi_cache in nginx improves my performance.
In that case I will reduce the value of PHP_FCGI_CHILDREN to offset it.
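If I do go that route, something like the following is roughly what I have in mind. This is only a sketch, and the address, port, user/group, binary path, and child count are placeholders, not values from this thread. PHP_FCGI_CHILDREN is an environment variable read by php-cgi when it is spawned (here via spawn-fcgi), so lowering it frees memory that the extra nginx workers and the fastcgi_cache zone would otherwise compete for.

# Spawn php-cgi with fewer children to offset the extra nginx workers.
# Adjust address, port, user/group, and the php-cgi path for your own setup.
PHP_FCGI_CHILDREN=4 PHP_FCGI_MAX_REQUESTS=500 \
  spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -f /usr/bin/php-cgi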
-Rahul