On Jun 3, 2013, at 10:13 AM, Belly wrote:

>>> What is the best setting for my situation?
>>
>> I would recommend using "fastcgi_max_temp_file_size 0;" if you
>> want to disable disk buffering (see [1]), and configuring some
>> reasonable number of reasonably sized fastcgi_buffers. I would
>> recommend starting tuning with something like 32 x 64k buffers.
>>
>> [1] http://nginx.org/r/fastcgi_max_temp_file_size
>
> I read about fastcgi_max_temp_file_size, but I'm a bit afraid of it.
> "fastcgi_max_temp_file_size 0;" states that data will be transferred
> synchronously. What does it mean exactly? Is it faster/better than disk
> buffering? Nginx is built in an asynchronous way. What happens if a worker
> does a synchronous job inside an asynchronous one? Will it block the
> event loop?
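For reference, the quoted suggestion boils down to roughly the config below. The location and socket path are just placeholders for whatever your PHP block looks like, and the numbers are the suggested starting point rather than anything I've benchmarked:

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   unix:/var/run/php-fpm.sock;   # placeholder backend

        fastcgi_max_temp_file_size 0;        # never spill the response to disk
        fastcgi_buffer_size        64k;      # first chunk (headers + start of body)
        fastcgi_buffers            32 64k;   # ~2 MB of in-memory buffering per request
    }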
<span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="orphans: 2; text-indent: 0px; widows: 2; -webkit-text-decorations-in-effect: none; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="orphans: 2; text-indent: 0px; widows: 2; -webkit-text-decorations-in-effect: none; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="orphans: 2; text-indent: 0px; widows: 2; -webkit-text-decorations-in-effect: none; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div><div>It's always been my understanding that in this context, "synchronously" means that nginx is proxying the data from php/fcgi to the client in real time. </div></div></div></span></div></span></div></span></div></span></div></span></span></div><div><br></div><div>This sounds like a typical problem of application load balancing. <br><div><br></div><div>The disk buffering / temp files allows for nginx to immediately "slurp" the entire response from the backend process, and then serves the files to the downstream client. This has the advantage of allowing you to immediately re-use the fcgi process for dynamic content – slow or hangup connections downstream won't tie up your pool of fcgi/apache processes. </div><div><br></div><div>restated with blocking - the temp files allow for blocking within nginx instead of php ( nginx can handle 10k connections, php is limited to the number of processes ). 
Restated in terms of blocking: the temp files let the blocking happen inside nginx instead of inside php (nginx can handle 10k connections, php is limited to its number of processes). By removing the temp files, the blocking will happen inside php instead.

My advice would be to use URL partitioning to segment this type of behavior. I would only allow specific URLs to skip the temp files, and I would proxy them back to a different pool of fcgi (or apache) servers running with a tweaked config (see the sketch at the end of this mail). That way the blocking activity from the routes serving large files doesn't affect the "global" pool of php processes.

I would also look into periodic reloads of nginx, to see if that frees things up. If so, that might be a simpler / more elegant solution.

I encountered problems like this about 10 years ago with mod_perl under apache. The aggressive code optimizations and memory/process management were tailored to making the application work very well, but they did not play nice with the rest of the box. The fix was to keep max_requests low and move to a "vanilla + mod_perl apache" system. Years later, nginx became the vanilla apache.

Similar issues happen in the python and ruby communities as well: more expensive or intensive routes are often sectioned off and dispatched to a different pool of servers, so their workload doesn't affect the rest of the requests.
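Here's roughly what I mean by partitioning. Everything below is a sketch: the upstream name, socket paths, and URL prefix are made up, and the buffer numbers are just the starting point quoted above.

    # a separate, smaller PHP-FPM pool just for the heavy download routes
    upstream php_heavy {
        server unix:/var/run/php-fpm-heavy.sock;    # hypothetical second pool
    }

    # heavy routes: no temp files, their own backend
    location /downloads/ {
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass   php_heavy;

        fastcgi_max_temp_file_size 0;
        fastcgi_buffers            32 64k;
    }

    # everything else keeps the default buffering and the main pool
    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass   unix:/var/run/php-fpm.sock;
    }

That way the slow, blocking downloads only consume workers from the php_heavy pool, and regular requests keep the normal "slurp and release" behavior.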