Best possible configuration for file upload

Yichun Zhang (agentzh) agentzh at gmail.com
Sat Mar 1 04:09:34 UTC 2014


Hello!

On Wed, Feb 26, 2014 at 2:41 AM, snarapureddy wrote:
> We are using nginx for file uploads instead of directing them to the backend
> servers. We used the Lua OpenResty module to get the data in chunks and write it to
> local disk. File sizes can vary from a few KBs to 10MB.
>

Ensure you're interleaving disk writes and network reads. The
recommended way is to read a chunk from the network, write it to the
file system immediately, and repeat until the whole body is consumed.
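
Just to illustrate the shape of that loop, here is a rough sketch for a
content_by_lua handler. It assumes a plain (non-multipart) upload with a
Content-Length header and uses an illustrative output path, so treat it
as a pattern rather than drop-in code:

    -- rough sketch: stream the raw request body to disk chunk by chunk
    -- (assumes the body has not been read yet and Content-Length is set)
    local CHUNK_SIZE = 4096

    local sock, err = ngx.req.socket()
    if not sock then
        ngx.log(ngx.ERR, "failed to get the request socket: ", err)
        return ngx.exit(500)
    end

    local remaining = tonumber(ngx.var.http_content_length) or 0

    local file, ferr = io.open("/tmp/upload.bin", "wb")  -- illustrative path
    if not file then
        ngx.log(ngx.ERR, "failed to open the output file: ", ferr)
        return ngx.exit(500)
    end

    while remaining > 0 do
        -- read one chunk from the network...
        local chunk, rerr = sock:receive(math.min(remaining, CHUNK_SIZE))
        if not chunk then
            ngx.log(ngx.ERR, "failed to read the request body: ", rerr)
            break
        end
        -- ...and write it to disk right away, before reading the next one.
        -- note that file:write() blocks the worker, which is relevant to
        -- the off-CPU analysis mentioned below.
        file:write(chunk)
        remaining = remaining - #chunk
    end

    file:close()
    ngx.say("upload complete")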

> We are tuning worker processes, connections, accept_mutex off, etc., but if we
> concurrently upload files, some of the connections become very slow.
>
> Chunk size is 4096.

You can try somewhat larger chunk sizes, like 16KB or 32KB.
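
In a loop like the sketch above, that is just a one-line change (the
exact value is something to benchmark on your own workload):

    local CHUNK_SIZE = 32 * 1024  -- 32KB reads instead of 4KB

Larger reads mean fewer read and write syscalls per upload, at the cost
of a little more memory per connection.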

> CPU utilization is very minimal. We are running 10
> worker processes, but in most cases the same process is handling multiple
> connections and they are becoming slow.

This sounds like a good candidate for analysis with the off-CPU flame graph tool:

    https://github.com/agentzh/nginx-systemtap-toolkit#sample-bt-off-cpu

This can help identify exactly where the bottleneck is in your nginx workers.
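
The usage is roughly as follows (options from memory of the toolkit's
README, so please double-check there; the worker PID 10901 is just an
example, and the last two commands are from Brendan Gregg's FlameGraph
scripts):

    # sample off-CPU backtraces of one nginx worker for 5 seconds (user space)
    ./sample-bt-off-cpu -t 5 -p 10901 -u > a.bt

    # render the samples as a flame graph
    stackcollapse-stap.pl a.bt > a.cbt
    flamegraph.pl a.cbt > a.svg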

Measuring the epoll loop's blocking latency distribution could also be
insightful:

    https://github.com/agentzh/stapxx#epoll-loop-blocking-distr
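
Roughly like this (again, see the stapxx README for the exact
invocation; the PID is only an example):

    # sample the blocking latency distribution of the epoll loop in worker 10901
    ./samples/epoll-loop-blocking-distr.sxx -x 10901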

Try running more nginx worker processes if file IO syscalls are
blocking your workers too much.
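
For example, in the main context of nginx.conf (the right number is
whatever keeps enough workers runnable while others sit in blocking
file I/O; 32 here is only illustrative):

    # more workers than CPU cores, so some can keep serving requests
    # while others are blocked in file I/O syscalls
    worker_processes  32;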

BTW, you may get more and faster responses if you post such questions
on the openresty-en mailing list:
https://groups.google.com/group/openresty-en

Best regards,
-agentzh


