Strategies for large-file upload?

Moshe Katz moshe at
Tue Feb 8 16:08:54 UTC 2022

Our "large" files are usually closer to 1 GB than 10-100 GB, but the
general idea should be the same.

We tried using (docs at, but we had a hard
time getting it to work properly, and it might not be compatible with newer
nginx versions. (I don't remember why we didn't try the project it
was forked from.) We considered a similar project, but then we decided that we
did not want to use an nginx extension because we were worried about future
maintainability (as you can see, none of those projects have been updated
for a while.)

We are currently using the tus resumable upload protocol - they have
client and server implementations available in most common programming
languages.
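The core idea behind tus is simple: the client asks the server for the
current upload offset (HEAD in the real protocol) and then PATCHes bytes
starting from that offset, so an interrupted transfer resumes exactly where
it stopped. Here is a minimal, self-contained sketch of that offset
bookkeeping - an in-memory stand-in for a tus server, not any real tus
library, and all names here are hypothetical:

```python
# Simulation of the tus resumable-upload flow. TusServerSim is a
# hypothetical in-memory stand-in; a real deployment would use a tus
# server implementation and HTTP HEAD/PATCH requests instead.

class TusServerSim:
    """Stands in for a tus server: tracks per-upload data and offsets."""

    def __init__(self):
        self.uploads = {}  # upload_id -> {"length": int, "data": bytearray}

    def create(self, upload_id, length):
        # POST in the real protocol: announce the total length up front.
        self.uploads[upload_id] = {"length": length, "data": bytearray()}

    def offset(self, upload_id):
        # HEAD in the real protocol: ask the server where to resume from.
        return len(self.uploads[upload_id]["data"])

    def patch(self, upload_id, offset, chunk):
        # PATCH in the real protocol: append bytes at the current offset.
        upload = self.uploads[upload_id]
        assert offset == len(upload["data"]), "offset mismatch"
        upload["data"].extend(chunk)
        return len(upload["data"])


def upload(server, upload_id, payload, chunk_size, fail_after=None):
    """Send payload in chunks; optionally 'crash' after N chunks."""
    sent = 0
    offset = server.offset(upload_id)  # resume from wherever we left off
    while offset < len(payload):
        chunk = payload[offset:offset + chunk_size]
        offset = server.patch(upload_id, offset, chunk)
        sent += 1
        if fail_after is not None and sent >= fail_after:
            return offset  # simulated network failure mid-upload
    return offset
```

Interrupting after a couple of chunks and calling upload() again picks up
at the stored offset rather than restarting from zero - that is exactly
what tus's HEAD/PATCH round-trip buys you over plain HTTP uploads.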

You might also consider using something like MinIO, Ceph, or anything else
that provides an Amazon-S3-compatible API (which includes multi-part
upload), but those need an S3-compatible upload tool so it's a lot more
work to allow uploads from the browser. (You could look at or for
more about that, but I haven't tried them.)
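For the S3-compatible route, the multi-part mechanics impose two hard
constraints worth knowing up front: every part except the last must be at
least 5 MiB, and an upload can have at most 10,000 parts. A small sketch
of planning the part boundaries (pure arithmetic, no AWS calls; the
function name is mine, not from any SDK):

```python
# Sketch of planning S3 multipart-upload parts. S3 requires every part
# except the last to be at least 5 MiB, and caps uploads at 10,000 parts.

MIN_PART = 5 * 1024 * 1024      # 5 MiB minimum part size
MAX_PARTS = 10_000              # per-upload part limit

def plan_parts(total_size, part_size=MIN_PART):
    """Return a list of (part_number, byte_start, length) tuples."""
    if part_size < MIN_PART:
        raise ValueError("part_size is below S3's 5 MiB minimum")
    parts = []
    start = 0
    number = 1
    while start < total_size:
        length = min(part_size, total_size - start)
        parts.append((number, start, length))
        start += length
        number += 1
    if len(parts) > MAX_PARTS:
        raise ValueError("too many parts; increase part_size")
    return parts
```

Each planned part would then map onto the SDK's multipart calls (in boto3,
for example, create_multipart_upload, upload_part, and
complete_multipart_upload); the planning step is the same either way, and
for a 100 GB file it also tells you the smallest workable part size.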

On Tue, Feb 8, 2022 at 9:24 AM Alva Couch <ACouch at> wrote:

> I’m new to this list but have been running an NGINX community stack
> including NGINX community/Gunicorn/Django for several years in production.
> My site is a science data repository. We have a need to accept very large
> files as uploads: 10 gb and above, with a ceiling of 100 gb or so.
> What strategies have people found to be successful for uploading very
> large files? Please include both “community” and “plus” solutions.
> Thanks
> Alva L. Couch
> Senior Architect of Data Services, CUAHSI
> _______________________________________________
> nginx mailing list -- nginx at
> To unsubscribe send an email to nginx-leave at