running nginx upload progress module in a cluster [on multiple nodes]?

Mikhail Mazursky ash2kk at gmail.com
Sat Aug 14 07:18:21 MSD 2010


2010/8/14 Scott Trudeau <scott.trudeau at gmail.com>:
>
> Hi,
> I'm experimenting with the nginx upload progress module for a system I'm
> building.  At some point, I expect I'll need a cluster of "upload" nodes
> (currently planning to use the upload and upload progress modules) in order
> to handle all incoming uploads.  The problem I'm expecting to have w/ the
> upload progress module is, if I'm running a cluster of nodes w/ a shared IP
> (and/or behind a load balancer), is how to ensure "progress" requests are
> routed to the same host as the upload POST, (assuming POSTs always go to the
> same domain, e.g., uploads.example.com).
> I'm currently planning to run this system on EC2 and am considering the
> following options, but thought I'd ask here to see if anyone has solved this
> problem; has other, better ideas; or might correct my assumptions -- since I
> don't particularly like any of these options.
> Option 1: use Elastic Load Balancer with session stickiness
> Problem: ELB stickiness is time-based, so a session can "unstick" in
> the middle of an upload; a higher expiration setting would mitigate (but
> not prevent) this, and it works against the load balancer's ability to
> balance traffic
> Option 2: Distribute requests amongst nodes using a hash on
> the X-Progress-ID value and subdomains (e.g., hash ids to a value a-z, map
> [a-z].uploads.example.com evenly across N nodes; always post to the
> subdomain as a hash of X-Progress-ID)
> Problem: Spreads POSTs evenly, but complicates client logic and can't
> easily balance requests intelligently based on current load
> Option 3: Make the app pick a host (client must ask where to POST before
> POSTing the upload)
> Problem: pushes load-balancing work into the server app & makes the client
> do more work
> Option 4: RYO load balancer (build (or configure) a custom load balancer
> that can run on EC2 instances and route requests intelligently)
> Problem: Seems like overkill
> Anyway -- thanks for any insights/thoughts.
> Scott
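
The client-side mapping in Option 2 can be sketched as a small hash function; the bucket letters and the `uploads.example.com` domain come from the post, while the use of MD5 and the exact bucket arithmetic are illustrative assumptions (any stable hash would do). DNS is assumed to map the a-z subdomains evenly across the N upload nodes:

```python
import hashlib

def upload_subdomain(progress_id: str) -> str:
    """Map an X-Progress-ID onto a stable a-z bucket, as in Option 2.

    The same progress ID always yields the same subdomain, so the upload
    POST and its later progress requests reach the same node, assuming
    [a-z].uploads.example.com are spread evenly across the upload nodes.
    """
    digest = hashlib.md5(progress_id.encode("utf-8")).digest()
    bucket = digest[0] % 26  # deterministic bucket 0..25
    return f"{chr(ord('a') + bucket)}.uploads.example.com"
```

Because the mapping is deterministic, the client never needs to remember which node it posted to; it just re-hashes the ID.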

Some time ago we faced the same problem, and I decided to use the
"Option 2" approach. We have 2 HAProxy frontends and 3 nginx backends
in our upload cluster. So far so good. What complicated client logic
are you referring to?
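
If the progress requests carry X-Progress-ID as a query parameter (the upload progress module's usual convention), an HAProxy frontend can do the hashing itself with `balance url_param`, which routes requests with the same parameter value to the same backend. A minimal sketch, assuming a setup like the one described (the backend name and server addresses are made up):

```
backend upload_nodes
    # Hash on the X-Progress-ID query parameter so an upload POST and
    # its later progress GETs land on the same nginx node.
    balance url_param X-Progress-ID
    server nginx1 10.0.0.1:80 check
    server nginx2 10.0.0.2:80 check
    server nginx3 10.0.0.3:80 check
```

With this in place the client needs no subdomain logic at all; it just includes the same X-Progress-ID on the POST and the progress polls.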
