[PATCH 00 of 15] Serve all requests from single tempfile
Roman Arutyunyan
arut at nginx.com
Tue Feb 8 11:16:35 UTC 2022
Hi,
On Mon, Feb 07, 2022 at 01:27:15PM +0100, Jiří Setnička via nginx-devel wrote:
> Hello,
>
> > Thanks for sharing your work. Indeed, nginx currently lacks a good solution
> > for serving a file that's being downloaded from upstream. We tried to address
> > this issue a few years ago. Our solution was similar to yours, but instead
> > of sharing the temp file between workers, we moved the temp file to its
> > destination right after writing the header. A new bit was added to the header
> > signalling that this file is being updated.
> >
> > The biggest issue with this kind of solution is how we wait for updates to
> > a file. We believe that polling a file at a given time interval is not a
> > perfect approach, even though nginx does that for cache locks.
>
> Polling is done only on the ngx_http_file_cache_tf_node_t struct in shared
> memory (see patch 09 of 15, where c->length is updated from
> c->tf_node->length and then this length is compared with
> c->body_sent_bytes), not on the file itself. The length in the tf_node is
> updated with each write from the primary request (see patch 05 of 15).
>
> It is better than polling individual files, but I agree it is still polling,
> which isn't great.
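
For readers following the patches, the check described above boils down to
comparing the writer's progress with what has already been sent. A minimal
sketch (the tf_node and body_sent_bytes fields come from the patch series as
described above; the helper itself is hypothetical, not the actual patch code):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* Hypothetical helper, not the actual patch code: read the length the
 * primary request has written so far (updated under the cache lock in
 * patch 05) and compare it with what this request has already sent.
 * c->tf_node and c->body_sent_bytes exist only with the patches applied. */

static ngx_int_t
ngx_http_tempfile_more_data(ngx_http_cache_t *c)
{
    off_t  length;

    ngx_shmtx_lock(&c->file_cache->shpool->mutex);
    length = c->tf_node->length;          /* written by the primary request */
    ngx_shmtx_unlock(&c->file_cache->shpool->mutex);

    c->length = length;

    return (c->length > c->body_sent_bytes) ? NGX_OK : NGX_AGAIN;
}
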
>
> > [...]
> > Another approach would be to create an
> > inter-worker messaging system for signalling file updates.
>
> We were thinking about creating something like that, but we buried the idea
> because it seems quite complex to do right and reliably. Polling the tf_node
> in shared memory (with a very low proxy_cache_tempfile_loop) works
> sufficiently well.
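
To illustrate the loop being referred to: with the timer-based approach, a
secondary request re-arms an event at the proxy_cache_tempfile_loop interval
and re-checks the shared node when it fires. A rough sketch only, assuming a
hypothetical ngx_http_tempfile_send_more() helper and a hard-coded interval:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static void ngx_http_tempfile_send_more(ngx_http_request_t *r); /* hypothetical */

/* Sketch only, not the actual patch code: wake up periodically and
 * check whether the primary request has written more data. */

static void
ngx_http_tempfile_poll(ngx_event_t *ev)
{
    ngx_http_request_t  *r;
    ngx_http_cache_t    *c;

    r = ev->data;
    c = r->cache;

    /* c->tf_node and c->body_sent_bytes exist only with the patches applied */
    if (c->tf_node->length > c->body_sent_bytes) {
        ngx_http_tempfile_send_more(r);   /* hypothetical: send the new bytes */
        return;
    }

    /* nothing new yet: try again after the proxy_cache_tempfile_loop
     * interval (hard-coded to 50 ms here for the sketch) */
    ngx_add_timer(ev, 50);
}
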
>
>
> > It's good to know the solution works for you. Please keep us posted about
> > future improvements, especially ones that would avoid polling and decrease
> > complexity.
>
> We would be happy to get this patch into mainline nginx in the future, so
> that all nginx users could benefit from it.
>
> We will be thinking about avoiding polling and implementing some
> inter-worker messaging, but it may take some time, because it seems quite
> complex. Could you share some hints about how you think it would be best to
> implement this in the worker's event loop?
Even though I have some ideas, they need to be checked first. I can only say
this should be a core nginx feature.
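
As a generic illustration of the kind of wake-up such messaging would provide
(plain POSIX code, not nginx internals, and not a committed design): the
writing worker sends a tiny message on a per-worker channel, and the reading
worker's event loop wakes up on it instead of sleeping on a timer.

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int
main(void)
{
    int            fds[2];
    char           buf[1];
    struct pollfd  pfd;

    /* one "channel" per worker: the writer keeps fds[0], the reader
     * registers fds[1] in its event loop (epoll/kqueue in real nginx) */
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) == -1) {
        perror("socketpair");
        return 1;
    }

    /* writer side: after appending data to the temp file, notify the
     * other worker instead of letting it poll */
    (void) send(fds[0], "u", 1, 0);

    /* reader side: block until notified, then re-read the shared length
     * and continue sending the response body */
    pfd.fd = fds[1];
    pfd.events = POLLIN;

    if (poll(&pfd, 1, -1) == 1 && (pfd.revents & POLLIN)) {
        (void) recv(fds[1], buf, sizeof(buf), 0);
        printf("temp file updated, resume sending\n");
    }

    close(fds[0]);
    close(fds[1]);
    return 0;
}
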
> Jiří Setnička
> CDN77
>
--
Roman Arutyunyan