Happy to see that we are not the only place where blocking open() is a problem. I took a look at your module; while the approach is sound, I decided the added complexity isn't worth it for us. Hope to see some numbers on how your module performs at scale.
On Wed, Aug 8, 2018 at 12:44 PM, Eran Kornblau email@example.com wrote:
- The code bypasses the open file cache, and uses a direct call in the http cache code instead. While it might be OK in your setup, it looks like an incomplete solution from a generic point of view. A better solution would be to introduce a generic interface in ngx_open_cached_file() to allow use of thread pools.
A small comment on this - I wrote such an implementation a while ago in my module. We've been using it on production for the last ~3 years.
(There's a lot of code there that was copied from ngx_open_file_cache.c, I did that since those functions are static, and didn't want to change nginx core. If implemented as part of nginx core, all this duplication can be avoided...)
In my implementation, I added a function similar to ngx_open_cached_file (ngx_async_open_cached_file) that takes a few extra params:
1. The thread pool
2. The thread task (optional) - passed in so the task can be reused when opening multiple files in a single request
3. Callback + context - invoked once the async open completes
This function first checks the cache; on a hit, it completes synchronously and returns NGX_OK. Otherwise, it posts a thread task that runs ngx_open_and_stat_file and returns NGX_AGAIN. When the task completes, the main nginx thread updates the cache and invokes the user callback.
Hope this helps...
nginx-devel mailing list firstname.lastname@example.org http://mailman.nginx.org/mailman/listinfo/nginx-devel