Hi,
I understand that NGX_AGAIN is returned when a chain could not be sent
because no more data can be buffered on that socket.
I need to understand the following: in my case, when I receive a request, I
start a timer every 10 ms and send out some data, then I create a new timer
every 10 ms until I decide to finish sending out data (video frames).
But if ngx_http_output_filter(...) returns NGX_AGAIN in some
timer-triggered callback, *I assume* nginx will send
that chain as soon as the socket becomes writable again. But after that
happens, how can I restore my timer cycle?
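To make the question concrete, here is a sketch of what I have in mind
(not tested; my_ctx_t, my_module, ctx->timer and ctx->out are my own
module's names, not nginx API):

static void
my_write_event_handler(ngx_http_request_t *r)
{
    my_ctx_t  *ctx = ngx_http_get_module_ctx(r, my_module);

    /* the socket is writable again: restore the 10 ms timer cycle */
    if (!ctx->timer.timer_set) {
        ngx_add_timer(&ctx->timer, 10);
    }
}

static void
my_timer_handler(ngx_event_t *ev)
{
    ngx_http_request_t  *r = ev->data;
    my_ctx_t            *ctx = ngx_http_get_module_ctx(r, my_module);

    if (ngx_http_output_filter(r, ctx->out) == NGX_AGAIN) {
        /* wait for the buffered chain to drain instead of re-arming
         * the timer -- is this the right way to do it? */
        r->write_event_handler = my_write_event_handler;
        return;
    }

    ngx_add_timer(ev, 10);    /* schedule the next frame */
}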
Thanks.
J.Z.
Hello!
Akos Gyimesi reported a request hang (downstream connections stuck in
the CLOSE_WAIT state forever) regarding use of proxy_cache_lock in
subrequests.
The issue is that when proxy_cache_lock_timeout is reached,
ngx_http_file_cache_lock_wait_handler calls
r->connection->write->handler() directly, but
r->connection->write->handler is (usually) just
ngx_http_request_handler, which simply picks up r->connection->data,
which is *not* necessarily the current (sub)request, so the current
subrequest may never be continued or finalized, leading to an
infinite request hang.
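For reference, ngx_http_request_handler looks roughly like this
(paraphrased from memory, details trimmed), which shows how the wrong
(sub)request can get picked up:

static void
ngx_http_request_handler(ngx_event_t *ev)
{
    ngx_connection_t    *c;
    ngx_http_request_t  *r;

    c = ev->data;
    r = c->data;    /* the currently *active* (sub)request on this
                       connection, not necessarily the one whose
                       cache lock just timed out */

    if (ev->write) {
        r->write_event_handler(r);

    } else {
        r->read_event_handler(r);
    }

    ngx_http_run_posted_requests(c);
}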
The following patch fixes this issue for me. Comments welcome!
Thanks!
-agentzh
--- nginx-1.4.3/src/http/ngx_http_file_cache.c	2013-10-08 05:07:14.000000000 -0700
+++ nginx-1.4.3-patched/src/http/ngx_http_file_cache.c	2013-10-26 14:47:56.184041728 -0700
@@ -432,6 +432,7 @@ ngx_http_file_cache_lock_wait_handler(ng
     ngx_uint_t                 wait;
     ngx_msec_t                 timer;
     ngx_http_cache_t          *c;
+    ngx_connection_t          *conn;
     ngx_http_request_t        *r;
     ngx_http_file_cache_t     *cache;
 
@@ -471,7 +472,10 @@ wakeup:
     c->waiting = 0;
     r->main->blocked--;
-    r->connection->write->handler(r->connection->write);
+
+    conn = r->connection;
+    r->write_event_handler(r);
+    ngx_http_run_posted_requests(conn);
 }
Hello,
I am interested in getting support for Weak ETags into the mainline.
There was some discussion here previously that produced
a quick patch to add support. What additional functionality would be
required, and what steps should be followed, to get weak ETag
functionality added to nginx? I am willing to do the work, I just need
some help with heading in the right direction.
Thank you,
-Aaron Peschel
Hello!
On Mon, Jul 29, 2013 at 10:11 AM, Maxim Dounin wrote:
> Additionally, doing a full merge of all free blocks on a free
> operation looks too much. It might be something we want to do on
> allocation failure, but not on a normal path in
> ngx_slab_free_pages(). And/or something lightweight may be done
> in ngx_slab_free_pages(), e.g., checking if pages following pages
> we are freeing are free too, and merging them in this case.
>
I'd propose an alternative patch taking the second approach, that is,
merging adjacent free pages (for both the previous and next blocks) in
ngx_slab_free_pages(). This approach has the following advantages:
1. It effectively distributes the merging work across all the
page free operations, which prevents the potentially frequent and
long stalls that occur when no large enough free block is left
on the "free" list; for large zones that list can grow very long
and, by the time allocations start failing, usually consists of
tons of one-page blocks.
2. It also makes multi-page allocations generally faster, because
pages are merged immediately whenever possible, so
ngx_slab_alloc_pages() is more likely to find a large enough free
block on the (relatively short) free list.
The only downside is that I have to introduce an extra field,
"prev_slab" (8 bytes on x86_64), into ngx_slab_page_t, which
makes the slab page metadata a bit larger.
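To illustrate the scheme concretely: each multi-page block records its
own size ("slab") and the size of the block immediately preceding it
("prev_slab"), so both neighbours can be located in O(1) when a block
is freed. A simplified model of the coalescing logic (hypothetical
block_t type, not the actual patch):

#include <stddef.h>
#include <string.h>

typedef struct {
    size_t    slab;       /* number of pages in this block */
    size_t    prev_slab;  /* pages in the previous adjacent block,
                             0 if none */
    unsigned  free;       /* 1 if the block is currently free */
} block_t;

static void
free_block(block_t *blocks, size_t nblocks, size_t i)
{
    size_t  next, prev;

    blocks[i].free = 1;

    /* merge the next adjacent block if it is free */
    next = i + blocks[i].slab;
    if (next < nblocks && blocks[next].free) {
        blocks[i].slab += blocks[next].slab;
        memset(&blocks[next], 0, sizeof(block_t));
    }

    /* merge this block into the previous adjacent block if free */
    if (blocks[i].prev_slab) {
        prev = i - blocks[i].prev_slab;
        if (blocks[prev].free) {
            blocks[prev].slab += blocks[i].slab;
            memset(&blocks[i], 0, sizeof(block_t));
            i = prev;
        }
    }

    /* fix up the back-pointer of the block after the merged one */
    next = i + blocks[i].slab;
    if (next < nblocks) {
        blocks[next].prev_slab = blocks[i].slab;
    }
}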
Feedback welcome!
Thanks!
-agentzh
# HG changeset patch
# User Yichun Zhang <agentzh(a)gmail.com>
# Date 1399870567 25200
# Sun May 11 21:56:07 2014 -0700
# Node ID 93614769dd4b6df8844c3c43c6a0b3f83bfa6746
# Parent 48c97d83ab7f0a3f641987fb32ace8af7720aefc
Core: merge adjacent free slab pages to ameliorate fragmentation from
multi-page blocks.
diff -r 48c97d83ab7f -r 93614769dd4b src/core/ngx_slab.c
--- a/src/core/ngx_slab.c	Tue Apr 29 22:22:38 2014 +0200
+++ b/src/core/ngx_slab.c	Sun May 11 21:56:07 2014 -0700
@@ -111,6 +111,7 @@
     ngx_memzero(p, pages * sizeof(ngx_slab_page_t));
 
     pool->pages = (ngx_slab_page_t *) p;
+    pool->npages = pages;
 
     pool->free.prev = 0;
     pool->free.next = (ngx_slab_page_t *) p;
@@ -118,6 +119,7 @@
     pool->pages->slab = pages;
     pool->pages->next = &pool->free;
     pool->pages->prev = (uintptr_t) &pool->free;
+    pool->pages->prev_slab = 0;
 
     pool->start = (u_char *)
                   ngx_align_ptr((uintptr_t) p + pages * sizeof(ngx_slab_page_t),
@@ -626,9 +628,16 @@
         if (page->slab >= pages) {
 
             if (page->slab > pages) {
+                /* adjust the next adjacent block's "prev_slab" field */
+                p = &page[page->slab];
+                if (p < pool->pages + pool->npages) {
+                    p->prev_slab = page->slab - pages;
+                }
+
                 page[pages].slab = page->slab - pages;
                 page[pages].next = page->next;
                 page[pages].prev = page->prev;
+                page[pages].prev_slab = pages;
 
                 p = (ngx_slab_page_t *) page->prev;
                 p->next = &page[pages];
@@ -652,6 +661,7 @@
                 p->slab = NGX_SLAB_PAGE_BUSY;
                 p->next = NULL;
                 p->prev = NGX_SLAB_PAGE;
+                p->prev_slab = 0;
                 p++;
             }
 
@@ -672,7 +682,7 @@
 ngx_slab_free_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page,
     ngx_uint_t pages)
 {
-    ngx_slab_page_t  *prev;
+    ngx_slab_page_t  *prev, *p;
 
     page->slab = pages--;
 
@@ -686,6 +696,53 @@
         prev->next = page->next;
         page->next->prev = page->prev;
     }
 
+    /* merge the next adjacent block if it is free */
+
+    p = &page[page->slab];
+
+    if (p < pool->pages + pool->npages
+        && !(p->slab & NGX_SLAB_PAGE_START)
+        && p->next != NULL
+        && (p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)
+    {
+        page->slab += p->slab;
+
+        /* remove the next adjacent block from the free list */
+
+        prev = (ngx_slab_page_t *) p->prev;
+        prev->next = p->next;
+        p->next->prev = p->prev;
+
+        /* adjust the "prev_slab" field in the next next adjacent block */
+
+        if (p + p->slab < pool->pages + pool->npages) {
+            p[p->slab].prev_slab = page->slab;
+        }
+
+        ngx_memzero(p, sizeof(ngx_slab_page_t));
+    }
+
+    if (page->prev_slab) {
+        /* merge the previous adjacent block if it is free */
+
+        p = page - page->prev_slab;
+
+        if (!(p->slab & NGX_SLAB_PAGE_START)
+            && p->next != NULL
+            && (p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)
+        {
+            /* p->slab == page->prev_slab */
+
+            p->slab += page->slab;
+            ngx_memzero(page, sizeof(ngx_slab_page_t));
+
+            /* adjust the "prev_slab" field in the next adjacent block */
+
+            if (p + p->slab < pool->pages + pool->npages) {
+                p[p->slab].prev_slab = p->slab;
+            }
+
+            /* skip adding "page" to the free list */
+
+            return;
+        }
+    }
+
     page->prev = (uintptr_t) &pool->free;
     page->next = pool->free.next;
diff -r 48c97d83ab7f -r 93614769dd4b src/core/ngx_slab.h
--- a/src/core/ngx_slab.h	Tue Apr 29 22:22:38 2014 +0200
+++ b/src/core/ngx_slab.h	Sun May 11 21:56:07 2014 -0700
@@ -19,6 +19,8 @@
     uintptr_t         slab;
     ngx_slab_page_t  *next;
     uintptr_t         prev;
+    uintptr_t         prev_slab;
+                      /* number of pages for the previous adjacent block */
 };
 
@@ -31,6 +33,8 @@
     ngx_slab_page_t  *pages;
     ngx_slab_page_t   free;
 
+    ngx_uint_t        npages;
+
     u_char           *start;
     u_char           *end;
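If it helps with review: the intended effect can be seen with a sequence
like the following (hypothetical test code, assuming an initialized
ngx_slab_pool_t *pool whose free list holds no other 4-page block, and
that the two allocations come out adjacent):

    void  *a, *b, *c;

    /* carve out two adjacent two-page blocks */
    a = ngx_slab_alloc_locked(pool, 2 * ngx_pagesize);
    b = ngx_slab_alloc_locked(pool, 2 * ngx_pagesize);

    ngx_slab_free_locked(pool, a);
    ngx_slab_free_locked(pool, b);

    /* without merging, the free list now holds two separate 2-page
     * blocks and this 4-page request fails; with the patch the two
     * blocks have been coalesced and it succeeds */
    c = ngx_slab_alloc_locked(pool, 4 * ngx_pagesize);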
If a content phase handler needs to wait on some potentially delayed
result, my understanding is that it should return NGX_DONE so that it is
called again.
I've been reading through the eval, echo, and http_limit_req modules to see
how to integrate an ngx_add_timer event prior to returning NGX_DONE. A
short timer event seems reasonable, because the content phase handler isn't
waiting on any other event type (subrequest result, timeout, etc.). The
timer event seems fairly straightforward -- configure the event in a
request context and set the event's handler, data, and log.
I don't really want my timer event handler to do anything -- rather, I just
want the same content phase handler that previously returned NGX_DONE
to run again. In that case, should my timer event handler actually do
anything at all? Is there a best practice for this -- i.e., have it point
to the write_event_handler(), call ngx_http_core_run_phases() or
ngx_http_run_posted_requests(), etc.?
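For concreteness, here is the shape of what I have so far (a sketch;
my_ctx_t, my_module, and my_send_response are hypothetical names from my
module, and the count++ / finalize pairing is precisely the part I am
unsure about):

static void my_timer_handler(ngx_event_t *ev);

static ngx_int_t
my_content_handler(ngx_http_request_t *r)
{
    my_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r, my_module);

    if (ctx == NULL) {
        ctx = ngx_pcalloc(r->pool, sizeof(my_ctx_t));
        if (ctx == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }
        ngx_http_set_ctx(r, ctx, my_module);

        /* the result is not ready yet: arm a short timer and retry */
        ctx->timer.handler = my_timer_handler;
        ctx->timer.data = r;
        ctx->timer.log = r->connection->log;
        ngx_add_timer(&ctx->timer, 10);

        r->main->count++;          /* keep the request alive meanwhile */
        return NGX_DONE;
    }

    return my_send_response(r);    /* second pass: result is available */
}

static void
my_timer_handler(ngx_event_t *ev)
{
    ngx_http_request_t  *r = ev->data;

    /* the handler itself does almost nothing: it re-enters the content
     * handler and balances the earlier count++ via finalize */
    ngx_http_finalize_request(r, my_content_handler(r));
    ngx_http_run_posted_requests(r->connection);
}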
Thank you,
Ben
Hi,
I'm trying to understand how the shared memory pool works inside nginx.
To do that, I made a very small module which creates a shared memory zone
of 2097152 bytes,
and then allocates and frees blocks of memory, starting from 0 and growing
by 1 KB until the allocation fails.
The parts that seemed strange to me were:
- the maximum block I could allocate was 128000 bytes
- each time the allocation failed, I started again from 0, but the maximum
allocatable block shrank, with the following profile:
128000
87040
70656
62464
58368
54272
50176
46080
41984
37888
33792
29696
Is this the expected behavior?
Can anyone help me by explaining how the shared memory works?
I have another module which makes intensive use of shared memory, and
understanding this would help me improve it and solve some "no memory" messages.
The code is attached.
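For reference, the core of the test is roughly the following (a minimal
sketch, not the attached module itself; it assumes an already-initialized
ngx_slab_pool_t *shpool obtained in the zone's init callback):

static size_t
my_probe_max_block(ngx_slab_pool_t *shpool)
{
    size_t   size;
    void    *p;

    /* allocate and free ever larger blocks until allocation fails */
    for (size = 1024; ; size += 1024) {
        ngx_shmtx_lock(&shpool->mutex);
        p = ngx_slab_alloc_locked(shpool, size);

        if (p == NULL) {
            ngx_shmtx_unlock(&shpool->mutex);
            return size - 1024;    /* largest size that succeeded */
        }

        ngx_slab_free_locked(shpool, p);
        ngx_shmtx_unlock(&shpool->mutex);
    }
}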
Thanks,
Wandenberg
Hey,
I solved my 'problem' by using post_action after proxy_pass. Thank you.
On Fri, May 30, 2014 at 3:00 PM, <nginx-devel-request(a)nginx.org> wrote:
> Date: Thu, 29 May 2014 15:05:15 +0300
> From: Donatas Abraitis <donatas.abraitis(a)gmail.com>
> To: nginx-devel(a)nginx.org
> Subject: handler before proxy-pass
>
> Hello,
>
> I want to add a custom 'handler', which would update ngx_os_argv[0] on
> request. It should be like this:
>
> root  18601  0.0  0.0  62272  3812 ?  Ss  07:00  0:00 nginx: master process /opt/nginx/bin/nginx -c /opt/nginx/etc/nginx.conf
> web   18602  0.0  0.0  70372  7904 ?  S   07:00  0:00  \_ 183.54.68.10 [test.domain.com] GET /ok.php HTTP/1.1??Host
>
> It works when nginx serves only static content, but it doesn't work when
> the location directive contains proxy_pass.
>
> Code snippet is like this:
>
> {
>     ...
>     sprintf(title, "%s [%s] %s", r->connection->addr_text.data,
>             r->headers_in.server.data, r->request_line.data);
>     memcpy(ngx_os_argv[0], title, MAXBUF);
>
>     return NGX_DECLINED;
> }
>
> Question is, how can I update ngx_os_argv[0] when proxy_pass is used
> as well? It seems my 'handler' is bypassed whenever proxy_pass handles
> the location. Maybe there is some kind of module loading order involved?
>
> --
> Donatas
>
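In case it helps anyone searching the archives: if I understand the phase
engine correctly, proxy_pass installs its own content handler, which is
why a handler registered for the content phase never runs for such
locations. A handler that must run for every request (like the
argv-updating one quoted above) can be registered in an earlier phase
instead. A sketch (my_title_handler is the hypothetical handler from the
quoted snippet):

static ngx_int_t
my_init(ngx_conf_t *cf)
{
    ngx_http_handler_pt        *h;
    ngx_http_core_main_conf_t  *cmcf;

    cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);

    /* NGX_HTTP_PREACCESS_PHASE runs for proxied requests as well */
    h = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);
    if (h == NULL) {
        return NGX_ERROR;
    }

    *h = my_title_handler;

    return NGX_OK;
}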