<div dir="ltr">There's actually an example using the memcached module (and the echo module to get the request body). The main advantage of the Lua module is if you want to do something custom (e.g. extra processing) or transmit the data over a different protocol that isn't available as an existing extension.</div>
<div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, May 26, 2014 at 7:17 PM, Paulo Silva <span dir="ltr"><<a href="mailto:pauloasilva@gmail.com" target="_blank">pauloasilva@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On Mon, May 26, 2014 at 9:28 AM, SplitIce <<a href="mailto:mat999@gmail.com">mat999@gmail.com</a>> wrote:<br>
> Yes, connecting with SOCK_NONBLOCK shouldn't block. I don't believe this was<br>
> mentioned previously. If your code blocks (e.g. a blocking connect or a blocking<br>
> send), then it would reduce nginx throughput substantially. That's the point I<br>
> was trying to make.<br>
><br>
<br>
</div>Yep, I didn't mention SOCK_NONBLOCK. Sorry about that.<br>
<br>
Although I'm working on a prototype to benchmark, my concern is the cost of<br>
calling socket() and connect() on every execution of the nginx filter. How<br>
would you manage this?<br>
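<br>
(A minimal sketch of one possible approach, for illustration only; names such as copy_get_fd and COPY_SOCK_PATH are invented here. The idea is to open a single non-blocking AF_UNIX connection per worker process and reuse it across filter invocations, instead of calling socket()/connect() per request.)<br>
<br>
#include <errno.h><br>
#include <string.h><br>
#include <sys/socket.h><br>
#include <sys/un.h><br>
#include <unistd.h><br>
<br>
#define COPY_SOCK_PATH "/tmp/body-copy.sock"  /* hypothetical path */<br>
<br>
static int copy_fd = -1;  /* one cached connection per worker process */<br>
<br>
static int<br>
copy_get_fd(void)<br>
{<br>
    struct sockaddr_un sa;<br>
<br>
    if (copy_fd != -1) {<br>
        return copy_fd;  /* reuse the connection opened earlier */<br>
    }<br>
<br>
    /* SOCK_NONBLOCK is Linux-specific (2.6.27+) */<br>
    copy_fd = socket(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0);<br>
    if (copy_fd == -1) {<br>
        return -1;<br>
    }<br>
<br>
    memset(&sa, 0, sizeof(sa));<br>
    sa.sun_family = AF_UNIX;<br>
    strncpy(sa.sun_path, COPY_SOCK_PATH, sizeof(sa.sun_path) - 1);<br>
<br>
    /* non-blocking connect: EINPROGRESS is expected, not an error */<br>
    if (connect(copy_fd, (struct sockaddr *) &sa, sizeof(sa)) == -1<br>
        && errno != EINPROGRESS)<br>
    {<br>
        close(copy_fd);<br>
        copy_fd = -1;<br>
    }<br>
<br>
    return copy_fd;<br>
}<br>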
<div class=""><br>
> Have you investigated srcache<br>
> (<a href="https://github.com/openresty/srcache-nginx-module/" target="_blank">https://github.com/openresty/srcache-nginx-module/</a>)? srcache_store can be<br>
> used without srcache_fetch, to store only. Your store handler could be a<br>
> proxy_pass location pointing at a RESTful server, or whatever you use for storing<br>
> the HTML responses.<br>
><br>
<br>
</div>I didn't know about this module. Thanks, I will take a closer look at it.<br>
Do you think I can capture the proxy_pass response and store it in<br>
memcached easily?<br>
<div class=""><br>
> Perhaps a better solution would be to use the nginx lua extension instead<br>
> (less of a hack).<br>
><br>
<br>
</div>Do you think the Lua extension has advantages over an nginx filter module?<br>
<br>
Sorry about so many questions, but at the beginning nginx is all a<br>
mystery, and you worry about not breaking its performance.<br>
<br>
Thanks a bunch,<br>
<div class="HOEnZb"><div class="h5"><br>
> Regards,<br>
> Mathew<br>
><br>
><br>
> On Mon, May 26, 2014 at 6:18 PM, Paulo Silva <<a href="mailto:pauloasilva@gmail.com">pauloasilva@gmail.com</a>> wrote:<br>
>><br>
>> On Mon, May 26, 2014 at 9:09 AM, SplitIce <<a href="mailto:mat999@gmail.com">mat999@gmail.com</a>> wrote:<br>
>> > As in blocking send and connect? I don't know the specifics of Unix<br>
>> > Sockets,<br>
>> > but don't they block when the buffer fills (I know FIFO queues do)?<br>
>> ><br>
>><br>
>> Sorry, I don't fully understand your question.<br>
>> I was expecting that with SOCK_NONBLOCK it would not block.<br>
>><br>
>> What would be your approach?<br>
>> Do you know about any nginx internal mechanism to accomplish this goal<br>
>> (get the upstream response body out of nginx)?<br>
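<br>
(A small sketch for illustration; copy_send is an invented helper, not nginx API. SOCK_NONBLOCK means the calls return instead of blocking: when the socket buffer fills, send() fails with EAGAIN/EWOULDBLOCK, so the module has to decide to drop or queue the data itself rather than wait.)<br>
<br>
#include <errno.h><br>
#include <sys/socket.h><br>
<br>
/* Best-effort copy: never blocks the nginx worker; the chunk is<br>
   dropped when the receiver cannot keep up. */<br>
static void<br>
copy_send(int fd, const void *buf, size_t len)<br>
{<br>
    ssize_t  n;<br>
<br>
    n = send(fd, buf, len, MSG_DONTWAIT);<br>
<br>
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {<br>
        return;  /* socket buffer full: drop instead of blocking */<br>
    }<br>
<br>
    /* a partial write (n less than len) would also need handling here */<br>
}<br>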
>><br>
>> ><br>
>> > On Mon, May 26, 2014 at 9:22 AM, Paulo Silva <<a href="mailto:pauloasilva@gmail.com">pauloasilva@gmail.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> Hi,<br>
>> >> I'm not sure whether I will face problems with other filters modifying<br>
>> >> the response body after mine, but for now I'm comfortable, as I can<br>
>> >> rebuild the full response body just by iterating the buffer chains.<br>
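<br>
(A minimal body-filter sketch of that iteration, for illustration; the module boilerplate is omitted and ngx_http_copy_body_filter is an invented name. ngx_http_next_body_filter is the usual saved pointer to the next filter in the chain.)<br>
<br>
/* saved in the module's postconfiguration from ngx_http_top_body_filter */<br>
static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;<br>
<br>
static ngx_int_t<br>
ngx_http_copy_body_filter(ngx_http_request_t *r, ngx_chain_t *in)<br>
{<br>
    ngx_chain_t  *cl;<br>
    ngx_buf_t    *b;<br>
<br>
    for (cl = in; cl; cl = cl->next) {<br>
        b = cl->buf;<br>
<br>
        /* b->pos .. b->last is the readable part of this in-memory<br>
           buffer; file-backed buffers (b->in_file) need extra care */<br>
<br>
        /* copy the bytes somewhere, e.g. append them to a buffer<br>
           kept in the per-request module context */<br>
<br>
        if (b->last_buf) {<br>
            /* end of the response body of the main request */<br>
        }<br>
    }<br>
<br>
    /* pass the chain on unchanged */<br>
    return ngx_http_next_body_filter(r, in);<br>
}<br>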
>> >><br>
>> >> As I said before, I'm using nginx as a reverse proxy and my main goal is<br>
>> >> to pass the upstream (proxy_pass) response to another local process<br>
>> >> (on the same host as nginx).<br>
>> >><br>
>> >> I am benchmarking Unix sockets and shared memory as IPC mechanisms.<br>
>> >> I have already done it for Unix sockets, and with my prototype nginx<br>
>> >> "performance" dropped to half the number of requests per second. Of<br>
>> >> course I'm doing something really wrong.<br>
>> >><br>
>> >> Is it OK to use socket/connect/send from inside an nginx module?<br>
>> >><br>
>> >> I would be glad to hear from you.<br>
>> >> Thanks,<br>
>> >><br>
>> >> On Fri, May 23, 2014 at 2:50 PM, Paulo Silva <<a href="mailto:pauloasilva@gmail.com">pauloasilva@gmail.com</a>><br>
>> >> wrote:<br>
>> >> > Because I don't have deep knowledge of nginx internals and I cannot<br>
>> >> > find a proper resource about it, the best I can do, and what I am<br>
>> >> > comfortable with, is a body filter.<br>
>> >> ><br>
>> >> > Do you think I can detect when all the other 3rd-party filter modules<br>
>> >> > have finished modifying the ngx_chain_t *in?<br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > On Fri, May 23, 2014 at 2:41 PM, Maxim Dounin <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>><br>
>> >> > wrote:<br>
>> >> >> Hello!<br>
>> >> >><br>
>> >> >> On Fri, May 23, 2014 at 02:17:27PM +0100, Paulo Silva wrote:<br>
>> >> >><br>
>> >> >>> Hi,<br>
>> >> >>><br>
>> >> >>> On Fri, May 23, 2014 at 12:58 PM, Maxim Dounin <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>><br>
>> >> >>> wrote:<br>
>> >> >>> > Hello!<br>
>> >> >>> ><br>
>> >> >>> > On Fri, May 23, 2014 at 11:57:20AM +0100, Paulo Silva wrote:<br>
>> >> >>> ><br>
>> >> >>> >> Is there any option other than modifying the auto/modules file?<br>
>> >> >>> >><br>
>> >> >>> >> Given my goal (capturing the full response body of the request), I<br>
>> >> >>> >> would<br>
>> >> >>> >> say that my module must run right before the postpone filter.<br>
>> >> >>> ><br>
>> >> >>> > Before the postpone filter you'll get subrequest bodies in your<br>
>> >> >>> > filter, which is probably not what you want (the postpone filter<br>
>> >> >>> > is there to glue subrequests together, correctly ordered).<br>
>> >> >>> ><br>
>> >> >>> >> Am I supposed to modify auto/modules as follows?<br>
>> >> >>> >><br>
>> >> >>> >> if [ $HTTP_POSTPONE = YES ]; then<br>
>> >> >>> >> HTTP_FILTER_MODULES="$HTTP_FILTER_MODULES<br>
>> >> >>> >> $HTTP_POSTPONE_FILTER_MODULE"<br>
>> >> >>> >> HTTP_SRCS="$HTTP_SRCS $HTTP_POSTPONE_FILTER_SRCS"<br>
>> >> >>> >> fi<br>
>> >> >>> >><br>
>> >> >>> >> # insert my module here!<br>
>> >> >>> >><br>
>> >> >>> >> if [ $HTTP_SSI = YES ]; then<br>
>> >> >>> >> have=NGX_HTTP_SSI . auto/have<br>
>> >> >>> >> HTTP_FILTER_MODULES="$HTTP_FILTER_MODULES<br>
>> >> >>> >> $HTTP_SSI_FILTER_MODULE"<br>
>> >> >>> >> HTTP_DEPS="$HTTP_DEPS $HTTP_SSI_DEPS"<br>
>> >> >>> >> HTTP_SRCS="$HTTP_SRCS $HTTP_SSI_SRCS"<br>
>> >> >>> >> fi<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> I checked my module's config file and realized that it is<br>
>> >> >>> >> "queued" as HTTP_AUX_FILTER_MODULES. Are there different queues<br>
>> >> >>> >> for<br>
>> >> >>> >> core modules and add-ons?<br>
>> >> >>> ><br>
>> >> >>> > The HTTP_AUX_FILTER_MODULES is a generic queue, and it's the<br>
>> >> >>> > only one currently officially supported for 3rd party modules.<br>
>> >> >>> ><br>
>> >> >>> > If you want your filter to be called right before/after postpone<br>
>> >> >>> > filter, it should be relatively safe to put it into the<br>
>> >> >>> > HTTP_POSTPONE_FILTER_MODULE variable though (and maybe with some<br>
>> >> >>> > additional checks to make sure the postpone filter is enabled, or<br>
>> >> >>> > just a code to enable it unconditionally).<br>
>> >> >>> ><br>
>> >> >>><br>
>> >> >>> Is this also valid when compiling nginx with the --add-module<br>
>> >> >>> flag?<br>
>> >> >>> What does the config file look like?<br>
>> >> >>><br>
>> >> >>> My knowledge is restricted to Emiller's Guide To Nginx Module<br>
>> >> >>> Development (<a href="http://www.evanmiller.org/nginx-modules-guide.html" target="_blank">http://www.evanmiller.org/nginx-modules-guide.html</a>)<br>
>> >> >>> and a<br>
>> >> >>> few debugging hours.<br>
>> >> >><br>
>> >> >> Uhm, looking again into auto/modules I think I was wrong, and<br>
>> >> >> modifying the HTTP_POSTPONE_FILTER_MODULE variable won't work<br>
>> >> >> (added module config scripts are executed later on); you should<br>
>> >> >> modify the HTTP_FILTER_MODULES variable instead and put your module<br>
>> >> >> into the proper position.<br>
>> >> >><br>
>> >> >> Note that the "config" file of a module is just a shell script,<br>
>> >> >> and you are free to do more or less anything there.<br>
>> >> >><br>
>> >> >> --<br>
>> >> >> Maxim Dounin<br>
>> >> >> <a href="http://nginx.org/" target="_blank">http://nginx.org/</a><br>
>> >> >><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > --<br>
>> >> > Paulo A. Silva<br>
>> >> > <a href="http://tech.pauloasilva.com" target="_blank">http://tech.pauloasilva.com</a><br>
>> >> > <a href="http://linkedin.com/in/devpauloasilva/" target="_blank">http://linkedin.com/in/devpauloasilva/</a><br>
>> >><br>
>> >><br>
>> >><br>
>> >> --<br>
>> >> Paulo A. Silva<br>
>> >> <a href="http://tech.pauloasilva.com" target="_blank">http://tech.pauloasilva.com</a><br>
>> >> <a href="http://linkedin.com/in/devpauloasilva/" target="_blank">http://linkedin.com/in/devpauloasilva/</a><br>
>> >><br>
>> ><br>
>> ><br>
>> ><br>
>><br>
>><br>
>><br>
>> --<br>
>> Paulo A. Silva<br>
>> <a href="http://tech.pauloasilva.com" target="_blank">http://tech.pauloasilva.com</a><br>
>> <a href="http://linkedin.com/in/devpauloasilva/" target="_blank">http://linkedin.com/in/devpauloasilva/</a><br>
>><br>
><br>
><br>
><br>
<br>
<br>
<br>
--<br>
Paulo A. Silva<br>
<a href="http://tech.pauloasilva.com" target="_blank">http://tech.pauloasilva.com</a><br>
<a href="http://linkedin.com/in/devpauloasilva/" target="_blank">http://linkedin.com/in/devpauloasilva/</a><br>
<br>
_______________________________________________<br>
nginx-devel mailing list<br>
<a href="mailto:nginx-devel@nginx.org">nginx-devel@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx-devel" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx-devel</a><br>
</div></div></blockquote></div><br></div>