Setting memcache keys in eval block

agentzh agentzh at gmail.com
Thu Feb 11 12:35:28 MSK 2010


On Wed, Feb 10, 2010 at 6:45 PM, Markus Jelsma <markus at buyways.nl> wrote:
>
> How can i stay up to date for such a feature if it were to be implemented in
> the - hopefully nearby - future?
>

Yes, hopefully in the near future :) I'm busy with the ngx_array_var
module development as well as the ngx_srcache module atm. Maybe I'll
have some spare time to hack that in after these modules are out.

But there's a workaround that you can try out *now*. Please read on.

I do have a personal fork of the ngx_eval module here:

    http://github.com/agentzh/nginx-eval-module

Currently it has support for arbitrary content handlers as well as
output filters. Here are some quick examples that you *may* be
interested in:

    location /echo {
        eval_subrequest_in_memory off;
        eval $a {
            echo_before_body BEFORE;
            echo THIS;
        }
        echo '[$a]';
    }

Then GET /echo yields

    [BEFORE
    THIS]

assuming you configured the ngx_echo module *after* the ngx_eval
module so that the echo_before_body filter runs before ngx_eval's.
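For reference, the relative order of output filters is fixed at build
time: nginx prepends each filter module to the chain as it is
registered, so a module added *later* on the configure command line has
its output filter run *earlier*. A minimal build sketch (the module
paths below are placeholders, not real locations):

```shell
# Hypothetical build: ngx_echo is added AFTER ngx_eval, so
# echo_before_body's output filter runs before ngx_eval's filter.
# Replace the paths with wherever you cloned the two modules.
./configure \
    --add-module=/path/to/nginx-eval-module \
    --add-module=/path/to/echo-nginx-module
make
```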

This also means that you can take advantage of the echo_location_async
or echo_location directives provided by ngx_echo to do multiple
evals in a single location, like this:


    location /echo {
        eval_subrequest_in_memory off;
        eval $union {
            echo_location_async /memc1;
            echo 'XXXX';
            echo_location_async /memc2;
        }
        if ($union ~ '(.*)XXXX\n(.*)') {
             set $res1 $1;
             set $res2 $2;
             ...
        }
    }
    location /memc1 {
        memcached_pass ...;
    }
    location /memc2 {
        memcached_pass ...;
    }
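To illustrate what the `if` block above is doing, here is a small
Python sketch of the same marker-based split (the sample payloads are
made up, and the DOTALL flag is an assumption to let the capture groups
span the newlines in the concatenated subrequest output):

```python
import re

# Hypothetical combined output of the eval block: the body from /memc1,
# the literal XXXX marker echoed between the subrequests, then the body
# from /memc2. The values are invented for illustration.
union = "value-from-memc1\nXXXX\nvalue-from-memc2\n"

# Same idea as the nginx pattern '(.*)XXXX\n(.*)': split the
# concatenated output at the marker. re.S lets '.' match newlines.
m = re.match(r"(.*)XXXX\n(.*)", union, re.S)
res1 = m.group(1)  # everything before the marker -> $res1
res2 = m.group(2)  # everything after the marker  -> $res2
print(repr(res1))
print(repr(res2))
```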

This should be more efficient than the internal proxying approach.
Feel free to run some benchmarks on your side to confirm this ;)
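A quick way to compare the two approaches is ApacheBench; the URLs
below are placeholders for wherever you deploy each variant:

```shell
# Benchmark sketch: 1000 requests, 10 concurrent, against each setup.
# /echo is the eval + echo_location_async location from above;
# /proxy-echo stands in for your internal-proxying variant.
ab -n 1000 -c 10 http://localhost/echo
ab -n 1000 -c 10 http://localhost/proxy-echo
```

Compare the "Requests per second" lines in the two reports.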

Cheers,
-agentzh

P.S. I've sent a pull request to Valery Kholodkov in the hope of
getting my patch for ngx_eval merged upstream. But no reply yet.
Sigh.
