Fw: Use Test::Nginx with etcproxy and/or valgrind (Was Re: Test::Nginx::LWP vs. Test::Nginx::Socket)

agentzh agentzh at gmail.com
Wed Mar 16 09:48:19 MSK 2011


On Tue, Mar 15, 2011 at 11:13 PM, Antoine Bonavita (personal)
<antoine.bonavita at gmail.com> wrote:
> Hi agentzh,
>
> I managed to migrate all my tests from my original python approach to
> using Test::Nginx. I guess this is good news.

Yay!

> However, I must say some
> of it was a bit painful.
>
> The main thing is probably a lack of documentation/examples on the
> data sections accepted by Test::Nginx.

I must agree that Test::Nginx lacks documentation and tutorials :P

> Especially for people who are
> not familiar with Test::Base (like me). I understand you don't want to
> duplicate the work done by the guys at Test::Base but at least
> pointers to some useful tricks like filters and --- ONLY would help
> the beginners.
>

*nod*
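
For the record, --- ONLY is indeed a handy one: append it to a block
and Test::Base will run only that block, which helps a lot when
debugging a single failing test. Something like this (untested sketch;
the echo directive is from the echo module, just to give the location
a concrete body):

    === TEST 42: the block being debugged
    --- config
    location /t {
        echo hello;
    }
    --- request
    GET /t
    --- response_body
    hello
    --- ONLY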

> As a side note to this, I don't see any benefit in having the
> "request_eval" section. To me (at least in the tests I wrote)
> "request_eval" can be replaced by "request eval" (so, applying eval to
> the data). Maybe you should get rid of the _eval versions, or maybe
> I'm missing something...
>

I myself rarely, if ever, use that section, and I cannot remember why
I introduced it in the first place. Looking at the code, it can indeed
be replaced by the "eval" filter.
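
For example, something along these lines should do the same job with
the plain "eval" filter (untested sketch; the echo directive is just a
placeholder for the location body):

    === TEST 1: request string built in Perl
    --- config
    location /foo {
        echo ok;
    }
    --- request eval
    "GET /foo?value=" . ("a" x 50)
    --- response_body
    ok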

> I actually wrote a few posts on the migration to Test::Nginx:
> * http://www.nginx-discovery.com/2011/03/day-32-moving-to-testnginx.html
> * http://www.nginx-discovery.com/2011/03/day-33-testnginx-pipelinedrequests.html
>
> Another thing that annoyed me is the use of shuffle to "on" by
> default. I find it more misleading than anything else (especially on
> your first runs).
>

Oh, I'm sorry about that. The assumption I originally made was that we
should try our best to make our test cases independent of each other.
Exceptions were supposed to be rare (and indeed they are in our test
suites), hence the no_shuffle() switch and the default "shuffle on"
behavior.
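
If it bites, the shuffling can be turned off per test file by calling
no_shuffle() before run_tests(), for instance:

    use Test::Nginx::Socket;

    no_shuffle();

    plan tests => 2 * blocks();

    run_tests();

    __DATA__

    ...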

> After going through this exercise (and learning quite a few things in
> the process), the things that I really think should be improved are:
> * Being able to share one config amongst multiple tests.

This is usually done this way:

    use Test::Nginx::Socket;

    plan tests => 2 * blocks();

    our $config = <<_EOC_;
    location /foo {
       ...
    }
    _EOC_

    run_tests();

    __DATA__

    === TEST 1:
    --- config eval: $::config
    --- request: GET /foo?bar=1
    --- response_body_like: xxxx

    === TEST 2:
    --- config eval: $::config
    --- request: PUT /foo?baz=3
    --- response_body
    yyyyy

> * Being able to run multiple requests in one test. The
> pipelined_requests use the same connection which might not be what I
> want. I was thinking of something more natural like : send request 1,
> wait for response 1, check response 1, send request 2, wait for
> response 2, check response 2, etc.
>

There is a repeat_each() mechanism. Does it fit your need? It fires
the same request against the same nginx instance as many times as you
specify.

For example:

    use Test::Nginx::Socket;

    repeat_each(3);

    plan tests => repeat_each() * 2 * blocks();

    run_tests();

    __DATA__

    ...

The "pipelined_requests" stuff is for testing HTTP 1.1 pipelining
support and what you describe is kinda like HTTP 1.1 keepalive testing
or something more like the repeat_each() stuff that I've demonstrated
above?
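
Just for reference, a pipelined block sends all of its requests over a
single connection up front and then checks the responses in order,
roughly like this (untested sketch; the echo directive is again just a
placeholder):

    === TEST 1: two requests pipelined on one connection
    --- config
    location /foo {
        echo foo;
    }
    --- pipelined_requests eval
    ["GET /foo", "GET /foo"]
    --- response_body eval
    ["foo\n", "foo\n"]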

> Of course, I am willing to help with these improvements but I do not
> want to start running all over the place without discussing it with
> you as I'm likely to miss out something really big.
>

Yeah, such discussions are really great and I very much appreciate them ;)

Cheers,
-agentzh


