Hello,
I have a fresh installation of nginx on Ubuntu 8.04 (installed via
apt-get; I believe it is 0.5.x something).
In my nginx.conf file I have gzip turned on. I wanted to do a basic
test by serving a static HTML file, but the file does not come back
compressed at all.
Here's the snippet of configuration; I tried variations on this without
success:
gzip on;
gzip_min_length 1100;
gzip_buffers 4 8k;
gzip_proxied any;
gzip_types text/plain text/html text/css application/x-javascript
text/xml application/xml application/xml+rss text/javascript;
gzip_http_version 1.1;
gzip_comp_level 6;
Does anyone have any idea why nothing is compressed?
I've used http://www.gidnetwork.com/tools/gzip-test.php and other tools
to check the results.
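One variation I have not tried yet: the gzip_http_version 1.1 line means responses to HTTP/1.0 requests are never compressed, and I don't know which protocol version the test tool uses. An untested sketch of that change:

```nginx
# Untested variation: allow compression for HTTP/1.0 requests too,
# in case the online test tool does not speak HTTP/1.1.
gzip on;
gzip_http_version 1.0;
gzip_min_length 1100;
gzip_comp_level 6;
# note: text/html is always compressed once gzip is on, so it
# does not need to appear in gzip_types
gzip_types text/plain text/css application/x-javascript text/xml;
```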
Thank you so much!
Can anyone recommend a good log analysis tool that is a good match for nginx? I'd like something flexible, where I can tell it which fields in the log format correspond to which parts of the data (e.g. user agent is field 6). Thanks! Note that I am looking for a SERVER-SIDE tool to process the nginx logs (JavaScript-based analytics tools will not work for this particular application).
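For context, the kind of field mapping I mean is what nginx's log_format directive makes explicit. A sketch (the format name "mytool" is just an illustration; the fields shown are nginx's standard combined-log variables):

```nginx
# Sketch: with an explicit log_format, a tool could be told that,
# e.g., the user agent is the last quoted field on each line.
log_format mytool '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent"';
access_log /var/log/nginx/access.log mytool;
```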
Hello,
I have some problems running Perl CGI scripts with nginx.
I followed the config at http://wiki.codemongers.com/NginxSimpleCGI
Perl scripts work very well as long as they use method=get in forms. If
they use method=post, it seems no input is passed to the CGI script.
I saw in perl-fcgi.pl that there are some lines:
if (($req_params{'REQUEST_METHOD'} eq 'POST') && ($req_len != 0)) {
which should pass the POST data along.
Does anyone have an idea of what's happening?
Regards
Hi All
I'm trying to move a bunch of Rails apps from an Apache/FastCGI
platform to nginx/mongrel_cluster.
What I want is something like this:
http://my.server/app1 --> mongrel_cluster1
http://my.server/app2 --> mongrel_cluster2
I can do this in nginx with something like this:
...
upstream mongrel_cluster1 {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
upstream mongrel_cluster2 {
    server 127.0.0.1:4001;
    server 127.0.0.1:4002;
}
server {
    ...
    location /app1 {
        proxy_pass http://mongrel_cluster1;
        break;
    }
    location /app2 {
        proxy_pass http://mongrel_cluster2;
        break;
    }
...
But the problem is that I'm now hitting my Rails apps with paths like
this:
/app1/controller/method
/app2/controller/method
...when the apps expect the paths to be just /controller/method
So I want to use something like this to remove the 'app1' part:
rewrite ^/app1/(.*)$ /$1 permanent;
But that seems to override the proxy_pass directive: if I put it in my
location blocks I just get 404 errors, and it's not allowed in my
upstream blocks.
I know I could use virtual hosts, with a different subdomain for each
app, but that would look like this:
http://app1.my.server --> mongrel_cluster1
http://app2.my.server --> mongrel_cluster2
...which is not what I want.
Is there any way to achieve what I want, using nginx?
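One variation I have not tried yet, in case the `permanent` flag is the problem: using the `break` flag instead, so the rewrite stays internal to the request rather than sending the client a redirect. An untested sketch:

```nginx
# Untested sketch: strip the /app1 prefix internally with "break",
# so no external redirect is issued before proxying to the cluster.
location /app1 {
    rewrite ^/app1/?(.*)$ /$1 break;
    proxy_pass http://mongrel_cluster1;
}
```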
Thanks in advance for any help.
David
When I POST to a JavaScript file, a "405 Not Allowed" error appears.
If I use a proxy, it works:
error_page 405 =200 @405;
location @405 {
    root /htdocs;
    proxy_pass http://localhost:8080;
}
But I do not want to use a proxy; I just want to use Nginx. Is there a way to achieve this?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2414,2414#msg-2414
Hi All!
I'm new to this nice discussion list, but I'm a long-time lurker.
I'm working on a very specific module for Nginx: a complete CMS. It may
sound strange, since upstream servers and scripting languages are the
norm for a CMS, but I want (and need) speed.
So far I've done many things without too much trouble, but I'm a little
bit stuck with the processing of events in Nginx. I would like to
process a specific event which is not connection-related but created by
one of my worker threads.
I hope this short example will be clear:
// My specific Nginx http handler
ngx_int_t my_http_handler(ngx_http_request_t *r)
{
    if (r->my_state == 0) {        // first step: initiate the work to do
        r->my_state++;
        my_sendmsg(myqueue, r);    // send a message to a worker
        return NGX_AGAIN;          // please call me back when done
    } else {                       // second step: results are ready
        // produce xhtml output from the results
        return NGX_OK;             // finished
    }
}

// The worker running on a specific thread
void my_http_worker(void *arg)
{
    ngx_http_request_t *r;
    ngx_event_t *ev;

    while (1) {
        my_recvmsg(myqueue, &r);   // blocking receive of a queued request
        // process the request
        // ...
        // wake up my_http_handler
        ngx_post_event(ev, &ngx_posted_events);
    }
}
But I don't know how to fill in the ngx_event_t (in particular the
handlers) so that my_http_handler is called again in Nginx's own
context. From what I've seen I believe it's possible, but Nginx's code
is not so easy to get into (that's not a criticism).
Sorry for this (first) long message, but I think it would be nice to be
able to develop clean, non-blocking modules for Nginx.
BTW, forgive my English; I'm French ;-)
The patch creates a limit_rate_after directive:
limit_rate_after 1m;
limit_rate 100k;
The directive starts limiting the transfer rate only after the given amount of the response has been sent.
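For example, the two directives might be combined in a location serving large files (a sketch; the /downloads/ path is hypothetical):

```nginx
# Sketch: send the first 1m of each file at full speed,
# then throttle the remainder of the response to 100k/s.
location /downloads/ {
    limit_rate_after 1m;
    limit_rate       100k;
}
```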
--
Igor Sysoev
http://sysoev.ru/en/
Hello -
I'm using the nginx.conf file below to try to run phpMyAdmin with SSL
and FastCGI in a subdirectory (e.g., mydom.myvpshost.com/phpmyadmin).
It works, except after I hit GO on the phpMyAdmin login screen: the
rewrite rule drops the "phpmyadmin" from the middle of the URL and the
browser displays "404 Not Found - nginx/0.6.33". If I then add
"phpmyadmin" back into the middle of the rewritten URL, it works fine
for the rest of the phpMyAdmin session.
server {
    listen 443;
    server_name mydom.myvpshost.com;
    ssl on;
    ssl_certificate /etc/ssl/certs/myssl.crt;
    ssl_certificate_key /etc/ssl/private/myssl.key;
    access_log /usr/local/nginx/logs/phpmyadmin.access_log;
    error_log /usr/local/nginx/logs/phpmyadmin.error_log;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    location /phpmyadmin/ {
        rewrite ^/phpmyadmin(/.*)$ $1 break;
        index index.php;
        fastcgi_index index.php;
        fastcgi_pass 127.0.0.1:9000;
        root /home/myname/sources/phpmyadmin/;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
        include /usr/local/nginx/conf/fastcgi_params.phpmyadmin;
    }
}

server {
    listen 80;
    server_name mydom.myvpshost.com;
    location / {
        rewrite ^/phpmyadmin(.*) https://mydom.myvpshost.com/phpmyadmin$1 permanent;
    }
}
So I can display the phpMyAdmin login screen (and get the self-signed
SSL certificate dialog the first time around) by going here:
http://mydom.myvpshost.com/phpmyadmin
I enter the Username and Password and hit GO, and then the browser shows
a new URL like this:
https://mydom.myvpshost.com/index.php?token=290761b728bd2bfac2953354fbf3e9bb
So it redirected from http to https, and it also dropped the
"phpmyadmin" part in the middle because of the rewrite rule. Since the
root directive is /home/myname/sources/phpmyadmin/ (the location of
phpMyAdmin on my server), I thought this would work, but it doesn't.
I'm actually able to manually fix this by altering this URL in the
browser just this one time, inserting the "phpmyadmin" part in the
middle, like this:
https://mydom.myvpshost.com/phpmyadmin/index.php?token=28f650a617ac1ae9b184…
From then on everything works fine for the rest of the phpMyAdmin
session.
I know this problem wouldn't be happening if I used a vhost in the URL
(e.g., phpmyadmin.mydom.myhost.com), but in this case I want to use a
subdir in the URL (e.g., mydom.myhost.com/phpmyadmin).
I've been sitting here for two days pulling my hair out trying to get
this right. Can anyone tell me what's wrong with my nginx.conf file
here?
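One variation I've been meaning to try (untested, and I'm not certain SCRIPT_FILENAME resolves the same way here): dropping the rewrite and using alias, so the /phpmyadmin prefix stays in the browser URL instead of being stripped:

```nginx
# Untested sketch: keep /phpmyadmin in the URL and map it onto the
# phpMyAdmin directory with alias instead of rewrite + root.
location /phpmyadmin/ {
    alias /home/myname/sources/phpmyadmin/;
    index index.php;
    fastcgi_index index.php;
    fastcgi_pass 127.0.0.1:9000;
    # guess: $request_filename should resolve through the alias
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_param HTTPS on;
}
```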
Thanks.
- Stefan Scott
--
Posted via http://www.ruby-forum.com/.
Hi Igor,
I've recently started using the new limit_req module in nginx 0.7 to
try to throttle requests to the API of our web service.
We've been having some issues: "delayed" requests seem to be returned
with a 503 HTTP status code, but with the correct body (the limit_req
is inside a "location" block with proxy_pass).
The configuration is:
limit_req_zone $binary_remote_addr zone=api_one:10m rate=1r/s;

location /services/rest {
    limit_req zone=api_one burst=5;
    proxy_pass http://rtm_api;
    ...
}
My intention is to limit requests to an average of 1 request per
second, burstable to 5 requests, with excess requests delayed until
they fall within that 1 request/second threshold.
Is the 503 response a known issue?
Also, could you tell me how to test this particular configuration to
make sure it's correct?
I've tried ab/http_load/siege, etc.
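One way I thought of to isolate limit_req from the proxy (a sketch, assuming a static file is enough to trigger the limiter; the /limit-test.html path is made up):

```nginx
# Untested sketch: apply the same zone to a plain static file, then
# hammer it with ab to see whether delayed requests return 200 or 503.
location /limit-test.html {
    limit_req zone=api_one burst=5;
    root /var/www;
}
```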
Thanks for your time!
Regards,
Omar