<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Hi Francis,</div>
<div class="moz-cite-prefix">thank you for the update <br>
</div>
<div class="moz-cite-prefix">
<ol>
<li>documentation<br>
I've read the documentation and other sources up and down,
but I could not find anything saying that if you run the nginx
binary, you run a new instance of nginx, independent of any
other. Where is it mentioned that nginx is a multi-instance
capable application? Only indirectly in <a
moz-do-not-send="true"
href="https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/#upgrading-to-a-new-binary-on-the-fly">Upgrading
To a New Binary On The Fly in Starting, Stopping, and
Restarting NGINX</a> and <a moz-do-not-send="true"
href="https://www.digitalocean.com/community/tutorials/how-to-upgrade-nginx-in-place-without-dropping-client-connections">How
To Upgrade Nginx In-Place Without Dropping Client
Connections</a>. Instead, I found sources telling me to
install it twice, as here: <a moz-do-not-send="true"
href="https://www.linuxhelp.com/how-to-install-multiple-nginx-instances-in-same-server-on-centos-6/">How
to install Multiple Nginx instances in same Server</a><br>
<br>
</li>
<li>command line<br>
Slowly I am beginning to understand, after reading <a
moz-do-not-send="true"
href="https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/">Starting,
Stopping, and Restarting NGINX</a> and <a
moz-do-not-send="true"
href="https://docs.nginx.com/nginx/admin-guide/basic-functionality/runtime-control/">Controlling
NGINX Processes at Runtime</a>. If you run multiple
instances, you use the -s signal option together with a
reference to the desired PID file; otherwise it picks the
default, which is <font
face="Courier New, Courier, monospace">/var/run/nginx.pid</font>,
doesn't it?<br>
E.g. if I have a master and two user nginx instances,
reloading each of them would be:<br>
<font face="Courier New, Courier, monospace">master: nginx -s
reload<br>
user1: nginx -s reload -g "pid /var/run/nginx_user1.pid;"<br>
user2: nginx -s reload -g "pid /var/run/nginx_user2.pid;"</font><br>
or, by their PIDs as listed by <font face="Courier New,
Courier, monospace">ps aux | grep nginx</font>:<br>
<font face="Courier New, Courier, monospace">master:
nginx -s reload<br>
user1: kill -HUP PID_NoOfUser1Instance<br>
user2: kill -HUP PID_NoOfUser2Instance</font><br>
<br>
Making sure that I understood correctly: <font
face="Courier New, Courier, monospace">nginx -g "pid
/var/run/nginx.pid; worker_processes `sysctl -n hw.ncpu`;"</font>
means to start nginx with the number of worker processes set
to the count of CPUs, and with the master process writing its
process ID to <font
face="Courier New, Courier, monospace">/var/run/nginx.pid</font>,
doesn't it?<br>
<font face="Courier New, Courier, monospace"><br>
</font></li>
<li>root and non-root<br>
done <span class="moz-smiley-s1"><span>:-)</span></span><br>
<br>
</li>
<li>all in all, there are two layers of isolation<br>
I didn't mean that nginx itself does the e.g. PHP
interpretation; you just tell nginx where to find the PHP
interpreter, which does the job and feeds the content back
through a CGI interface.<br>
What you described is more or less what I meant, but I'm a
bit confused: you say that if the main nginx is compromised,
the hacker could gain root privileges? Is that still possible
after the main process has switched to its unprivileged user,
once it has bound the ports &lt;1024?<br>
After all, I just want a clear picture of what is at risk
when some part of the entire server environment is
compromised, and how I can minimize that risk by isolating
all the parts involved.<br>
I agree that not many of us will be a target for someone
hacking into nginx itself; if someone tries to hack our
servers, there are probably weaker parts, such as the so
often mentioned "badly written PHP" scripts. All I want is a
good idea of the risks, to put me in a position to do a
proper effort-vs-risk assessment.<br>
I reckon the information gathered so far puts me in quite a
good position, doesn't it? <br>
</li>
</ol>
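<p>To make point 2 concrete: a minimal shell sketch of per-instance reloading, assuming each instance was started with its own pid path (the pid-file paths and instance names below are hypothetical examples, not anything confirmed in this thread). The key fact is that <font face="Courier New, Courier, monospace">nginx -s reload</font> simply sends SIGHUP to the PID recorded in the pid file:</p>

```shell
# Sketch: reload one specific nginx instance out of several, by
# sending SIGHUP (graceful reload) to the master PID recorded in
# that instance's own pid file -- which is all that
# "nginx -s reload" does under the hood.

# reload_instance PIDFILE: signal the master process whose PID is
# stored in PIDFILE
reload_instance() {
  kill -HUP "$(cat "$1")"
}

# Usage against the hypothetical multi-instance layout:
#   reload_instance /var/run/nginx.pid          # main instance
#   reload_instance /var/run/nginx_user1.pid    # user1 instance
#   reload_instance /var/run/nginx_user2.pid    # user2 instance
#
# Equivalently, re-run the binary with the same pid override the
# instance was started with:
#   nginx -s reload -g "pid /var/run/nginx_user1.pid;"
```

<p>The same function works for stop/quit by swapping the signal (TERM/QUIT), mirroring the signal table in the runtime-control documentation.</p>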
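<p>The per-user isolation discussed in point 4 can be sketched as an nginx configuration fragment. The host names, socket paths, and document roots below are hypothetical examples; the point is only that each <font face="Courier New, Courier, monospace">server{}</font> block hands PHP to its own php-fpm pool over a private UNIX socket:</p>

```nginx
# main nginx: starts as root (to bind ports below 1024), then
# switches to the unprivileged user given here
user  nginx_main;

events {}

http {
    # one server{} block per user site; each passes PHP requests to
    # that user's own php-fpm pool over its own UNIX socket, so a
    # compromise of one pool does not directly expose another
    server {
        server_name  user1.example.com;
        root         /srv/user1/www;

        location ~ \.php$ {
            include        fastcgi_params;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # pool running as user1; socket permissions restrict access
            fastcgi_pass   unix:/run/php/user1.sock;
        }
    }

    server {
        server_name  user2.example.com;
        root         /srv/user2/www;

        location ~ \.php$ {
            include        fastcgi_params;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass   unix:/run/php/user2.sock;  # user2's own pool
        }
    }
}
```

<p>Socket file permissions (owner, group, mode, set on the php-fpm side) are what actually enforce who may connect to each pool.</p>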
<p>thx</p>
<p>Stefan<br>
</p>
</div>
<div class="moz-cite-prefix">On 17.10.2018 22:59, Francis Daly
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:20181017205943.i3rl2aewksyibwyo@daoine.org">
<pre class="moz-quote-pre" wrap="">On Tue, Oct 16, 2018 at 09:23:33PM +0200, Stefan Müller wrote:
Hi there,
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">1. documentation
Is there any additional document for the -c command. I find only:
1. <a class="moz-txt-link-freetext" href="http://nginx.org/en/docs/switches.html">http://nginx.org/en/docs/switches.html</a>
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">That page indicates that "-c" means that this nginx instance uses this
named config file instead of the compiled-in default one. I'm not sure
what other documentation is possible.
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap=""> but none of them says that it will start an independent instances of
nginx.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">Every time you run the nginx binary, you run a new instance of nginx,
independent of any other.
Not all command-line argument combinations to nginx mean that it should
run a web server that never exits; many combinations mean "do this one
thing and exit".
The one command-line argument that is used to interact with an
already-running nginx -- -s -- does that interaction by sending a signal
to the appropriate process-id, in exactly the same way that the "kill"
or "pkill" binaries would. The new short-term nginx instance is still
independent of the already-running one, in the same way that "kill"
would be independent of it.
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">2. command line
I assume, that the command line parameters refer to a single
instance environment. How do I use the command line parameters for a
specific instance? Is it like this nginx -V "pid
/var/run/nginx-user1.pid"?
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">Command-line parameters are used when you start a new instance of
nginx. They do not refer to any other instance (with the one "-s"
exception mentioned above).
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">3. root and non-root
only the master / proxy server instance need root access in order to
bind to ports <1024 and change its user-id to the one defined in
the|user <a class="moz-txt-link-rfc2396E" href="https://nginx.org/en/docs/ngx_core_module.html#user"><https://nginx.org/en/docs/ngx_core_module.html#user></a>|
directive in the main context of its .conf file.
The other / backend instances don't have to be started as root as
they don't need to bind to ports, they communicate via UNIX sockets
so all permissions are managed by the user account management.
That is the same, what you said, isn't it?
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">Yes.
An nginx that wants to "listen" on a place that only root can "listen"
on, or write to places that only root can write to, needs to start as
root. Any other nginx does not need to. (Again, excepting "-s".)
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">4. all in all there two layers of isolation
1. dynamic content provide such as PHP
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">nginx does not "do" PHP. Or much in the way of "dynamic" content (in
the sense that it seems to be meant here).
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap=""> each "virtual host" / server{} blocks has its own PHP pool. So
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">That's not an nginx thing, but it is a thing that you can configure if
you want to.
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap=""> the user for pool server{}/user1/ cannot see the pool
server{}/user2/. If /user1/ gets hacked, the hacker won't get
immediate access to /user2/ or the nginx master process, correct?
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">"PHP" is (probably) run under a fastcgi server, as whatever system user
you want that service to run as. You can run multiple fastcgi servers
if you want to, each under their own user account.
nginx can be a client of that server if you use "fastcgi_pass" in your
nginx config that points to the server.
You have nginx-main, starts as root then switches to userM. "The browser"
is a client of this server.
You have (multiple) nginx-userN, runs as userN. nginx-main is a client
of this server.
You have (multiple) php-userP, runs as userP. nginx-userN is a client
of this server.
What specific threat model are you concerned with here?
When a thing gets hacked, is that thing running under the control of root,
or userM, or one of the multiple userNs, or one of the multiple userPs?
The access the outsider gains will (presumably) not exceed the access
that that user has. "root" will be able to access lots of things. "userM"
must be able to access the "userN" listening socket. "userN" must be
able to access the matching "userP" listening socket. "userP" must be
able to access the php files.
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap=""> 2. independent instances of nginx.
In case the master process is breached for whatever reason, the
hacker cannot see the other servers, as long as he doesn't get root
privileges on the machine and the same exploit doesn't exist in the
other servers, correct?
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">Again, I'm unsure what threat model you are concerned about here.
If someone breaks the main nginx to get "root" access, they have root
access. If they break the main nginx to get "userM" access, they will
be able to access nginx-userN (because userM can do that). Is that
enough access to also break nginx-userN so that they can get "userN"
access? (nginx-main and nginx-userN run the same binary. Is the breakage
due to configuration, which might be different between the two? Or due
to something inherent, which will be the same between the two?)
Without trying to be dismissive: if someone can break nginx to gain user
access, it is unlikely to be you or me that they will be attacking first.
I think that based on history, "badly written CGI scripts" (which in
this case corresponds to "badly written PHP") is the most likely way
that web things will be broken. In this design, that PHP runs under
the control of the fastcgi server, as a user userP. If that happens,
the outsider will have access as userP to do whatever the PHP script
and fastcgi server allow them to do. nginx is not involved except as
(probably) an initial pass-through tunnel.
If userP has access to turn off your fridge or reconfigure your nginx-main
or send a million emails or read secret files on your filesystem, then
the outsider will probably have access to do those things too.
Only you can decide what level of risk you're happy with.
Good luck with it,
f
</pre>
</blockquote>
</body>
</html>