Expert needed to Tune Nginx For Best Performance
targon at technologist.com
Tue Nov 19 02:58:55 UTC 2019
There’s a guy on Upwork (who seems to be MIA) advertising a service that covers my requirements:
Web Performance Expert / Professional Linux Systems Engineer
As a systems engineer and PHP developer, I have developed a lot of skills in the area of performance optimization. Web performance is extremely important for your visitors as well as for how you rank in the major search engines.
My approach to helping clients consists of the following:
- Optimise loading of external assets by following best modern practices
- Analyse the Linux server configuration, this means CPU, Memory and disk performance, Apache / Nginx configuration, PHP configuration, etc.
- Profile the web application to detect the parts of the code that take up the most CPU time (for example, slow SQL queries), use the most memory, wait the longest on disk I/O, or transfer the most data over the network
- Finally, I write a quick report to the client with my findings, and upon agreement, I start optimising the code, system configuration, etc. in order to bring down the time it takes for the page to load.
- Caching can be implemented (like in memcached, redis, ...) and PHP OpCache can be configured and deployed
- SSL settings can be fine-tuned to get a good rating on SSL Labs, keeping in mind the visitors and their browsers.
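For what it’s worth, the TLS tuning described in that last point usually ends up looking something like the following sketch (the hostname, certificate paths, and exact protocol choices here are placeholders, not recommendations):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;   # placeholder

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;  # placeholder path
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;    # placeholder path

    # Drop legacy protocols; TLSv1.2 kept for older browsers.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Session resumption cuts handshake cost for returning visitors.
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;

    # OCSP stapling saves the client a round trip to the CA.
    ssl_stapling on;
    ssl_stapling_verify on;
}
```

The actual cipher and protocol set would be chosen against SSL Labs results and the browsers the site needs to support, as the ad says.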
> On 16 Nov 2019, at 17:26, j94305 <nginx-forum at forum.nginx.org> wrote:
> Optimizing for production is not simply an optimization of one component,
> e.g., NGINX.
> This is also about your security model, the application architecture and
> scaling abilities.
> If you simply have static files to be served, place them into a memory-based
> file system and you'll serve them blindingly fast - in theory. Actual
> performance will depend on the locations of your clients, with their
> latencies and bandwidths, so the approach may not be to accelerate one NGINX
> server location, but rather to distribute geographically and serve
> content at the edge.
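To make that memory-based file system idea concrete for myself, I suppose it would look roughly like this (a sketch; the mount point, size, and cache values are made up):

```nginx
# Outside nginx, the static assets would first be copied onto a tmpfs mount, e.g.:
#   mount -t tmpfs -o size=512m tmpfs /var/www/static-ram
server {
    listen 80;
    server_name static.example.com;   # placeholder

    root /var/www/static-ram;

    location / {
        # Keep per-request overhead low for files already in memory.
        sendfile on;
        tcp_nopush on;
        open_file_cache max=10000 inactive=60s;
        expires 30d;
    }
}
```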
> If you need to serve lower-bandwidth clients, gzip compression may be
> essential to keep latencies down. If you have clients with broadband capabilities,
> you may want to save the extra CPU cycles for other tasks.
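A gzip setup along those lines might be (values are illustrative, not tuned for any particular workload):

```nginx
gzip on;
gzip_comp_level 5;       # diminishing returns above roughly 5-6
gzip_min_length 1024;    # very small responses aren't worth the CPU
gzip_types text/plain text/css application/json application/javascript image/svg+xml;
gzip_vary on;            # let shared caches store both variants
```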
> If your security model requires complex signature verification on every
> request, you may need significantly more CPU power in there, compared to
> when you simply handle easily-verifiable cookies (which were assigned
> through a more compute-intensive calculation and verification scheme - but
> only once per session). OIDC-based schemes come to mind.
> If you have different types of loads, it is sensible to separate them.
> Static files will be served by one cluster of servers, dynamic content will
> be served by another. Having auto-scaling groups on the application side,
> you can scale well, assuming state-less micro-services there.
> If there is a broadly varying spectrum of load situations, you may want to
> consider clusters of NGINX, possibly with auto-scaling, to handle loads more
> effectively. Optimizing single NGINX instances may not be what really
> boosts performance enough.
> So, my point is: optimizing any application environment for production use
> is not just a matter of nifty directives to speed up NGINX. It's a question
> of optimizing the architecture AND its components, including the application
> services. I've seen massive speed-ups just by changing the application into
> a set of state-less micro-services - focus on "state-less".
> While you will find people who help you with NGINX optimization in a given
> scenario, this may not be what you really need to optimize your entire
> application, including NGINX.
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286223,286224#msg-286224
> nginx mailing list
> nginx at nginx.org