Handling ~4000 Concurrent Connections in Apache

A new blog post, and a new direction aside from InfoSec: back to the fundamentals.

This is a small paper/piece of research I did for a task given to me by the CTO of the ISP I am interning at. Despite it being very beginner-level, I thought it wouldn’t hurt to share it with others. Here is the original Handling ~4000 concurrent connections in Apache PDF file. If you like the idea, I can also post about the streaming protocols (highlighting the differences between them), cPanel/WHM configuration, Cacti configuration, etc.

I’ve divided everything I could think of into groups:

1. MPM (Multi-Processing Module) tuning for more concurrent connections:


The MPMs are used to change the basic functionality of the web server. The MPM you use is responsible for the entire HTTP session, starting from listening on the network, taking requests in, and handling those requests.


* Prefork MPM: Every request gets its own (memory-separated) process.
* Worker MPM: Multi-threaded Apache; uses threads instead of processes, is generally faster than prefork, and might use less memory.
* Event MPM: Threaded like the Worker MPM, but designed to allow more requests to be served simultaneously by passing off some processing work to supporting threads, freeing up the main threads to work on new requests.


As PHP is not thread-safe, the common suggestion is to install Apache with the “prefork” MPM. However, the server can be customized to the needs of the particular site: sites that need a great deal of scalability can choose a threaded MPM like worker or event, while sites requiring stability or compatibility with older software can stay with the prefork MPM.
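For instance, here is a minimal sketch of checking which MPM a server currently runs and switching to a threaded one, assuming a Debian/Ubuntu-style apache2 layout (the PHP module name is a placeholder and only applies if PHP is served through PHP-FPM rather than the non-thread-safe mod_php):

# Show which MPM the installed Apache is configured to use.
apache2ctl -V | grep -i 'Server MPM'

# Example: switch from prefork to the event MPM. Only safe when PHP runs
# through PHP-FPM instead of the non-thread-safe mod_php module.
sudo a2dismod php7.4 mpm_prefork    # "php7.4" is a placeholder module name
sudo a2enmod mpm_event
sudo systemctl restart apache2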


* An optimized “Worker MPM” configuration to serve a maximum of 4025 clients (Apache does not allow comments on the same line as a directive, so each explanation sits on its own line):

<IfModule mpm_worker_module>
    # Maximum number of running Apache child processes
    ServerLimit              161
    # Number of child processes to start when the Apache daemon starts
    StartServers             2
    # Maximum number of simultaneous client connections (requests served at once)
    MaxRequestWorkers        4025
    # Minimum number of idle threads kept available to handle request spikes
    MinSpareThreads          25
    # Maximum number of idle threads
    MaxSpareThreads          75
    # Number of threads created per child process
    ThreadsPerChild          25
    # Connections a child process may handle during its lifetime; a non-zero value
    # guards against possible Apache memory leaks (0 means an unlimited lifetime)
    MaxConnectionsPerChild   10000
</IfModule>
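On a Debian/Ubuntu layout these directives would typically live in the worker MPM’s own configuration file; a minimal sketch of applying them, with the file path being an assumption:

# Edit the worker MPM configuration (path assumed for a Debian/Ubuntu layout).
sudo nano /etc/apache2/mods-available/mpm_worker.conf

# Validate the syntax before restarting, so a typo does not take the site down.
sudo apache2ctl configtest    # should print "Syntax OK"
sudo systemctl restart apache2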


* Note: any request going past the “MaxRequestWorkers” limit gets queued. If this value is set too low, connections sent to the queue eventually time out; if it is set too high, memory starts swapping (depending on the free RAM available).


A shell one-liner can be used to determine the average amount of memory consumed by one Apache process, in order to set a suitable “MaxRequestWorkers” value based on the RAM the server has available, while leaving RAM for the rest of the processes too:

ps -ylC apache2 | awk '{x += $8;y += 1} END {print "Apache Memory Usage (MB): "x/1024; print "Average Process Size (MB): "x/((y-1)*1024)}'


For example, in a worst-case scenario where 4025 simultaneous client connections (requests) are made, each thread needs 10 MB of RAM, and 512 MB of free RAM is reserved for the rest of the processes, the minimum amount of RAM the server needs to survive such a hit and serve all clients without any downtime, with no connections timing out and no lag spikes caused by swapping, is as follows:
(4025 * 10 + 512) / 1024 ≈ a minimum of 39.81 GB of RAM is needed before applying other enhancements.
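The same arithmetic as a tiny shell sketch (the figures are the example values from above, not measurements):

CLIENTS=4025        # worst-case simultaneous connections (MaxRequestWorkers)
MB_PER_THREAD=10    # assumed memory cost of serving one connection
RESERVED_MB=512     # RAM kept free for the rest of the processes

awk -v c="$CLIENTS" -v m="$MB_PER_THREAD" -v r="$RESERVED_MB" \
    'BEGIN { printf "Minimum RAM needed: %.2f GB\n", (c*m + r)/1024 }'
# Prints: Minimum RAM needed: 39.81 GB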


2. Resources:


* Enough RAM must be provided to the server to handle the worst-case scenario in mind, as RAM is the most significant resource affected in our case.

* Extra bandwidth (especially on the upstream line) should be given to the server, though how much depends on the type of files/objects it hosts, to ensure stable download speeds on the client side even at high-load times.

* Using fast storage such as SSDs instead of HDDs also helps performance, especially if a database is set up on the server (to speed up dynamic web applications).


3. Security-Related Approaches:

These aim at filtering the requests coming to the server, which lowers the load on it and frees up resources to serve more concurrent legitimate requests instead of malicious ones. This is achieved mainly by defending against DDoS attacks:


* Instruct the router to drop packets from IPs that are obvious sources of attack.
* Using ModSecurity, a web application firewall that blocks unwanted traffic and SQL injection attempts, and provides virtual patching features.
* Using mod_evasive (an Apache module), which blocks the requester if the number of concurrent requests for a page exceeds a specified threshold (a hedged example of this and of the router rule is sketched after this list).
* Using Fail2ban, which scans log files and bans IPs that show malicious signs, according to a list of regular expressions.
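A minimal sketch of the first and third points above, assuming a Debian-style layout; the IP address, file path, and threshold values are all placeholder assumptions to be tuned per site:

# Drop all packets from an obvious attack source (203.0.113.10 is a placeholder IP).
sudo iptables -A INPUT -s 203.0.113.10 -j DROP

# Example mod_evasive thresholds (Apache comments must sit on their own lines).
cat <<'EOF' | sudo tee /etc/apache2/mods-available/evasive.conf
<IfModule mod_evasive20.c>
    # Block a client that requests the same page more than 20 times,
    # or more than 100 objects site-wide, within a 1-second window,
    # and keep it blocked for 60 seconds.
    DOSPageCount      20
    DOSSiteCount      100
    DOSPageInterval   1
    DOSSiteInterval   1
    DOSBlockingPeriod 60
</IfModule>
EOF
sudo a2enmod evasive && sudo systemctl reload apache2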


4. Extra Steps:


* Using Apache’s “mod_cache_disk” module, which implements a disk-based storage manager to cache files and significantly boosts performance (see the sketch after this list).

* Using a Memcached server to save I/O traffic, enhance performance, lower the load (mainly on the DB server), and reduce clients’ waiting times.

* Distributing the requests among a group of servers, in the presence of a load balancer.

* Load Balancers: using the “mod_proxy_balancer” Apache module (see the sketch after this list), or other solutions like “LoadMaster”, which delivers not just load balancing services, but also:
        HTTP Reverse Proxy
        A number of traffic and routing optimization algorithms
        Image caching (reducing web server load)
        Content caching
        Compression
        Content Switching and Rewriting
        SSL Encryption/Decryption (further reducing server load)
        Single Sign-On (preventing users from having to log in again when switched between servers)
        GEO and DNS failover
        Cookie Persistence

* Reverse Proxy Server (gateway server): Apache or other solutions can be used, which increases the number of concurrent legitimate requests that can be handled by the backend server(s) by:
        * Increasing security and blocking DDoS traffic, so more legitimate requests get served and fewer malicious requests take up the backend servers’ resources and time.
        * Compressing server responses before returning them to the client, which frees up bandwidth for the backend servers to serve more clients at a time at higher speeds.
        * Handling SSL decryption and encryption on the reverse proxy server, which frees up resources on the backend servers, resulting in more content being served.
        * Caching: the proxy server takes over repeated requests and handles them itself instead of forwarding them to the backend, reducing the load on the backend servers.

* Additional Apache configuration tips to tune its performance:
        * Removing unused modules to save memory.
        * Placing the logs and the cache on separate physical disks.
        * Using persistent connections, by setting “KeepAlive On” (also covered in the sketch below).
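As a rough illustration of the disk cache, the mod_proxy_balancer setup, and the KeepAlive tip above, here is a minimal sketch for a Debian/Ubuntu layout; the backend addresses, cache path, file name, and all values are assumptions to be tuned per site:

# Enable the caching, proxying, and load-balancing modules.
sudo a2enmod cache cache_disk proxy proxy_http proxy_balancer lbmethod_byrequests

cat <<'EOF' | sudo tee /etc/apache2/conf-available/tuning-sketch.conf
# mod_cache_disk: keep cached responses on disk (ideally a separate physical disk).
CacheRoot /var/cache/apache2/mod_cache_disk
CacheEnable disk /

# mod_proxy_balancer: spread incoming requests over two example backend servers.
<Proxy "balancer://backend">
    BalancerMember "http://192.168.0.11:8080"
    BalancerMember "http://192.168.0.12:8080"
</Proxy>
ProxyPass        "/app/" "balancer://backend/"
ProxyPassReverse "/app/" "balancer://backend/"

# Persistent connections: reuse one TCP connection for several requests.
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
EOF

sudo a2enconf tuning-sketch
sudo apache2ctl configtest && sudo systemctl reload apache2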

Feel free to advise, recommend, or criticize me on Twitter (@BaraSec) or in the comments section below.

