HAProxy for Load Balancing and Protecting Apache
HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for very high traffic web sites and powers quite a number of the world's most visited ones. Over the years it has become the de facto standard open-source load balancer, is now shipped with most mainstream Linux distributions, and is often deployed by default on cloud platforms.

Its mode of operation makes its integration into existing architectures easy and risk-free, while still offering the possibility of not exposing fragile web servers to the net. The available versions are:

version 1.5 : the most featureful version; supports SSL, IPv6, keep-alive, DDoS protection, etc.
version 1.4 : the most stable version for people who don't need SSL; still provides client-side keep-alive.
version 1.3 : the old stable version for companies who cannot upgrade for internal policy reasons.
Download the haproxy-1.5-dev7 source tarball from haproxy.org, then unpack it:

tar xvfz haproxy-1.5-dev7.tar.gz
cd haproxy-1.5-dev7

Now compile and install it; this example uses CentOS, where the build target for a Linux 2.6 kernel is linux26:

make TARGET=linux26
make install
Now create a directory for HAProxy and open a new configuration file there:
mkdir /etc/haproxy
cd /etc/haproxy
vi haproxy.cfg
Copy the configuration below into haproxy.cfg, changing the IP address (serverip) to your own server's address:
global
maxconn 20000        # count about 1 GB per 20000 connections
pidfile /var/run/
stats socket /var/run/haproxy.stat mode 600

defaults
mode http
maxconn 19500        # Should be slightly smaller than global.maxconn.
timeout client 60s   # Client and server timeout must match the longest
timeout server 60s   # time we may wait for a response from the server.
timeout queue  60s   # Don't queue requests too long if saturated.
timeout connect 4s   # There's no reason to change this one.
timeout http-request 5s    # A complete request may never take that long.
# Uncomment the following one to protect against nkiller2. But warning!
# some slow clients might sometimes receive truncated data if last
# segment is lost and never retransmitted :
# option nolinger
option http-server-close
option abortonclose
balance roundrobin
option forwardfor    # set the client's IP in X-Forwarded-For.
option tcp-smart-accept
option tcp-smart-connect
retries 2

frontend public
bind serverip:80     # serverip is a placeholder for the server's public IP
# table used to store behaviour of source IPs
stick-table type ip size 200k expire 5m store gpc0,conn_rate(10s)

# IPs that have gpc0 > 0 are blocked until they go away for at least 5 minutes
acl source_is_abuser src_get_gpc0 gt 0
tcp-request connection reject if source_is_abuser

# connection rate abuses get blocked
acl conn_rate_abuse  sc1_conn_rate gt 30
acl mark_as_abuser   sc1_inc_gpc0  gt 0
tcp-request connection track-sc1 src
tcp-request connection reject if conn_rate_abuse mark_as_abuser

default_backend apache

backend apache
# set the maxconn parameter below to match Apache's MaxClients minus
# one or two connections so that you can still directly connect to it.
stats uri /haproxy?stats
server srv serverip:8181 maxconn 254

# Enable the stats page on a dedicated port (8811). Monitoring request errors
# on the frontend will tell us how many potential attacks were blocked.
listen stats
# Uncomment "disabled" below to disable the stats page :
# disabled
bind :8811
stats uri /
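The protection rules above act on the raw connection rate only. As an optional extension (a sketch, not part of the original setup), the same stick-table can additionally track the HTTP request rate, catching abusers who send many requests over few keep-alive connections; the 100-requests-per-10-seconds threshold is an assumed value you would tune for your own traffic:

```
# Hypothetical variant: this line would replace the stick-table line in
# the frontend, adding storage for the HTTP request rate
stick-table type ip size 200k expire 5m store gpc0,conn_rate(10s),http_req_rate(10s)

# deny clients sending more than 100 requests per 10 seconds, and mark
# them as abusers so the existing gpc0 check keeps blocking them
acl req_rate_abuse  sc1_http_req_rate gt 100
http-request deny if req_rate_abuse mark_as_abuser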
In the above file, replace serverip with your server's IP address.
Change Apache's port to 8181, since that is the port the configuration's server line forwards traffic to.
In WHM, go to Tweak Settings, find "Apache non-SSL IP/port", and change it to 8181.
Restart Apache:
/etc/init.d/apache2 restart
Start HAProxy:
haproxy -f /etc/haproxy/haproxy.cfg
Now check that it is working: browse to the stats page at http://serverip:8811/ (replace serverip with the server IP used in the configuration file) and you will see the full statistics report generated by HAProxy.
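The same stats listener also serves the counters in CSV form (at http://serverip:8811/;csv), which is handier for scripting. As a sketch, the snippet below extracts the frontend's request-error counter ("ereq", the 13th CSV field), which the setup above uses as a rough measure of blocked attacks; the CSV line here is made-up sample data, not real output:

```shell
# Sample of HAProxy's CSV stats output (made-up numbers); real data would
# come from the stats listener, e.g.:  curl -s "http://serverip:8811/;csv"
cat > /tmp/stats.csv <<'EOF'
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq
public,FRONTEND,,,3,12,19500,1045,98765,43210,0,0,27
EOF

# Field 13 ("ereq") holds the frontend's request error counter
awk -F, '$2 == "FRONTEND" { print $1 " request errors: " $13 }' /tmp/stats.csv
# prints: public request errors: 27
```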

About the Author:

I am a Linux administrator and security expert. Through this site I help people learn about Linux, and as a security specialist I also follow hacking-related news.
