How To Set Up A Loadbalanced High-Availability Apache Cluster
2007-8-31
Version 1.0
Falko Timme <ft [at] falkotimme [dot] com>
This tutorial shows how to set up a two-node Apache web server cluster that provides high-availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "Single Point Of Failure", we must provide high-availability for the load balancer, too. Therefore our load balancer will in fact consist out of two load balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently.
The advantage of using a load balancer compared to using round robin DNS is that it takes care of the load on the web server nodes and tries to direct requests to the node with less load, and it also takes care of connections/sessions. Many web applications (e.g. forum software, shopping carts, etc.) make use of sessions, and if you are in a session on Apache node 1, you would lose that session if suddenly node 2 served your requests. In addition to that, if one of the Apache nodes goes down, the load balancer realizes that and directs all incoming requests to the remaining node which would not be possible with round robin DNS.
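To make the dispatch pattern concrete, here is a minimal round-robin picker in plain shell. This is an illustration only — IPVS does its scheduling inside the kernel — and the node list simply reuses the two Apache IP addresses from the setup below.

```shell
#!/bin/sh
# Illustration of the round-robin (rr) dispatch pattern: each new
# connection goes to the next real server in the list. This is NOT
# how IPVS is implemented; it only shows the selection logic.
NODES="192.168.0.101 192.168.0.102"
i=0

next_node() {
    # sets NODE to the next real server, cycling through $NODES
    set -- $NODES
    shift $(( i % $# ))
    NODE=$1
    i=$(( i + 1 ))
}

next_node; echo "$NODE"   # 192.168.0.101
next_node; echo "$NODE"   # 192.168.0.102
next_node; echo "$NODE"   # 192.168.0.101
```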
For this setup, we need four nodes (two Apache nodes and two load balancer nodes) and five IP addresses: one for each node and one virtual IP address that will be shared by the load balancer nodes and used for incoming HTTP requests.
I will use the following setup here:
· Apache node 1: webserver1.example.com (webserver1) - IP address: 192.168.0.101; Apache document root: /var/www
· Apache node 2: webserver2.example.com (webserver2) - IP address: 192.168.0.102; Apache document root: /var/www
· Load Balancer node 1: loadb1.example.com (loadb1) - IP address: 192.168.0.103
· Load Balancer node 2: loadb2.example.com (loadb2) - IP address: 192.168.0.104
· Virtual IP Address: 192.168.0.105 (used for incoming requests)
In this tutorial I will use Debian Sarge for all four nodes. I assume that you have installed a basic Debian installation on all four nodes, and that you have installed Apache on webserver1 and webserver2, with /var/www being the document root of the main web site.
I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you!
1 Enable IPVS On The Load Balancers
First we must enable IPVS on our load balancers. IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so called Layer-4 switching.
loadb1/loadb2:
echo ip_vs_dh >> /etc/modules
echo ip_vs_ftp >> /etc/modules
echo ip_vs >> /etc/modules
echo ip_vs_lblc >> /etc/modules
echo ip_vs_lblcr >> /etc/modules
echo ip_vs_lc >> /etc/modules
echo ip_vs_nq >> /etc/modules
echo ip_vs_rr >> /etc/modules
echo ip_vs_sed >> /etc/modules
echo ip_vs_sh >> /etc/modules
echo ip_vs_wlc >> /etc/modules
echo ip_vs_wrr >> /etc/modules
Then we do this:
loadb1/loadb2:
modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
If you get errors, then most probably your kernel wasn't compiled with IPVS support, and you need to compile a new kernel with IPVS support (or install a kernel image with IPVS support) now.
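Before recompiling anything, you can first check whether the running kernel already ships the IPVS modules. This is a hedged sketch that assumes the usual Debian module layout under /lib/modules/$(uname -r):

```shell
#!/bin/sh
# Succeeds if any ip_vs module file exists under the given modules
# directory (normally /lib/modules/$(uname -r) on Debian).
has_ipvs() {
    find "$1" -name 'ip_vs*' 2>/dev/null | grep -q .
}

if has_ipvs "/lib/modules/$(uname -r)"; then
    echo "IPVS modules found"
else
    echo "no IPVS modules - compile a kernel with IPVS support"
fi
```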
2 Install Ultra Monkey On The Load Balancers
Ultra Monkey is a project to create load balanced and highly available services on a local area network using Open Source components on the Linux operating system; the Ultra Monkey package provides heartbeat (used by the two load balancers to monitor each other and check if the other node is still alive) and ldirectord, the actual load balancer.
To install Ultra Monkey, we must edit /etc/apt/sources.list now and add these two lines (don't remove the other repositories):
loadb1/loadb2:
deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main
Afterwards we do this:
loadb1/loadb2:
apt-get update
and install Ultra Monkey:
loadb1/loadb2:
apt-get install ultramonkey
If you see this warning:
libsensors3 not functional
It appears that your kernel is not compiled with sensors support. As a
result, libsensors3 will not be functional on your system.
If you want to enable it, have a look at "I2C Hardware Sensors Chip
support" in your kernel configuration.
you can ignore it.
During the Ultra Monkey installation you will be asked a few questions. Answer as follows:
Do you want to automatically load IPVS rules on boot?
<-- No
Select a daemon method.
<-- none
3 Enable Packet Forwarding On The Load Balancers
The load balancers must be able to route traffic to the Apache nodes. Therefore we must enable packet forwarding on the load balancers. Add the following lines to /etc/sysctl.conf:
loadb1/loadb2:
# Enables packet forwarding
net.ipv4.ip_forward = 1
Then do this:
loadb1/loadb2:
sysctl -p
4 Configure heartbeat And ldirectord
Now we have to create three configuration files for heartbeat. They must be identical on loadb1 and loadb2!
loadb1/loadb2:
vi /etc/ha.d/ha.cf

logfacility        local0
bcast        eth0                # Linux
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node        loadb1
node        loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Important: As nodenames we must use the output of

uname -n
loadb1/loadb2:
vi /etc/ha.d/haresources

loadb1        \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.105/24/eth0/192.168.0.255
The first word is the output of

uname -n

on loadb1, no matter if you create the file on loadb1 or loadb2! After IPaddr2 we put our virtual IP address 192.168.0.105.
loadb1/loadb2:
vi /etc/ha.d/authkeys

auth 3
3 md5 somerandomstring
somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
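The tutorial leaves the choice of somerandomstring to you. One possible way to generate one — an assumption, not part of the original instructions; any hard-to-guess string works — is to hash a few random bytes:

```shell
#!/bin/sh
# Turn 16 random bytes into a printable 32-character hex secret
# suitable as the shared password in /etc/ha.d/authkeys.
key=$(dd if=/dev/urandom bs=16 count=1 2>/dev/null | md5sum | awk '{print $1}')
echo "auth 3"
echo "3 md5 $key"
```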
/etc/ha.d/authkeys should be readable by root only, therefore we do this:
loadb1/loadb2:
chmod 600 /etc/ha.d/authkeys
ldirectord is the actual load balancer. We are going to configure our two load balancers (loadb1.example.com and loadb2.example.com) in an active/passive setup, which means we have one active load balancer, and the other one is a hot-standby and becomes active if the active one fails. To make it work, we must create the ldirectord configuration file /etc/ha.d/ldirectord.cf which again must be identical on loadb1 and loadb2.
loadb1/loadb2:
vi /etc/ha.d/ldirectord.cf
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual=192.168.0.105:80
        real=192.168.0.101:80 gate
        real=192.168.0.102:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request="ldirector.html"
        receive="Test Page"
        scheduler=rr
        protocol=tcp
        checktype=negotiate
In the virtual= line we put our virtual IP address (192.168.0.105 in this example), and in the real= lines we list the IP addresses of our Apache nodes (192.168.0.101 and 192.168.0.102 in this example). In the request= line we list the name of a file on webserver1 and webserver2 that ldirectord will request repeatedly to see if webserver1 and webserver2 are still alive. That file (that we are going to create later on) must contain the string listed in the receive= line.
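What ldirectord does with request= and receive= can be sketched in plain shell. This is a simplified illustration of the negotiate check, not ldirectord's actual code; check_body is split out so the matching logic is visible on its own:

```shell
#!/bin/sh
# Sketch of an HTTP "negotiate" check: fetch request= from a node
# and report "alive" only if the body contains the receive= string.
check_body() {
    # $1 = HTTP response body
    case "$1" in
        *"Test Page"*) echo alive ;;
        *)             echo dead ;;
    esac
}

check_node() {
    # $1 = real server IP, e.g. 192.168.0.101
    body=$(wget -q -O - "http://$1/ldirector.html") || { echo dead; return 1; }
    check_body "$body"
}
```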
Afterwards we create the system startup links for heartbeat and remove those of ldirectord because ldirectord will be started by the heartbeat daemon:
loadb1/loadb2:
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove
Finally we start heartbeat (and with it ldirectord):
loadb1/loadb2:
/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start
5 Test The Load Balancers
Let's check if both load balancers work as expected:
loadb1/loadb2:
ip addr sh eth0
The active load balancer should list the virtual IP address (192.168.0.105):
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:40:18:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0
The hot-standby should show this:
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:50:e3:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0
loadb1/loadb2:
ldirectord ldirectord.cf status
Output on the active load balancer:
ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1455
Output on the hot-standby:
ldirectord is stopped for /etc/ha.d/ldirectord.cf
loadb1/loadb2:
ipvsadm -L -n
Output on the active load balancer:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.105:80 rr
-> 192.168.0.101:80 Route 0 0 0
-> 192.168.0.102:80 Route 0 0 0
-> 127.0.0.1:80 Local 1 0 0
Output on the hot-standby:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
loadb1/loadb2:
/etc/ha.d/resource.d/LVSSyncDaemonSwap master status
Output on the active load balancer:
master running
(ipvs_syncmaster pid: 1591)
Output on the hot-standby:

master stopped
If your tests went fine, you can now go on and configure the two Apache nodes.
6 Configure The Two Apache Nodes
Finally we must configure our Apache cluster nodes webserver1.example.com and webserver2.example.com to accept requests on the virtual IP address 192.168.0.105.
webserver1/webserver2:
Add the following to /etc/sysctl.conf:
webserver1/webserver2:
vi /etc/sysctl.conf
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1
# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1
# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2
# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as adresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
Then run this:
webserver1/webserver2:
sysctl -p
Add this section for the virtual IP address to /etc/network/interfaces:
webserver1/webserver2:
vi /etc/network/interfaces
auto lo:0
iface lo:0 inet static
address 192.168.0.105
netmask 255.255.255.255
pre-up sysctl -p > /dev/null
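The netmask 255.255.255.255 deserves a note: the virtual IP on lo:0 must be a host route (/32), so the real server answers for 192.168.0.105 alone instead of claiming the whole 192.168.0.0/24 subnet on its loopback. A small sketch (illustration only) converts a dotted-quad mask to its prefix length:

```shell
#!/bin/sh
# Convert a dotted-quad netmask to its prefix length by counting set
# bits (assumes a contiguous mask, as netmasks here always are).
mask_bits() {
    IFS=. read a b c d <<EOF
$1
EOF
    total=0
    for o in $a $b $c $d; do
        while [ "$o" -gt 0 ]; do
            total=$(( total + o % 2 ))
            o=$(( o / 2 ))
        done
    done
    echo "$total"
}

mask_bits 255.255.255.255   # 32: exactly one address, the VIP
mask_bits 255.255.255.0     # 24: the whole subnet - wrong for lo:0
```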
Then run this:
webserver1/webserver2:
ifup lo:0
Finally we must create the file ldirector.html. This file is requested by the two load balancer nodes repeatedly so that they can see if the two Apache nodes are still running. I assume that the document root of the main apache web site on webserver1 and webserver2 is /var/www, therefore we create the file /var/www/ldirector.html:
webserver1/webserver2:
vi /var/www/ldirector.html
Test Page
7 Further Testing
You can now access the web site that is hosted by the two Apache nodes by typing http://192.168.0.105 in your browser.
Now stop the Apache on either webserver1 or webserver2. You should then still see the web site on http://192.168.0.105 because the load balancer directs requests to the working Apache node. Of course, if you stop both Apaches, then your request will fail.
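Failover is not instantaneous — heartbeat needs a moment to notice a dead node and move the virtual IP — so a single request issued mid-switch can fail. A retry wrapper like the following (an illustration, not part of the original setup) mimics a patient client during these tests:

```shell
#!/bin/sh
# Run a command, retrying up to 3 times with a 1-second pause
# between attempts; returns success as soon as one attempt works.
fetch_with_retry() {
    n=0
    while [ "$n" -lt 3 ]; do
        if "$@"; then
            return 0
        fi
        n=$(( n + 1 ))
        sleep 1
    done
    return 1
}

# usage during a failover test (VIP from this tutorial):
# fetch_with_retry wget -q -O - http://192.168.0.105/ldirector.html
```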
Now let's assume that loadb1 is our active load balancer, and loadb2 is the hot-standby. Now stop heartbeat on loadb1:
loadb1:
/etc/init.d/heartbeat stop
Wait a few seconds, and then try http://192.168.0.105 again in your browser. You should still see your web site because loadb2 has taken the active role now.
Now start heartbeat again on loadb1:
loadb1:
/etc/init.d/heartbeat start
loadb2 should still have the active role. Do the tests from chapter 5 again on loadb1 and loadb2; you should see the inverse of the results from before.
If you have also passed these tests, then your loadbalanced Apache cluster is working as expected. Have fun!
8 Further Reading
This tutorial shows how to loadbalance two Apache nodes. It does not show how to keep the files in the Apache document root in sync or how to create a storage solution like an NFS server that both Apache nodes can use, nor does it provide a solution how to manage your MySQL database(s). You can find solutions for these issues here: