Red Flag Linux: An LVS Cluster Installation Walkthrough on VMware 5
I. Installing and configuring NAT-based LVS
1. Hardware requirements and network topology
2. Software
3. Compile and install ipvsadm
ln -s /usr/src/linux-2.4.30 /usr/src/linux
tar -zxvf ipvsadm-1.21-11.tar.gz
cd ipvsadm-1.21-11
make all
make install
Then run the command ipvsadm --version; it should print the following:
ipvsadm v1.21 2004/02/23 (compiled with popt and IPVS v1.0.12)
4. Configure LVS
(1) On 202.99.59.110:
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "0" > /proc/sys/net/ipv4/conf/all/send_redirects
echo "0" > /proc/sys/net/ipv4/conf/default/send_redirects
echo "0" > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo "0" > /proc/sys/net/ipv4/conf/eth1/send_redirects
Clear the ipvsadm table:
/sbin/ipvsadm -C
Set up the LVS service with ipvsadm:
#add http to VIP with rr scheduling
/sbin/ipvsadm -A -t 202.99.59.110:80 -s rr
Add the first realserver:
#forward http to realserver 192.168.10.1 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t 202.99.59.110:80 -r 192.168.10.1:80 -m -w 1
Add the second realserver:
#forward http to realserver 192.168.10.2 using LVS-NAT (-m), with weight=1
/sbin/ipvsadm -a -t 202.99.59.110:80 -r 192.168.10.2:80 -m -w 1
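For convenience, the director-side commands above can be wrapped in a small helper that prints the rules before they are applied, so they can be reviewed or piped to sh. This is a sketch of ours, not part of the original setup: the helper name nat_rules and the dry-run style are assumptions, while the addresses and flags match the example above.

```shell
#!/bin/sh
# Print the LVS-NAT rules for one VIP and a list of realservers
# (dry run: pipe the output to sh as root to actually apply it).
nat_rules() {
    vip=$1; shift
    echo "/sbin/ipvsadm -C"                    # clear the IPVS table
    echo "/sbin/ipvsadm -A -t $vip:80 -s rr"   # http virtual service, rr scheduling
    for rip in "$@"; do                        # one NAT (-m) rule per realserver
        echo "/sbin/ipvsadm -a -t $vip:80 -r $rip:80 -m -w 1"
    done
}

nat_rules 202.99.59.110 192.168.10.1 192.168.10.2
```

Running nat_rules ... | sh as root applies exactly the commands listed above, and adding a realserver later is a matter of appending one more argument.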
(2) Realserver configuration
On 192.168.10.1 (realserver1) and 192.168.10.2 (realserver2), set the default gateway to 192.168.10.254 and start the apache service on each.
From a client, open http://202.99.59.110/ in a browser several times, then run ipvsadm on 202.99.59.110; the output should look similar to this:
IP Virtual Server version 1.0.12 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  202.99.59.110:http rr
  -> 192.168.10.1:http            Masq    1      0          33
  -> 192.168.10.2:http            Masq    1      0          33
The output above shows that the LVS server is running correctly.
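To check the balance from a script instead of by eye, the realserver lines of ipvsadm's numeric listing (ipvsadm -Ln) can be summed with awk. This is a sketch that assumes output shaped like the listing above; conn_per_rs is a name of ours.

```shell
#!/bin/sh
# Sum ActiveConn + InActConn per realserver from `ipvsadm -Ln` output.
# Realserver lines start with "->"; the last two fields are the counters.
conn_per_rs() {
    awk '$1 == "->" && $NF ~ /^[0-9]+$/ { print $2, $(NF-1) + $NF }'
}

# Sample input shaped like the numeric listing:
conn_per_rs <<'EOF'
IP Virtual Server version 1.0.12 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  202.99.59.110:80 rr
  -> 192.168.10.1:80              Masq    1      0          33
  -> 192.168.10.2:80              Masq    1      0          33
EOF
```

In real use it would be fed directly: ipvsadm -Ln | conn_per_rs. For the sample above it prints one "address total" pair per realserver, which makes an uneven split easy to spot.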
II. Configuring direct-routing (DR) based LVS
1. Hardware requirements and network topology
2. Install the software:
On the director (202.99.59.109), install the kernel and the ipvsadm administration tool in the same way as above.
3. Configure LVS
(1) On 202.99.59.109:
Adjust the kernel parameters by editing /etc/sysctl.conf as follows:
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.eth0.send_redirects = 1
Then make the changes take effect immediately:
sysctl -p
Configure the VIP address:
/sbin/ifconfig eth0:0 202.99.59.110 broadcast 202.99.59.110 netmask 255.255.255.255 up
/sbin/route add -host 202.99.59.110 dev eth0:0
Clear the ipvsadm table:
/sbin/ipvsadm -C
Set up the LVS service with ipvsadm:
/sbin/ipvsadm -A -t 202.99.59.110:http -s rr
Add the realservers:
#forward http to realserver using direct routing with weight 1
/sbin/ipvsadm -a -t 202.99.59.110:http -r 192.168.1.12 -g -w 1
/sbin/ipvsadm -a -t 202.99.59.110:http -r 192.168.1.13 -g -w 1
(2) On each realserver:
VIP=192.168.8.11
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p
If there are more realservers, simply add them in the same way; then run this script on each of them.
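Since every additional realserver needs exactly the same settings, they can be generated from the VIP. The sketch below prints the commands rather than running them (a dry run, so they can be reviewed or piped to sh as root); dr_rs_conf is our name, while the settings themselves are the ones listed above.

```shell
#!/bin/sh
# Print the LVS-DR realserver configuration for a given VIP.
dr_rs_conf() {
    vip=$1
    # VIP on lo:0 so the realserver accepts packets addressed to it
    echo "/sbin/ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up"
    echo "/sbin/route add -host $vip dev lo:0"
    # keep the realserver from answering ARP for the VIP --
    # only the director may own the VIP on the wire
    for dev in lo all; do
        echo "echo 1 > /proc/sys/net/ipv4/conf/$dev/arp_ignore"
        echo "echo 2 > /proc/sys/net/ipv4/conf/$dev/arp_announce"
    done
}

dr_rs_conf 192.168.8.11
```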
Test: start the httpd service on each realserver.
On realserver1 run: echo "This is realserver1" > /var/www/html/index.html
On realserver2 run: echo "This is realserver2" > /var/www/html/index.html
Open http://192.168.8.11 in a browser; you should see "This is realserver1" and "This is realserver2" in turn.
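The alternation seen in the browser is simply round-robin (rr) scheduling: the director hands each new connection to the next realserver in the list and wraps around at the end. A toy sketch of that selection order (illustration only; the real scheduler lives in the kernel's IPVS code, and rr_demo is our name):

```shell
#!/bin/sh
# Toy round-robin selection over two realservers.
rr_demo() {
    set -- realserver1 realserver2   # the pool
    i=0
    for request in 1 2 3 4; do
        idx=$(( i % $# + 1 ))        # next server, wrapping around
        eval pick=\$$idx
        echo "request $request -> $pick"
        i=$(( i + 1 ))
    done
}

rr_demo
```

Four requests land alternately on realserver1 and realserver2, which is exactly the pattern the browser test shows.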
III. Configuring a tunnel-mode (TUN) LVS cluster
1. LVS director server script
#vi TunLVS
#!/bin/sh
VIP=192.168.8.11
RIP1=192.168.8.5
RIP2=192.168.8.6
. /etc/rc.d/init.d/functions
case "$1" in
start)
echo "Start Lvs of DirectorServer"
#set vip server
/sbin/ifconfig tunl0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev tunl0
#clear IPVS table
/sbin/ipvsadm -C
#set lvs
/sbin/ipvsadm -A -t $VIP:80 -s rr
/sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -i
/sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -i
#Run Lvs
/sbin/ipvsadm
;;
stop)
echo "Close Lvs DirectorServer "
ifconfig tunl0 down
/sbin/ipvsadm -C
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
2. Configure the realservers
#!/bin/sh
VIP=192.168.8.11
. /etc/rc.d/init.d/functions
case "$1" in
start)
echo "tunl port starting"
/sbin/ifconfig tunl0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev tunl0
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "1" > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p
;;
stop)
echo "tunl port closing"
ifconfig tunl0 down
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
Run this script on every realserver; it makes the realserver ignore ARP requests for the VIP and configures the VIP itself.
IV. Configuring high-availability LVS with heartbeat
First decide whether LVS runs in DR or TUN mode, following the configurations above; this example uses DR mode.
1. LVS director server script
#!/bin/sh
VIP=192.168.8.11
RIP1=192.168.8.6
RIP2=192.168.8.5
. /etc/rc.d/init.d/functions
case "$1" in
start)
echo "start LVS of DirectorServer"
#Set the Virtual IP Address
/sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev eth0:1
#Clear IPVS Table
/sbin/ipvsadm -C
#Set Lvs
/sbin/ipvsadm -A -t $VIP:80 -s rr
/sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
/sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
#Run Lvs
/sbin/ipvsadm
;;
stop)
echo "close LVS Directorserver"
/sbin/ipvsadm -C
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
2. The realservers can use the same configuration script as above.
3. Install heartbeat
3.1 Installation
tar -zxvf libnet.tar.gz
cd libnet
./configure
make
make install
groupadd -g 694 haclient
useradd -u 694 -g haclient hacluster
tar zxf heartbeat-1.99.4.tar.gz
cd heartbeat-1.99.4
./ConfigureMe configure
make
make install
cp doc/ha.cf doc/haresources doc/authkeys /etc/ha.d/
cp ldirectord/ldirectord.cf /etc/ha.d/
3.2 Main configuration file /etc/ha.d/ha.cf:
logfile /var/log/ha-log
keepalive 2
deadtime 60
warntime 10
initdead 120
udpport 694
bcast eth0 # Linux
auto_failback on
ping_group group1 192.168.8.2 192.168.8.3
respawn root /usr/lib/heartbeat/ipfail
apiauth ipfail gid=root uid=root
hopfudge 1
use_logd yes
node test7
node test8
crm on
3.3 Resource file /etc/ha.d/haresources:
test7 192.168.8.11 httpd
This makes test7 the primary node; the cluster IP address is 192.168.8.11 and the cluster service is httpd.
3.4 ldirectord configuration file /etc/ha.d/ldirectord.cf:
checktimeout=3
checkinterval=1
fallback=127.0.0.1:80
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=yes
# Sample for an http virtual service
virtual=192.168.8.11:80
real=192.168.8.6:80 gate
real=192.168.8.5:80 gate
fallback=127.0.0.1:80 gate
service=http
request="index.html"
receive="Test Page"
protocol=tcp
checktype=negotiate
checkport=80
Add a monitoring page on each Real Server:
echo "Test Page" > /var/www/html/index.html
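The negotiate check configured above means ldirectord periodically fetches the request page from each realserver and looks for the receive string in the body; a realserver whose page does not contain the string is taken out of rotation (with quiescent=yes its weight is set to 0 rather than the rule being removed). The matching step boils down to a simple string test; body_matches and the canned body below are ours for illustration:

```shell
#!/bin/sh
# Core of a negotiate-style HTTP check (sketch): the check passes
# only when the fetched page body contains the expected string.
body_matches() {
    echo "$1" | grep -q "$2"
}

# In real use the body would come from the realserver, e.g.:
#   body=$(curl -s http://192.168.8.6/index.html)
body="Test Page"
if body_matches "$body" "Test Page"; then
    echo "realserver healthy"
else
    echo "realserver failed check"
fi
```

This is why the monitoring page must contain exactly the receive string: an error page or an empty document fails the match and the realserver is pulled from the pool.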
Modify /etc/ha.d/haresources:
test7 192.168.8.11 ipvsadm ldirectord httpd
Now start heartbeat on the primary node:
/etc/init.d/heartbeat start
and start heartbeat on the backup node:
/etc/init.d/heartbeat start
Test: shut down the primary node; the backup node should automatically take over the director server service.