http://kb.linuxvirtualserver.org/api.php?action=feedcontributions&user=Wensong&feedformat=atom
LVSKB - User contributions [en]
2024-03-28T16:53:00Z
User contributions, MediaWiki 1.26.2
http://kb.linuxvirtualserver.org/wiki?title=IPVS_FULLNAT_and_SYNPROXY&diff=43833
IPVS FULLNAT and SYNPROXY
2012-08-09T10:42:58Z
<p>Wensong: tidy up introduction</p>
<hr />
<div>== Introduction==<br />
<br />
'''FullNAT: A new packet forwarding method for IPVS, in addition to DR/NAT/TUNNEL'''<br />
<br />
The main principle is as follows: the module introduces a local IP address (an IDC-internal IP address, lip). IPVS translates cip-vip to/from lip-rip, where lip and rip are both IDC-internal IP addresses, so the LVS load balancer and the real servers can be in different VLANs, and the real servers only need access to the internal network. See [[LVS/FNAT | Virtual Server via Full NAT]] for more information.<br />
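The address rewriting can be illustrated with a small shell sketch. All addresses below are made-up examples, not values from the LVS code:

```shell
#!/bin/sh
# Illustration of the FULLNAT translation. A request arrives as
# cip:cport -> vip:vport and leaves the balancer as lip:lport -> rip:rport;
# replies are rewritten back the other way.
CIP="203.0.113.7:51234"   # client ip:port (cip)
VIP="203.0.113.100:80"    # virtual service ip:port (vip)
LIP="10.0.5.1:40001"      # local ip:port picked by the balancer (lip)
RIP="10.0.9.20:80"        # real server ip:port (rip)

# Inbound: the balancer rewrites both ends of the connection.
echo "client->balancer : $CIP -> $VIP"
echo "balancer->server : $LIP -> $RIP"
# Outbound: replies from rip->lip are rewritten back to vip->cip.
echo "server->balancer : $RIP -> $LIP"
echo "balancer->client : $VIP -> $CIP"
```

Because the real server only ever sees lip and rip, it needs no route back to the client network, which is what lets the balancer and real servers live in different VLANs.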
<br />
'''SYNPROXY: Defence module against SYN flooding attacks'''<br />
<br />
The main principle is based on TCP SYN cookies; see http://en.wikipedia.org/wiki/SYN_cookies for details.<br />
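The SYN-cookie idea can be sketched in shell. This is a simplified illustration of the principle (state encoded as a keyed hash in the initial sequence number), not the kernel's actual cookie format:

```shell
#!/bin/sh
# Simplified SYN-cookie idea: instead of storing per-SYN state, derive a
# cookie from the connection tuple and a secret, send it in the SYN+ACK,
# and verify the value echoed back in the final ACK.
SECRET="per-boot-random-secret"   # placeholder; the kernel uses real entropy

cookie() {
    # cookie(saddr, sport, daddr, dport) -> short value derived from a keyed hash
    printf '%s' "$1:$2:$3:$4:$SECRET" | sha256sum | cut -c1-8
}

c=$(cookie 203.0.113.7 51234 203.0.113.100 80)
echo "SYN+ACK carries cookie: $c"

# On the final ACK, recompute and compare; a match proves the handshake
# completed without the proxy having kept any per-SYN state.
v=$(cookie 203.0.113.7 51234 203.0.113.100 80)
[ "$c" = "$v" ] && echo "cookie valid, create connection state"
```

Spoofed SYNs from a flood never complete the handshake, so they never cause connection state to be allocated on the load balancer or the real servers.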
<br />
<br />
This FullNAT and SYNPROXY code for IPVS in Linux kernel 2.6.32 was written by Jiaming Wu at taobao.com, Jian Chen at 360.cn, and Shunmin Zhu at taobao.com, with some advising from Wensong Zhang at taobao.com. The code was influenced by the ideas of the source NAT and SYNPROXY version that was hard-coded into IPVS in Linux kernel 2.6.9 by Wen Li, Yan Tian, Jian Chen, Yang Yi, Yaoguang Sun, Fang Han, Ying Liu and Jiaming Wu at baidu.com in 2009.<br />
<br />
<br />
The FullNAT and SYNPROXY support were added to keepalived/ipvsadm by Jiajun Chen and Ziang Chen at taobao.com. <br />
<br />
Please note that FullNAT and SYNPROXY have only had limited testing.<br />
<br />
<br />
== Document ==<br />
<br />
[[Media:LVS操作手册.zip]]<br />
<br />
[[Media:lvs-fullnat-synproxy-doc.zip]]<br />
<br />
== Download ==<br />
<br />
[[Media:lvs-fullnat-synproxy.tar.gz]]<br />
<br />
[[Media:linux-2.6.32-220.23.1.el6.x86_64.lvs.src.tar.gz]]<br />
<br />
[[Media:linux-2.6.32-220.23.1.el6.x86_64.rs.src.tar.gz]]<br />
<br />
== Building ==<br />
<br />
1. LVS Kernel<br />
<br />
1.1 get kernel rpm from redhat<br />
wget ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/kernel-2.6.32-220.23.1.el6.src.rpm<br />
<br />
1.2 get kernel source code from rpm<br />
edit ~/.rpmmacros (e.g. with vim) and add the following macros:<br />
%_topdir /home/pukong/rpms<br />
%_tmppath /home/pukong/rpms/tmp<br />
%_sourcedir /home/pukong/rpms/SOURCES<br />
%_specdir /home/pukong/rpms/SPECS<br />
%_srcrpmdir /home/pukong/rpms/SRPMS<br />
%_rpmdir /home/pukong/rpms/RPMS<br />
%_builddir /home/pukong/rpms/BUILD<br />
cd /home/pukong;<br />
mkdir -p rpms/tmp rpms/SOURCES rpms/SPECS rpms/SRPMS rpms/RPMS rpms/BUILD;<br />
rpm -ivh kernel-2.6.32-220.23.1.el6.src.rpm;<br />
cd /home/pukong/rpms/SPECS;<br />
rpmbuild -bp kernel.spec;<br />
<br />
Then you can find the kernel source code in /home/pukong/rpms/BUILD.<br />
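Steps 1.1 and 1.2 can be collected into one script. The tree location follows the example user `pukong` but defaults to your home directory here; adjust `TOP` for your environment:

```shell
#!/bin/sh
# Sketch of steps 1.1-1.2: create an rpmbuild tree and emit the matching
# rpm macros. TOP is an example path (the instructions use /home/pukong/rpms).
TOP=${TOP:-$HOME/rpms}

mkdir -p "$TOP/tmp" "$TOP/SOURCES" "$TOP/SPECS" "$TOP/SRPMS" "$TOP/RPMS" "$TOP/BUILD"

# Emit the macros; on the build host, redirect this into ~/.rpmmacros.
cat <<EOF
%_topdir $TOP
%_tmppath $TOP/tmp
%_sourcedir $TOP/SOURCES
%_specdir $TOP/SPECS
%_srcrpmdir $TOP/SRPMS
%_rpmdir $TOP/RPMS
%_builddir $TOP/BUILD
EOF

# On the build host, then run:
#   rpm -ivh kernel-2.6.32-220.23.1.el6.src.rpm
#   cd "$TOP/SPECS" && rpmbuild -bp kernel.spec
# after which the prepared source tree appears under $TOP/BUILD.
```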
<br />
1.3 add lvs patch<br />
cd /home/pukong/rpms/BUILD/;<br />
cd kernel-2.6.32-220.23.1.el6/linux-2.6.32-220.23.1.el6.x86_64/;<br />
cp lvs-2.6.32-220.23.1.el6.patch ./;<br />
patch -p1 < lvs-2.6.32-220.23.1.el6.patch;  # the patch is in lvs-fullnat-synproxy.tar.gz<br />
<br />
<br />
Or you can directly get source code from linux-2.6.32-220.23.1.el6.x86_64.lvs.src.tar.gz;<br />
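Since a half-applied patch leaves the tree in a bad state, it can be worth dry-running first. The helper below is hypothetical (`apply_patch` is not part of the LVS tarball) and works for both the lvs and toa patches:

```shell
#!/bin/sh
# Hypothetical helper: verify a patch applies cleanly (--dry-run) before
# touching the tree, then apply it for real.
apply_patch() {
    file=$1
    if patch -p1 --dry-run -s < "$file"; then
        patch -p1 -s < "$file"
        echo "applied $file"
    else
        echo "refusing to apply $file: dry run failed" >&2
        return 1
    fi
}

# Example (run from the kernel source tree):
#   apply_patch lvs-2.6.32-220.23.1.el6.patch
```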
<br />
1.4 compile and install<br />
make -j16;<br />
make modules_install;<br />
make install;<br />
<br />
2. RealServer Kernel (TOA)<br />
<br />
2.1 get the kernel source code, the same as steps 1.1 and 1.2;<br />
<br />
2.2 add toa patch<br />
cd /home/pukong/rpms/BUILD/;<br />
cd kernel-2.6.32-220.23.1.el6/linux-2.6.32-220.23.1.el6.x86_64/;<br />
cp toa-2.6.32-220.23.1.el6.patch ./;<br />
patch -p1 < toa-2.6.32-220.23.1.el6.patch;  # the patch is in lvs-fullnat-synproxy.tar.gz<br />
<br />
Or you can get source code directly from linux-2.6.32-220.23.1.el6.x86_64.rs.src.tar.gz;<br />
<br />
2.3 compile and install<br />
make -j16;<br />
make modules_install;<br />
make install;<br />
<br />
3. LVS Tools (keepalived/ipvsadm/quagga)<br />
cd /home/pukong;<br />
cp lvs-tools.tar.gz ./;  # lvs-tools.tar.gz is in lvs-fullnat-synproxy.tar.gz<br />
tar xzf lvs-tools.tar.gz;<br />
<br />
3.1 keepalived install<br />
cd /home/pukong/tools/keepalived;<br />
./configure --with-kernel-dir="/lib/modules/`uname -r`/build";<br />
make;<br />
make install;<br />
<br />
3.2 ipvsadm install<br />
cd /home/pukong/tools/ipvsadm;<br />
make;<br />
make install;<br />
<br />
3.3 quagga install<br />
cd /home/pukong/tools/quagga;<br />
./configure --disable-ripd --disable-ripngd --disable-bgpd --disable-watchquagga --disable-doc --enable-user=root --enable-vty-group=root --enable-group=root --enable-zebra --localstatedir=/var/run/quagga;<br />
make;<br />
make install;<br />
<br />
[[Category:IPVS]]<br />
[[Category:Load Balancing]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=IPVS&diff=43822IPVS2012-08-08T09:21:54Z<p>Wensong: </p>
<hr />
<div>IPVS (IP Virtual Server) implements transport-layer [[load balancing]] inside the Linux kernel, so-called [[Layer-4 switching]]. IPVS running on a host acts as a [[load balancer]] in front of a cluster of real servers; it can direct requests for TCP/UDP-based services to the real servers, and it makes the services of the real servers appear as a virtual service on a single IP address.<br />
<br />
== IP Load Balancing Technologies ==<br />
<br />
Since IP load balancing techniques have good scalability, IPVS extends the TCP/IP stack of the Linux kernel to support the following IP load balancing techniques: [[LVS/NAT]], [[LVS/TUN]], [[LVS/DR]] and [[LVS/FNAT]]. The box running IPVS acts as a [[load balancer]] of network connections from clients who know a single IP address for a service, and distributes them to a set of servers that actually perform the work.<br />
<br />
* [[LVS/NAT | Virtual Server via Network Address Translation]]<br />
* [[LVS/TUN | Virtual Server via IP Tunneling]]<br />
* [[LVS/DR | Virtual Server via Direct Routing]]<br />
* [[LVS/FNAT | Virtual Server via Full NAT]]<br />
<br />
== Job Scheduling Algorithms ==<br />
<br />
=== Connection Scheduling Algorithms inside the Kernel ===<br />
<br />
IPVS schedules jobs at connection granularity inside the Linux kernel. Here a connection is defined as data communication between a client socket and a server socket, no matter whether it uses the TCP or UDP protocol. For scheduling UDP datagrams, the IPVS [[load balancer]] records UDP datagram scheduling with a configurable timeout; the default UDP timeout is 300 seconds. Before the UDP connection times out, all UDP datagrams from the same socket (protocol, IP address and port) will be directed to the same server.<br />
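Connection-granularity scheduling can be sketched in shell: the first datagram from a tuple picks a server (round robin here), and later datagrams from the same tuple reuse the recorded entry. The table and addresses are illustrative stand-ins for the kernel's connection table:

```shell
#!/bin/sh
# Sketch of connection-granularity scheduling with a toy connection table.
SERVERS="10.0.9.20 10.0.9.21 10.0.9.22"
TABLE=""      # newline-separated "tuple=server" pairs
NEXT=1        # round-robin cursor over the 3 servers

schedule() {
    tuple=$1
    entry=$(printf '%s\n' "$TABLE" | grep "^$tuple=" | head -n1)
    if [ -n "$entry" ]; then
        echo "${entry#*=}"                 # existing connection: same server
        return
    fi
    server=$(echo $SERVERS | cut -d' ' -f$NEXT)   # new connection: round robin
    NEXT=$(( NEXT % 3 + 1 ))
    TABLE="$TABLE
$tuple=$server"
    echo "$server"
}

schedule "udp:203.0.113.7:51234"   # first datagram: picks a server
schedule "udp:203.0.113.8:40000"   # new tuple: next server in rotation
schedule "udp:203.0.113.7:51234"   # same tuple: same server as before
```

In the real kernel, the table entry for a UDP tuple is dropped after the configurable timeout, after which the next datagram is scheduled afresh.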
<br />
IPVS has implemented ten connection scheduling algorithms inside the kernel so far:<br />
<br />
* [[Round-Robin Scheduling]] <br />
* [[Weighted Round-Robin Scheduling]] <br />
* [[Least-Connection Scheduling]]<br />
* [[Weighted Least-Connection Scheduling]] <br />
* [[Locality-Based Least-Connection Scheduling]] <br />
* [[Locality-Based Least-Connection with Replication Scheduling]] <br />
* [[Destination Hashing Scheduling]] <br />
* [[Source Hashing Scheduling]] <br />
* [[Shortest Expected Delay Scheduling]] <br />
* [[Never Queue Scheduling]]<br />
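As a flavor of what the simplest of these schedulers do, here is a sketch of plain round-robin versus weighted round-robin. The server names and weights are illustrative, and the weighted expansion below is a simplification, not the kernel's interleaved WRR algorithm:

```shell
#!/bin/sh
# Round robin: cycle through servers in order.
rr_cycle() {
    # print n picks from a fixed 3-server round-robin sequence
    n=$1; i=0
    while [ $i -lt $n ]; do
        case $(( i % 3 )) in
            0) echo A ;;
            1) echo B ;;
            2) echo C ;;
        esac
        i=$(( i + 1 ))
    done
}

# Weighted round robin (simplified): each server appears in a round
# in proportion to its weight.
wrr_round() {
    # expand "name:weight" args into one scheduling round
    for sw in "$@"; do
        name=${sw%%:*}; w=${sw##*:}
        while [ "$w" -gt 0 ]; do
            printf '%s ' "$name"
            w=$(( w - 1 ))
        done
    done
    echo
}

rr_cycle 6 | tr '\n' ' '; echo
wrr_round A:3 B:2 C:1     # one round: A three times, B twice, C once
```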
<br />
=== Dynamic Feedback Load Balancing Scheduling ===<br />
<br />
The dynamic feedback load balancing scheduling algorithm uses a connection scheduling algorithm together with periodic adjustment of server weights, based on dynamic feedback of load information from each server, in order to avoid load imbalance among the servers.<br />
<br />
* [[Dynamic Feedback Load Balancing Scheduling]]<br />
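The weight-adjustment idea can be sketched as follows. The formula is an illustration of the feedback loop, not the published algorithm; `load` is assumed to be a utilization percentage reported by the server:

```shell
#!/bin/sh
# Sketch of dynamic feedback: periodically scale a server's weight down
# as its reported load rises, so the scheduler sends it less new work.
adjust_weight() {
    base=$1; load=$2                      # base weight, load in percent
    w=$(( base * (100 - load) / 100 ))    # heavier load -> smaller weight
    if [ "$w" -lt 1 ]; then w=1; fi       # never drop a live server to zero
    echo "$w"
}

adjust_weight 100 20    # lightly loaded server keeps most of its weight
adjust_weight 100 90    # heavily loaded server is throttled
```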
<br />
== Software ==<br />
<br />
See http://www.linuxvirtualserver.org/software/ipvs.html for downloading IPVS related software.<br />
<br />
[[ipvsadm]] is used to set up, maintain or inspect the IP virtual server table in the Linux kernel. For how to compile ipvsadm on different Linux distributions, see [[compiling ipvsadm on different Linux distributions|this article]].<br />
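A minimal ipvsadm session looks like this (the addresses are placeholders; running it requires root and the ip_vs kernel module):

```shell
# Set up a virtual HTTP service with round-robin scheduling and two
# NAT-forwarded real servers, then inspect the virtual server table.
ipvsadm -A -t 203.0.113.100:80 -s rr                # add virtual service, scheduler rr
ipvsadm -a -t 203.0.113.100:80 -r 10.0.9.20:80 -m   # add real server (masquerading/NAT)
ipvsadm -a -t 203.0.113.100:80 -r 10.0.9.21:80 -m
ipvsadm -L -n                                       # list the table numerically
```

Using `-g` instead of `-m` selects direct routing, and `-i` selects IP tunneling.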
<br />
== Design and Implementation ==<br />
<br />
* [[IPv6 load balancing]] - the status of IPv6 support in IPVS and examples<br />
* [[IPVS FULLNAT and SYNPROXY]] - a new packet forwarding method, and defense against synflooding attack<br />
* [[IPVS Design]]<br />
* [[IPVS Implementation]]<br />
<br />
== Ongoing Development ==<br />
<br />
=== IPVS On FreeBSD ===<br />
<br />
The [http://dragon.linux-vs.org/~dragonfly/htm/lvs_freebsd.htm LVS On FreeBSD] project was started by Li Wang; the goal is to port the IPVS code to FreeBSD. Currently, the LVS On FreeBSD module supports the [[LVS/DR]] and [[LVS/TUN]] IP load balancing technologies.<br />
<br />
=== Todo ===<br />
<br />
* [[IPVS Wish List]]<br />
<br />
[[Category:LVS Software]]<br />
[[Category:Load Balancing]]<br />
[[Category:Terms and Abbreviations]]</div>Wensong
http://kb.linuxvirtualserver.org/wiki?title=Main_Page&diff=42064
Main Page
2012-06-28T03:54:17Z
<p>Wensong: Reverted edits by Newshopjerseys (Talk); changed back to last version by NetOptimizer</p>
<hr />
<div>Welcome to the LVS Knowledge Base. This website documents [http://www.LinuxVirtualServer.org/ LinuxVirtualServer.org] software, applications and development. Since this website is a [http://wiki.org/wiki.cgi?WelcomeVisitors wiki], feel free to correct or edit pages or start a brand-new page. To begin, please [[Special:Userlogin | log in or create a new account]]. Learn how to [[LVS Knowledge Base:Formatting | edit Wiki pages]], and experiment in the [[SandBox]].<br />
<br />
== General ==<br />
<br />
* [[FAQ | Frequently Asked Questions (FAQ)]]<br />
* [[Building Scalable Network Services]]<br />
<br />
== Using LVS ==<br />
<br />
* [[Examples]]<br />
* [[Troubleshooting]]<br />
* [[Performance and Tuning]]<br />
* [[ARP Issues in LVS/DR and LVS/TUN Clusters]]<br />
* [[LVS Cluster Management]]<br />
* [[LVS Deployment]]<br />
<br />
== Development ==<br />
<br />
* [[IPVS]] - an advanced layer-4 [[load balancing]] solution<br />
* [[KTCPVS]] - a layer-7 [[load balancing]] inside the Linux kernel<br />
* [[TCPSP]] - a [[TCP Splicing|TCP splicing]] implementation for the Linux operating system<br />
* [[TCPHA]] - a [[TCP Handoff|TCP handoff]] implementation for the Linux operating system<br />
* [[Net-SNMP-LVS-Module]] - read configuration and some statistics from [[IPVS]] via snmp<br />
* [[SorryServer]] - display Sorry Page when all the servers are not available<br />
<br />
== Wish List ==<br />
<br />
* [[IPVS Wish List]]<br />
* [[KTCPVS Wish List]]<br />
<br />
== Miscellaneous ==<br />
<br />
* [[:Category:Terms and Abbreviations|Glossary of Terms and Abbreviations]]<br />
* [[References]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Compiling_ipvsadm_on_different_Linux_distributions&diff=5873Compiling ipvsadm on different Linux distributions2011-02-09T03:14:29Z<p>Wensong: </p>
<hr />
<div>== Introduction ==<br />
<br />
Since the [[IPVS]] interface to user space differs between Linux kernel versions, there are different versions of [[ipvsadm]] matching the IPVS code inside the different kernel versions.<br />
<br />
For IPVS inside the Linux kernel 2.6, you need to use [[ipvsadm]] version 1.24 or later.<br />
<br />
For IPVS inside the Linux kernel 2.4, you need to use [[ipvsadm]] 1.21 series.<br />
<br />
For the IPVS patch for the Linux kernel 2.2, you need to use [[ipvsadm]] version 1.15.<br />
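The version mapping above can be sketched as a small helper function (the function name is illustrative, not part of any LVS tool):<br />

```shell
#!/bin/sh
# Pick the ipvsadm series matching a given kernel release string,
# following the kernel-series rules listed above.
ipvsadm_for_kernel() {
    case "$1" in
        2.6.*) echo "1.24" ;;   # ipvsadm 1.24 or later for 2.6 kernels
        2.4.*) echo "1.21" ;;   # ipvsadm 1.21 series for 2.4 kernels
        2.2.*) echo "1.15" ;;   # ipvsadm 1.15 for the 2.2 IPVS patch
        *)     echo "unknown" ;;
    esac
}

ipvsadm_for_kernel "$(uname -r)"
```

For example, `ipvsadm_for_kernel 2.4.21-47.0.1.EL` prints `1.21`.<br />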
<br />
== Red Hat Enterprise Linux 3 ==<br />
<br />
The Red Hat Enterprise Linux 3 (or CentOS 3) kernel already includes the [[IPVS]] patch, so we do not need to build a kernel image with IPVS support ourselves.<br />
[root@penguin root]# modprobe ip_vs<br />
[root@penguin root]# cat /proc/net/ip_vs<br />
IP Virtual Server version 1.0.8 (size=65536)<br />
Prot LocalAddress:Port Scheduler Flags<br />
-> RemoteAddress:Port Forward Weight ActiveConn InActConn<br />
<br />
For IPVS version 1.0.8, we should use ipvsadm version 1.21-10, which supports IPVS version 1.0.4 or later.<br />
<br />
On Red Hat Enterprise Linux 3, make sure that the kernel-source rpm is installed on the system.<br />
[root@penguin src]# rpm -q kernel-source<br />
kernel-source-2.4.21-47.0.1.EL<br />
[root@penguin src]# pwd<br />
/usr/src<br />
[root@penguin src]# ln -s linux-2.4.21-47.0.1.EL linux<br />
<br />
Then, we can build the ipvsadm-1.21-10 rpm with the following command:<br />
rpmbuild --rebuild ipvsadm-1.21-10.src.rpm<br />
<br />
Note that ipvsadm-1.21-11.src.rpm is for IPVS 1.0.12 or later; IPVS 1.0.12 is included in Linux kernel 2.4.28 and later 2.4.x versions.<br />
<br />
== Red Hat Enterprise Linux 4 ==<br />
<br />
On Red Hat Enterprise Linux 4, make sure that the kernel source is installed on the system (as shown at http://kbase.redhat.com/faq/FAQ_85_5109.shtm).<br />
<br />
<pre><br />
# up2date redhat-rpm-config rpm-build kernel-utils<br />
<br />
# up2date --get-source kernel<br />
<br />
# rpm -ivh /var/spool/up2date/kernel*.src.rpm<br />
<br />
# cd /usr/src/redhat/SPECS<br />
<br />
# rpmbuild -bp --target=i686 kernel-2.6.spec<br />
<br />
# cp -a /usr/src/redhat/BUILD/kernel-2.6.9/linux-2.6.9 /usr/src<br />
<br />
# ln -s /usr/src/linux-2.6.9 /usr/src/linux<br />
</pre><br />
<br />
<br />
Then, we can build the ipvsadm-1.24 rpm with the following command:<br />
rpmbuild --rebuild ipvsadm-1.24-6.src.rpm<br />
<br />
And install using:<br />
rpm -ivh /usr/src/redhat/RPMS/i386/ipvsadm-1.24-6.i386.rpm<br />
<br />
== Red Hat Enterprise Linux 5 ==<br />
<br />
== Red Hat Enterprise Linux 6 ==<br />
<br />
== Compiling Error Analysis ==<br />
<br />
=== Cannot find kernel source directory ===<br />
<br />
Here is the compilation error message:<br />
<pre><br />
# make<br />
make -C libipvs<br />
make[1]: Entering directory `/var/tmp/ipvsadm-1.24/libipvs'<br />
gcc -Wall -Wunused -Wstrict-prototypes -g -O2 -I/usr/src/linux/include -DHAVE_NET_IP_VS_H -c -o libipvs.o libipvs.c<br />
In file included from libipvs.c:23:<br />
libipvs.h:14:23: net/ip_vs.h: No such file or directory<br />
In file included from libipvs.c:23:<br />
libipvs.h:119: error: syntax error before "fwmark"<br />
libipvs.h:119: warning: function declaration isn't a prototype<br />
libipvs.c:27: error: field `svc' has incomplete type<br />
libipvs.c:28: error: field `dest' has incomplete type<br />
libipvs.c: In function `ipvs_init':<br />
libipvs.c:40: error: invalid application of `sizeof' to incomplete type `ip_vs_getinfo'<br />
libipvs.c:44: error: `IP_VS_SO_GET_INFO' undeclared (first use in this function)<br />
libipvs.c:44: error: (Each undeclared identifier is reported only once<br />
libipvs.c:44: error: for each function it appears in.)<br />
libipvs.c: In function `ipvs_getinfo':<br />
libipvs.c:56: error: invalid application of `sizeof' to incomplete type `ip_vs_getinfo'<br />
libipvs.c:57: error: `IP_VS_SO_GET_INFO' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_version':<br />
libipvs.c:64: error: invalid use of undefined type `struct ip_vs_getinfo'<br />
libipvs.c: In function `ipvs_flush':<br />
libipvs.c:70: error: `IP_VS_SO_SET_FLUSH' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_add_service':<br />
libipvs.c:79: error: `IP_VS_SO_SET_ADD' undeclared (first use in this function)<br />
libipvs.c:79: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_update_service':<br />
libipvs.c:87: error: `IP_VS_SO_SET_EDIT' undeclared (first use in this function)<br />
libipvs.c:87: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_del_service':<br />
libipvs.c:95: error: `IP_VS_SO_SET_DEL' undeclared (first use in this function)<br />
libipvs.c:95: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_zero_service':<br />
libipvs.c:103: error: `IP_VS_SO_SET_ZERO' undeclared (first use in this function)<br />
libipvs.c:103: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_add_dest':<br />
libipvs.c:109: error: dereferencing pointer to incomplete type<br />
libipvs.c:109: error: dereferencing pointer to incomplete type<br />
libipvs.c:112: error: `IP_VS_SO_SET_ADDDEST' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_update_dest':<br />
libipvs.c:119: error: dereferencing pointer to incomplete type<br />
libipvs.c:119: error: dereferencing pointer to incomplete type<br />
libipvs.c:122: error: `IP_VS_SO_SET_EDITDEST' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_del_dest':<br />
libipvs.c:129: error: dereferencing pointer to incomplete type<br />
libipvs.c:129: error: dereferencing pointer to incomplete type<br />
libipvs.c:132: error: `IP_VS_SO_SET_DELDEST' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_set_timeout':<br />
libipvs.c:140: error: `IP_VS_SO_SET_TIMEOUT' undeclared (first use in this function)<br />
libipvs.c:141: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_start_daemon':<br />
libipvs.c:148: error: `IP_VS_SO_SET_STARTDAEMON' undeclared (first use in this function)<br />
libipvs.c:149: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_stop_daemon':<br />
libipvs.c:156: error: `IP_VS_SO_SET_STOPDAEMON' undeclared (first use in this function)<br />
libipvs.c:157: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_get_services':<br />
libipvs.c:166: error: dereferencing pointer to incomplete type<br />
libipvs.c:167: error: invalid application of `sizeof' to incomplete type `libipvs.h'<br />
libipvs.c:167: error: invalid use of undefined type `struct ip_vs_getinfo'<br />
libipvs.c:172: error: dereferencing pointer to incomplete type<br />
libipvs.c:172: error: invalid use of undefined type `struct ip_vs_getinfo'<br />
libipvs.c:174: error: `IP_VS_SO_GET_SERVICES' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_cmp_services':<br />
libipvs.c:189: error: dereferencing pointer to incomplete type<br />
libipvs.c:189: error: dereferencing pointer to incomplete type<br />
libipvs.c:193: error: dereferencing pointer to incomplete type<br />
libipvs.c:193: error: dereferencing pointer to incomplete type<br />
libipvs.c:197: error: dereferencing pointer to incomplete type<br />
libipvs.c:197: error: dereferencing pointer to incomplete type<br />
libipvs.c:201: error: dereferencing pointer to incomplete type<br />
libipvs.c:201: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_sort_services':<br />
libipvs.c:208: error: dereferencing pointer to incomplete type<br />
libipvs.c:208: error: dereferencing pointer to incomplete type<br />
libipvs.c:209: error: invalid application of `sizeof' to incomplete type `libipvs.h'<br />
libipvs.c: In function `ipvs_get_dests':<br />
libipvs.c:218: error: dereferencing pointer to incomplete type<br />
libipvs.c:218: error: invalid application of `sizeof' to incomplete type `libipvs.h'<br />
libipvs.c:218: error: dereferencing pointer to incomplete type<br />
libipvs.c:224: error: dereferencing pointer to incomplete type<br />
libipvs.c:224: error: dereferencing pointer to incomplete type<br />
libipvs.c:225: error: dereferencing pointer to incomplete type<br />
libipvs.c:225: error: dereferencing pointer to incomplete type<br />
libipvs.c:226: error: dereferencing pointer to incomplete type<br />
libipvs.c:226: error: dereferencing pointer to incomplete type<br />
libipvs.c:227: error: dereferencing pointer to incomplete type<br />
libipvs.c:227: error: dereferencing pointer to incomplete type<br />
libipvs.c:228: error: dereferencing pointer to incomplete type<br />
libipvs.c:228: error: dereferencing pointer to incomplete type<br />
libipvs.c:231: error: `IP_VS_SO_GET_DESTS' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_cmp_dests':<br />
libipvs.c:243: error: dereferencing pointer to incomplete type<br />
libipvs.c:243: error: dereferencing pointer to incomplete type<br />
libipvs.c:247: error: dereferencing pointer to incomplete type<br />
libipvs.c:247: error: dereferencing pointer to incomplete type<br />
libipvs.c: In function `ipvs_sort_dests':<br />
libipvs.c:253: error: dereferencing pointer to incomplete type<br />
libipvs.c:253: error: dereferencing pointer to incomplete type<br />
libipvs.c:254: error: invalid application of `sizeof' to incomplete type `libipvs.h'<br />
libipvs.c: At top level:<br />
libipvs.c:259: error: syntax error before "fwmark"<br />
libipvs.c:260: warning: function declaration isn't a prototype<br />
libipvs.c: In function `ipvs_get_service':<br />
libipvs.c:264: error: dereferencing pointer to incomplete type<br />
libipvs.c:270: error: dereferencing pointer to incomplete type<br />
libipvs.c:270: error: `fwmark' undeclared (first use in this function)<br />
libipvs.c:271: error: dereferencing pointer to incomplete type<br />
libipvs.c:271: error: `protocol' undeclared (first use in this function)<br />
libipvs.c:272: error: dereferencing pointer to incomplete type<br />
libipvs.c:272: error: `addr' undeclared (first use in this function)<br />
libipvs.c:273: error: dereferencing pointer to incomplete type<br />
libipvs.c:273: error: `port' undeclared (first use in this function)<br />
libipvs.c:274: error: `IP_VS_SO_GET_SERVICE' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_get_timeout':<br />
libipvs.c:288: error: dereferencing pointer to incomplete type<br />
libipvs.c:293: error: `IP_VS_SO_GET_TIMEOUT' undeclared (first use in this function)<br />
libipvs.c: In function `ipvs_get_daemon':<br />
libipvs.c:309: error: dereferencing pointer to incomplete type<br />
libipvs.c:315: error: `IP_VS_SO_GET_DAEMON' undeclared (first use in this function)<br />
libipvs.c: At top level:<br />
libipvs.c:33: error: storage size of `ipvs_info' isn't known<br />
make[1]: *** [libipvs.o] Error 1<br />
make[1]: Leaving directory `/var/tmp/ipvsadm-1.24/libipvs'<br />
make: *** [libs] Error 2<br />
</pre><br />
<br />
The problem is that the compiler cannot find the kernel source directory. The fix is to create a symbolic link from the kernel source directory to /usr/src/linux.<br />
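As a sketch of the fix (run here against a throwaway /tmp prefix so nothing on the real system is touched; on a real machine you would link the directory installed by the kernel-source rpm itself):<br />

```shell
#!/bin/sh
# Demonstrate the /usr/src/linux symlink fix against a scratch prefix.
# On a real machine, drop PREFIX and point KSRC at the installed kernel
# source tree, e.g. /usr/src/linux-2.4.21-47.0.1.EL.
PREFIX=/tmp/kb-ipvs-demo
KSRC="$PREFIX/usr/src/linux-2.6.9"

mkdir -p "$KSRC/include/net"
touch "$KSRC/include/net/ip_vs.h"         # stand-in for the real header

# The actual fix: link the kernel source tree to /usr/src/linux so that
# the ipvsadm Makefile's -I/usr/src/linux/include finds net/ip_vs.h.
ln -sfn "$KSRC" "$PREFIX/usr/src/linux"

ls "$PREFIX/usr/src/linux/include/net/"   # prints "ip_vs.h"
```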
<br />
<br />
[[Category:IPVS]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Building_clusterized_proxy_farms_using_LVS&diff=5841Building clusterized proxy farms using LVS2010-10-24T04:40:02Z<p>Wensong: Reverted edits by Wokfel (Talk); changed back to last version by Wensong</p>
<hr />
<div>== Introduction ==<br />
Sometimes, you just need to build a proxy farm with the following needs:<br />
*you need high-availability, because of your SLA.<br />
*you need load balancing, because you're serving many thousands of users.<br />
*you need caching with Squid.<br />
*you need extra services over Squid (URL filtering with DansGuardian/SquidGuard, antivirus software, ...)<br />
*you need scalability because you want to be able to grow.<br />
<br />
In this case, you've got plenty of options. [[LVS]] is one of them (probably the best suited anyway).<br />
<br />
But as you're running extra services over Squid, the base [[LVS]] [[ldirectord]] tests won't be sufficient.<br />
<br />
This page explains how to write more advanced tests, so that the extra services are checked too and the corresponding real server is taken out of service if one of the extra services (or Squid itself) is not available.<br />
<br />
It presents a proposed patch to [[ldirectord]] to do so.<br />
<br />
== Architecture ==<br />
<br />
The system architecture of web cache cluster using [[LVS]] is illustrated in the following figure.<br />
<br />
[[Image:sl-ha-lb-overview-ultramonkey3.png|center]]<br />
<br />
<br />
* it uses [[LVS]] with Ultramonkey 3 settings for a [http://www.ultramonkey.org/3/topologies/sl-ha-lb-overview.html Streamline High Availability and Load Balancing] setting.<br />
* it's based on the usage of Squid + DansGuardian (but could fit any Squid + *).<br />
<br />
<br />
== Configuration Example ==<br />
To set up this configuration with [[LVS]], just proceed to [http://www.ultramonkey.org/3/installation.html installation] and [http://www.ultramonkey.org/3/topologies/sl-ha-lb-eg.html configuration] as indicated on the Ultramonkey documentations.<br />
<br />
Then :<br />
* set up your Squid server, listening on port 3128 (example) and using DansGuardian on port 8000<br />
* set up your DansGuardian server, listening on port 8000<br />
* set up DansGuardian to reject (filter) the "http://proxy.testing.net/" URL (adding this domain/URL to the blacklists). Make sure the "URL Filtering" string is returned by the proxy when this URL is filtered.<br />
* patch /usr/sbin/ldirectord with the patch given in ''The ldirectord patch'' section below<br />
* configure your /etc/ha.d/ldirectord.cf with the following tests (example IP addresses):<br />
<br />
# /etc/ha.d/ldirectord.cf<br />
# Virtual Server for Proxy Service and Filtering Service<br />
virtual=192.168.0.100:3128<br />
        real=192.168.0.10:3128 gate<br />
        real=192.168.0.11:3128 gate<br />
        real=192.168.0.12:3128 gate<br />
        service=proxy_http<br />
        request="http://proxy.testing.net"<br />
        receive="URL Filtering"<br />
        scheduler=wlc<br />
        protocol=tcp<br />
        checktype=negotiate<br />
<br />
== Conclusion ==<br />
If you used the plain http test, you would only be able to check that Squid is running; in case of a DansGuardian failure, that could be a bad thing. This way, if the filtering engine is down or not responding, the whole Squid real server will be dropped from [[LVS]].<br />
<br />
Of course, you must still monitor all processes to be warned when anything goes wrong. But you will have time to resolve the problem, as your users won't notice anything :) Thanks a lot to [[LVS]] again!<br />
<br />
== The ldirectord patch ==<br />
This patch is to be applied to release +ldirectord,v 1.77.2.32+, but is basic enough for you to port it to any newer release of [[ldirectord]].<br />
<br />
--- ldirectord_sav 2005-10-03 08:18:19.000000000 +0200<br />
+++ ldirectord 2005-11-13 14:39:51.000000000 +0100<br />
@@ -236,7 +236,7 @@ checking will take place and no real or <br />
On means no checking will take place and real servers will always be<br />
activated. Default is I<negotiate>.<br />
<br />
-B<service = ftp>|B<smtp>|B<http>|B<pop>|B<nntp>|B<imap>|B<ldap>|B<https>|B<dns>|B<mysql>|B<pgsql>|B<sip>|B<none><br />
+B<service = ftp>|B<smtp>|B<http>|B<proxy_http>|B<pop>|B<nntp>|B<imap>|B<ldap>|B<https>|B<dns>|B<mysql>|B<pgsql>|B<sip>|B<none><br />
<br />
The type of service to monitor when using checktype=negotiate. None denotes<br />
a service that will not be monitored. If the port specfied for the virtual<br />
@@ -866,8 +866,8 @@ sub read_config<br />
}<br />
} elsif ($rcmd =~ /^service\s*=\s*(.*)/) {<br />
lc($1);<br />
- $1 =~ /(\w+)/ && ($1 eq "http" || $1 eq "https" || $1 eq "ldap" || $1 eq "ftp" || $1 eq "none" || $1 eq "smtp" || $1 eq "pop" || $1 eq "imap" || $1 eq "nntp" || $1 eq "dns" || $1 eq "mysql" || $1 eq "pgsql" || $1 eq "sip")<br />
- or &config_error($line, "service must be http, https, ftp, smtp, pop, imap, ldap, nntp, dns, mysql, pgsql, sip, or none");<br />
+ $1 =~ /(\w+)/ && ($1 eq "http" || $1 eq "proxy_http" || $1 eq "https" || $1 eq "ldap" || $1 eq "ftp" || $1 eq "none" || $1 eq "smtp" || $1 eq "pop" || $1 eq "imap" || $1 eq "nntp" || $1 eq "dns" || $1 eq "mysql" || $1 eq "pgsql" || $1 eq "sip")<br />
+ or &config_error($line, "service must be http, https, proxy_http, ftp, smtp, pop, imap, ldap, nntp, dns, mysql, pgsql, sip, or none");<br />
$vsrv{service} = $1;<br />
if($vsrv{service} eq "ftp" and <br />
$vsrv{login} eq "") {<br />
@@ -1635,6 +1635,8 @@ sub ld_main<br />
$$r{num_connects} = 0 if (check_http($v, $r));<br />
# my $req = new HTTP::Request(GET=>"$$r{url}");<br />
# $ua->register($req, \&http_received);<br />
+ } elsif ($$v{service} eq "proxy_http") {<br />
+ $$r{num_connects} = 0 if (check_proxy_http($v, $r));<br />
} elsif ($$v{service} eq "pop") {<br />
$$r{num_connects} = 0 if (check_pop($v, $r));<br />
} elsif ($$v{service} eq "imap") {<br />
@@ -1693,6 +1695,51 @@ sub ld_main<br />
}<br />
}<br />
<br />
+sub check_proxy_http<br />
+{<br />
+ use LWP::UserAgent;<br />
+ use LWP::Debug;<br />
+<br />
+ my ($v, $r) = @_;<br />
+<br />
+ my $proxy_request = substr($$r{request},1);<br />
+<br />
+ my $ua = new LWP::UserAgent();<br />
+<br />
+ $ua->timeout($$v{negotiatetimeout});<br />
+ $ua->proxy(['http', 'ftp'], "http://$$r{server}:$$r{port}/");<br />
+<br />
+ my $h = new HTTP::Headers("Host" => $$v{virtualhost});<br />
+ my $req = new HTTP::Request("$$v{httpmethod}", "$proxy_request", $h);<br />
+<br />
+ my $res;<br />
+ {<br />
+ # LWP makes ungaurded calls to eval<br />
+ # which throw a fatal exception if they fail<br />
+ # Needless to say, this is completely stupid.<br />
+ local $SIG{'__DIE__'} = "DEFAULT";<br />
+ $res = $ua->request($req);<br />
+ }<br />
+<br />
+ my $recstr = $$v{receive};<br />
+<br />
+ if ($res->is_success && (!($recstr =~ /.+/) || $res->content =~ /$recstr/)) {<br />
+ service_set($v, $r, "up");<br />
+ &ld_debug(2, "check_proxy_http: $proxy_request is up\n");<br />
+ return 1;<br />
+ }<br />
+ <br />
+ service_set($v, $r, "down");<br />
+ &ld_debug(3, "Headers " . $res->headers->as_string);<br />
+ &ld_debug(2, "check_proxy_http: $proxy_request is down\n");<br />
+ return 0;<br />
+<br />
+}<br />
<br />
More info? mailto:christian.avramakis@siemens.com<br />
[[Category:LVS Examples|Cache]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=SandBox&diff=5837SandBox2010-08-26T10:30:44Z<p>Wensong: </p>
<hr />
<div>Just a test<br />
= Heading ? =[[Image:Example.jpg]]<br />
<br />
----<br />
<br />
<br />
== Heading 1 ==<br />
'''Some bold text''' ''And some italic''<br />
Lets create and [[internal link]]<br />
--[[User:Alienet|Alienet]] 14:43, 22 August 2006 (CST)<nowiki><nowiki>[[''Test something'']]</nowiki></nowiki><br />
<br />
== Heading 2 ==<br />
<br />
== Heading 3 ==<br />
<br />
== dklim's test ==<br />
<br />
<math>(a^3+d^2)*c^5 = z</math><br />
<br />
'''Bold text'''<br />
<br />
''Italic text''<br />
<br />
[[Link title]]<br />
<br />
[http://www.example.com link title]<br />
<br />
<br />
== Headline text ==<br />
<br />
[[Image:Example.jpg]]<br />
<br />
[[Media:Example.mp3]]<br />
<br />
<math>Insert formula here</math><br />
<br />
<nowiki>Insert non-formatted text here</nowiki><br />
<br />
--[[User:Dklim|Dklim]] 15:23, 15 Sep 2005 (CST)<br />
<br />
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math><br />
<br />
----<br />
<br />
----has this to do with the streamflow of digital information?<br />
<br />
<math>migration\_weight * \frac{frequency}{size^{migration\_power}}\ ;\ \ migration\_power \in (0,1]</math></div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=UCARP&diff=5817UCARP2010-02-22T13:26:00Z<p>Wensong: New page: == Introduction == UCARP is a portable implementation of the CARP (Common Address Redundancy Protocol) protocol. <blockquote> UCARP allows a couple of hosts to share common virtual IP a...</p>
<hr />
<div>== Introduction ==<br />
<br />
UCARP is a portable implementation of the CARP (Common Address Redundancy Protocol) protocol. <br />
<br />
<blockquote><br />
UCARP allows a couple of hosts to share common virtual IP addresses in order to provide automatic failover. It is a portable userland implementation of the secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD’s alternative to the patents-bloated VRRP).<br />
<br/><br />
Strong points of the CARP protocol are: very low overhead, cryptographically signed messages, interoperability between different operating systems and no need for any dedicated extra network link between redundant hosts.<br />
</blockquote><br />
<br />
UCARP can be used to handle failover among LVS load balancers.<br />
<br />
== Examples ==<br />
<br />
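A minimal sketch of UCARP on two LVS directors sharing one virtual IP; all addresses, the password and the script paths are illustrative, not taken from this page:<br />

```shell
# On the master candidate (host 192.168.0.2).  The backup director runs
# the same command with its own --srcip and a higher --advskew value;
# the host with the lower skew wins the election and holds the VIP.
ucarp --interface=eth0 --srcip=192.168.0.2 --vhid=1 --pass=secret \
      --addr=192.168.0.100 \
      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh

# /etc/vip-up.sh would typically add the VIP to the interface
# (e.g. "ip addr add 192.168.0.100/24 dev eth0"), and /etc/vip-down.sh
# would remove it again.
```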
<br />
== External Links ==<br />
<br />
* http://www.ucarp.org/project/ucarp<br />
<br />
<br />
{{cluster-management-stub}}<br />
<br />
[[Category:Cluster Management]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=LVS_Cluster_Management&diff=5816LVS Cluster Management2010-02-22T13:15:00Z<p>Wensong: /* Cluster Management Software */</p>
<hr />
<div>== Cluster Management ==<br />
<br />
Cluster management means monitoring and administering all the computers in a computer cluster. It covers a wide range of functionality, such as resource monitoring, cluster membership management, reliable group communication, and full-featured administration interfaces.<br />
<br />
One of the advantages of a cluster system is that it has hardware and software redundancy, because the cluster system consists of a number of independent nodes, each running a copy of the operating system and application software. Cluster management can help achieve high availability by detecting node or daemon failures and reconfiguring the system appropriately, so that the workload can be taken over by the remaining nodes in the cluster.<br />
<br />
== LVS Cluster Management ==<br />
<br />
Since an LVS cluster is a [[load balancing]] cluster, the requirements of LVS cluster management are simple: cluster monitoring and an administration interface are the two major parts.<br />
<br />
=== Cluster Monitoring ===<br />
<br />
The major work of cluster monitoring in [[LVS]] is to monitor the availability of [[real server]]s and [[load balancer]]s, and reconfigure the system if any partial failure happens, so that the whole cluster system can still serve requests. Note that monitoring the availability of database, network file system or distributed file system is not addressed here.<br />
<br />
To monitor the availability of [[real server]]s, there are two approaches: one is to run service monitoring daemons at the load balancer to check server health periodically; the other is to run monitoring agents at the real servers to collect information and report to the load balancer. The service monitor usually sends service requests and/or ICMP ECHO_REQUEST to the real servers periodically, and removes/disables a real server from the server list at the load balancer if there is no response within a specified time or an error response is returned, so that no new requests will be sent to this dead server. When the service monitor detects that the dead server has recovered, it will add the server back to the available server list at the load balancer. Therefore, the load balancer can mask the failure of service daemons or servers automatically.<br />
<br />
In the monitoring agent approach, there is also a monitoring master running at the load balancer to receive information from the agents. The monitoring master will add/remove servers at the load balancer based on the availability of the agents, and can also adjust server weights based on server load information. However, it takes more effort to make the monitoring agents run on all kinds of server operating systems, such as Linux, FreeBSD, and Windows.<br />
<br />
The [[load balancer]] is the core of a server cluster system, and it must not be a single point of failure for the whole system. In order to prevent the whole system from going out of service because of a load balancer failure, we need to set up a backup (or several backups) of the load balancer, connected by [[heartbeat]] or [[VRRP]]. Two heartbeat daemons run on the primary and the backup respectively; they exchange "I'm alive" heartbeat messages through serial lines and/or network interfaces periodically. When the heartbeat daemon of the backup cannot hear the heartbeat message from the primary within the specified time, it will take over the virtual IP address to provide the load-balancing service. When the failed load balancer comes back to work, there are two solutions: one is that it becomes the backup load balancer automatically; the other is that the active load balancer releases the [[VIP]] address, and the recovered one takes over the [[VIP]] address and becomes the primary load balancer again.<br />
<br />
The primary [[load balancer]] keeps the state of connections, i.e. which server each connection is forwarded to. If the backup load balancer takes over without that connection information, the clients have to send their requests again to access the service. In order to make load balancer failover transparent to client applications, we implemented connection synchronization in [[IPVS]]: the primary [[IPVS]] [[load balancer]] synchronizes connection information to the backup load balancers through UDP multicast. When the backup load balancer takes over after the primary one fails, the backup load balancer will have the state of most connections, so that almost all connections can continue to access the service through the backup load balancer.<br />
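As a sketch, the connection synchronization daemons described above can be started with ipvsadm (interface name illustrative):<br />

```shell
# On the primary load balancer: start the sync master, which multicasts
# connection state over eth0.
ipvsadm --start-daemon master --mcast-interface eth0

# On the backup load balancer: start the sync backup, which receives
# the multicast connection state so it can take over transparently.
ipvsadm --start-daemon backup --mcast-interface eth0
```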
<br />
=== Administration Interface ===<br />
<br />
The administration interface of [[LVS]] cluster management should enable administrators to do the following things:<br />
* add new servers to increase the system throughput or remove servers for system maintenance, without bringing down the whole system service<br />
* monitor the traffic of LVS cluster and provide statistics<br />
<br />
== Cluster Management Software ==<br />
<br />
There are many cluster management software packages that can be used in conjunction with LVS to provide high availability and management of the whole system.<br />
<br />
* [[Piranha]]<br />
* [[Keepalived]]<br />
* [[UltraMonkey]]<br />
* [[UCARP]]<br />
* [[heartbeat plus ldirectord]]<br />
* [[heartbeat plus mon]]<br />
* [[feedbackd]]<br />
* [[ipvsman]]<br />
* [[LVSM]]<br />
* [[lvs-kiss]]<br />
* [[SCOP]]<br />
* [[OpenSSI Cluster integrated HA-LVS]]<br />
<br />
For computing cluster management software, see the page [[Computing Cluster Mangement|Computing Cluster Management]].<br />
<br />
== External Links ==<br />
<br />
* [http://en.wikipedia.org/wiki/Computer_cluster Computer Cluster]<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=HAProxy&diff=5813HAProxy2009-08-29T17:06:08Z<p>Wensong: New page: == Introduction == The introduction from HAProxy site is as follows: :HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP a...</p>
<hr />
<div>== Introduction ==<br />
<br />
The introduction from HAProxy site is as follows:<br />
:HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with todays hardware.<br />
<br />
HAProxy 1.3 had already implemented the following features:<br />
* transparent proxy under Linux, which allows connecting directly to the server using the client's IP address<br />
* kernel TCP splicing to forward data between the two sides without copying, in order to reach multi-gigabit data rates<br />
<br />
== References ==<br />
<br />
* [http://haproxy.1wt.eu/ HAProxy Official Site]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Server_Architecture&diff=5812Server Architecture2009-08-29T16:51:50Z<p>Wensong: </p>
<hr />
<div>== Server Architecture ==<br />
<br />
=== Multiple-Process Architecture ===<br />
<br />
=== Multiple-Thread Architecture ===<br />
<br />
=== Single-Process Event-Driven Architecture ===<br />
<br />
=== Multiple-Process Event-Driven Architecture ===<br />
<br />
=== Multiple-Thread Event-Driven Architecture ===<br />
<br />
== Server Software ==<br />
<br />
=== Apache ===<br />
<br />
Apache supports multiple-process architecture and multiple-thread architecture.<br />
<br />
=== nginx ===<br />
<br />
Nginx supports both the single-process event-driven architecture and the multiple-process event-driven architecture.<br />
<br />
=== lighttpd ===<br />
<br />
Lighttpd uses the single-process event-driven architecture.<br />
<br />
=== squid ===<br />
<br />
Squid uses the single-process event-driven architecture: the main squid process handles all kinds of network connections, although separate processes can handle disk I/O under the diskd configuration.<br />
<br />
=== Varnish ===<br />
<br />
[[Varnish]] uses the multiple-thread architecture. Each client connection is handled by a separate worker thread, though the main thread uses poll to process events on the listening sockets.<br />
<br />
=== HAProxy ===<br />
<br />
[[HAProxy]] uses the single-process event-driven architecture.<br />
<br />
== References ==<br />
<br />
* http://nginx.net/<br />
* http://lighttpd.net/<br />
* http://varnish.projects.linpro.no/<br />
* http://haproxy.1wt.eu/</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Database_Load_Balancing&diff=5811Database Load Balancing2009-08-25T14:56:31Z<p>Wensong: /* MySQL Clusters */</p>
<hr />
<div>== Introduction ==<br />
<br />
== Open Source Solutions ==<br />
<br />
=== C-JDBC: Flexible Database Clustering Middleware ===<br />
<br />
C-JDBC (Clustered JDBC) is a freely available, open source, flexible and efficient middleware for database clustering. C-JDBC presents a single virtual database to the application through the JDBC interface. It does not require any modification to JDBC-based applications. It works with any database engine that provides a JDBC driver, without modification to the database engine. The flexible architecture of C-JDBC supports large and complex database cluster architectures offering various degrees of performance, fault tolerance and availability.<br />
<br />
See http://c-jdbc.objectweb.org/<br />
<br />
Sequoia is the new version of the C-JDBC project under Apache license. The website is http://sequoia.continuent.org/HomePage.<br />
<br />
Another related project is Carob, which brings Sequoia technology to the C/C++ world. The web page is http://carob.continuent.org/HomePage.<br />
Carob's purpose is to offer Sequoia access to ODBC, C and C++ client applications.<br />
<br />
=== SQL Relay ===<br />
SQL Relay is a persistent database connection pooling, proxying and load balancing system for Unix and Linux. SQL Relay is ideal for:<br />
* speeding up database-driven web-based applications<br />
* enhancing the scalability of database-driven web-based applications<br />
* distributing access to replicated databases<br />
* throttling database access<br />
* accessing databases from unsupported platforms<br />
* migrating applications from one database to another <br />
<br />
See http://sqlrelay.sourceforge.net/<br />
<br />
''PS: This text is copied from SQLrelay's home page.''<br />
<br />
=== MySQL Clusters ===<br />
<br />
MySQL supports a couple of different ways to increase database availability and scalability. The most commonly used way of increasing availability is replication. Replication, as its name indicates, copies data from one master server to slave(s), and it has been available since MySQL 3.23. Starting with MySQL 4.1, a new storage engine called NDB (Network DataBase) was introduced. Although most people think that replication is a form of clustering, in the MySQL world a cluster is actually a set of NDB nodes.<br />
<br />
NDB nodes use memory to store data and keep at least 2 copies of the data on 2 separate boxes. In essence, it is almost like a RAID array (level 1 in this case) using memory instead of hard drives. The minimum requirement for a MySQL cluster is 3 boxes: 2 of them store data in their memory and are called data nodes, and at least one box is needed as the SQL node, which acts as a regular database server that applications connect to.<br />
<br />
More info about MySQL clustering concepts can be found at : <br />
* http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-basics.html<br />
* http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-overview.html<br />
<br />
:''MySQL Cluster is a technology that enables clustering of in-memory databases in a shared-nothing system. The shared-nothing architecture allows the system to work with very inexpensive hardware, and without any specific requirements on hardware or software. It also does not have any single point of failure because each component has its own memory and disk.''<br />
<br />
See the article [[Building MySQL Cluster using LVS]] for how LVS is used to balance traffic among SQL nodes in MySQL cluster.<br />
<br />
=== PostgreSQL Clusters ===<br />
<br />
There is no "native" method to create a DB cluster in PostgreSQL, but several methods are supported. Users are expected to choose the one that best fits their application.<br />
<br />
In Unix environments you can define a PostgreSQL cluster using Heartbeat or PgPOOL, and on Windows, the PostgreSQL service may be added as an MSCS-managed resource. So, on Windows a "fail-over" cluster is supported. PgPOOL was created specifically to offer a "load-balancing" function.<br />
<br />
== Commercial Solutions ==<br />
<br />
=== Oracle Real Application Clusters ===<br />
<br />
=== IBM DB2 Clusters ===<br />
<br />
=== Microsoft SQL Server Clusters ===<br />
<br />
MS SQL server is a valid option for database clustering.<br />
<br />
Natively, a fail-over cluster is supported by MS SQL Server (an active-passive configuration), but load balancing (an active-active structure) may also be configured under certain circumstances. We can define “active-active”, “active-passive”, “active-active-passive” or other types of cluster modes, depending on the requirements. <br />
<br />
The first step in installing MS SQL as a resource in a cluster configuration is to have MSCS already installed on the clustered machines. MSCS is the Microsoft Cluster Service (available for Windows Server 2000 and 2003). When installing MS SQL Server 2000 or 2005, Enterprise Edition, the MSCS cluster will be detected automatically. <br />
<br />
A VIP (Virtual IP address) will exist for the cluster system. Clients transparently communicate with the SQL virtual server. Physically, clients communicate with the active Node A instance when it is available, and with the passive Node B when the first server is down (in an active-passive configuration). Clients will not have any knowledge of the active system behind their virtual SQL server. The Windows OS (MSCS) will handle the communication between clients and the DB server.<br />
<br />
Both nodes in the cluster are connected through a private network connection, which is used to determine whether the peer system is active. The cluster also requires a shared disk for the database data files.<br />
<br />
For more details see: <br><br />
- http://www.codeproject.com/database/SqlServerDBClusterMSCS.asp <br><br />
- http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/failclus.mspx <br><br />
- http://www.sqlservercentral.com/articles/Installation/clustering_a_sql_server_machine_/344/<br />
<br />
{{load-balancing-stub}}<br />
<br />
[[Category:Load Balancing]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=File:Nf-lvs.png&diff=5806File:Nf-lvs.png2009-03-05T02:55:22Z<p>Wensong: </p>
<hr />
<div></div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=User_talk:Tec&diff=5805User talk:Tec2009-02-26T05:26:31Z<p>Wensong: New page: Hi Tec, Please tell me more about Weighted Source Hash algorithm. You know in my mind, source hashing is a static mapping, one hash function decides which target the one should go to. Th...</p>
<hr />
<div>Hi Tec,<br />
<br />
Please tell me more about Weighted Source Hash algorithm. You know in my mind, source hashing is a static mapping, one hash function decides which target the one should go to.<br />
<br />
Thanks,<br />
<br />
Wensong</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Purge_spam_revisions_from_mediawiki_database_permanently&diff=5800Purge spam revisions from mediawiki database permanently2008-12-20T15:33:37Z<p>Wensong: /* Working Log */ tidy up content</p>
<hr />
<div>== Introduction ==<br />
<br />
Spam programs have posted spam links on our wiki for a while. Although the SpamBlacklist extension was installed, "php cleanup.php" had to be run to revert the spam links. After the ConfirmEdit extension was installed, it became difficult for spam programs to post spam automatically. However, those spam links are still in the page history, and in the database.<br />
<br />
It's really annoying to keep those spams in the database, as they occupy a lot of space. Also, search engine crawlers can still reach those spam links in the page history; since those links point to '''*bad*''' sites, I think they could lower the page rank of our own web pages in search engines.<br />
<br />
Before you try to remove spam revisions from mediawiki database permanently, it's always good to '''backup your mediawiki database first'''.<br />
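For instance, a one-line dump can be taken before touching anything. This is only an illustration: the database name "lvskb" is an assumption inferred from the dump filenames in the working log, and the credentials are placeholders.<br />

```shell
# Hypothetical backup command -- adjust the database name, user and password.
# The "lvskb" name is inferred from the dump files listed in the working log.
mysqldump --opt -u root -p lvskb | bzip2 > lvskb-mysql-$(date +%Y%m%d).bz2
```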
<br />
== Working Log ==<br />
<br />
Finally, spent a couple of hours purging all spams in page history in<br />
LVSKB manually and permanently.<br />
<br />
[http://www.mediawiki.org/wiki/Manual:Administrators#Deletion Mediawiki Administrator Help] has instructions to delete spam revisions manually.<br />
<br />
First, search all the history that contains spam revisions. There are many different approaches, for example<br />
select old_id, old_title from text where old_text like '%wyger.nl%';<br />
select * from revision where rev_text_id = 309;<br />
select * from page where page_id = 957;<br />
then delete the spam history manually. Repeat this procedure if you find more spam revisions.<br />
<br />
Second, purge them from the database permanently<br />
mysql> select count(*) from archive;<br />
mysql> delete from archive;<br />
If you do not want to see deletion log, do<br />
mysql> describe logging;<br />
mysql> select * from logging where log_id >= 1710 and log_type = 'delete';<br />
mysql> delete from logging where log_id >= 1710 and log_type = 'delete';<br />
Note: 1710 is the max log_id before deleting the spam revisions manually; for example, it can be obtained through 'select max(log_id) from logging;'.<br />
<br />
Run "php purgeOldText.php" to purge text, which would save a lot of disk space.<br />
<pre>[wensong@dragon maintenance]$ php purgeOldText.php<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
[wensong@dragon maintenance]$ php purgeOldText.php --purge<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
Deleting...done.<br />
<br />
[wensong@dragon wensong]$ ls -l lvskb-mysql-2008022*<br />
-rw-rw-r-- 1 wensong wensong 543134 Feb 24 00:05 lvskb-mysql-20080223-1.bz2<br />
-rw-rw-r-- 1 wensong wensong 6082070 Feb 23 08:48 lvskb-mysql-20080223.bz2<br />
</pre><br />
Run "php rebuildrecentchanges.php" to rebuild recent changes page<br />
<br />
Just log this whole procedure for future reference.</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Purge_spam_revisions_from_mediawiki_database_permanently&diff=5799Purge spam revisions from mediawiki database permanently2008-12-20T15:18:10Z<p>Wensong: /* Introduction */</p>
<hr />
<div>== Introduction ==<br />
<br />
Spam programs have posted spam links on our wiki for a while. Although the SpamBlacklist extension was installed, "php cleanup.php" had to be run to revert the spam links. After the ConfirmEdit extension was installed, it became difficult for spam programs to post spam automatically. However, those spam links are still in the page history, and in the database.<br />
<br />
It's really annoying to keep those spams in the database, as they occupy a lot of space. Also, search engine crawlers can still reach those spam links in the page history; since those links point to '''*bad*''' sites, I think they could lower the page rank of our own web pages in search engines.<br />
<br />
Before you try to remove spam revisions from mediawiki database permanently, it's always good to '''backup your mediawiki database first'''.<br />
<br />
== Working Log ==<br />
<br />
Finally, spent a couple of hours purging all spams in page history in<br />
LVSKB manually and permanently.<br />
<br />
[http://www.mediawiki.org/wiki/Manual:Administrators#Deletion Mediawiki Administrator Help] has instructions to delete spam revisions manually.<br />
<br />
First, search all the history that contains spam revisions. There are many different approaches, for example<br />
select old_id, old_title from text where old_text like '%wyger.nl%';<br />
select * from revision where rev_text_id = 309;<br />
select * from page where page_id = 957;<br />
then delete the spam history manually. Repeat this procedure if you find more spam revisions.<br />
<br />
Second, purge them from the database permanently<br />
mysql> select count(*) from archive;<br />
mysql> delete from archive;<br />
If you do not want to see deletion log, do<br />
mysql> describe logging;<br />
mysql> select * from logging where log_id >= 1710 and log_type = 'delete';<br />
mysql> delete from logging where log_id >= 1710 and log_type = 'delete';<br />
Run "php purgeOldText.php" to purge text, which would save a lot of disk space.<br />
<pre>[wensong@dragon maintenance]$ php purgeOldText.php<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
[wensong@dragon maintenance]$ php purgeOldText.php --purge<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
Deleting...done.<br />
<br />
[wensong@dragon wensong]$ ls -l lvskb-mysql-2008022*<br />
-rw-rw-r-- 1 wensong wensong 543134 Feb 24 00:05 lvskb-mysql-20080223-1.bz2<br />
-rw-rw-r-- 1 wensong wensong 6082070 Feb 23 08:48 lvskb-mysql-20080223.bz2<br />
</pre><br />
Run "php rebuildrecentchanges.php" to rebuild recent changes page<br />
<br />
Just log this whole procedure for future reference.</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Purge_spam_revisions_from_mediawiki_database_permanently&diff=5798Purge spam revisions from mediawiki database permanently2008-12-20T15:13:49Z<p>Wensong: /* Working Log */ fix the link</p>
<hr />
<div>== Introduction ==<br />
<br />
Spam programs have posted spam links on our wiki for a while. Although the SpamBlacklist extension was installed, "php cleanup.php" had to be run to revert the spam links. After the ConfirmEdit extension was installed, it became difficult for spam programs to post spam automatically. However, those spam links are still in the page history, and in the database.<br />
<br />
It's really annoying to keep those spams in the database, as they occupy a lot of space. Also, search engine crawlers can still reach those spam links in the page history; since those links point to '''*bad*''' sites, I think they could lower the page rank of our own web pages in search engines.<br />
<br />
== Working Log ==<br />
<br />
Finally, spent a couple of hours purging all spams in page history in<br />
LVSKB manually and permanently.<br />
<br />
[http://www.mediawiki.org/wiki/Manual:Administrators#Deletion Mediawiki Administrator Help] has instructions to delete spam revisions manually.<br />
<br />
First, search all the history that contains spam revisions. There are many different approaches, for example<br />
select old_id, old_title from text where old_text like '%wyger.nl%';<br />
select * from revision where rev_text_id = 309;<br />
select * from page where page_id = 957;<br />
then delete the spam history manually. Repeat this procedure if you find more spam revisions.<br />
<br />
Second, purge them from the database permanently<br />
mysql> select count(*) from archive;<br />
mysql> delete from archive;<br />
If you do not want to see deletion log, do<br />
mysql> describe logging;<br />
mysql> select * from logging where log_id >= 1710 and log_type = 'delete';<br />
mysql> delete from logging where log_id >= 1710 and log_type = 'delete';<br />
Run "php purgeOldText.php" to purge text, which would save a lot of disk space.<br />
<pre>[wensong@dragon maintenance]$ php purgeOldText.php<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
[wensong@dragon maintenance]$ php purgeOldText.php --purge<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
Deleting...done.<br />
<br />
[wensong@dragon wensong]$ ls -l lvskb-mysql-2008022*<br />
-rw-rw-r-- 1 wensong wensong 543134 Feb 24 00:05 lvskb-mysql-20080223-1.bz2<br />
-rw-rw-r-- 1 wensong wensong 6082070 Feb 23 08:48 lvskb-mysql-20080223.bz2<br />
</pre><br />
Run "php rebuildrecentchanges.php" to rebuild recent changes page<br />
<br />
Just log this whole procedure for future reference.</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Examples&diff=5795Examples2008-12-06T04:34:02Z<p>Wensong: /* Media Service Cluster */</p>
<hr />
<div>This page is to design examples using LVS, so please feel free to write your LVS systems here and share them with other LVS users.<br />
<br />
== Web Cluster ==<br />
<br />
* [[Building Scalable Web Cluster using LVS]]<br />
* [[Building Tomcat Cluster using LVS]]<br />
* [[Building Ruby on Rails Cluster using LVS]]<br />
* [[Building Web Cache Cluster using LVS]]<br />
* [[Building clusterized proxy farms using LVS]]<br />
<br />
== Linux/Unix Cluster ==<br />
<br />
* [[Building Scalable Mail Cluster using LVS]]<br />
* [[Building Scalable FTP Cluster using LVS]]<br />
* [[Building Scalable TFTP Cluster using LVS]]<br />
* [[Building MySQL Cluster using LVS]]<br />
* [[Building Scalable DNS Cluster using LVS]]<br />
* [[Building Two-Node Directors/Real Servers using LVS and Keepalived]]<br />
* [[Building an LDAP cluster using LVS and NetWare real servers]]<br />
* [[Building Scalable DHCP Cluster using LVS]]<br />
* [[LVS/TUN mode with FreeBSD and Solaris realserver]]<br />
<br />
== Media Service Cluster ==<br />
<br />
* [[Building Scalable Media Cluster using LVS]]<br />
* [[Building Windows Media Service Cluster using LVS]]<br />
* [[Building Darwin Streaming Service Cluster using LVS]]<br />
* [[Building Helix Server Cluster using LVS]]<br />
* [http://www.freebsdcluster.org/~lasse/icecast-lvs-cluster-howto/ Building a streaming cluster with Icecast, LVS and other cools apps]<br />
<br />
== Terminal Service Cluster ==<br />
<br />
* [[Building Linux Terminal Service Cluster using LVS]]<br />
* [[Building Windows Terminal Service Cluster using LVS]]<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=IPv6_load_balancing&diff=5794IPv6 load balancing2008-11-09T12:58:16Z<p>Wensong: </p>
<hr />
<div>IPv6 load balancing support was added to IPVS by Julius Volz and Vince Busam at Google. The kernel patches were mainly written by Julius, the new ipvsadm version by Vince. Please note that IPv6 support is experimental and has only had limited testing.<br />
<br />
== Status ==<br />
<br />
The first IPv6 support for IPVS load balancing was released in Linux kernel 2.6.28-rc3. Go to [http://www.linuxvirtualserver.org/software/ipvs.html the IPVS Software Page] to download the latest version of [[ipvsadm]], which is needed to use these new features. The [http://www.linux-ha.org/DownloadSoftware development branch] of Linux-HA contains a ping6 module for heartbeat and limited IPv6 support in ldirectord.<br />
<br />
=== What works with IPv6 ===<br />
<br />
* forwarding mechanisms: NAT, DR, maybe Tunnel (not fully tested yet)<br />
* protocols: TCP, UDP, ESP, AH (last two not tested)<br />
* manipulation and inspection of both IPv4 and IPv6 entries with ipvsadm<br />
* 6 out of 10 schedulers (10/10 in latest net-next dev tree)<br />
* ping6 monitor in heartbeat<br />
* ldirectord using external commands for monitoring<br />
<br />
=== What is not supported with IPv6 ===<br />
<br />
* handling fragmentation or other extension headers<br />
* FTP application helper (can be loaded, but only operates on v4)<br />
* sync daemon (can be started, but only operates on v4)<br />
* probably some incorrect handling of ICMPv6 or other corner cases<br />
* Most built-in probes in ldirectord - use checkcommand<br />
* Real servers must be specified individually in ldirectord, address ranges are not supported<br />
<br />
Since fragmentation and extension headers should not occur very often, things should "mostly" work. I (Julius) tested HTTP and DNS over NAT and DR with various supported schedulers without encountering any problems. But we didn't test any exotic situations. Also, there are some TODOs in the code for things that haven't been tested or implemented yet.<br />
<br />
== Examples ==<br />
<br />
Adding an IPv6 TCP service with two real servers:<br />
<br />
ipvsadm -A -t [fc00::1]:80<br />
ipvsadm -a -t [fc00::1]:80 -r [fc00::2]:80 -g<br />
ipvsadm -a -t [fc00::1]:80 -r [fc00::3]:80 -g<br />
<br />
Adding an IPv6 fwmark service with one real server:<br />
<br />
ipvsadm -A -f 1 -6<br />
 ipvsadm -a -f 1 -6 -r [fc00::2]:80 -m<br />
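The configured services and real servers can then be inspected with ipvsadm's list option (shown here as an illustration; output will depend on your setup):<br />

```shell
# Show the virtual service table with numeric addresses and ports;
# IPv6 entries appear in [addr]:port notation.
ipvsadm -L -n

# Optionally, list the current connection entries as well.
ipvsadm -L -n -c
```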
<br />
ldirectord configuration<br />
<br />
virtual = [fc00::1]:80<br />
protocol = tcp<br />
scheduler = wlc<br />
quiescent = no<br />
real = [fc00::3]:80 gate 1000<br />
real = [fc00::4]:80 gate 1000<br />
service = http<br />
checktype = external<br />
checkcommand = /usr/local/bin/my-custom-ipv6-http-test<br />
<br />
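The checkcommand script itself is site-specific; a minimal sketch of what such an external check might look like is below. The script name matches the configuration above, but the body is a hypothetical example: the argument order (ldirectord passes the virtual address, virtual port, real address and real port to external checks) should be verified against your ldirectord man page, and the use of curl is an assumption.<br />

```shell
#!/bin/sh
# /usr/local/bin/my-custom-ipv6-http-test -- hypothetical external check.
# ldirectord invokes external checks as: <virtual-ip> <vport> <real-ip> <rport>
# (verify this argument order against your ldirectord version).
REAL_IP="$3"
REAL_PORT="$4"

# -g disables curl's URL globbing so the bracketed IPv6 literal is accepted;
# -6 forces IPv6; exit status 0 marks the real server as healthy.
if curl -g -6 -s -o /dev/null --max-time 3 "http://[${REAL_IP}]:${REAL_PORT}/"; then
    exit 0
fi
exit 1
```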
<br />
[[Category:IPVS]]<br />
[[Category:Load Balancing]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=IPVS&diff=5793IPVS2008-11-08T17:06:01Z<p>Wensong: /* Design and Implementation */</p>
<hr />
<div>IPVS (IP Virtual Server) implements transport-layer [[load balancing]] inside the Linux kernel, so-called [[Layer-4 switching]]. IPVS running on a host acts as a [[load balancer]] in front of a cluster of real servers: it can direct requests for TCP/UDP-based services to the real servers, and makes the services of the real servers appear as a virtual service on a single IP address.<br />
<br />
== IP Load Balancing Technologies ==<br />
<br />
Since the IP load balancing techniques have good scalability, IPVS extends the TCP/IP stack of the Linux kernel to support three IP load balancing techniques: [[LVS/NAT]], [[LVS/TUN]] and [[LVS/DR]]. The box running IPVS acts as a [[load balancer]] of network connections<br />
from clients who know a single IP address for a service, and distributes them to a set of<br />
servers that actually perform the work.<br />
<br />
* [[LVS/NAT | Virtual Server via Network Address Translation]]<br />
* [[LVS/TUN | Virtual Server via IP Tunneling]]<br />
* [[LVS/DR | Virtual Server via Direct Routing]]<br />
<br />
== Job Scheduling Algorithms ==<br />
<br />
=== Connection Scheduling Algorithms inside the Kernel ===<br />
<br />
IPVS schedules jobs at connection granularity inside the Linux kernel. Here a connection is defined as data communication between a client socket and a server socket, no matter whether it uses the TCP or UDP protocol. For scheduling UDP datagrams, the IPVS [[load balancer]] records UDP datagram scheduling with a configurable timeout; the default UDP timeout is 300 seconds. Before the UDP connection times out, all UDP datagrams from the same socket (protocol, IP address and port) will be directed to the same server.<br />
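The UDP timeout mentioned above can be changed at runtime with ipvsadm's --set option, which takes the TCP, TCP-FIN and UDP timeout values in seconds; the values below are only illustrative:<br />

```shell
# Set IPVS session timeouts: tcp=900s, tcpfin=120s, udp=300s.
# A value of 0 leaves the corresponding timeout unchanged.
ipvsadm --set 900 120 300
```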
<br />
IPVS has implemented ten connection scheduling algorithms inside the kernel so far:<br />
<br />
* [[Round-Robin Scheduling]] <br />
* [[Weighted Round-Robin Scheduling]] <br />
* [[Least-Connection Scheduling]]<br />
* [[Weighted Least-Connection Scheduling]] <br />
* [[Locality-Based Least-Connection Scheduling]] <br />
* [[Locality-Based Least-Connection with Replication Scheduling]] <br />
* [[Destination Hashing Scheduling]] <br />
* [[Source Hashing Scheduling]] <br />
* [[Shortest Expected Delay Scheduling]] <br />
* [[Never Queue Scheduling]]<br />
<br />
=== Dynamic Feedback Load Balancing Scheduling ===<br />
<br />
The dynamic feedback load balancing scheduling algorithm uses a connection scheduling algorithm and adjusts server weights based on dynamic feedback load information from each server, in order to avoid load imbalance among the servers.<br />
<br />
* [[Dynamic Feedback Load Balancing Scheduling]]<br />
<br />
== Software ==<br />
<br />
See http://www.linuxvirtualserver.org/software/ipvs.html for downloading IPVS related software.<br />
<br />
[[ipvsadm]] is used to set up, maintain or inspect the IP virtual server table in the Linux kernel. For how to compile ipvsadm on different Linux distributions, check [[compiling ipvsadm on different Linux distributions|this article]] for detailed information.<br />
<br />
== Design and Implementation ==<br />
<br />
* [[IPVS Design]]<br />
* [[IPVS Implementation]]<br />
* [[IPv6 load balancing]] - the status of IPv6 support in IPVS and examples<br />
<br />
== Ongoing Development ==<br />
<br />
=== IPVS On FreeBSD ===<br />
<br />
The [http://dragon.linux-vs.org/~dragonfly/htm/lvs_freebsd.htm LVS On FreeBSD] project was started by Li Wang; its goal is to port the IPVS code to FreeBSD. Now, the LVS On FreeBSD module supports the [[LVS/DR]] and [[LVS/TUN]] IP load balancing technologies.<br />
<br />
=== Todo ===<br />
<br />
* [[IPVS Wish List]]<br />
<br />
[[Category:LVS Software]]<br />
[[Category:Load Balancing]]<br />
[[Category:Terms and Abbreviations]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Performance_and_Tuning&diff=5792Performance and Tuning2008-11-07T06:53:56Z<p>Wensong: /* Tools */</p>
<hr />
<div>== Performance Measurement ==<br />
<br />
=== Tools ===<br />
<br />
* testlvs: simple throughput testing tool for LVS from Julian, see [http://www.ssi.bg/~ja/#testlvs the testlvs page]<br />
* ab: apache benchmark<br />
* httperf: a tool for measuring web server performance from HP, see [http://www.hpl.hp.com/research/linux/httperf/ the httperf homepage]<br />
* Tsung: an open-source multi-protocol distributed load testing tool, see [http://tsung.erlang-projects.org/ Tsung project homepage]<br />
* specweb: web server benchmark by spec.org, it is now [http://www.spec.org/web2005/ specweb2005]. <br />
* webbench: a licensed benchmark program that measures the performance of Web servers, developed by VeriTest.<br />
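Typical invocations of the first few tools look like this; the host name and parameters below are placeholders, not values from the original text:<br />

```shell
# apache benchmark: 10000 requests, 100 concurrent, against the virtual service
ab -n 10000 -c 100 http://virtual-service.example.com/index.html

# httperf: open 10000 connections at a fixed rate of 500 connections/second
httperf --server virtual-service.example.com --port 80 --num-conns 10000 --rate 500
```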
<br />
== Performance Tuning ==<br />
<br />
=== Connection Hash Table Size ===<br />
<br />
The IPVS connection hash table uses the chaining scheme to handle<br />
hash collisions. Using a big IPVS connection hash table will greatly<br />
reduce conflicts when there are hundreds of thousands of connections<br />
in the hash table.<br />
<br />
Note the table size must be a power of 2. The table size will be the<br />
value of 2 raised to the power of your input number. The number to choose<br />
is from 8 to 20; the default number is 12, which means the table size<br />
is 4096. Don't choose too small a number, otherwise you will lose<br />
performance. You can adapt the table size yourself, according<br />
to your virtual server application. It is good to set the table size<br />
not far less than the number of connections per second multiplied by<br />
the average time a connection lasts in the table. For example, if your<br />
virtual server gets 200 connections per second and a connection lasts<br />
for 200 seconds on average in the connection table, the table size<br />
should be not far less than 200x200; it is good to set the table<br />
size to 32768 (2**15).<br />
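The rule of thumb above can be sketched as a small calculation. This is not part of IPVS, just an illustration that reproduces the 200x200 example from the text by picking the power of 2 closest to the expected number of concurrent entries:<br />

```shell
#!/bin/sh
# Pick the exponent N (8..20) whose table size 2^N is closest to the
# expected number of concurrent entries (connections/sec * avg lifetime).
RATE=200        # connections per second (example from the text)
LIFETIME=200    # average seconds a connection stays in the table
TARGET=$((RATE * LIFETIME))   # expected concurrent entries: 40000

BEST=8
for N in 8 9 10 11 12 13 14 15 16 17 18 19 20; do
    SIZE=$((1 << N))
    DIFF=$((SIZE - TARGET));   [ "$DIFF" -lt 0 ] && DIFF=$((-DIFF))
    BSIZE=$((1 << BEST))
    BDIFF=$((BSIZE - TARGET)); [ "$BDIFF" -lt 0 ] && BDIFF=$((-BDIFF))
    [ "$DIFF" -lt "$BDIFF" ] && BEST=$N
done
echo "exponent=$BEST size=$((1 << BEST))"
```

For the 200x200 example this prints an exponent of 15, i.e. a table size of 32768, matching the value recommended above.<br />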
<br />
We can configure the size of IPVS connection hash table before compiling the Linux kernel. Here are the IPVS configurations in the 'make menuconfig' menu:<br />
<br />
<pre><br />
Networking Options --><br />
IP: Virtual Server Configuration --><br />
<M> virtual server support (EXPERIMENTAL)<br />
[ ] IP virtual server debugging<br />
(12) IPVS connection table size (the Nth power of 2)<br />
--- IPVS scheduler<br />
<M> round-robin scheduling<br />
<M> weighted round-robin scheduling<br />
<M> least-connection scheduling scheduling<br />
<M> weighted least-connection scheduling<br />
<M> locality-based least-connection scheduling<br />
<M> locality-based least-connection with replication scheduling<br />
<M> destination hashing scheduling<br />
<M> source hashing scheduling<br />
<M> shortest expected delay scheduling<br />
<M> never queue scheduling<br />
--- IPVS application helper<br />
<M> FTP protocol helper<br />
</pre><br />
<br />
=== Netfilter Connection Track ===<br />
<br />
[[IPVS]] uses its own simple and fast connection tracking for performance reasons, instead of using the netfilter connection tracking. So, if you don't use the firewalling feature on the [[load balancer]] and you need an extremely fast load balancer, do not load the netfilter conntrack modules into your system, because there is no need to do double tracking. Note that [[LVS/NAT]] should work too without the conntrack modules.<br />
<br />
Julian compared the performance of IPVS with ip_conntrack and without ip_conntrack. See http://archive.linuxvirtualserver.org/html/lvs-users/2001-12/msg00141.html<br />
<br />
== Some Performance Data ==<br />
<br />
While the [[LVS/DR]] cluster was pushing out 9.6Gbps of traffic, the LVS [[load balancer]] was<br />
doing a negligible amount of work, which seems to indicate that LVS<br />
could push a great deal more traffic given sufficient real servers and<br />
end users.<br />
<br />
* http://archive.linuxvirtualserver.org/html/lvs-users/2005-11/msg00266.html<br />
<br />
== External Links ==<br />
<br />
* [http://www.kegel.com/c10k.html The C10K problem] written by Dan Kegel, good notes on how to configure operating systems and write code to support 10K clients on a single server<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=IPv6_load_balancing&diff=5787IPv6 load balancing2008-11-06T16:35:09Z<p>Wensong: </p>
<hr />
<div>IPv6 load balancing support was added to IPVS by Julius Volz and Vince Busam at Google.<br />
<br />
== Status ==<br />
<br />
IPv6 load balancing for IPVS was already included in Linux kernel 2.6.28-rc3. Go to [http://www.linuxvirtualserver.org/software/ipvs.html the IPVS Software Page] to download the latest version of the [[ipvsadm]] software.<br />
<br />
See http://lwn.net/Articles/296409/ for the status of IPv6 support<br />
<br />
== Examples ==<br />
<br />
<br />
<br />
[[Category:IPVS]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=IPv6_load_balancing&diff=5786IPv6 load balancing2008-11-06T15:46:31Z<p>Wensong: New page: == Status == See http://lwn.net/Articles/296409/ == Examples == Category:IPVS</p>
<hr />
<div>== Status ==<br />
<br />
See http://lwn.net/Articles/296409/<br />
<br />
== Examples ==<br />
<br />
<br />
<br />
[[Category:IPVS]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=IPVS_Wish_List&diff=5785IPVS Wish List2008-11-06T15:44:27Z<p>Wensong: </p>
<hr />
<div>This page is to write all the wishes for [[IPVS]] development.<br />
<br />
'''Remember''': you can ask for features as long as you want, but that won't make such features appear magically. :) Basically, you have these options:<br />
* get your hands dirty, program and submit code to LVS<br />
* convince other programmers that your wanted feature is valuable to them so that the programmers do it for you (and the rest of the community)<br />
* put some bounty on a feature development and wait until someone wants to earn some money<br />
<br />
== Feature List ==<br />
<br />
* [[SCTP load balancing]] support<br />
* connection synchronization improvement<br />
* ip_gre tunnel<br />
<br />
== Done ==<br />
<br />
* [[IPv6 load balancing]] support<br />
<br />
[[Category:IPVS]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Performance_and_Tuning&diff=5782Performance and Tuning2008-10-23T12:51:54Z<p>Wensong: /* Tools */</p>
<hr />
<div>== Performance Measurement ==<br />
<br />
=== Tools ===<br />
<br />
* testlvs: simple throughput testing tool for LVS from Julian, see [http://www.ssi.bg/~ja/#testlvs the testlvs page]<br />
* ab: apache benchmark<br />
* httperf: a tool for measuring web server performance from HP, see [http://www.hpl.hp.com/research/linux/httperf/ the httperf homepage]<br />
* specweb: web server benchmark from spec.org; the current version is [http://www.spec.org/web2005/ SPECweb2005].<br />
* webbench: a licensed benchmark program that measures the performance of Web servers, developed by VeriTest.<br />
<br />
== Performance Tuning ==<br />
<br />
=== Connection Hash Table Size ===<br />
<br />
The IPVS connection hash table uses chaining to handle<br />
hash collisions. A large IPVS connection hash table will greatly<br />
reduce collisions when there are hundreds of thousands of connections<br />
in the table.<br />
<br />
Note that the table size must be a power of 2: the size is 2 raised<br />
to the power of the number you enter, which must be between 8 and 20.<br />
The default is 12, which means a table size of 4096. Do not set the<br />
number too low, or you will lose performance. You can adapt the table<br />
size to your virtual server application: a good rule of thumb is a<br />
table size not far below the number of connections per second<br />
multiplied by the average time a connection stays in the table. For<br />
example, if your virtual server receives 200 connections per second<br />
and a connection stays in the table for 200 seconds on average, the<br />
table size should be not far below 200x200 = 40000, so 32768 (2**15)<br />
is a good choice.<br />
<br />
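The rule of thumb above can be checked with a little shell arithmetic (using the 200 connections/second example from the text):<br />
<br />
```shell
# expected entries in the table at once = connection rate x average lifetime
rate=200        # new connections per second
avg_time=200    # average seconds a connection stays in the table
need=$((rate * avg_time))
echo "expected concurrent entries: $need"

# candidate table sizes: 2^N buckets for the N entered at kernel config time
for n in 14 15 16; do
  echo "N=$n -> $((1 << n)) buckets"
done
```
<br />
Here N=15 (32768 buckets) is the first size in the same range as the expected number of entries.<br />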
We can configure the size of IPVS connection hash table before compiling the Linux kernel. Here are the IPVS configurations in the 'make menuconfig' menu:<br />
<br />
<pre><br />
Networking Options --><br />
IP: Virtual Server Configuration --><br />
<M> virtual server support (EXPERIMENTAL)<br />
[ ] IP virtual server debugging<br />
(12) IPVS connection table size (the Nth power of 2)<br />
--- IPVS scheduler<br />
<M> round-robin scheduling<br />
<M> weighted round-robin scheduling<br />
<M> least-connection scheduling<br />
<M> weighted least-connection scheduling<br />
<M> locality-based least-connection scheduling<br />
<M> locality-based least-connection with replication scheduling<br />
<M> destination hashing scheduling<br />
<M> source hashing scheduling<br />
<M> shortest expected delay scheduling<br />
<M> never queue scheduling<br />
--- IPVS application helper<br />
<M> FTP protocol helper<br />
</pre><br />
<br />
=== Netfilter Connection Track ===<br />
<br />
[[IPVS]] uses its own simple and fast connection tracking for performance reasons, instead of using netfilter connection tracking. So, if you do not use the firewalling feature on the [[load balancer]] and you need an extremely fast load balancer, do not load the netfilter conntrack modules into your system, because there is no need to do double tracking. Note that [[LVS/NAT]] should work without the conntrack modules, too.<br />
<br />
Julian compared the performance of IPVS with ip_conntrack and without ip_conntrack. See http://archive.linuxvirtualserver.org/html/lvs-users/2001-12/msg00141.html<br />
<br />
== Some Performance Data ==<br />
<br />
While the [[LVS/DR]] cluster was pushing out 9.6Gbps of traffic, the LVS [[load balancer]] was<br />
doing a negligible amount of work, which suggests that LVS<br />
could push a great deal more traffic given sufficient real servers and<br />
end users.<br />
<br />
* http://archive.linuxvirtualserver.org/html/lvs-users/2005-11/msg00266.html<br />
<br />
== External Links ==<br />
<br />
* [http://www.kegel.com/c10k.html The C10K problem] written by Dan Kegel, good notes on how to configure operating systems and write code to support 10K clients on a single server<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Performance_and_Tuning&diff=5781Performance and Tuning2008-10-23T12:31:31Z<p>Wensong: /* Tools */</p>
<hr />
<div>== Performance Measurement ==<br />
<br />
=== Tools ===<br />
<br />
* ab: apache benchmark<br />
* httperf: a tool for measuring web server performance from HP, see [http://www.hpl.hp.com/research/linux/httperf/ the httperf homepage]<br />
* specweb: web server benchmark from spec.org; the current version is [http://www.spec.org/web2005/ SPECweb2005].<br />
* webbench: a licensed benchmark program that measures the performance of Web servers, developed by VeriTest.<br />
<br />
== Performance Tuning ==<br />
<br />
=== Connection Hash Table Size ===<br />
<br />
The IPVS connection hash table uses chaining to handle<br />
hash collisions. A large IPVS connection hash table will greatly<br />
reduce collisions when there are hundreds of thousands of connections<br />
in the table.<br />
<br />
Note that the table size must be a power of 2: the size is 2 raised<br />
to the power of the number you enter, which must be between 8 and 20.<br />
The default is 12, which means a table size of 4096. Do not set the<br />
number too low, or you will lose performance. You can adapt the table<br />
size to your virtual server application: a good rule of thumb is a<br />
table size not far below the number of connections per second<br />
multiplied by the average time a connection stays in the table. For<br />
example, if your virtual server receives 200 connections per second<br />
and a connection stays in the table for 200 seconds on average, the<br />
table size should be not far below 200x200 = 40000, so 32768 (2**15)<br />
is a good choice.<br />
<br />
We can configure the size of IPVS connection hash table before compiling the Linux kernel. Here are the IPVS configurations in the 'make menuconfig' menu:<br />
<br />
<pre><br />
Networking Options --><br />
IP: Virtual Server Configuration --><br />
<M> virtual server support (EXPERIMENTAL)<br />
[ ] IP virtual server debugging<br />
(12) IPVS connection table size (the Nth power of 2)<br />
--- IPVS scheduler<br />
<M> round-robin scheduling<br />
<M> weighted round-robin scheduling<br />
<M> least-connection scheduling<br />
<M> weighted least-connection scheduling<br />
<M> locality-based least-connection scheduling<br />
<M> locality-based least-connection with replication scheduling<br />
<M> destination hashing scheduling<br />
<M> source hashing scheduling<br />
<M> shortest expected delay scheduling<br />
<M> never queue scheduling<br />
--- IPVS application helper<br />
<M> FTP protocol helper<br />
</pre><br />
<br />
=== Netfilter Connection Track ===<br />
<br />
[[IPVS]] uses its own simple and fast connection tracking for performance reasons, instead of using netfilter connection tracking. So, if you do not use the firewalling feature on the [[load balancer]] and you need an extremely fast load balancer, do not load the netfilter conntrack modules into your system, because there is no need to do double tracking. Note that [[LVS/NAT]] should work without the conntrack modules, too.<br />
<br />
Julian compared the performance of IPVS with ip_conntrack and without ip_conntrack. See http://archive.linuxvirtualserver.org/html/lvs-users/2001-12/msg00141.html<br />
<br />
== Some Performance Data ==<br />
<br />
While the [[LVS/DR]] cluster was pushing out 9.6Gbps of traffic, the LVS [[load balancer]] was<br />
doing a negligible amount of work, which suggests that LVS<br />
could push a great deal more traffic given sufficient real servers and<br />
end users.<br />
<br />
* http://archive.linuxvirtualserver.org/html/lvs-users/2005-11/msg00266.html<br />
<br />
== External Links ==<br />
<br />
* [http://www.kegel.com/c10k.html The C10K problem] written by Dan Kegel, good notes on how to configure operating systems and write code to support 10K clients on a single server<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Net-SNMP-LVS-Module&diff=5778Net-SNMP-LVS-Module2008-10-14T11:01:07Z<p>Wensong: /* INSTALLATION */ add a local copy</p>
<hr />
<div>== GENERAL ==<br />
net-snmpd-lvs-module is a dynamically loadable shared object for SNMP to get the configuration and some statistical information from the Linux kernel with [[IPVS|IP virtual server]] support.<br />
<br />
== INSTALLATION ==<br />
Get the [http://homes.tiscover.com/jrief/net-snmp-lvs-module/net-snmp-lvs-module-0.0.4.tar.gz source] (note: as of 13 July 2008 this URL returns 404) for this module ([[media:Net-snmp-lvs-module-0.0.4.tar.gz|local copy]] or [http://www.loadbalancer.org/download/SNMP/net-snmp-lvs-module-0.0.4.tar.gz alternate source location]). From http://www.linuxvirtualserver.org/software/ipvs.html download the correct version of ipvsadm: for kernel 2.6 use ipvsadm-1.24 or later, for kernel 2.4 use ipvsadm-1.21. Extract '''ipvsadm-''version''.tar.gz''', build the library and copy the subtree '''libipvs''' into the module's working directory.<br />
<br />
Check that the Linux kernel source is available under '''/usr/src/linux'''. Install net-snmp and its development headers and libraries. Invoke '''make'''; this should build '''lvs.so'''. Copy LVS-MIB.txt and OC-MIB.txt to '''/usr/share/snmp/mibs/'''.<br />
<br />
Check that the new MIB is visible by invoking<br />
 shell> snmptranslate -m LVS-MIB -On -IR lvsServiceEntry<br />
This should return '''.1.3.6.1.4.1.8225.4711.18.1'''.<br />
Copy '''lvs.so''' to '''/usr/lib/libnetsnmplvs.so''' and add the following line to<br />
'''/etc/snmp/snmpd.conf''':<br />
dlmod lvs /usr/lib/libnetsnmplvs.so<br />
Restart your snmpd.<br />
<br />
Check that everything works by invoking:<br />
shell> snmpwalk -c public -v 2c -m LVS-MIB localhost 1.3.6.1.4.1.8225.4711<br />
LVS-MIB::lvsVersion.0 = STRING: "1.2.0"<br />
LVS-MIB::lvsNumServices.0 = INTEGER: 2<br />
LVS-MIB::lvsHashTableSize.0 = INTEGER: 4096<br />
LVS-MIB::lvsTcpTimeOut.0 = INTEGER: 900<br />
LVS-MIB::lvsTcpFinTimeOut.0 = INTEGER: 120<br />
LVS-MIB::lvsUdpTimeOut.0 = INTEGER: 300<br />
LVS-MIB::lvsDaemonState.0 = INTEGER: none(0)<br />
...<br />
<br />
In case you want to add additional SNMP values to your MIB:<br />
Set '''export MIBS=ALL''' and create a skeleton with<br />
 shell> mib2c -c mib2c.iterate.conf lvsRealTable<br />
or<br />
 shell> mib2c -c mib2c.iterate.conf lvsServiceTable<br />
This should create a file '''lvsRealTable.c''' and/or '''lvsServiceTable.c'''.<br />
Use the generated code as a template for '''lvs.c''' and populate the missing parts.<br />
<br />
== CREDITS ==<br />
The code for this module currently is maintained by Jacob Rief. This code is based on ucd-snmpd-lvs written by Romeo Benzoni in 2002 and never modified since. As that code did not compile with the newer net-snmp headers, I decided to rewrite the complete code. The only unchanged parts from the ucd-snmpd-lvs project are LVS-MIB.txt and OC-MIB.txt.<br />
<br />
== External Links ==<br />
Official site: http://homes.tiscover.com/jrief/net-snmp-lvs-module/ (unavailable as of 13 July 2008)<br />
<br />
[[Category:LVS Software]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=File:Net-snmp-lvs-module-0.0.4.tar.gz&diff=5777File:Net-snmp-lvs-module-0.0.4.tar.gz2008-10-14T10:58:45Z<p>Wensong: net-snmp-lvs-module-0.0.4.tar.gz from Jacob Rief</p>
<hr />
<div>net-snmp-lvs-module-0.0.4.tar.gz from Jacob Rief</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=File:Kernel_net.png&diff=5775File:Kernel net.png2008-09-21T15:02:02Z<p>Wensong: Linux kernel 2.4 packet handling</p>
<hr />
<div>Linux kernel 2.4 packet handling</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Mini_Mini_Howto&diff=5773Mini Mini Howto2008-06-22T15:39:41Z<p>Wensong: /* Instructions */</p>
<hr />
<div>== Mini-Mini-Howto ==<br />
<br />
This howto describes a quick way to get up and going with LVS.<br />
<br />
== Instructions ==<br />
<br />
Start from a precompiled Linux distribution. I was using a combination of Fedora Core 4 and Fedora Core 6, but this should work with any recent version of Linux.<br />
<br />
The following will set up the [[real server]]s (192.168.1.100 and 192.168.1.101) to accept data from the [[load balancer]] (192.168.1.5) for VIP 192.168.1.10. I put it into a shell script and added that script to /etc/rc.d/rc.local on real servers.<br />
<br />
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore<br />
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce<br />
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore<br />
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce<br />
<br />
ifconfig lo:0 192.168.1.10 netmask 255.255.255.255 broadcast 192.168.1.10 up<br />
<br />
One time setup on the load balancer:<br />
<br />
rpm -U /net/shauna/usr/local/share/RPMS/4/ipvsadm-1.24-7.x86_64.rpm<br />
<br />
I put the following script on the load balancer, again referenced in /etc/rc.d/rc.local<br />
<br />
ifconfig eth0:0 192.168.1.10 netmask 255.255.255.255 broadcast 192.168.1.10 up<br />
ipvsadm --clear<br />
ipvsadm --add-service --tcp-service 192.168.1.10:80 --scheduler rr<br />
ipvsadm --add-server --tcp-service 192.168.1.10:80 --real-server 192.168.1.100:80 --gatewaying<br />
ipvsadm --add-server --tcp-service 192.168.1.10:80 --real-server 192.168.1.101:80 --gatewaying<br />
<br />
That’s all it took. I didn’t recompile anything. My firewall treats 192.168.1.10, the VIP, just like an ordinary machine. Inside the firewall I have a standard switch. Aside from the firewall, all of the machines are using only one NIC and one subnet.<br />
<br />
Note: the [[LVS/DR]] load balancing method is used in the above example.<br />
<br />
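To verify the setup, list the IPVS table on the load balancer. With the two real servers above, the output should look roughly like this (the version string, table size and connection counters will vary with your kernel and ipvsadm build):<br />
<br />
```
shell> ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.10:80 rr
  -> 192.168.1.101:80             Route   1      0          0
  -> 192.168.1.100:80             Route   1      0          0
```
<br />
The '''Route''' forwarding method corresponds to the --gatewaying option, i.e. [[LVS/DR]].<br />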
== Rationale ==<br />
<br />
Several years ago I set up an LVS. Recently I did it again. Because of changes in the 2.6 kernel, it was a whole lot easier this time. It was almost trivial. Unfortunately, it took me a long time to dig through the documentation to find what I needed.<br />
<br />
All of this information was available on Google, but it was hiding. I can appreciate the older documentation – as I said I worked with this software a long time ago – but some of it has little or no value. How many people, for example, need to use an old version of the kernel? There may be a place for that information, but it should not be the first thing that a potential user sees.<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Purge_spam_revisions_from_mediawiki_database_permanently&diff=5766Purge spam revisions from mediawiki database permanently2008-02-23T16:41:32Z<p>Wensong: </p>
<hr />
<div>== Introduction ==<br />
<br />
Spam programs have been posting spam links on our wiki for a while. Although the SpamBlacklist extension was installed, "php cleanup.php" had to be run to revert the spam links. After the ConfirmEdit extension was installed, it became difficult for spam programs to post automatically. However, those spam links are still in the page history, and in the database.<br />
<br />
It's really annoying to keep those spams in the database, where they occupy a lot of space. Also, search engine crawlers can still reach those spam links in the page history; the links point to '''*bad*''' sites, so I think they could lower the page rank of our own web pages in search engines.<br />
<br />
== Working Log ==<br />
<br />
Finally, I spent a couple of hours purging all the spam in the page history of<br />
LVSKB manually and permanently.<br />
<br />
[http://meta.wikimedia.org/wiki/Help:Administration#Deletion Mediawiki Administrator Help] has instructions to delete spam revisions manually.<br />
<br />
First, search all the history that contains spam revisions. There are many different approaches, for example<br />
select old_id, old_title from text where old_text like '%wyger.nl%';<br />
select * from revision where rev_text_id = 309;<br />
select * from page where page_id = 957;<br />
then delete the spam history manually. Repeat this procedure if you find more spam revisions.<br />
<br />
Second, purge them from the database permanently<br />
mysql> select count(*) from archive;<br />
mysql> delete from archive;<br />
If you do not want the deletions to show up in the log, do<br />
mysql> describe logging;<br />
mysql> select * from logging where log_id >= 1710 and log_type = 'delete';<br />
mysql> delete from logging where log_id >= 1710 and log_type = 'delete';<br />
Run "php purgeOldText.php" to purge text, which would save a lot of disk space.<br />
<pre>[wensong@dragon maintenance]$ php purgeOldText.php<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
[wensong@dragon maintenance]$ php purgeOldText.php --purge<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
Deleting...done.<br />
<br />
[wensong@dragon wensong]$ ls -l lvskb-mysql-2008022*<br />
-rw-rw-r-- 1 wensong wensong 543134 Feb 24 00:05 lvskb-mysql-20080223-1.bz2<br />
-rw-rw-r-- 1 wensong wensong 6082070 Feb 23 08:48 lvskb-mysql-20080223.bz2<br />
</pre><br />
Run "php rebuildrecentchanges.php" to rebuild recent changes page<br />
<br />
Just log this whole procedure for future reference.</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Purge_spam_revisions_from_mediawiki_database_permanently&diff=5765Purge spam revisions from mediawiki database permanently2008-02-23T16:30:52Z<p>Wensong: New page: == Introduction == Spam programs have posted spam links on our wiki for a while. Although SpamBacklist extension was installed, "php cleanup.php" was to revert the spam links. After Confi...</p>
<hr />
<div>== Introduction ==<br />
<br />
Spam programs have been posting spam links on our wiki for a while. Although the SpamBlacklist extension was installed, "php cleanup.php" had to be run to revert the spam links. After the ConfirmEdit extension was installed, it became difficult for spam programs to post automatically. However, those spam links are still in the page history, and in the database.<br />
<br />
It's really annoying to keep those spams in the database, where they occupy a lot of space. Also, search engine crawlers can still reach those spam links in the page history; the links point to '''*bad*''' sites, so I think they could lower the page rank of our own web pages in search engines.<br />
<br />
== Working Log ==<br />
<br />
Finally, I spent a couple of hours purging all the spam in the page history of<br />
LVSKB manually and permanently.<br />
<br />
[http://meta.wikimedia.org/wiki/Help:Administration#Deletion Mediawiki Administrator Help] has instructions to delete spam revisions manually.<br />
<br />
First, search all the history that contains spam revisions. There are many different approaches, for example<br />
select old_id, old_title from text where old_text like '%wyger.nl%';<br />
select * from revision where rev_text_id = 309;<br />
select * from page where page_id = 957;<br />
then delete the spam history manually. Repeat this procedure if you find more spam revisions.<br />
<br />
Second, purge them from the database permanently<br />
mysql> select count(*) from archive;<br />
mysql> delete from archive;<br />
If you do not want the deletions to show up in the log, do<br />
mysql> describe logging;<br />
mysql> select * from logging where log_id >= 1710 and log_type = 'delete';<br />
mysql> delete from logging where log_id >= 1710 and log_type = 'delete';<br />
Run "php purgeOldText.php" to purge text, which would save a lot of disk space.<br />
<pre>[wensong@dragon maintenance]$ php purgeOldText.php<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
[wensong@dragon maintenance]$ php purgeOldText.php --purge<br />
<br />
Purge Old Text<br />
<br />
Searching for active text records in revisions table...done.<br />
Searching for active text records in archive table...done.<br />
Searching for inactive text records...done.<br />
4263 inactive items found.<br />
Deleting...done.<br />
</pre><br />
Run "php rebuildrecentchanges.php" to rebuild recent changes page<br />
<br />
Just log this whole procedure for future reference.</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=User:Wensong&diff=5764User:Wensong2008-02-23T16:10:19Z<p>Wensong: </p>
<hr />
<div>== Work ==<br />
<br />
* analyse the [[Server Architecture|server architecture]] of existing server software<br />
* [[Purge spam revisions from mediawiki database permanently]]<br />
<br />
<br />
<br />
----<br />
<br />
hi Mr. Wensong, <br />
<br />
My name is Wang Feng, from CNNIC, the operator of TLD Domain ".cn". I have posted my following comments on May. 20, 2005. But it seems that there no any response of it. So I post it again for your help. <br />
<br />
>How is the concurrent processing performance of current version LVS >implementations? Could they can meet with high concurrent request applications >with more than 1 million of concurrent requests? <br />
><br />
>I don't know the basic status, so post this email for help. <br />
<br />
Thanks. <br />
<br />
Feng<br />
<br />
----<br />
<br />
Hi Feng,<br />
<br />
Please see the [[FAQ]] page.<br />
<br />
Thanks,<br />
<br />
Wensong</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Current_events&diff=5763Current events2008-02-22T15:36:03Z<p>Wensong: </p>
<hr />
<div>== 2008 ==<br />
<br />
* February 22, 2008, LVSKB was upgraded from MediaWiki 1.10.0 to MediaWiki 1.11.1.<br />
<br />
== 2007 ==<br />
<br />
* June 3, 2007, LVSKB was upgraded from MediaWiki 1.9.3 to MediaWiki 1.10.0.<br />
* March 14, 2007, LVSKB was upgraded from MediaWiki 1.6.10 to MediaWiki 1.9.3.<br />
* March 6, 2007, LVSKB was upgraded from MediaWiki 1.6.8 to MediaWiki 1.6.10.<br />
* March 4, 2007, The [http://clustercenter.org/ ClusterCenter web site] was launched to dig cluster-related information by community, including news, articles, products, reports, discussions, etc.<br />
<br />
== 2006 ==<br />
* July 23, 2006, LVSKB was upgraded from MediaWiki 1.5.8 to MediaWiki 1.6.8.<br />
* May 28, 2006, LVSKB was upgraded from MediaWiki 1.4.15 to MediaWiki 1.5.8.<br />
* February 4, 2006, LVS Chinese site (http://zh.linuxvirtualserver.org/) was setup by using of Drupal.</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=KTCPVS_Wish_List&diff=5750KTCPVS Wish List2008-01-13T05:56:31Z<p>Wensong: </p>
<hr />
<div>This page is to write all the wishes for [[KTCPVS]] development.<br />
<br />
== Feature List ==<br />
<br />
# I hope KTCPVS will support more than 10k rules.<br />
# I hope KTCPVS will support changing rules at any time.<br />
<br />
[[Category:KTCPVS|Wish]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=LVS_Deployment&diff=5747LVS Deployment2007-12-06T06:26:56Z<p>Wensong: </p>
<hr />
<div>This page is a good complement of [http://www.linuxvirtualserver.org/deployment.html the LVS Deployment page] on the LVS web site.<br />
<br />
Please add your LVS deployment here, and please include details about your cluster hardware and size if applicable.<br />
<br />
{| cellspacing="1" cellpadding="4" border="1" width="96%" bgcolor="#336699" style="border-collapse: collapse; margin:0 auto;"<br />
|- bgcolor="#DDDDDD"<br />
|align="center"| '''Site'''<br />
|align="center"| '''Description'''<br />
|align="center"| '''LVS Notes'''<br />
|- bgcolor="#FFFFFF"<br />
||siemens.com<br />
||Siemens<br />
||clusterized proxy farms using [[LVS/DR]]/[[UltraMonkey]]3, see the article [[Building clusterized proxy farms using LVS]] for more information.<br />
|-<br />
||[http://www.flybuys.com.au/ flybuys.com.au]<br />
||Loyalty Pacific<br />
||Clustered LVS web hosting using RHEL with [[Piranha]].<br />
|-<br />
||[http://ad.cgogo.cc/ ad.cgogo.cc]<br />
||An Internet advertising system<br />
||Clustered LVS web hosting using RH AS 3 with [[Piranha]].<br />
|-<br />
||[http://meta.wikimedia.org/wiki/Main_Page Wikimedia]<br />
||Wikimedia server system<br />
||[http://meta.wikimedia.org/wiki/Wikimedia_servers Wikipedia and other wikimedia systems use LVS load balancers]<br />
[http://www.nedworks.org/~mark/presentations/hd2006/ Wikimedia's (Cache) Network]<br />
|-<br />
||[http://drupal.org/ drupal.org]<br />
|| Drupal web site<br />
|| [http://drupal.org/node/26707 The future Drupal server infrastructure]<br />
|-<br />
||[http://www.clubic.com/ clubic.com]<br />
|| Clubic websites<br />
|| LVS Cluster with tens of Linux Apache servers<br />
|}<br />
<br />
<br />
== References ==<br />
<br />
* [http://folk.uio.no/jani/studies/DIW-report.pdf SourceForge Analysis, Evaluation and Improvements] (pdf document)<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Mini_Mini_Howto&diff=5744Mini Mini Howto2007-11-11T05:24:19Z<p>Wensong: /* Mini-Mini-Howto */</p>
<hr />
<div>== Mini-Mini-Howto ==<br />
<br />
This howto describes a quick way to get up and going with LVS.<br />
<br />
== Instructions ==<br />
<br />
Start from a precompiled Linux distribution. I was using a combination of Fedora Core 4 and Fedora Core 6, but this should work with any recent version of Linux.<br />
<br />
The following will set up the [[real server]]s (192.168.1.100 and 192.168.1.101) to accept data from the [[load balancer]] (192.168.1.5) for VIP 192.168.1.10. I put it into a shell script and added that script to /etc/rc.d/rc.local on real servers.<br />
<br />
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore<br />
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce<br />
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore<br />
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce<br />
<br />
ifconfig lo:0 192.168.1.10 netmask 255.255.255.255 broadcast 192.168.1.10 up<br />
<br />
One time setup on the load balancer:<br />
<br />
rpm -U /net/shauna/usr/local/share/RPMS/4/ipvsadm-1.24-7.x86_64.rpm<br />
<br />
I put the following script on the load balancer, again referenced in /etc/rc.d/rc.local<br />
<br />
ifconfig eth0:0 192.168.1.10 netmask 255.255.255.255 broadcast 192.168.1.10 up<br />
ipvsadm --clear<br />
ipvsadm --add-service --tcp-service 192.168.1.10:80 --scheduler rr<br />
ipvsadm --add-server --tcp-service 192.168.1.10:80 --real-server 192.168.1.100:80 --gatewaying<br />
ipvsadm --add-server --tcp-service 192.168.1.10:80 --real-server 192.168.1.101:80 --gatewaying<br />
<br />
That’s all it took. I didn’t recompile anything. My firewall treats 192.168.1.10, the VIP, just like an ordinary machine. Inside the firewall I have a standard switch. Aside from the firewall, all of the machines are using only one NIC and one subnet.<br />
<br />
== Rationale ==<br />
<br />
Several years ago I set up an LVS. Recently I did it again. Because of changes in the 2.6 kernel, it was a whole lot easier this time. It was almost trivial. Unfortunately, it took me a long time to dig through the documentation to find what I needed.<br />
<br />
All of this information was available on Google, but it was hiding. I can appreciate the older documentation – as I said I worked with this software a long time ago – but some of it has little or no value. How many people, for example, need to use an old version of the kernel? There may be a place for that information, but it should not be the first thing that a potential user sees.<br />
<br />
[[Category:LVS Handbook]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Building_Scalable_Network_Services&diff=5739Building Scalable Network Services2007-10-28T14:54:06Z<p>Wensong: /* Load Balancing */</p>
<hr />
<div>== Introduction ==<br />
<br />
Human society is entering a network-centered information era, in which users need faster, more reliable and more functional network services. The increasing popularity of the World Wide Web has led to exponential growth in Internet usage. More and more network sites attract traffic at an unprecedented scale; they often worry about how to scale up their systems over time to meet this increasing demand, and how to keep their systems available 24x7. Future applications will need higher throughput, interactivity and security, which requires that service platforms provide more processing power and higher availability. Therefore, there is a need for a feasible framework and some design methodologies for constructing high-performance, highly scalable and highly available network services.<br />
<br />
== Architecture ==<br />
<br />
The general system architecture of scalable network service is illustrated in the following figure:<br />
<br />
[[Image:Sns-system-architecture.png|center|660px]]<br />
<br />
It includes multiple layers of components:<br />
* Global load balancing distributes traffic to the nearest and/or fastest-responding node at the edge layer<br />
* The edge layer can have geographically distributed cache clusters to speed up content delivery<br />
* The application layer can have web clusters to process different web logic<br />
* The memory cache layer can be used to speed up access to persistent data from the database<br />
* The storage layer can have database clusters and file storage clusters to store different types of persistent data.<br />
<br />
== Load Balancing ==<br />
<br />
Global load balancing can use a DNS-based approach to resolve a domain name to the IP address of the nearest node among geographically distributed nodes. See [[DNS Load Balancing|DNS load balancing]] for how to use BIND 9 to do global load balancing.<br />
<br />
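As a sketch of the DNS-based approach, BIND 9 views can return different addresses for the same name depending on where the client query comes from (the client networks and zone file names below are purely illustrative):<br />
<br />
```
// named.conf fragment: clients matching the "asia" view get the
// address of the Asian node for www.example.com; everyone else
// gets the default node.
view "asia" {
    match-clients { 218.0.0.0/8; 220.0.0.0/8; };
    zone "example.com" {
        type master;
        file "example.com.asia";     // www IN A <Asian node's IP>
    };
};
view "default" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "example.com.default";  // www IN A <default node's IP>
    };
};
```
<br />
Real global load balancing setups usually combine this with health checks so that a dead node's address is no longer returned.<br />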
Local load balancing can be used to build network service clusters at a single physical location, such as web clusters and cache clusters. [[IPVS]] is an advanced open-source load balancing solution for this purpose.<br />
<br />
== Cache Layer ==<br />
<br />
== Application Layer ==<br />
<br />
== Persistent Data Layer ==<br />
<br />
== Conclusion ==<br />
<br />
== References ==<br />
<br />
* [http://clustercenter.org/loadbalancing/LiveJournals-Backend-history-scaling/ LiveJournal's Backend: A history of scaling]<br />
* Wikimedia's (Cache) Network, http://www.nedworks.org/~mark/presentations/hd2006/<br />
* Wikipedia network ideas, http://www.aulinx.de/oss/code/wikipedia/<br />
* [http://morgamic.com/2006/03/19/scalable-php-with-phpa-memcached-and-lvs-part-1/ Scalable PHP with phpa, memcached and LVS (Part 1)]<br />
* [http://morgamic.com/2006/04/14/scalable-php-with-phpa-apc-memcached-and-lvs-part-2/ Scalable PHP with APC, memcached and LVS (Part 2)]<br />
<br />
<br />
{{techology-stub}}</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Building_Scalable_Network_Services&diff=5738Building Scalable Network Services2007-10-28T02:02:03Z<p>Wensong: /* Architecture */</p>
<hr />
<div>== Introduction ==<br />
<br />
Human society has been entering a network-centered information era, in which users need access to faster, more reliable, and more functional network services. The increasing popularity of the World Wide Web has led to exponential growth in Internet usage. More and more network sites attract traffic at an unprecedented scale, and they often worry about how to scale up their systems over time to meet this increasing demand, and how to keep their systems available 24x7. Future applications will need higher throughput, interactivity, and security, which requires that service platforms provide more processing power and higher availability. Therefore, there is a need for a feasible framework and design methodologies for constructing high-performance, highly scalable, and highly available network services.<br />
<br />
== Architecture ==<br />
<br />
The general system architecture of scalable network service is illustrated in the following figure:<br />
<br />
[[Image:Sns-system-architecture.png|center|660px]]<br />
<br />
It includes multiple layers of components:<br />
* Global load balancing is used to distribute traffic to the nearest and/or fastest-responding node at the edge layer<br />
* Edge layer can have geographically distributed cache clusters to speed up content delivery<br />
* Application layer can have web clusters to process different web logic<br />
* Memory cache layer can be used to speed up access to persistent data from the database<br />
* Storage layer can have database clusters and file storage clusters to store different types of persistent data.<br />
<br />
== Load Balancing ==<br />
<br />
== Cache Layer ==<br />
<br />
== Application Layer ==<br />
<br />
== Persistent Data Layer ==<br />
<br />
== Conclusion ==<br />
<br />
== References ==<br />
<br />
* [http://clustercenter.org/loadbalancing/LiveJournals-Backend-history-scaling/ LiveJournal's Backend: A history of scaling]<br />
* Wikimedia's (Cache) Network, http://www.nedworks.org/~mark/presentations/hd2006/<br />
* Wikipedia network ideas, http://www.aulinx.de/oss/code/wikipedia/<br />
* [http://morgamic.com/2006/03/19/scalable-php-with-phpa-memcached-and-lvs-part-1/ Scalable PHP with phpa, memcached and LVS (Part 1)]<br />
* [http://morgamic.com/2006/04/14/scalable-php-with-phpa-apc-memcached-and-lvs-part-2/ Scalable PHP with APC, memcached and LVS (Part 2)]<br />
<br />
<br />
{{techology-stub}}</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=LVS/DR&diff=5737LVS/DR2007-10-24T15:36:07Z<p>Wensong: </p>
<hr />
<div>It stands for Linux Virtual Server via Direct Routing, an IP load balancing technology implemented in LVS. It routes packets directly to a backend server by rewriting the MAC address of the data frame with the MAC address of the selected backend server. It has the best scalability of all the forwarding methods because the overhead of rewriting the MAC address is quite low, but it requires that the [[load balancer]] and the backend servers ([[real server]]s) be in the same physical network.<br />
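The MAC-rewriting step described above can be pictured with a toy model: the balancer leaves the IP header (client IP to VIP) untouched and only replaces the destination MAC of the Ethernet frame with that of the chosen real server. The frame-as-dict representation, the MAC addresses, and the round-robin scheduler below are illustrative assumptions, not the kernel's actual data structures.<br />

```python
from itertools import cycle

# Toy model of LVS/DR forwarding: only the L2 destination is rewritten;
# the IP header (client IP -> VIP) passes through unchanged, which is why
# real servers must be on the same physical network and must accept the
# VIP locally. MACs and scheduler here are illustrative assumptions.

REAL_SERVER_MACS = ["00:16:3e:00:00:01", "00:16:3e:00:00:02"]
_rr = cycle(REAL_SERVER_MACS)          # simplistic round-robin scheduler

def dr_forward(frame):
    """Rewrite only the destination MAC; src/dst IP stay as-is."""
    out = dict(frame)
    out["dst_mac"] = next(_rr)         # MAC of the selected real server
    return out
```

Because the destination IP is still the VIP when the frame reaches the real server, the server replies to the client directly, bypassing the balancer on the return path.<br />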
<br />
== Real Server ==<br />
<br />
For ARP issue in the LVS/DR cluster, see the article [[ARP Issues in LVS/DR and LVS/TUN Clusters]].<br />
<br />
If real servers run UDP service, it is probably worth reading the article [[UDP service binding issue in LVS/DR and LVS/TUN]].<br />
<br />
=== Windows ===<br />
<br />
On Windows Server 2003, the built-in firewall on the interface should '''not''' be turned on, otherwise [[LVS/DR]] may not work: the firewall is too simple to configure and automatically drops packets destined for the VIP, so the whole [[LVS/DR]] cluster stops working.<br />
<br />
== Examples ==<br />
<br />
[[Image:Dr-example.jpg|center]]<br />
<br />
== See Also ==<br />
<br />
See more information about Linux Virtual Server via Direct Routing at http://www.linuxvirtualserver.org/VS-DRouting.html<br />
<br />
<br />
{{IPVS-stub}}<br />
<br />
[[Category:IPVS]]<br />
[[Category:IP Load Balancing Technologies|DR]]<br />
[[Category:Terms and Abbreviations]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=Server_Architecture&diff=5736Server Architecture2007-10-23T13:24:41Z<p>Wensong: /* Software Software */</p>
<hr />
<div>== Server Architecture ==<br />
<br />
=== Multiple-Process Architecture ===<br />
<br />
=== Multiple-Thread Architecture ===<br />
<br />
=== Single-Process Event-Driven Architecture ===<br />
<br />
=== Multiple-Process Event-Driven Architecture ===<br />
<br />
=== Multiple-Thread Event-Driven Architecture ===<br />
<br />
== Server Software ==<br />
<br />
=== Apache ===<br />
<br />
Apache supports multiple-process architecture and multiple-thread architecture.<br />
<br />
=== nginx ===<br />
<br />
Nginx supports both the single-process event-driven architecture and the multiple-process event-driven architecture.<br />
<br />
=== lighttpd ===<br />
<br />
Lighttpd uses the single-process event-driven architecture.<br />
<br />
=== squid ===<br />
<br />
Squid uses the single-process event-driven architecture: the main squid process handles all kinds of network connections, although separate processes can handle disk I/O under the diskd configuration.<br />
<br />
=== Varnish ===<br />
<br />
[[Varnish]] uses the multiple-thread architecture: each client connection is handled by a separate worker thread, while the main thread uses poll() to handle events on the listening sockets.<br />
<br />
== References ==<br />
<br />
* http://nginx.net/<br />
* http://lighttpd.net/<br />
* http://varnish.projects.linpro.no/</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=LVS/TUN&diff=5735LVS/TUN2007-10-22T15:05:24Z<p>Wensong: </p>
<hr />
<div>It stands for Linux Virtual Server via IP Tunneling, an IP [[load balancing]] technology implemented in LVS. <br />
<br />
IP tunneling (IP encapsulation) is a technique to encapsulate IP datagrams within IP datagrams, which allows datagrams destined for one IP address to be wrapped and redirected to another IP address. This technique can be used to build a virtual server in which the [[load balancer]] tunnels the request packets to the different servers, and the servers process the requests and return the results to the clients directly, so the service still appears as a virtual service on a single IP address.<br />
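The wrap-and-unwrap idea can be made concrete with a toy byte-level sketch: the balancer prepends an outer header (balancer to real server) around the untouched original datagram (client to VIP), and the real server strips it to recover the original. The 12-byte "header" below is a deliberate simplification for illustration, not the real IP-in-IP format of RFC 2003.<br />

```python
import socket
import struct

# Toy sketch of IP-in-IP encapsulation. The outer "header" here is just
# (outer src IP, outer dst IP, inner length) -- a simplification, not the
# RFC 2003 wire format. The point: the inner datagram passes through
# encapsulation completely unchanged.

def encapsulate(outer_src, outer_dst, inner_datagram):
    """Wrap an original datagram in an outer header (balancer -> real server)."""
    header = struct.pack("!4s4sI",
                         socket.inet_aton(outer_src),
                         socket.inet_aton(outer_dst),
                         len(inner_datagram))
    return header + inner_datagram

def decapsulate(packet):
    """Strip the outer header, recovering the untouched inner datagram."""
    src, dst, length = struct.unpack("!4s4sI", packet[:12])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), packet[12:12 + length]
```

Because the inner datagram still carries the client's address and the VIP, the real server (which must accept the VIP on a tunnel device) can reply to the client directly, just as in LVS/DR, but without the same-physical-network restriction.<br />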
<br />
== Real Server ==<br />
<br />
For ARP issue in the LVS/TUN cluster, see the article [[ARP Issues in LVS/DR and LVS/TUN Clusters]].<br />
<br />
If real servers run UDP service, it is probably worth reading the article [[UDP service binding issue in LVS/DR and LVS/TUN]].<br />
<br />
== Examples ==<br />
<br />
* [[LVS/TUN mode with FreeBSD and Solaris realserver|Using LVS/TUN with FreeBSD and Solaris Real Servers]]<br />
<br />
== See Also ==<br />
<br />
* See more information about Linux Virtual Server via IP Tunneling at http://www.linuxvirtualserver.org/VS-IPTunneling.html<br />
* [http://www.ssi.bg/~ja/TUN-HOWTO.txt LVS-TUN troubleshooting HOWTO from Julian]<br />
<br />
<br />
{{IPVS-stub}}<br />
<br />
[[Category:IPVS]]<br />
[[Category:IP Load Balancing Technologies|TUN]]<br />
[[Category:Terms and Abbreviations]]</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=File:Dr-example.jpg&diff=5734File:Dr-example.jpg2007-10-22T14:49:02Z<p>Wensong: </p>
<hr />
<div></div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=File:Ldap-cluster.jpg&diff=4381File:Ldap-cluster.jpg2007-08-07T09:45:55Z<p>Wensong: The architecture of LDAP cluster</p>
<hr />
<div>The architecture of LDAP cluster</div>Wensonghttp://kb.linuxvirtualserver.org/wiki?title=File:Rails-lvs-mongrel.png&diff=4223File:Rails-lvs-mongrel.png2007-07-07T01:05:22Z<p>Wensong: </p>
<hr />
<div>The architecture of Rails cluster with [[LVS/NAT]] [[load balancer]]s</div>Wensong