pacemaker

나만의 Cloud 2016. 11. 23. 15:11

pcs01


192.168.100.101, 172.27.1.11

VIP : 192.168.100.99 


pcs02

192.168.100.102, 172.27.1.12



################################

# cat /etc/hosts


192.168.100.101 pcs01

192.168.100.102 pcs02

192.168.100.99  pcs-vip

172.27.1.11     pcs01-cr

172.27.1.12     pcs02-cr


################################


# yum install -y pacemaker pcs fence-agents-all


Install pacemaker (same on all nodes)

-- Installing pacemaker pulls in corosync as a dependency.

-- pcs is the tool used to configure pacemaker and corosync.

-- Installing pcs also installs pcsd.

-- pcsd is an OpenSSL-based daemon written in Ruby; it manages pcs authentication between nodes.

-- The authentication files are located in /var/lib/pcsd.


# rpm -q -a | grep fence

fence-agents-rhevm-4.0.2-3.el7.x86_64

fence-agents-ilo-mp-4.0.2-3.el7.x86_64

fence-agents-ipmilan-4.0.2-3.el7.x86_64



################################


Start and enable the pcs daemon: it is used for synchronizing the corosync configuration across the nodes.


# systemctl start pcsd.service

# systemctl enable pcsd.service
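To confirm pcsd is actually up and set to start at boot on each node, a quick check (standard systemctl usage, nothing cluster-specific):

```shell
# Verify pcsd is running and enabled on every node
systemctl status pcsd.service --no-pager
systemctl is-enabled pcsd.service
```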



After the packages are installed, there is a new user on the system called hacluster. Remote login is disabled for this user.

Because pcs uses this account for tasks like synchronizing the configuration and starting services on other nodes, set the same password for it on every node:

# echo "passwd" | passwd hacluster --stdin


################################


Run on node01


Configure corosync.

Use the pcs CLI to authenticate the nodes to each other; the tokens are stored in /var/lib/pcsd/tokens.


# pcs cluster auth pcs01-cr pcs02-cr


[root@pcs01 ~]# pcs cluster auth pcs01-cr pcs02-cr

Username: hacluster

Password:

pcs02-cr: Authorized

pcs01-cr: Authorized


################################


Run on node01


Create a cluster named 'first_cluster' and synchronize the corosync configuration across the nodes (from the master node).

Run pcs cluster setup on the same node used for authentication to generate and synchronize the corosync configuration:


# pcs cluster setup --name first_cluster pcs01-cr pcs02-cr -u hacluster -p passwd



Redirecting to /bin/systemctl stop  pacemaker.service

Redirecting to /bin/systemctl stop  corosync.service

Killing any remaining services...

Removing all cluster configuration files...

pcs01-cr: Succeeded

pcs02-cr: Succeeded

Synchronizing pcsd certificates on nodes pcs01-cr, pcs02-cr...

pcs02-cr: Success

pcs01-cr: Success



The /etc/corosync/corosync.conf file is generated and distributed to every node.



################################


Start the cluster (run on node01):


# pcs cluster start --all (equivalent to running systemctl start corosync.service and systemctl start pacemaker.service on each node)


Enable the daemons at boot (run on all nodes):

# pcs cluster enable --all (equivalent to systemctl enable corosync.service and systemctl enable pacemaker.service)



################################


Verify Corosync Installation


# corosync-cfgtool -s

# corosync-cmapctl  | grep members

# pcs status corosync


################################


Disabling STONITH and Ignoring Quorum


[root@pcs01 ~]# pcs status

Cluster name: first_cluster

WARNING: no stonith devices and stonith-enabled is not false   <-- this message appears


# pcs stonith list

This checks the list of available STONITH agents.



# pcs property set stonith-enabled=false

# pcs property set no-quorum-policy=ignore
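To confirm both properties took effect, they can be queried back with pcs (a verification sketch; the exact output format varies by pcs version):

```shell
# Verify the cluster-wide properties just set
pcs property show stonith-enabled
pcs property show no-quorum-policy
```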


################################


# pcs cluster cib

# cibadmin -Q

If we inspect the raw output, we can see that the Pacemaker configuration XML (the CIB) consists of the following five sections:


<configuration>

<nodes>

<resources>

<constraints>

<status>


# crm_verify -LV

Final validation of the configuration.


################################


Configuring the Virtual IP address


Configure the VIP and create a management group (run on one node only, usually the master).


1) Add a resource that controls the VIP.

-- Use the 'ocf:heartbeat:IPaddr2' resource agent (the default choice for this).

-- Every resource agent is identified by two or three fields:

-- the resource class (standard), the provider (for OCF, Open Cluster Framework), and the resource agent name.


[root@pcs01 ~]# pcs resource standards

ocf

lsb

service

systemd

stonith


[root@pcs01 ~]# pcs resource providers

heartbeat

openstack

pacemaker


[root@pcs01 ~]# pcs resource agents ocf:heartbeat

CTDB

Delay

Dummy

Filesystem

IPaddr

IPaddr2

...

...

...



The following creates a resource named 'Cluster_VIP' with VIP 192.168.100.99, a 24-bit netmask, and a 10-second monitoring interval:


# pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=192.168.100.99 cidr_netmask=24 op monitor interval=10s


[root@pcs01 ~]# pcs status

Cluster name: first_cluster

Last updated: Tue Nov 22 03:24:17 2016          Last change: Tue Nov 22 03:23:40 2016 by root via cibadmin on pcs01-cr

Stack: corosync

Current DC: pcs02-cr (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

2 nodes and 1 resource configured


Online: [ pcs01-cr pcs02-cr ]


Full list of resources:


 Cluster_VIP    (ocf::heartbeat:IPaddr2):       Started pcs01-cr


PCSD Status:

  pcs01-cr: Online

  pcs02-cr: Online


Daemon Status:

  corosync: active/enabled

  pacemaker: active/enabled

  pcsd: active/enabled
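Since the IPaddr2 agent adds the VIP as a secondary address on the active node, it can also be verified at the OS level (run on the node shown as Started above, pcs01-cr here):

```shell
# The VIP should appear on an interface of the active node
ip -4 addr show | grep 192.168.100.99
```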



Note: the following commands change a node's cluster membership state so that resources move off it:


# pcs cluster standby pcs01-cr

# pcs cluster unstandby pcs01-cr


Resources can also be moved manually:

# pcs resource move Cluster_VIP pcs01-cr

# pcs resource clear Cluster_VIP
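`pcs resource move` works by inserting a location constraint (a cli-prefer rule) into the CIB, and `pcs resource clear` removes it again. The constraint left behind by a move can be inspected with:

```shell
# Show all constraints, including those created by 'pcs resource move'
pcs constraint --full
```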


################################


Adding the Apache Resource


# yum install -y httpd

Install Apache (on all nodes).


Create an HTML file for testing:


[ALL]# cat <<EOL >/var/www/html/index.html

Apache test on $(hostname)

EOL


Enable the Apache status URL:


[ALL]# cat <<-END >/etc/httpd/conf.d/status.conf
 <Location /server-status>
    SetHandler server-status
 </Location>
END



# pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" op monitor interval=1s


# pcs status

...

...


Online: [ pcs01-cr pcs02-cr ]


Full list of resources:


 Cluster_VIP    (ocf::heartbeat:IPaddr2):       Started pcs01-cr

 WebServer      (ocf::heartbeat:apache):        Started pcs02-cr


PCSD Status:

  pcs01-cr: Online

  pcs02-cr: Online


Daemon Status:

  corosync: active/enabled

  pacemaker: active/enabled

  pcsd: active/enabled
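With both resources running, the setup can be verified end to end from a client (a sketch, assuming httpd listens on port 80 and the VIP is active):

```shell
# Fetch the test page through the VIP
curl http://192.168.100.99/

# Check the status URL used by the resource agent's monitor, on each node
curl http://localhost/server-status
```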


################################

Resources that must run on the same node need a constraint.

The WebServer and Cluster_VIP resources can end up on different nodes (as seen above), so a colocation constraint is needed:

# pcs constraint colocation add WebServer Cluster_VIP INFINITY


This requests that <source resource> run on the same node where Pacemaker has determined <target resource> should run.

Specifying 'INFINITY' (or '-INFINITY') for the score forces <source resource> to run (or never run) with <target resource>.


################################


Ensure resources start and stop in order.

For example, if Apache is configured to listen only on a specific IP address, the VIP resource must start before the Apache resource:


# pcs constraint order Cluster_VIP then WebServer


################################


# pcs resource group add group_webresource Cluster_VIP WebServer
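Putting the resources into a group implies both colocation and start order for its members (the first-listed member starts first), so it can take the place of the two explicit constraints above. The result can be checked with:

```shell
# Resources should now be listed under the group and run on the same node
pcs status resources
```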



################################



Posted by 뭉탁거림