ubuntu keepalived


Source: https://raymii.org/s/tutorials/Keepalived-Simple-IP-failover-on-Ubuntu.html


We are going to set up very simple keepalived IP failover on Ubuntu 14.04. Keepalived is a piece of software which can be used to achieve high availability by assigning two or more nodes a virtual IP and monitoring those nodes, failing over when one goes down. Keepalived can do more, like load balancing and monitoring, but this tutorial focuses on a very simple setup: just IP failover.

Internally keepalived uses VRRP. The VRRP protocol ensures that one of the participating nodes is the master. The backup node(s) listen for multicast advertisements from the node with the higher priority. If a backup node fails to receive VRRP advertisements for longer than three times the advertisement interval, it transitions to the master state and assigns the configured IP(s) to itself. If there is more than one backup node with the same priority, the one with the highest IP address wins the election.
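If you want to watch these advertisements on the wire, you can capture them with tcpdump (VRRP is IP protocol 112; eth0 is assumed here, matching the configs below):

tcpdump -n -i eth0 ip proto 112

With advert_int 1, the master should emit one advertisement per second.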

I'm also a fan of Corosync/Pacemaker; you can see my articles about Corosync here.

We'll install nginx and edit the default webpage, just to see where the IP is pointing to.

Requirements

You'll need the following to get started with keepalived:

  • 2 servers in the same network

I'll be using Ubuntu 14.04 servers in this example. These servers are in the 10.32.75.0/24 network. The virtual IP will be 10.32.75.200.

Install packages

Use apt to install the required packages:

apt-get install nginx keepalived

Configuring keepalived

Create the config file on the first server (10.32.75.12):

vim /etc/keepalived/keepalived.conf

Edit and paste the following config:

! Configuration File for keepalived

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <place secure password here>
    }
    virtual_ipaddress {
        10.32.75.200
    }
}

Create the config file on the second server (10.32.75.14):

vim /etc/keepalived/keepalived.conf

Edit and paste the following config:

! Configuration File for keepalived

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <place secure password here>
    }
    virtual_ipaddress {
        10.32.75.200
    }
}

The priority must be highest on the server you want to be the master/primary: for example 150 on the master, and 100, 99, 98, 97 on the backups. (Both instances are configured with state MASTER here; with different priorities the higher-priority node wins the election, but you can also set state BACKUP on the secondary.) The virtual_router_id must be the same on all nodes, and the auth_pass must also be the same. My network interface is eth0; change it if yours is different.
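Once keepalived is running, you can check which node currently holds the virtual IP; it appears as an extra address on the interface:

ip addr show eth0

On the master you should see 10.32.75.200 listed alongside the node's own address; on the backup it is absent.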

Configuring NGINX

For this example I have set up a very simple NGINX server with a very simple HTML page.

vim /usr/share/nginx/html/index.html

Server 1:

<!DOCTYPE html>
<html>
<head>
<title>Keepalived 1!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Keepalived 1 - MASTER!</h1>
</body>
</html>

Server 2:

<!DOCTYPE html>
<html>
<head>
<title>Keepalived 2!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Keepalived 2 - backup!</h1>
</body>
</html>

sysctl

In order to be able to bind to an IP which is not yet defined on the system, we need to enable non-local binding at the kernel level.

Temporary:

echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind

Permanent:

Add this to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Enable with:

sysctl -p
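You can verify that the setting is active:

sysctl net.ipv4.ip_nonlocal_bind

It should print net.ipv4.ip_nonlocal_bind = 1.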

Start & Failover

With the websites set up, we can start both keepalived and NGINX on both servers:

service keepalived start
service nginx start

Visit the IP you configured as a failover IP in your browser. You should see the page for server 1.
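You can also watch the failover from a third machine by polling the virtual IP in a loop (a small sketch using this tutorial's VIP):

while true; do curl -s http://10.32.75.200 | grep '<h1>'; sleep 1; done

During the failover test below, the output flips from "Keepalived 1 - MASTER!" to "Keepalived 2 - backup!".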

Let's do a test failover. On server 1, stop keepalived:

service keepalived stop

Refresh the webpage. You should see the page for server 2. The log on server 2 will show something like this:

tail /var/log/syslog

Output:

Jun 13 22:50:59 ha2-ubu1 Keepalived_vrrp[1579]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jun 13 22:51:00 ha2-ubu1 Keepalived_vrrp[1579]: VRRP_Instance(VI_1) Entering MASTER STATE
Jun 13 22:51:01 ha2-ubu1 ntpd[1445]: Listen normally on 9 eth0 10.32.75.200 UDP 123
Jun 13 22:51:01 ha2-ubu1 ntpd[1445]: peers refreshed
Jun 13 22:51:01 ha2-ubu1 ntpd[1445]: new interface(s) found: waking up resolver

As you can see, for simple IP failover, keepalived is much easier to set up than Corosync/Pacemaker.

You can read more on keepalived on their website. Another article here describes how to do load balancing with keepalived.


Tags: cluster, heartbeat, high-availability, keepalived, network, vrrp



When you look into BOSH, you quickly run into discussions of its variants and the different ways to set it up.

Two flavors come up most often: bosh-lite and micro-bosh.

 

1) bosh-lite

A lightweight version of BOSH. It lets you deploy a system in a local Vagrant environment, without any IaaS VMs. It uses the Warden Cloud Provider Interface (CPI) and deploys the Cloud Foundry components as containers inside a single VM.

It is intended for development use.

 

2) micro-bosh

A BOSH director that runs as a single VM; with it you can deploy Cloud Foundry, or other software, onto an IaaS.

Cloud Foundry is usually deployed through micro-bosh.


This post covers deploying CF with bosh-lite.


Environment: Ubuntu 12.04.5 LTS (GNU/Linux 3.2.0-90-generic x86_64)

 

1. Install Git

$ apt-get install git

 

2. Install VirtualBox

$ echo "deb http://download.virtualbox.org/virtualbox/debian precise contrib" >> /etc/apt/sources.list

or create a new .list file as described in this thread.

$ wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -

$ sudo apt-get update

$ sudo apt-get install virtualbox-4.3

$ sudo apt-get install dkms

$ VBoxManage --version

4.3.10_Ubuntur93012

 

3. Install Vagrant (1.6.3 is the version known to work with bosh-lite)

$ wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.6.3_x86_64.deb

$ sudo dpkg -i vagrant_1.6.3_x86_64.deb

$ vagrant --version

Vagrant 1.6.3

 

4. Check that Vagrant works with the installed VirtualBox (this is only a test, so destroy the VM after bringing it up):

$ vagrant init hashicorp/precise32

$ vagrant up
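Once the box boots successfully, you can clean up the test VM and the generated Vagrantfile:

$ vagrant destroy -f

$ rm Vagrantfile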

 

5. Install Ruby (using RVM) + RubyGems + Bundler (RVM makes it easy to update Ruby and manage versions)

5.1. Install rvm

$ curl -sSL https://rvm.io/mpapis.asc | gpg --import -

$ curl -sSL https://get.rvm.io | bash -s stable

$ source /etc/profile.d/rvm.sh

$ rvm --version

5.2. Install a Ruby version

$ rvm install 1.9.3-p551

$ rvm use 1.9.3-p551 --default

$ ruby -v

ruby 1.9.3p551 (2014-11-13 revision 48407) [x86_64-linux]

 

 

6. Install the BOSH CLI (check the prerequisites for the target OS here)

- Note that the BOSH CLI is not supported on Windows (see the GitHub issue).

$ sudo apt-get install build-essential libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev

$ gem install bosh_cli

 

 

7. Install Bosh-Lite

$ git clone https://github.com/cloudfoundry/bosh-lite

$ cd bosh-lite

$ vagrant up --provider=virtualbox

 

8. Target the BOSH Lite director (the default credentials are admin/admin):

$ bosh target 192.168.50.4 lite

Target set to `Bosh Lite Director'

Your username: admin

Enter password: admin

Logged in as `admin'

 

9. Set up a route between your machine and the VMs running inside BOSH Lite:

$ cd bosh-lite

$ ./bin/add-route   (adds a route for 10.254.0.0/16)
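If you prefer to add the route by hand, the script is roughly equivalent to the following (the director IP and network are taken from this post; the exact prefix may differ between bosh-lite versions):

$ sudo ip route add 10.254.0.0/16 via 192.168.50.4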

 

10. Upload a stemcell

Download a stemcell image, the template ("stemcell") used to deploy Cloud Foundry:

# wget http://bosh-jenkins-gems-warden.s3.amazonaws.com/stemcells/latest-bosh-stemcell-warden.tgz

Upload the stemcell to BOSH and verify:

# bosh upload stemcell latest-bosh-stemcell-warden.tgz

# bosh stemcells

(A specific version works too, e.g. bosh upload stemcell bosh-stemcell-2776-warden-boshlite-centos-go_agent.tgz.)

 

 

11. OK, ready to install Cloud Foundry:

# git clone https://github.com/cloudfoundry/cf-release.git

12. Go into the cf-release directory and use the update helper script to update the cf-release submodules:

# cd cf-release

# ./update

 

13. Install spiff (one option is to install Go with gvm and then build spiff; there are several ways)

# see https://github.com/cloudfoundry-incubator/spiff#installation

# cp spiff /usr/bin/

Spiff: to create a valid deployment manifest for a full Cloud Foundry deployment you need spiff, a command-line tool and declarative YAML templating system specially designed for generating BOSH deployment manifests.

14. Create a deployment manifest stub

# bosh status --uuid   (check the UUID of the targeted director)

# vi cf-stub.yml

---
director_uuid: DIRECTOR-UUID
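For reference, a minimal bosh-lite stub usually carries a deployment name as well; cf-warden is the conventional name for the Warden templates, but treat this sketch as an assumption and check your cf-release version:

---
name: cf-warden
director_uuid: DIRECTOR-UUID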

The general usage is generate_deployment_manifest INFRASTRUCTURE MANIFEST-STUB > cf-deployment.yml; for bosh-lite the infrastructure is warden:

# ./generate_deployment_manifest warden cf-stub.yml > cf-deployment.yml

This generates cf-deployment.yml.

(The generate_deployment_manifest script runs spiff over the YAML templates in the cf-release templates directory and merges in your stub.)

 

# bosh target   (re-check which director is targeted)

Use bosh deployment MANIFEST-NAME to point BOSH at the generated manifest, replacing MANIFEST-NAME with the name of your deployment manifest:

# bosh deployment cf-deployment.yml

 

Run bosh create release to create a Cloud Foundry release. This command prompts you for a development release name.

# bosh create release

 

Run bosh upload release to upload the release to the director:

# bosh upload release

 

With the Cloud Foundry release uploaded, deploy:

# bosh deploy

 

 

Testing after the deploy completes

# If not using AWS, but behind a proxy, make sure to run 'export no_proxy=192.168.50.4,xip.io'

$ cf api --skip-ssl-validation https://api.10.244.0.34.xip.io

$ cf auth admin admin

$ cf create-org test-org

$ cf target -o test-org

$ cf create-space test-space

$ cf target -s test-space
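At this point you can push a small application as a smoke test (the app name and path here are hypothetical; any simple app works):

$ cf push my-test-app -m 64M -p ./my-test-app

$ cf apps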

 

Installing Go (using gvm)

# bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)

# gvm version

# gvm listall

# gvm install go1.4
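Then select the installed Go version and verify it (the --default flag makes the choice persist across shells):

# gvm use go1.4 --default

# go version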


Application + metadata (the manifest.yml file)

 

manifest file: a file that describes the application before the source code is pushed, e.g. the number of instances, the application memory, and service binding information.
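For illustration, a minimal manifest.yml might look like this (the app name, memory, and service name are hypothetical):

---
applications:
- name: my-app
  memory: 256M
  instances: 2
  services:
  - my-db-service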

 

When a user deploys an application with cf push:

 

1. The Cloud Controller uploads the source code to the blobstore and stores the deployment metadata in the CCDB.

2. If there is service binding information, the service is allocated and bound through the service broker.

3. Staging phase: the runtime environment is built from the source code and a buildpack. By default the uploaded application is detected via its file extensions and the execution environment is configured automatically. A temporary staging application is created to carry out this process.

4. A droplet is created and cached in the blobstore; when containers are scaled out, the original droplet is cloned.

5. The application container is deployed onto a DEA.

 

The service broker is implemented as a REST API so that service instances can be provisioned and bound to applications.

Users create services through CLI commands or the web UI.
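For example, creating and binding a service from the CLI looks like this (the service offering, plan, and names are hypothetical; the Cloud Controller talks to the broker's REST API behind the scenes):

$ cf create-service p-mysql 100mb my-db-service

$ cf bind-service my-app my-db-service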



Source: http://docs.cloudfoundry.org/concepts/how-applications-are-staged.html
