21 posts in the '나만의 Cloud' category

  1. 2017.03.17 go PATH setup (2)
  2. 2016.11.23 pacemaker
  3. 2016.11.15 ubuntu 14.04 haproxy setup
  4. 2016.06.02 ubuntu apache2 port and DocumentRoot change
  5. 2016.05.09 SLA
  6. 2016.04.20 ubuntu keepalived
  7. 2015.08.21 ansible installation and basic setup
  8. 2015.07.09 Virtual box Vagrant
  9. 2014.05.19 puppet Resource
  10. 2014.05.14 [puppet] puppet installation

go PATH setup

나만의 Cloud 2017. 3. 17. 12:05

Installing Go

https://www.digitalocean.com/community/tutorials/how-to-install-go-1-6-on-ubuntu-14-04

  • sudo curl -O https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz

Next, use tar to unpack the package. This command extracts the downloaded archive into a folder named after the package (go), which you then move to /usr/local.

  • sudo tar -xvf go1.6.linux-amd64.tar.gz
  • sudo mv go /usr/local

Some users prefer different locations for their Go installation, or may have mandated software locations. Moving the package to /usr/local is the conventional choice; it is possible to install Go to an alternate location, but the $PATH settings below will need to change accordingly. The location you pick will be referenced later, so remember where you placed the Go folder if it is somewhere other than /usr/local.

Step 2 — Setting Go Paths

In this step, we’ll set some paths that Go needs. The paths here are given relative to a Go installation in /usr/local. If you chose a different directory, or left the files in the download location, modify the commands to match your layout.

First, add Go's binary directory to your PATH so the shell can find the go tool. Open your profile:

  • sudo nano ~/.profile

At the end of the file, add this line:

export PATH=$PATH:/usr/local/go/bin

Setting the GOPATH environment variable on Ubuntu

GOPATH is often left unset, and package downloads then fail with an error like:

package github.com/square/certstrap: cannot download, $GOPATH not set.


- Environment variables before the change

$ go env

GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH=""
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GO15VENDOREXPERIMENT="1"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"



- After the change

$ export GOPATH=/home/ubuntu/go

$ go env

GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/ubuntu/workspace/releases/cf-release/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GO15VENDOREXPERIMENT="1"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"

export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
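
To make these settings survive new login shells, the exports can be appended to ~/.profile. A minimal sketch, assuming Go lives in /usr/local/go and the workspace is $HOME/go (adjust both if yours differ):

# Append the Go environment variables to ~/.profile so every new shell picks them up.
cat >> ~/.profile <<'EOF'
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF

# Reload the profile and confirm the values took effect.
source ~/.profile
go env | grep -E 'GOROOT|GOPATH'
which go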



pacemaker

나만의 Cloud 2016. 11. 23. 15:11

pcs01
192.168.100.101, 172.27.1.11

pcs02
192.168.100.102, 172.27.1.12

VIP : 192.168.100.99



################################

# cat /etc/hosts

192.168.100.101 pcs01
192.168.100.102 pcs02
192.168.100.98  pcs-vip
172.27.1.11     pcs01-cr
172.27.1.12     pcs02-cr


################################


# yum install -y pacemaker pcs fence-agents-all


Install pacemaker (same on all nodes)

-- Installing pacemaker pulls in corosync as a dependency.
-- pcs is the tool used to configure pacemaker and corosync.
-- Installing pcs also installs pcsd.
-- pcsd is an openssl-based daemon written in ruby; it manages pcs authentication between nodes,
-- and the authentication files are located in /var/lib/pcsd.


# rpm -q -a | grep fence

fence-agents-rhevm-4.0.2-3.el7.x86_64

fence-agents-ilo-mp-4.0.2-3.el7.x86_64

fence-agents-ipmilan-4.0.2-3.el7.x86_64



################################


Start and enable the pcs daemon; it is used for synchronizing the corosync configuration across the nodes.


# systemctl start pcsd.service

# systemctl enable pcsd.service



Installing the packages creates a new user named hacluster. This account is used for cluster
configuration, so set the same password for it on every node:

# echo "passwd" | passwd hacluster --stdin

Remote login is disabled for this user by default. For tasks like synchronizing the configuration or
starting services on other nodes, pcs authenticates as hacluster, which is why the password must be
identical across the nodes.


################################


Run on node01.

Configure corosync.

Use the pcs CLI to authenticate the nodes to each other; the tokens are stored in /var/lib/pcsd/tokens.


# pcs cluster auth pcs01-cr pcs02-cr


[root@pcs01 ~]# pcs cluster auth pcs01-cr pcs02-cr

Username: hacluster

Password:

pcs02-cr: Authorized

pcs01-cr: Authorized


################################


Run on node01.

Create a cluster named 'first_cluster' and synchronize the corosync config across the nodes. Run pcs
cluster setup on the same node used for authentication; it generates and synchronizes the corosync
configuration.


# pcs cluster setup --name first_cluster pcs01-cr pcs02-cr -u hacluster -p passwd



Redirecting to /bin/systemctl stop  pacemaker.service

Redirecting to /bin/systemctl stop  corosync.service

Killing any remaining services...

Removing all cluster configuration files...

pcs01-cr: Succeeded

pcs02-cr: Succeeded

Synchronizing pcsd certificates on nodes pcs01-cr, pcs02-cr...

pcs02-cr: Success

pcs01-cr: Success



/etc/corosync/corosync.conf is generated and distributed to all nodes.



################################


Start the cluster (run on node01):

# pcs cluster start --all    (equivalent to systemctl start corosync.service and systemctl start pacemaker.service on each node)

Enable the daemons at boot (run on all nodes):

# pcs cluster enable --all   (equivalent to systemctl enable corosync.service and systemctl enable pacemaker.service)



################################


Verify Corosync Installation


# corosync-cfgtool -s

# corosync-cmapctl  | grep members

# pcs status corosync


################################


Disabling STONITH and ignoring quorum

[root@pcs01 ~]# pcs status
Cluster name: first_cluster
WARNING: no stonith devices and stonith-enabled is not false

The warning above appears because no fencing device has been configured yet.

# pcs stonith list

This lists the available STONITH agents.



# pcs property set stonith-enabled=false

# pcs property set no-quorum-policy=ignore
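
Disabling fencing and ignoring quorum loss is only appropriate for a small test cluster like this one. To confirm the settings took effect, the properties can be listed; a quick sketch:

# pcs property list      # should show stonith-enabled: false and no-quorum-policy: ignore
# pcs status             # the STONITH warning should no longer appear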


################################


# pcs cluster cib

# cibadmin -Q

If we inspect the raw output, we can see that the Pacemaker configuration XML file contains the following sections:

It is made up of the following five sections:


<configuration>

<nodes>

<resources>

<constraints>

<status>


# crm_verify -LV

Run a final validation of the configuration.


################################


Configuring the Virtual IP address


Configure the VIP and its management group (run on one node only, usually the master).

1) Add a resource that controls the VIP.

-- For this we use the 'ocf:heartbeat:IPaddr2' resource agent (the usual choice).
-- Every resource agent is identified by two or three fields:
-- the resource class, the OCF (Open Cluster Framework) provider, and the resource agent name.


[root@pcs01 ~]# pcs resource standards

ocf

lsb

service

systemd

stonith


[root@pcs01 ~]# pcs resource providers

heartbeat

openstack

pacemaker


[root@pcs01 ~]# pcs resource agents ocf:heartbeat

CTDB

Delay

Dummy

Filesystem

IPaddr

IPaddr2

...

...

...



The following creates a resource named 'Cluster_VIP' with VIP 192.168.100.99, a /24 netmask, and a 10-second monitoring interval:


# pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=192.168.100.99 cidr_netmask=24 op monitor interval=10s


[root@pcs01 ~]# pcs status
Cluster name: first_cluster
Last updated: Tue Nov 22 03:24:17 2016          Last change: Tue Nov 22 03:23:40 2016 by root via cibadmin on pcs01-cr
Stack: corosync
Current DC: pcs02-cr (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 1 resource configured

Online: [ pcs01-cr pcs02-cr ]

Full list of resources:

 Cluster_VIP    (ocf::heartbeat:IPaddr2):       Started pcs01-cr

PCSD Status:
  pcs01-cr: Online
  pcs02-cr: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled



For reference, the following commands put a node into standby (and back), which forces its resources to move:

# pcs cluster standby pcs01-cr
# pcs cluster unstandby pcs01-cr

A resource can also be moved manually:

# pcs resource move Cluster_VIP pcs01-cr
# pcs resource clear Cluster_VIP
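
A quick way to watch a failover with the VIP resource above; a sketch of the kind of check you might run (nothing here beyond standard pcs and iproute2 commands):

# On whichever node currently holds the VIP, the address shows up on an interface:
ip addr show | grep 192.168.100.99

# Put the active node in standby and confirm the VIP moves to the peer:
pcs cluster standby pcs01-cr
pcs status | grep Cluster_VIP      # should now report "Started pcs02-cr"
pcs cluster unstandby pcs01-cr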


################################


Adding the Apache resource

Install Apache:

# yum install -y httpd

Create an HTML file for testing:


[ALL]# cat <<EOL >/var/www/html/index.html

Apache test on $(hostname)

EOL


Enable the Apache status URL:


[ALL]# cat <<-END >/etc/httpd/conf.d/status.conf
 <Location /server-status>
    SetHandler server-status
 </Location>
END



# pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" op monitor interval=1s


# pcs status

...

...


Online: [ pcs01-cr pcs02-cr ]


Full list of resources:


 Cluster_VIP    (ocf::heartbeat:IPaddr2):       Started pcs01-cr

 WebServer      (ocf::heartbeat:apache):        Started pcs02-cr


PCSD Status:

  pcs01-cr: Online

  pcs02-cr: Online


Daemon Status:

  corosync: active/enabled

  pacemaker: active/enabled

  pcsd: active/enabled


################################

Resources that must run on the same node need a colocation constraint. As the status output above shows,
the WebServer and Cluster_VIP resources ended up on different nodes, so a constraint is required:

# pcs constraint colocation add WebServer Cluster_VIP INFINITY

This requests that <source resource> run on the same node where pacemaker has determined <target resource> should run.
Specifying 'INFINITY' (or '-INFINITY') for the score forces <source resource> to run (or not run) with <target resource>.


################################


Ensure resources start and stop in order.

For example, if Apache is configured to listen only on a specific address, the VIP resource must be started before the Apache resource:

# pcs constraint order Cluster_VIP then WebServer


################################


Finally, add both resources to a group so they are started, stopped, and moved together:

# pcs resource group add group_webresource Cluster_VIP WebServer
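
A short check that the group and constraints look as intended; just the standard pcs listings:

# pcs constraint        # shows the colocation and ordering constraints created above
# pcs status            # both resources should now appear under group_webresource on the same node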



################################



ubuntu 14.04 haproxy setup

나만의 Cloud 2016. 11. 15.

# apt-get install haproxy

# vi /etc/default/haproxy


# root@node01:/etc/keepalived# cat /etc/default/haproxy

# Set ENABLED to 1 if you want the init script to start haproxy.

ENABLED=1        # changed from 0

# Add extra flags here.

#EXTRAOPTS="-de -m 16"


See https://serversforhackers.com/load-balancing-with-haproxy for the meaning of the parameters.

Load Balancing Configuration

To get started balancing traffic between our three HTTP listeners, we need to set some options within HAProxy:

  • frontend - where HAProxy listens to connections
  • backend - where HAProxy sends incoming connections
  • stats - Optionally, setup HAProxy web tool for monitoring the load balancer and its nodes



global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend http-in
        mode http
        bind *:80
        log global
        option httplog
        default_backend servers

backend servers
        mode http
        balance roundrobin
        option forwardfor
        server web01 192.168.100.103:80 check


###

Browse to http://x.x.x.x:3000/stats to log in to the HAProxy statistics report.


listen stats *:3000
        mode http
        stats enable
        stats uri /stats
        stats hide-version
        stats auth admin:admin
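
After editing /etc/haproxy/haproxy.cfg, check the file and restart the service. A sketch; the extra server lines are only an illustration of how more backends would be added (web02/web03 and their addresses are hypothetical):

# Validate the configuration before restarting.
haproxy -f /etc/haproxy/haproxy.cfg -c

# Restart haproxy on Ubuntu 14.04.
service haproxy restart

# Additional backends go into the "backend servers" section as extra server lines, e.g.:
#   server web02 192.168.100.104:80 check
#   server web03 192.168.100.105:80 check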




ubuntu apache2 port and DocumentRoot change

나만의 Cloud 2016. 6. 2.

# apt-get install apache2 -y

# vi /etc/apache2/sites-enabled/000-default

Change the VirtualHost port and DocumentRoot as needed; the example below serves /var/www on port 8080:


<VirtualHost *:8080>

        ServerAdmin webmaster@localhost


        DocumentRoot /var/www

        <Directory />

                Options FollowSymLinks

                AllowOverride None

        </Directory>

        <Directory /var/www/>

                Options Indexes FollowSymLinks MultiViews

                AllowOverride None

                Order allow,deny

                allow from all

        </Directory>


        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/

        <Directory "/usr/lib/cgi-bin">

                AllowOverride None

                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch

                Order allow,deny

                Allow from all

        </Directory>


        ErrorLog ${APACHE_LOG_DIR}/error.log


        # Possible values include: debug, info, notice, warn, error, crit,

        # alert, emerg.

        LogLevel warn


        CustomLog ${APACHE_LOG_DIR}/access.log combined


    Alias /doc/ "/usr/share/doc/"

    <Directory "/usr/share/doc/">

        Options Indexes MultiViews FollowSymLinks

        AllowOverride None

        Order deny,allow

        Deny from all

        Allow from 127.0.0.0/255.0.0.0 ::1/128

    </Directory>


</VirtualHost>
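
Changing the VirtualHost to *:8080 is not enough on its own; Apache also has to listen on that port. A minimal sketch of the remaining steps, assuming the stock Ubuntu 14.04 layout:

# /etc/apache2/ports.conf must have a matching Listen directive.
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf

# Reload Apache and confirm it is listening on the new port.
service apache2 restart
netstat -plnt | grep :8080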


SLA

나만의 Cloud 2016. 5. 9. 10:54

SLA metrics


Source: https://ophir.wordpress.com/2011/01/31/does-sla-really-mean-anything/
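
The linked post looks at what an availability percentage actually means in allowed downtime. As a rough reference (my own arithmetic, not taken from the source), the yearly downtime budget for a given SLA level can be computed directly:

# Allowed downtime per year, in hours, for a few common SLA levels.
for sla in 99 99.9 99.99 99.999; do
    echo -n "$sla% -> "
    echo "scale=4; (100 - $sla) * 365 * 24 / 100" | bc
done
# 99%     -> 87.6   hours/year
# 99.9%   ->  8.76  hours/year
# 99.99%  ->  0.876 hours/year (about 53 minutes)
# 99.999% ->  0.0876 hours/year (about 5 minutes)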


ubuntu keepalived

나만의 Cloud 2016. 4. 20. 00:59

출처 : https://raymii.org/s/tutorials/Keepalived-Simple-IP-failover-on-Ubuntu.html


We are going to set up very simple keepalived IP failover on Ubuntu 14.04. Keepalived is a piece of software which can be used to achieve high availability by assigning two or more nodes a virtual IP and monitoring those nodes, failing over when one goes down. Keepalived can do more, like load balancing and monitoring, but this tutorial focusses on a very simple setup, just IP failover.

Internally keepalived uses VRRP. The VRRP protocol ensures that one of the participating nodes is the master. The backup node(s) listen for multicast packets from a node with a higher priority. If a backup node fails to receive VRRP advertisements for a period longer than three times the advertisement interval, it takes over the master state and assigns the configured IP(s) to itself. If there is more than one backup node with the same priority, the one with the highest IP wins the election.

I'm also a fan of Corosync/Pacemaker, you can see my articles about Corosync here.

We'll install nginx and edit the default webpage, just to see where the IP is pointing to.

Requirements

You'll need the following to get started with keepalived:

  • 2 servers in the same network

I'll be using Ubuntu 14.04 servers in this example. These servers are in the 10.32.75.0/24 network. The virtual IP will be 10.32.75.200.

Install packages

Use apt to install the required packages:

apt-get install nginx keepalived

Configuring keepalived

Create the config file on the first server (10.32.75.12):

vim /etc/keepalived/keepalived.conf

Edit and paste the following config:

! Configuration File for keepalived

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass $ place secure password here.
    }
    virtual_ipaddress {
        10.32.75.200
    }
}

Create the config file on the second server (10.32.75.14):

vim /etc/keepalived/keepalived.conf

Edit and paste the following config:

! Configuration File for keepalived

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass $ place secure password here.
    }
    virtual_ipaddress {
        10.32.75.200
    }
}

The priority must be highest on the server you want to be the master/primary. It can be 150 on the master and 100, 99, 98, 97 on the slaves. The virtual_router_id must be the same on all nodes, and the auth_pass must also be the same. My network configuration is on eth0; change it if yours is on a different interface.
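
Once keepalived is started (see the Start & Failover section below), a quick way to see which node currently owns the virtual IP; a sketch assuming the eth0 / 10.32.75.200 values used above:

# Run on each node; the VIP shows up as a secondary address on the current master only.
ip addr show eth0 | grep 10.32.75.200

# keepalived state transitions are logged to syslog.
grep Keepalived /var/log/syslog | tail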

Configuring NGINX

For this example I have set up a very simple NGINX server with a very simple HTML page.

vim /usr/share/nginx/html/index.html

Server 1:

<!DOCTYPE html>
<html>
<head>
<title>Keepalived 1!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Keepalived 1 - MASTER!</h1>
</body>
</html>

Server 2:

<!DOCTYPE html>
<html>
<head>
<title>Keepalived 2!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Keepalived 2 - backup!</h1>
</body>
</html>

sysctl

In order to be able to bind to an IP which is not yet assigned to the system, we need to enable non-local binding at the kernel level.

Temporary:

echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind

Permanent:

Add this to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Enable with:

sysctl -p

Start & Failover

When the website is set up we can start both NGINX and Keepalived on both servers:

service keepalived start
service nginx start

Visit the IP you configured as a failover IP in your browser. You should see the page for server 1.

Let's do a test failover. On server 1, stop keepalived:

service keepalived stop

Refresh the webpage. You should see the page for server 2. The logging will show something like this:

tail /var/log/syslog

Output:

Jun 13 22:50:59 ha2-ubu1 Keepalived_vrrp[1579]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jun 13 22:51:00 ha2-ubu1 Keepalived_vrrp[1579]: VRRP_Instance(VI_1) Entering MASTER STATE
Jun 13 22:51:01 ha2-ubu1 ntpd[1445]: Listen normally on 9 eth0 10.32.75.200 UDP 123
Jun 13 22:51:01 ha2-ubu1 ntpd[1445]: peers refreshed
Jun 13 22:51:01 ha2-ubu1 ntpd[1445]: new interface(s) found: waking up resolver

As you can see, for a simple IP failover, keepalived is much simpler than corosync/pacemaker to set up.

You can read more on keepalived on their website. Another article here describes how to do load balancing with keepalived.





ansible installation and basic setup

나만의 Cloud 2015. 8. 21.

A configuration management and deployment automation platform (a CM tool).

Agentless architecture.

Works over the SSH protocol.

The learning curve and system complexity are low, so a deployment automation environment is easy to build.

Operations are idempotent.

- Installation

Install pip, the Python package manager, then ansible and its dependencies:

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install python-pip
yum install python-devel
pip list
pip install paramiko pyYAML jinja2 httplib2
pip install ansible



- Configuration

mkdir /etc/ansible
echo "127.0.0.1" > /etc/ansible/hosts    # list the remote hosts here

Inventory file

  - Lists the remote servers Ansible manages
  - /etc/ansible/hosts is used by default
  - A different inventory file can be specified with the '-i' option
  - Remote hosts can be grouped, for example:


[cloud]
10.12.18.200
10.12.18.199

[cloud2]
10.12.18.200
10.12.18.199


Playbook file

  - Describes the configuration and deployment work Ansible should perform
  - Written in YAML
  - The first line of a playbook is '---', which marks the start of the YAML document


Template file

  - A template for the actual content you want to deploy
  - Jinja2 syntax can be used (for template tasks)
  - By convention the extension is '.j2' (see the sketch after this list)
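
A minimal sketch of a template task, assuming a hypothetical motd.j2 template and the [cloud] group from the inventory above; the file names and paths are illustrative only:

# Create a hypothetical Jinja2 template; {{ ansible_hostname }} is filled in per host.
mkdir -p /etc/ansible/templates
cat > /etc/ansible/templates/motd.j2 <<'EOF'
Welcome to {{ ansible_hostname }} (managed by Ansible)
EOF

# Playbook that renders the template onto every host in the "cloud" group.
cat > /etc/ansible/motd.yml <<'EOF'
---
- hosts: cloud
  remote_user: root
  tasks:
    - name: deploy motd from template
      template:
        src: /etc/ansible/templates/motd.j2
        dest: /etc/motd
EOF

ansible-playbook -i /etc/ansible/hosts /etc/ansible/motd.yml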


- Registering an SSH key

Generate a key pair:

ssh-keygen -t rsa

Copy the public key to each remote host:

ssh-copy-id <user>@<remote IP>


- ansible commands

1) ansible : runs simple ad-hoc commands

ansible -i /etc/ansible/hosts all -m shell -a hostname

2) ansible-playbook : runs the deployment tasks described in a .yml file

ansible-playbook -i /etc/ansible/hosts ping.yml


---
- hosts: cloud
  remote_user: root
  tasks:
    - name: test connection
      ping:

    - name: remote cmd test
      command: mkdir test


Virtual box Vagrant

나만의 Cloud 2015. 7. 9.

Vagrant is configured through a Vagrantfile.

0. mkdir vagrant : create a working directory


# vagrant box add NAME URL

vagrant box add centos64 http://downloads.sourceforge.net/project/nrel-vagrant-boxes/CentOS-6.5-x86_64-v20140504.box?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fnrel-vagrant-boxes%2Ffiles%2F&ts=1436418211&use_mirror=jaist


# vagrant init BOX_NAME

vagrant init centos64


1. vagrant init : creates a Vagrantfile.

This will place a Vagrantfile in your current directory.


Box

A box is the base image for a virtual machine. Boxes can be shared across multiple projects.

Edit the Vagrantfile, then:

vagrant up
vagrant reload --provision


2. vagrant box add chef/centos-6.5 

Added boxes can be re-used by multiple projects. Each project uses a box as an initial image to clone from and never modifies the actual base image. This means that if you have two projects both using the hashicorp/precise32 box we just added, adding files in one guest machine will have no effect on the other machine.



3. Edit the Vagrantfile


Vagrant.configure("2") do |config|

  config.vm.box = "hashicorp/precise32"

end


4. vagrant up

In less than a minute, this command will finish and you'll have a virtual machine running 


vagrant destroy : removes everything that vagrant up created; it does not delete the box itself.

On Windows, add Git's bin directory to PATH so that vagrant ssh can find ssh.exe:

set PATH=%PATH%;C:\Program Files (x86)\Git\bin\


5. vagrant ssh 
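
Putting the steps together, a sketch of the whole cycle for one box, with a simple shell provisioner added to show what vagrant reload --provision re-runs (the provisioning command is only an example):

mkdir vagrant && cd vagrant
vagrant box add centos64 <box URL>         # one-time download of the base image
vagrant init centos64                      # writes a Vagrantfile into the current directory

# Append an illustrative shell provisioner; Vagrant merges multiple configure blocks.
cat >> Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: "echo provisioned at $(date) > /etc/provisioned"
end
EOF

vagrant up                  # boot the VM (provisioners run on the first up)
vagrant ssh                 # log in over SSH
vagrant reload --provision  # restart the VM and re-run the provisioners
vagrant destroy             # throw the VM away; the box itself is kept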







puppet Resource

나만의 Cloud 2014. 5. 19. 14:17

Resource: "Imagine a system's configuration as a collection of many independent atomic units; call them 'resources.'"

The term is hard to translate neatly into Korean. User accounts, files, directories, packages, services, cron jobs and so on are all resources, much like the resource concept in Chef.

A Puppet resource is made up of four parts: resource type, title, attributes, and values.

For example:

# puppet resource service

service { 'sshd':
  ensure => 'running',
  enable => 'true',
}
service { 'start-ttys':
  ensure => 'stopped',
  enable => 'true',
}

$ puppet resource user

user { 'puppet':
  ensure           => 'present',
  comment          => 'Puppet',
  gid              => '52',
  home             => '/var/lib/puppet',
  password         => '!!',
  password_max_age => '-1',
  password_min_age => '-1',
  shell            => '/sbin/nologin',
  uid              => '52',
}
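
Resources are normally declared in a manifest and applied; a minimal sketch using the standard file type (the path and content are just examples):

# Write a one-resource manifest and apply it locally.
cat > /tmp/motd.pp <<'EOF'
file { '/etc/motd':
  ensure  => file,
  content => "managed by puppet\n",
  mode    => '0644',
}
EOF

puppet apply /tmp/motd.pp      # reports whether the resource was already in sync or was changed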

[puppet] puppet installation

나만의 Cloud 2014. 5. 14.

Installing puppet (on CentOS 6.3)

0. Pre-OS setup
Set the hostname as an FQDN.
There is no DNS server, so register the servers in the hosts file on every node (a sketch follows).
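
A sketch of the hosts entries; the hostnames match the certnames used in the puppet.conf examples below, while the IP addresses are hypothetical:

# Run on both the master and the agent.
cat >> /etc/hosts <<'EOF'
192.168.0.10  puppetmaster01.example.com  puppetmaster01
192.168.0.11  puppetclient1.example.com   puppetclient1
EOF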

1. Add the Puppet Labs repository

sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

2. Enable the Puppet Labs repository
After installing the repos, open your /etc/yum.repos.d/puppetlabs.repo file for editing. Locate the [puppetlabs-devel] stanza, and change the value of the enabled key from 0 to 1.

3. Install the Puppet master server
yum install ruby
yum install puppet-server

Add the following to puppet.conf:

[main]
    certname = puppetmaster01.example.com

Then start the master and check the certificates:

sudo puppet master --verbose --no-daemonize
puppet cert list --all

4. Install the Puppet agent

yum install ruby
yum install puppet
puppet agent --server mjstest.wlstn.com --no-daemonize --verbose

Add the following to puppet.conf:

[agent]
    certname = puppetclient1.example.com


Connecting the master and agents

The master and each agent communicate over SSL. The master needs a record of approved agents, and each
agent in turn needs an approved master; approval is required in both directions. If the certificates have
not been accepted on both sides, the two cannot connect at all.

Puppet ships with its own CA (certificate authority) and streamlines SSL certificate handling. Issuing
commercial certificates costs money, and with 100 or more agents the cost adds up quickly, so Puppet makes
it easy to issue certificates of comparable quality itself. This built-in certificate issuance is one of
Puppet's defining features, and it can be customized as well.

The certificate issuance flow is as follows:

     (1) The master starts up and listens for agents.
     (2) The agent connects to the master.
     (3) The master obtains the agent's domain information and issues an SSL certificate.
     (4) The issued ".pem" file is sent to the agent.
     (5) The agent stores the server certificate and its own certificate under "/etc/puppet/ssl".
     (6) From then on, the two communicate over SSL using these certificates.


  • Start the master

     puppet master --no-daemonize -d -v

  • Start the agent

     puppet agent --server [master domain] --no-daemonize --verbose

  • View puppet.conf

     vi /etc/puppet/puppet.conf

  • List certificates

     puppet cert list --all

  • Sign (add) a certificate

     puppet cert --sign [agent domain]

  • Remove a certificate

     puppet cert --clean [agent domain]

  • Reissuing a certificate

     Delete the subdirectories and files under "/etc/puppet/ssl" on the agent,
     remove the agent's domain on the master with "puppet cert --clean",
     then re-register it on the master with "puppet cert --sign".


Source: http://beyondj2ee.pbworks.com/w/page/51641649/BeyondJ2EE-Puppet%20%EC%84%A4%EC%B9%98
