Go PATH setup

나만의 Cloud 2017. 3. 17. 12:05

Go installation

https://www.digitalocean.com/community/tutorials/how-to-install-go-1-6-on-ubuntu-14-04

  • sudo curl -O https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz

Next, use tar to unpack the package. This command extracts the downloaded archive into a folder named go in the current directory; the next command then moves that folder to /usr/local.

  • sudo tar -xvf go1.6.linux-amd64.tar.gz
  • sudo mv go /usr/local

Some users prefer different locations for their Go installation, or may have mandated software locations. The Go package is now in /usr/local. It is possible to install Go to an alternate location, but the $PATH settings below will have to change to match. The location you pick for your Go folder is referenced later in this tutorial, so remember where you placed it if it differs from /usr/local.

Step 2 — Setting Go Paths

In this step, we’ll set some paths that Go needs. The paths in this step are all given relative to the Go installation in /usr/local. If you chose a different directory, or left the files in the download location, modify the commands to match your location.

First, add the Go binary directory to your $PATH so the shell can find the Go tools. Open your profile:

  • sudo nano ~/.profile

At the end of the file, add this line:

export PATH=$PATH:/usr/local/go/bin
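
To apply the change without logging out, reload the profile and confirm the toolchain is on the PATH (a minimal check; go1.6 is the version installed above):

$ source ~/.profile

$ go version

go version go1.6 linux/amd64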

Reference: go - how do I SET the GOPATH environment variable on Ubuntu ... (Stack Overflow)

When GOPATH is not set, package downloads often fail:

package github.com/square/certstrap: cannot download, $GOPATH not set.


- Environment variables before the change

$ go env

GOARCH="amd64"

GOBIN=""

GOEXE=""

GOHOSTARCH="amd64"

GOHOSTOS="linux"

GOOS="linux"

GOPATH=""

GORACE=""

GOROOT="/usr/local/go"

GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"

GO15VENDOREXPERIMENT="1"

CC="gcc"

GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"

CXX="g++"

CGO_ENABLED="1"



- After the change

$ export GOPATH=/home/ubuntu/go

$ go env


GOARCH="amd64"

GOBIN=""

GOEXE=""

GOHOSTARCH="amd64"

GOHOSTOS="linux"

GOOS="linux"

GOPATH="/home/ubuntu/workspace/releases/cf-release/go"

GORACE=""

GOROOT="/usr/local/go"

GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"

GO15VENDOREXPERIMENT="1"

CC="gcc"

GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"

CXX="g++"

CGO_ENABLED="1"


export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
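
Exports made at a shell prompt are lost at logout. To make the settings permanent, append them to ~/.profile and create the conventional workspace layout (a minimal sketch, assuming the workspace is /home/ubuntu/go as above):

$ cat <<'EOF' >> ~/.profile
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF

$ source ~/.profile

$ mkdir -p $GOPATH/src $GOPATH/bin $GOPATH/pkg   # standard GOPATH layout

$ go get github.com/square/certstrap             # the download that failed earlier now works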



pacemaker

나만의 Cloud 2016. 11. 23. 15:11

pcs01

192.168.100.101, 172.27.1.11

VIP : 192.168.100.99


pcs02

192.168.100.102, 172.27.1.12



################################

# cat /etc/hosts


192.168.100.101 pcs01

192.168.100.102 pcs02

192.168.100.99  pcs-vip

172.27.1.11     pcs01-cr

172.27.1.12     pcs02-cr


################################


# yum install -y pacemaker pcs fence-agents-all


Install pacemaker (identical on all nodes)

-- installing pacemaker also pulls in corosync as a dependency

-- pcs is the tool for configuring pacemaker and corosync

-- installing pcs also installs pcsd

-- pcsd is an openssl-based daemon written in ruby; it manages pcs authentication between nodes

-- the authentication files are located in /var/lib/pcsd


# rpm -q -a | grep fence

fence-agents-rhevm-4.0.2-3.el7.x86_64

fence-agents-ilo-mp-4.0.2-3.el7.x86_64

fence-agents-ipmilan-4.0.2-3.el7.x86_64



################################


Start and enable the pcs daemon (pcsd): it is used for synchronizing the Corosync configuration across the nodes


# systemctl start pcsd.service

# systemctl enable pcsd.service



After the package installation there is a new user on the system called hacluster, with remote login disabled. It exists for cluster tasks such as synchronizing the configuration or starting services on other nodes, so set the same password for this user on every node:


# echo "passwd" | passwd hacluster --stdin


################################


Run on node01


Configure corosync

Use the pcs CLI to authenticate the nodes to each other; the tokens are stored in /var/lib/pcsd/tokens


# pcs cluster auth pcs01-cr pcs02-cr


[root@pcs01 ~]# pcs cluster auth pcs01-cr pcs02-cr

Username: hacluster

Password:

pcs02-cr: Authorized

pcs01-cr: Authorized


################################


Run on node01


Create a cluster named 'first_cluster' and synchronize the corosync config across the nodes (on the master node).

Run pcs cluster setup on the same node to generate and synchronize the corosync configuration.


# pcs cluster setup --name first_cluster pcs01-cr pcs02-cr -u hacluster -p passwd



Redirecting to /bin/systemctl stop  pacemaker.service

Redirecting to /bin/systemctl stop  corosync.service

Killing any remaining services...

Removing all cluster configuration files...

pcs01-cr: Succeeded

pcs02-cr: Succeeded

Synchronizing pcsd certificates on nodes pcs01-cr, pcs02-cr...

pcs02-cr: Success

pcs01-cr: Success



The file /etc/corosync/corosync.conf is generated and distributed to all nodes
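
For reference, the generated file looks roughly like this on EL7 (a sketch, not the verbatim output; the cluster name and node names come from the setup command above):

totem {
    version: 2
    secauth: off
    cluster_name: first_cluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: pcs01-cr
        nodeid: 1
    }

    node {
        ring0_addr: pcs02-cr
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_syslog: yes
}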



################################


Start the cluster (run on node01)


# pcs cluster start --all   (equivalent to running systemctl start corosync.service and systemctl start pacemaker.service on each node)


Enable the daemons at boot (run on all nodes)

# pcs cluster enable --all   (equivalent to systemctl enable corosync.service and systemctl enable pacemaker.service)



################################


Verify Corosync Installation


# corosync-cfgtool -s

# corosync-cmapctl  | grep members

# pcs status corosync


################################


Disabling STONITH and ignoring quorum


[root@pcs01 ~]# pcs status

Cluster name: first_cluster

WARNING: no stonith devices and stonith-enabled is not false   // this warning appears


# pcs stonith list

Lists the available STONITH agents



# pcs property set stonith-enabled=false

# pcs property set no-quorum-policy=ignore
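
A quick way to confirm both properties took effect (a minimal check; exact output formatting varies by pcs version):

# pcs property list   # should now include stonith-enabled: false and no-quorum-policy: ignore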


################################


# pcs cluster cib

# cibadmin -Q

If we inspect the raw output, we can see that the Pacemaker configuration XML contains the following five sections:


<configuration>

<nodes>

<resources>

<constraints>

<status>


# crm_verify -LV

Final validation of the configuration


################################


Configuring the Virtual IP address


Configure the VIP and create its management group (run on a single node, usually the master)


1) Add a resource that controls the VIP

-- configure the (default) 'ocf:heartbeat:IPaddr2' resource agent for this

-- every resource agent name consists of two or three fields:

-- resource class / provider (for OCF, the Open Cluster Framework) / resource agent name


[root@pcs01 ~]# pcs resource standards

ocf

lsb

service

systemd

stonith


[root@pcs01 ~]# pcs resource providers

heartbeat

openstack

pacemaker


[root@pcs01 ~]# pcs resource agents ocf:heartbeat

CTDB

Delay

Dummy

Filesystem

IPaddr

IPaddr2

...

...

...



The following creates a resource named 'Cluster_VIP' with VIP 192.168.100.99, a 24-bit netmask, and a 10-second monitoring interval


# pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=192.168.100.99 cidr_netmask=24 op monitor interval=10s


[root@pcs01 ~]# pcs status

Cluster name: first_cluster

Last updated: Tue Nov 22 03:24:17 2016          Last change: Tue Nov 22 03:23:40 2016 by root via cibadmin on pcs01-cr

Stack: corosync

Current DC: pcs02-cr (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

2 nodes and 1 resource configured


Online: [ pcs01-cr pcs02-cr ]


Full list of resources:


 Cluster_VIP    (ocf::heartbeat:IPaddr2):       Started pcs01-cr


PCSD Status:

  pcs01-cr: Online

  pcs02-cr: Online


Daemon Status:

  corosync: active/enabled

  pacemaker: active/enabled

  pcsd: active/enabled
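
Since pcs status shows Cluster_VIP started on pcs01-cr, the virtual address should be visible there (a minimal check):

[root@pcs01 ~]# ip addr show | grep 192.168.100.99   # the VIP is plumbed on the node running Cluster_VIP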



For reference, the following commands change a node's standby state in Pacemaker, forcing its resources to move elsewhere:


# pcs cluster standby pcs01-cr

# pcs cluster unstandby pcs01-cr


Resources can also be moved manually. Note that pcs resource move works by adding a location constraint, and pcs resource clear removes it again afterwards:

# pcs resource move Cluster_VIP pcs01-cr

# pcs resource clear Cluster_VIP


################################


Adding the Apache resource


# yum install -y httpd

Install Apache


Create an HTML file for testing


[ALL]# cat <<EOL >/var/www/html/index.html

Apache test on $(hostname)

EOL


Enable the Apache status URL:


[ALL]# cat <<-END >/etc/httpd/conf.d/status.conf

 <Location /server-status>

    SetHandler server-status

 </Location>

END



# pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" op monitor interval=1s


# pcs status

...

...


Online: [ pcs01-cr pcs02-cr ]


Full list of resources:


 Cluster_VIP    (ocf::heartbeat:IPaddr2):       Started pcs01-cr

 WebServer      (ocf::heartbeat:apache):        Started pcs02-cr


PCSD Status:

  pcs01-cr: Online

  pcs02-cr: Online


Daemon Status:

  corosync: active/enabled

  pacemaker: active/enabled

  pcsd: active/enabled


################################

Resources that must run on the same node require a colocation constraint.

The WebServer and Cluster_VIP resources currently move independently of each other, so a constraint is needed:

# pcs constraint colocation add WebServer Cluster_VIP INFINITY


This requests that <source resource> run on the same node where pacemaker has determined <target resource> should run.

Specifying 'INFINITY' (or '-INFINITY') for the score forces <source resource> to run (or not run) with <target resource>.


################################


Ensure resources start and stop in order

For example, if Apache must bind to a specific IP because of its address configuration, the VIP resource has to start before the Apache resource:


# pcs constraint order Cluster_VIP then WebServer
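
The constraints configured so far can be reviewed at any time:

# pcs constraint show   # lists the Location, Ordering and Colocation constraints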


################################


# pcs resource group add group_webresource Cluster_VIP WebServer
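
A resource group implies both colocation and start order for its members (in the order listed), so it achieves the same effect as the two constraints above. A minimal check afterwards:

# pcs status resources   # Cluster_VIP and WebServer now appear under "Resource Group: group_webresource"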



################################




Install the DRBD package for data replication, aimed at server high availability.

DRBD combined with heartbeat or keepalived is used to build an active-standby pair.


OS : ubuntu 14.04 (64bit)

NIC : 

- node01 : public(eth0, 192.168.100.101), heartbeat(eth2, 172.27.0.101 mtu 9000) 

- node02 : public(eth0, 192.168.100.102), heartbeat(eth2, 172.27.0.102 mtu 9000) 

DISK : sdb1(Replication DISK 10G), sdc1(META DISK 1GB)



1. Preparation


1) Install ntp (node01, node02)

# apt-get install ntp -y

# /etc/init.d/ntp start

# ntpq -p

# date


2) Flush the iptables rules

# iptables -F


3) Configure the /etc/hosts file

List the master/slave hostnames


# cat /etc/hosts

127.0.0.1       localhost

# 127.0.1.1       node01


192.168.100.101 node01  node01

192.168.100.102 node02  node02

172.27.0.101    node01-private

172.27.0.102    node02-private


4) Partition the disks

# fdisk /dev/sdb   (repeat for /dev/sdc)
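
fdisk is interactive; for a scripted setup the same single full-size partition can be created with parted (a sketch, assuming each whole disk is used):

# parted -s /dev/sdb mklabel msdos mkpart primary 0% 100%

# parted -s /dev/sdc mklabel msdos mkpart primary 0% 100%

# lsblk /dev/sdb /dev/sdc   # verify that sdb1 and sdc1 now exist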



2. Install the DRBD package


# apt-get install drbd8-utils


root@node02:/etc# cat drbd.conf

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example


include "drbd.d/global_common.conf";

include "drbd.d/*.res";


# /etc/drbd.conf


# cat /etc/drbd.d/nfs.res

 

resource nfs {                           # define a resource named nfs

        protocol C;                      # replication protocol: A = asynchronous (faster) <=> C = synchronous

        startup {
                wfc-timeout 30;
                outdated-wfc-timeout 20;
                degr-wfc-timeout 30;
                #become-primary-on both;             # for an Active-Active setup
        }

        disk {
                on-io-error detach;                  # recommended: on a lower-level I/O error the node drops its backing device and continues in diskless mode
                fencing resource-only;               # if a node becomes a disconnected primary, it tries to fence the peer's disk
        }

        net {
                cram-hmac-alg sha1;
                shared-secret sync_disk;
                #allow-two-primaries yes;            # the four lines below are options for an Active-Active setup
                #after-sb-0pri discard-zero-changes;
                #after-sb-1pri discard-secondary;
                #after-sb-2pri disconnect;
        }

        syncer {                                 # bandwidth used for resynchronization
                rate 100M;                       # 100M per second
                verify-alg sha1;
                al-extents 257;
        }

        on node01 {                              # per-host section (name must match `uname -n`)
                device /dev/drbd0;               # DRBD logical block device; run mkfs/mount against this
                disk /dev/sdb1;                  # physical device to mirror
                address 172.27.0.101:7788;       # IP/port for replication traffic
                meta-disk /dev/sdc1;             # external metadata disk (use `meta-disk internal;` to keep metadata on the data disk)
        }

        on node02 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 172.27.0.102:7788;
                meta-disk /dev/sdc1;
        }
}
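
Before bringing anything up, the resource file can be syntax-checked on both nodes:

# drbdadm dump nfs   # parses the configuration and prints the resource if it is valid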



3. Run

Create the metadata - on both master and slave

# drbdadm create-md nfs   # metadata

# dd if=/dev/zero of=/dev/sdb bs=1M   <= if create-md fails, zero the disk with dd and retry

# /etc/init.d/drbd start   # start the drbd service; bring both servers up within the wfc-timeout window



# drbdadm -- --overwrite-data-of-peer primary nfs   # designate the active server; run on one node only

# drbdadm primary --force nfs   # alternative syntax for the same designation

# drbd-overview   # check the state


# mkfs.ext4 /dev/drbd0   # run on the primary

# mkdir /data

# mount /dev/drbd0 /data/
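
At this point replication is running; a quick sanity check on the primary looks roughly like this:

# cat /proc/drbd   # a healthy pair shows cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate

# df -h /data      # the replicated filesystem is mounted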


4. drbdadm CLI

# drbdadm primary nfs   # promote this node

# drbdadm secondary nfs   # demote this node

# drbd-overview   # status overview

# drbdadm -- --overwrite-data-of-peer primary all   # resynchronize from the primary, taking its data as authoritative
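
A manual failover with these commands then looks like the following sketch (first two commands on the current primary, the rest on the standby):

node01# umount /data   # stop using the device on the old primary

node01# drbdadm secondary nfs   # demote node01

node02# drbdadm primary nfs   # promote node02

node02# mkdir -p /data

node02# mount /dev/drbd0 /data   # the replicated data is now served from node02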



