Memo / a novice engineer writing things up as he pleases

Since I mainly want to understand how things work, I'm treating this as a learning exercise and building everything however I like (convenience be damned). These are personal notes, so I cannot guarantee the content; please keep that in mind. ※If you spot anything strange or any mistakes, I'd appreciate a comment or correction.

Installing Kubernetes 1.10.2 & Calico on CentOS 7

I went through the Calico setup five times; four of those runs succeeded and ended up in what looks like a working state.

In the remaining run, the calico pods kept restarting and kube-dns would not start; I never pinned down the cause.

Environment (on VMware Workstation)

・Kubernetes 1.10.2: 1 master node (CentOS 7.4), 3 worker nodes (CentOS 7.4)

    CNI: Calico 3.1

・VyOS ※already configured so the master and each node can reach one another

・Connection layout (diagram)

f:id:pocket01:20180504151447p:plain

References

Quickstart for Calico on Kubernetes | calico

 

Calico system requirements

  • AMD64 processor
  • 2 CPUs
  • 2 GB of RAM
  • 10 GB of free disk space
  • Red Hat Enterprise Linux 7.x+, CentOS 7.x+, Ubuntu 16.04+, or Debian 8.x+
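The requirements above can be sanity-checked with a small script before starting. This is only a sketch: the thresholds are approximate and checking the root filesystem (/) for free space is an assumption; the script warns rather than failing hard.

```shell
#!/bin/sh
# Rough pre-flight check against the Calico requirements listed above
# (2 CPUs, ~2 GB RAM, ~10 GB free disk; / is an assumed mount point).
cpus=$(grep -c '^processor' /proc/cpuinfo)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
disk_kb=$(df -Pk / | awk 'NR==2 {print $4}')
echo "CPUs: ${cpus}  RAM: $((mem_kb / 1024)) MB  free disk on /: $((disk_kb / 1024)) MB"
[ "$cpus" -ge 2 ]          || echo "WARN: fewer than 2 CPUs"
[ "$mem_kb" -ge 1900000 ]  || echo "WARN: less than ~2 GB RAM"
[ "$disk_kb" -ge 9500000 ] || echo "WARN: less than ~10 GB free disk"
```

Run it once on the master and each worker before installing anything.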

Installing Kubernetes

Installing Docker

Install (target: Master / Worker)

# yum install -y yum-utils device-mapper-persistent-data lvm2 
# yum-config-manager \
--add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos

Start and enable on boot (target: Master / Worker)

# systemctl start docker
# systemctl enable docker

Verify (target: Master / Worker)

# docker version

Installing kubeadm, kubelet, and kubectl

Add the Kubernetes repository (target: Master / Worker)

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
■■■■ begin ■■■■
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
■■■■ end ■■■■

Install kubelet, kubeadm, and kubectl (target: Master / Worker)

# yum install -y kubelet kubeadm kubectl

Start kubelet (target: Master / Worker)

# systemctl enable kubelet
# systemctl start kubelet

Bridge netfilter settings (target: Master / Worker)

These sysctls ensure that traffic crossing Linux bridges is passed through iptables, which kube-proxy and the CNI plugin depend on.

# cat <<EOF > /etc/sysctl.d/k8s.conf
■■■■ begin ■■■■
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
■■■■ end ■■■■

Apply and verify the settings (target: Master / Worker)

# sysctl --system
~ snip ~
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Check and align the cgroup driver (target: Master / Worker)

The kubelet's cgroup driver must match the one Docker uses. Here docker info reports cgroupfs while the kubelet drop-in specifies systemd, so the drop-in is rewritten to cgroupfs.

# docker info | grep -i cgroup
# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
~ snip ~
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
~ snip ~
# sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
~ snip ~
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
~ snip ~
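Instead of hard-coding cgroupfs in the sed above, the driver Docker actually reports can be substituted in, so the same command works on any node. A sketch; patch_cgroup_driver is a hypothetical helper, and the drop-in path is the one shown above.

```shell
#!/bin/sh
# patch_cgroup_driver FILE DRIVER
# Rewrite the --cgroup-driver flag in a kubelet drop-in to DRIVER.
patch_cgroup_driver() {
  sed -i "s/--cgroup-driver=[a-z]*/--cgroup-driver=$2/" "$1"
}

# Intended use on a node (commented out here):
#   driver=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}')
#   patch_cgroup_driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf "$driver"
```

Follow it with the same daemon-reload and kubelet restart as below.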

Restart the services

# systemctl daemon-reload
# systemctl restart kubelet

Disable swap (target: Master / Worker)

# swapoff -a
# vi /etc/fstab
■■■ edit ■■■ comment the line out by prefixing it with #
#/dev/mapper/centos-swap swap swap defaults 0 0
■■■■■■■■■■■■■■
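The fstab edit can also be scripted with sed so it is repeatable on all four machines. A sketch; comment_out_swap is a hypothetical helper, and a .bak backup of the file is written first.

```shell
#!/bin/sh
# comment_out_swap FILE
# Prefix '#' to any uncommented fstab line containing a whitespace-delimited
# "swap" field, keeping a FILE.bak backup.
comment_out_swap() {
  sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$1"
}

# Intended use: swapoff -a && comment_out_swap /etc/fstab
```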

Creating the cluster with kubeadm

Initialize the master node (target: Master)

--apiserver-advertise-address=10.10.0.201  ※the master node's address

--pod-network-cidr=172.16.0.0/16  ※the pod network range; this must match the Calico pool set later

# kubeadm init --apiserver-advertise-address=10.10.0.201 --pod-network-cidr=172.16.0.0/16
~ snip ~
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.10.0.201:6443 --token axs7ha.zxos3omti7uh2g1t --discovery-token-ca-cert-hash sha256:ab2d9af84e3604d59b598fe0b0cbd4a66be25c1b12291b2532c68e051d928a42

[root@k8s-master ~]#

※Note down the "kubeadm join …" command above; it is needed to add the worker nodes
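If that join command gets lost, or the token expires (the default lifetime is 24 hours), a fresh one can be printed on the master; kubeadm has supported this flag since v1.9:

```shell
# Run on the master to print a complete, up-to-date join command.
kubeadm token create --print-join-command
```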

 

Set up access to the cluster (target: Master / Worker ※/etc/kubernetes/admin.conf is generated only on the master, so copy it to each worker first if you want to run kubectl there)

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check status (target: Master / Worker)

# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.0.201:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
#

Installing Calico

Installing Calico for policy and networking (recommended) | calico

Fetch calico.yaml

# curl -O https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

Edit and apply the manifest

# vi calico.yaml
■■■■ edit ■■■■ change to the pod network CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
■■■■■■■■■■■■■■
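The same edit can be scripted instead of done in vi, which avoids fat-fingering the manifest on a retry. A sketch; set_pool_cidr is a hypothetical helper that rewrites the value line following the CALICO_IPV4POOL_CIDR name.

```shell
#!/bin/sh
# set_pool_cidr FILE CIDR
# Replace the pool value on the line after the CALICO_IPV4POOL_CIDR name.
set_pool_cidr() {
  sed -i "/CALICO_IPV4POOL_CIDR/{n;s|value: \".*\"|value: \"$2\"|;}" "$1"
}

# Intended use: set_pool_cidr calico.yaml 172.16.0.0/16
```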

# kubectl apply -f calico.yaml
configmap "calico-config" created
daemonset.extensions "calico-etcd" created
service "calico-etcd" created
daemonset.extensions "calico-node" created
deployment.extensions "calico-kube-controllers" created
clusterrolebinding.rbac.authorization.k8s.io "calico-cni-plugin" created
clusterrole.rbac.authorization.k8s.io "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created
#

Check pod status (target: Master)

Confirm that the calico pods are "Running".

kube-dns will still show "Pending" at this point, but ignore it and move on to the next step.

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-2z69c                          1/1       Running   0          46s
kube-system   calico-kube-controllers-685755779f-zmjqx   1/1       Running   0          45s
kube-system   calico-node-l5pp6                          2/2       Running   0          44s
kube-system   etcd-k8s-master                            1/1       Running   0          2m
kube-system   kube-apiserver-k8s-master                  1/1       Running   0          2m
kube-system   kube-controller-manager-k8s-master         1/1       Running   0          2m
kube-system   kube-dns-86f4d74b45-sfwv5                  0/3       Pending   0          3m
kube-system   kube-proxy-lq4xh                           1/1       Running   0          3m
kube-system   kube-scheduler-k8s-master                  1/1       Running   0          2m
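Rather than re-running get pods by hand, a small poll helper can wait for a pattern to appear in a command's output. A sketch; wait_for is a hypothetical helper, shown with kubectl arguments matching the check above.

```shell
#!/bin/sh
# wait_for TRIES DELAY PATTERN CMD...
# Re-run CMD every DELAY seconds until its output matches PATTERN,
# giving up (exit 1) after TRIES attempts.
wait_for() {
  tries=$1; delay=$2; pat=$3; shift 3
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" 2>/dev/null | grep -q "$pat" && return 0
    i=$((i + 1)); sleep "$delay"
  done
  return 1
}

# Intended use on the master:
#   wait_for 30 10 'calico-node.*Running' kubectl get pods --all-namespaces
```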

Adding nodes

Join the nodes (target: 3 workers)

On each worker node, run the "kubeadm join …" command that was displayed when "kubeadm init …" was executed.

# kubeadm join 10.10.0.201:6443 --token axs7ha.zxos3omti7uh2g1t --discovery-token-ca-cert-hash sha256:ab2d9af84e3604d59b598fe0b0cbd4a66be25c1b12291b2532c68e051d928a42
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.10.0.201:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.0.201:6443"
[discovery] Requesting info from "https://10.10.0.201:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.0.201:6443"
[discovery] Successfully established connection with API Server "10.10.0.201:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check node status (target: Master)

# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
k8s-master     Ready     master    13m       v1.10.2
worker-node1   Ready     <none>    2m        v1.10.2
worker-node2   Ready     <none>    2m        v1.10.2
worker-node3   Ready     <none>    2m        v1.10.2
#

Check pod status (target: Master)

At this point kube-dns was confirmed as Running. Perhaps the earlier check simply came too soon after applying calico.yaml.

# kubectl get -o wide pods,nodes --all-namespaces
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   pod/calico-etcd-2z69c                          1/1       Running   0          36m       10.10.0.201    k8s-master
kube-system   pod/calico-kube-controllers-685755779f-zmjqx   1/1       Running   0          35m       10.10.0.201    k8s-master
kube-system   pod/calico-node-5cqwf                          2/2       Running   0          27m       10.20.0.202    worker-node2
kube-system   pod/calico-node-l5pp6                          2/2       Running   0          35m       10.10.0.201    k8s-master
kube-system   pod/calico-node-svnnz                          2/2       Running   0          27m       10.20.0.201    worker-node1
kube-system   pod/calico-node-v8d9q                          2/2       Running   0          27m       10.30.0.201    worker-node3
kube-system   pod/etcd-k8s-master                            1/1       Running   0          37m       10.10.0.201    k8s-master
kube-system   pod/kube-apiserver-k8s-master                  1/1       Running   0          37m       10.10.0.201    k8s-master
kube-system   pod/kube-controller-manager-k8s-master         1/1       Running   0          37m       10.10.0.201    k8s-master
kube-system   pod/kube-dns-86f4d74b45-sfwv5                  3/3       Running   0          38m       192.168.50.1   worker-node3
kube-system   pod/kube-proxy-d8ctw                           1/1       Running   0          27m       10.20.0.201    worker-node1
kube-system   pod/kube-proxy-ds8qf                           1/1       Running   0          27m       10.20.0.202    worker-node2
kube-system   pod/kube-proxy-ffptl                           1/1       Running   0          27m       10.30.0.201    worker-node3
kube-system   pod/kube-proxy-lq4xh                           1/1       Running   0          38m       10.10.0.201    k8s-master
kube-system   pod/kube-scheduler-k8s-master                  1/1       Running   0          37m       10.10.0.201    k8s-master

NAME                STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
node/k8s-master     Ready     master    38m       v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://17.3.2
node/worker-node1   Ready     <none>    27m       v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://17.3.2
node/worker-node2   Ready     <none>    27m       v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://17.3.2
node/worker-node3   Ready     <none>    27m       v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://17.3.2
#

 

f:id:pocket01:20180506131358p:plain