Installing Kubernetes 1.15 with kubeadm

kubeadm is the official Kubernetes tool for quickly setting up a Kubernetes cluster. It is updated in step with each Kubernetes release and adjusts some cluster configuration practices along the way, so experimenting with kubeadm is a good way to learn the latest upstream best practices for cluster configuration.


1. Preparation

1.1 System configuration

The cluster consists of one master and two worker nodes; the following entries are added to /etc/hosts on every node:

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.246 k8s-master
192.168.0.247 k8s-node1
192.168.0.248 k8s-node2

If a firewall is enabled on the hosts, the ports required by the Kubernetes components must be opened; see the "Check required ports" section of the Installing kubeadm documentation. For simplicity, we disable the firewall on every node here:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0

vi /etc/selinux/config
SELINUX=disabled
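
The same change can also be made non-interactively; a minimal sketch, assuming the stock CentOS config where the line currently reads SELINUX=enforcing:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config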

Create /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to load the br_netfilter module and apply the settings:

modprobe br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf
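
Note that modprobe only loads br_netfilter for the current boot. To have it loaded automatically after a reboot as well, one option (a sketch using systemd's modules-load.d mechanism) is:

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF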

1.2 Prerequisites for enabling IPVS in kube-proxy

Since IPVS has been merged into the mainline kernel, enabling IPVS mode in kube-proxy only requires loading the following kernel modules first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the modules have been loaded correctly.

Each node also needs the ipset package installed, and to make it easier to inspect the IPVS proxy rules it is worth installing the ipvsadm management tool as well.
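
For example, on each node:

yum install -y ipset ipvsadm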

If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables IPVS mode.

1.3 Installing Docker

Since 1.6, Kubernetes uses the CRI (Container Runtime Interface). The default container runtime is still Docker, via the dockershim CRI implementation built into kubelet.

Add the Docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

List the available Docker versions:

yum list docker-ce.x86_64  --showduplicates |sort -r
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable

The Docker versions currently validated for Kubernetes 1.15 are 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09. Here we install Docker 18.09.7 on each node.

yum makecache fast

yum install -y --setopt=obsoletes=0 \
  docker-ce-18.09.7-3.el7 

systemctl start docker

systemctl enable docker

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:

iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
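
If the FORWARD policy shows DROP instead (Docker 17.06 and later set it to DROP when managing iptables), traffic between Pods on different nodes will be dropped. One way to switch it back, as a sketch only (note this is not persistent across Docker restarts):

iptables -P FORWARD ACCEPT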

1.4 Changing the Docker cgroup driver to systemd

According to the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes the nodes more stable under resource pressure. We therefore switch the Docker cgroup driver to systemd on every node.

Create or modify /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker and confirm the cgroup driver:

systemctl restart docker

docker info | grep Cgroup
Cgroup Driver: systemd

2. Deploying Kubernetes with kubeadm

2.1 Installing kubeadm and kubelet

Install kubeadm and kubelet on every node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast

yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2

...

Installed:
  kubeadm.x86_64 0:1.15.0-0                  kubectl.x86_64 0:1.15.0-0                      kubelet.x86_64 0:1.15.0-0                                 

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7            cri-tools.x86_64 0:1.12.0-0                   kubernetes-cni.x86_64 0:0.7.5-0     libnetfilter_cthelper.x86_64 0:1.0.0-9.el7    
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7     libnetfilter_queue.x86_64 0:1.0.2-2.el7_2

The install output shows that three dependencies were pulled in as well: cri-tools, kubernetes-cni and socat:

Since Kubernetes 1.14 the cni dependency has been bumped to version 0.7.5.
socat is a dependency of kubelet.
cri-tools is the command-line tool for the CRI (Container Runtime Interface).

Running kubelet --help shows that most of kubelet's original command-line flags are now DEPRECATED, for example:

......
--address 0.0.0.0 The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and::for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......
The official recommendation is to set these options in a config file passed via --config; see "Set Kubelet parameters via a config file" for details. Kubernetes does this to support Dynamic Kubelet Configuration; see "Reconfigure a Node's Kubelet in a Live Cluster".

The kubelet config file must be in JSON or YAML format; see the documentation for details.
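
For illustration only (this install does not use one), a minimal kubelet config file for 1.15 might look like the sketch below; the fields come from the KubeletConfiguration v1beta1 API and the values here are assumptions, not part of this install:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
cgroupDriver: systemd
failSwapOn: false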

Since Kubernetes 1.8, swap must be disabled on the system; with the default configuration kubelet will otherwise refuse to start. Disable swap as follows:

swapoff -a

Edit /etc/fstab and comment out the swap mount entry, then confirm with free -m that swap is off. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0
Then apply the change:

sysctl -p /etc/sysctl.d/k8s.conf
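
Commenting out the swap entry in /etc/fstab can also be scripted; a rough sketch that assumes the entry contains the word swap surrounded by spaces:

sed -i '/ swap / s/^/#/' /etc/fstab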

2.2 Initializing the cluster with kubeadm init

Enable the kubelet service on every node:

systemctl enable kubelet.service

Before initializing the master, make sure /etc/sysconfig/kubelet contains:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

Then initialize the master node:

kubeadm init \
--apiserver-advertise-address=192.168.0.246 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

The main output is as follows:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.246:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

Run the following commands to set up kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The output ends with the command for joining worker nodes to the cluster:

kubeadm join 192.168.0.246:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

Check the cluster status and confirm that every component is healthy:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}
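
The same initialization can also be expressed declaratively and passed to kubeadm init --config. The following kubeadm-config.yaml is only a sketch equivalent to the flags used above (kubeadm.k8s.io/v1beta2 is the kubeadm config API version in 1.15):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.246
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16

kubeadm init --config kubeadm-config.yaml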

2.3 Installing the Pod network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note that the flannel image referenced in kube-flannel.yml is v0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.

If the image pull fails, pull it manually on every node:

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state:

kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
etcd-node1                      1/1     Running   0          51m
kube-apiserver-node1            1/1     Running   0          51m
kube-controller-manager-node1   1/1     Running   0          51m
kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
kube-proxy-kchkf                1/1     Running   0          52m
kube-scheduler-node1            1/1     Running   0          51m

2.5 Adding nodes to the Kubernetes cluster

Now add the worker hosts to the cluster by running the join command on each node:

kubeadm join 192.168.0.246:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

The nodes join without any trouble. On the master, list the cluster's nodes:

kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   57m   v1.15.2
k8s-node1    Ready    <none>   11s   v1.15.2
k8s-node2    Ready    <none>   11s   v1.15.2

2.6 Enabling IPVS in kube-proxy

Edit the kube-proxy ConfigMap in the kube-system namespace and set mode: "ipvs" in config.conf:

kubectl edit cm kube-proxy -n kube-system
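
The same edit can also be made non-interactively; a sketch that assumes the mode field in config.conf is currently the empty string:

kubectl -n kube-system get cm kube-proxy -o yaml | \
  sed 's/mode: ""/mode: "ipvs"/' | \
  kubectl apply -f -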

Then restart the kube-proxy Pods on every node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg                1/1     Running   0          3s
kube-proxy-k8vhm                1/1     Running   0          9s
kubectl logs kube-proxy-7fsrg  -n kube-system
I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
W0703 04:42:33.309074       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0703 04:42:33.309831       1 server.go:534] Version: v1.15.0
I0703 04:42:33.320088       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0703 04:42:33.320365       1 config.go:96] Starting endpoints config controller
I0703 04:42:33.320393       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0703 04:42:33.320455       1 config.go:187] Starting service config controller
I0703 04:42:33.320470       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0703 04:42:33.420899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0703 04:42:33.420969       1 controller_utils.go:1036] Caches are synced for service config controller
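
With the IPVS proxier active, the virtual server table that kube-proxy programs can be inspected on any node using the ipvsadm tool installed earlier, for example:

ipvsadm -ln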

3. Deploying common Kubernetes components

More and more companies and teams are adopting Helm, the package manager for Kubernetes, so we will also use Helm here to install the common Kubernetes components.

3.1 Installing Helm

Helm consists of the helm command-line client and the server-side tiller component, and installing it is straightforward. Download the helm CLI (version 2.14.1 here) onto the master node and place it in /usr/local/bin:

curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

To install the server-side tiller component, the machine also needs kubectl and a kubeconfig file configured so that kubectl can reach the apiserver. The master node here already has kubectl set up.

Because the Kubernetes APIServer has RBAC access control enabled, we need to create a service account named tiller for tiller to use and bind a suitable role to it; see Role-based Access Control in the Helm documentation for details. For simplicity we bind the built-in cluster-admin ClusterRole to it. Create helm-rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Next, deploy tiller with helm init:

helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

By default, tiller is deployed into the kube-system namespace of the cluster:

kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

If the tiller image fails to pull, pull it manually on all nodes:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 gcr.io/kubernetes-helm/tiller:v2.14.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1

Note that this step needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If they are unreachable, you can point helm at a tiller image in a private registry with helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.14.1 --skip-refresh. For example:
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 --service-account=tiller --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Finally, on the master, switch the stable helm chart repository to the Azure-hosted mirror:

helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories

helm repo list
NAME    URL                                     
stable  http://mirror.azure.cn/kubernetes/charts
local   http://127.0.0.1:8879/charts

3.3 Deploying the dashboard with Helm

Create kubernetes-dashboard.yaml with the following chart values:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
        - k8s.frognew.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true

helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml

After the dashboard Pod is running, look up the dashboard service account token secret:

kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s kubernetes.io/service-account-token 3 3m7s

kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name: kubernetes-dashboard-token-pkm2s
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43

Type: kubernetes.io/service-account-token

Data

ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wa20ycyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmMDc4MWRkLTE1NmEtMTFlOS1iMGYwLTA4MDAyN2JiN2M0MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.24ad6ZgZMxdydpwlmYAiMxZ9VSIN7dDR7Q6-RLW0qC81ajXoQKHAyrEGpIonfld3gqbE0xO8nisskpmlkQra72-9X6sBPoByqIKyTsO83BQlME2sfOJemWD0HqzwSCjvSQa0x-bUlq9HgH2vEXzpFuSS6Svi7RbfzLXlEuggNoC4MfA4E2hF1OXml8iAKx-49y1BQQe5FGWyCyBSi1TD-ZpVs44H5gIvsGK2kcvi0JT4oHXtWjjQBKLIWL7xxyRCSE4HmUZT2StIHnOwlX7IEIB0oBX4mPg2_xNGnqwcu-8OERU9IoqAAE2cZa0v3b5O2LMcJPrcxrVOukvRIumA
Log in at the dashboard login screen using the token above.
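
Instead of copying the token out of the kubectl describe output, it can also be extracted directly; a sketch using the secret name from this cluster (yours will differ):

kubectl -n kube-system get secret kubernetes-dashboard-token-pkm2s \
  -o jsonpath='{.data.token}' | base64 -d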


3.4 Deploying metrics-server with Helm

Heapster's GitHub repository (https://github.com/kubernetes/heapster) shows that heapster has been DEPRECATED; see its deprecation timeline there. Heapster has been removed from the various Kubernetes setup scripts since Kubernetes 1.12.

Kubernetes now recommends metrics-server instead, and we also deploy it here with helm.

metrics-server.yaml:

args:
  - --logtostderr
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule

helm install stable/metrics-server \
-n metrics-server \
--namespace kube-system \
-f metrics-server.yaml
Use kubectl top (shown below, after the image commands) to get basic metrics about the cluster's nodes and Pods.

If the metrics-server image fails to pull, pull it manually on each node first:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5 gcr.io/google_containers/metrics-server-amd64:v0.3.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
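
Once the metrics-server Pod is running, you can check that the metrics API has been registered before running kubectl top; the APIService name below is the one metrics-server registers by default:

kubectl get apiservice v1beta1.metrics.k8s.io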
kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%
kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)   
coredns-5c98db65d4-dr8lf                8m           7Mi             
coredns-5c98db65d4-lp8dg                6m           8Mi             
etcd-node1                              44m          46Mi            
kube-apiserver-node1                    74m          295Mi           
kube-controller-manager-node1           35m          50Mi            
kube-flannel-ds-amd64-7lwm9             2m           8Mi             
kube-flannel-ds-amd64-mm296             5m           9Mi             
kube-proxy-7fsrg                        1m           11Mi            
kube-proxy-k8vhm                        3m           11Mi            
kube-scheduler-node1                    8m           15Mi            
kubernetes-dashboard-848b8dd798-c4sc2   2m           14Mi            
metrics-server-8456fb6676-fwh3t         10m          19Mi            
tiller-deploy-7bf78cdbf7-9q94c          1m           16Mi

Unfortunately, the Kubernetes Dashboard does not yet support metrics-server, so after replacing heapster with metrics-server the dashboard can no longer chart Pod CPU and memory usage. (In practice this matters little here, since per-Pod CPU and memory monitoring is done with Prometheus and Grafana.) There is plenty of discussion about this on the Dashboard GitHub, e.g. https://github.com/kubernetes/dashboard/issues/2986, and Dashboard plans to support metrics-server at some point. Since metrics-server and the metrics pipeline are clearly the direction Kubernetes monitoring is heading, metrics-server is the recommended choice.

k8s image mirrors in China

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0

docker pull  registry.aliyuncs.com/google_containers/pause:3.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1  k8s.gcr.io/pause:3.1
docker rmi registry.aliyuncs.com/google_containers/pause:3.1

docker pull  registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0 
docker tag   registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0   k8s.gcr.io/kube-controller-manager:v1.15.0
docker rmi   registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0  

docker pull  registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0 
docker tag   registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0   k8s.gcr.io/kube-scheduler:v1.15.0
docker rmi   registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0  

docker pull  registry.aliyuncs.com/google_containers/coredns:1.3.1     
docker tag   registry.aliyuncs.com/google_containers/coredns:1.3.1       k8s.gcr.io/coredns:1.3.1    
docker rmi   registry.aliyuncs.com/google_containers/coredns:1.3.1      

docker pull  registry.aliyuncs.com/google_containers/etcd:3.3.10     
docker tag   registry.aliyuncs.com/google_containers/etcd:3.3.10       k8s.gcr.io/etcd:3.3.10    
docker rmi   registry.aliyuncs.com/google_containers/etcd:3.3.10 

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 gcr.io/kubernetes-helm/tiller:v2.14.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1

docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1

docker pull  registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0 
docker tag   registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0   k8s.gcr.io/kube-proxy:v1.15.0
docker rmi   registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0  

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5 k8s.gcr.io/metrics-server-amd64:v0.3.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
