
Kubernetes (k8s) Deployment


Cluster planning

centos-test-ip-207-master    192.168.11.207
centos-test-ip-208           192.168.11.208
centos-test-ip-209           192.168.11.209

kubernetes 1.10.7
flannel flannel-v0.10.0-linux-amd64.tar
ETCD etcd-v3.3.8-linux-amd64.tar
CNI cni-plugins-amd64-v0.7.1
docker 18.03.1-ce

Package downloads

etcd:https://github.com/coreos/etcd/releases/
flannel:https://github.com/coreos/flannel/releases/
cni:https://github.com/containernetworking/plugins/releases
kubernetes:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1107

Note: the kubernetes 1.10 packages are also shared here for convenience
Link: https://pan.baidu.com/s/1_7EfOMlRkQSybEH_p6NtTw
Extraction code: 345b

Mutual host resolution, firewall off, swap off, and server time (on all three machines)

Hosts resolution

vim /etc/hosts
192.168.11.207 centos-test-ip-207-master
192.168.11.208 centos-test-ip-208
192.168.11.209 centos-test-ip-209

Firewall and SELinux

systemctl stop firewalld
setenforce 0

Disable swap

swapoff -a
vim /etc/fstab    # comment out the swap entry
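To comment out the swap line non-interactively, a one-liner along these lines should work (a sketch; check your fstab first):

sed -i '/ swap / s/^/#/' /etc/fstab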

Set the server time zone # optional if time is already synchronized

tzselect

Distribute the SSH public key # master to nodes

ssh-keygen
ssh-copy-id root@centos-test-ip-208
ssh-copy-id root@centos-test-ip-209

Install Docker (on all three machines)

Remove any existing versions

yum remove docker docker-common docker-selinux docker-engine

Install the dependencies Docker needs

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the yum repo # the official repo is slow to pull from, so use the Aliyun mirror

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

List available Docker versions

yum list docker-ce --showduplicates | sort -r

Install 18.03.1.ce

yum -y install docker-ce-18.03.1.ce

Start Docker

systemctl start docker
systemctl enable docker
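As a quick check that the daemon is up (both client and server versions should be reported):

docker version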

Install the etcd cluster

On all three machines

tar xvf etcd-v3.3.8-linux-amd64.tar.gz
cd etcd-v3.3.8-linux-amd64
cp etcd etcdctl /usr/bin
mkdir -p /var/lib/etcd /etc/etcd    # create the working directories
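A quick sanity check that the binaries landed on PATH (both should report v3.3.8, matching the tarball above):

etcd --version
etcdctl --version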

etcd configuration files

The files involved are /usr/lib/systemd/system/etcd.service and /etc/etcd/etcd.conf.
Note that etcd's leader/follower roles are not the same as the Kubernetes master/node roles:
the etcd cluster elects its own leader at startup and while running,
so the three members are simply named etcd-i, etcd-ii and etcd-iii to reflect this peer relationship.

207-master
cat   /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
cat    /etc/etcd/etcd.conf
# [member]
# member name
ETCD_NAME=etcd-i
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# URLs to listen on for peer traffic
ETCD_LISTEN_PEER_URLS="http://192.168.11.207:2380"
# URLs to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://192.168.11.207:2379,http://127.0.0.1:2379"

#[cluster]
# peer URLs advertised to the other members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.11.207:2380"
# initial cluster membership
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
# initial cluster state, "new" for a fresh cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.207:2379,http://127.0.0.1:2379"
208
cat  /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
cat /etc/etcd/etcd.conf
# [member]
# member name
ETCD_NAME=etcd-ii
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# URLs to listen on for peer traffic
ETCD_LISTEN_PEER_URLS="http://192.168.11.208:2380"
# URLs to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://192.168.11.208:2379,http://127.0.0.1:2379"

#[cluster]
# peer URLs advertised to the other members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.11.208:2380"
# initial cluster membership
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
# initial cluster state, "new" for a fresh cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.208:2379,http://127.0.0.1:2379"
209
cat  /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
cat /etc/etcd/etcd.conf
# [member]
# member name
ETCD_NAME=etcd-iii
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# URLs to listen on for peer traffic
ETCD_LISTEN_PEER_URLS="http://192.168.11.209:2380"
# URLs to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://192.168.11.209:2379,http://127.0.0.1:2379"

#[cluster]
# peer URLs advertised to the other members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.11.209:2380"
# initial cluster membership
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
# initial cluster state, "new" for a fresh cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.209:2379,http://127.0.0.1:2379"

Start the etcd cluster

Start in order: master first, then the other nodes

systemctl daemon-reload    # reload systemd unit files
systemctl start etcd.service
systemctl enable etcd.service

Check cluster status

[root@centos-test-ip-207-master ~]# etcdctl member list
e8bd2d4d9a7cba8: name=etcd-ii peerURLs=http://192.168.11.208:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.208:2379 isLeader=true
50a675761b915629: name=etcd-i peerURLs=http://192.168.11.207:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.207:2379 isLeader=false
9a891df60a11686b: name=etcd-iii peerURLs=http://192.168.11.209:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.209:2379 isLeader=false
[root@centos-test-ip-207-master ~]# etcdctl cluster-health
member e8bd2d4d9a7cba8 is healthy: got healthy result from http://127.0.0.1:2379
member 50a675761b915629 is healthy: got healthy result from http://127.0.0.1:2379
member 9a891df60a11686b is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
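As an optional end-to-end check, write a test key and read it back (this etcdctl build defaults to the v2 API, which the commands below use):

etcdctl set /test/hello world
etcdctl get /test/hello
etcdctl rm /test/hello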

Install flannel

On all three machines

mkdir -p /opt/flannel/bin/
tar xvf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/flannel/bin/
cat /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/flannel/bin/flanneld -etcd-endpoints=http://192.168.11.207:2379,http://192.168.11.208:2379,http://192.168.11.209:2379 -etcd-prefix=coreos.com/network
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Set the flannel network configuration (subnet layout; the values can be adjusted) # run on the master only

[root@centos-test-ip-207-master ~]# etcdctl mk /coreos.com/network/config '{"Network":"172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0",  "Backend": {"Type": "vxlan"}}'

To change the subnet later: delete the key with etcdctl rm /coreos.com/network/config, then run the configuration command again.
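To confirm what flannel will read, fetch the key back:

etcdctl get /coreos.com/network/config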

Pull the flannel image

On all three machines
The flannel service depends on the flannel image, so pull the image first. The commands below pull it from Aliyun and retag it:

docker pull registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64
docker tag registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0

Note:
Configuring Docker
The flannel unit contains the line
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
After flannel starts, it runs mk-docker-opts.sh, which generates /etc/docker/flannel_net.env.
flannel reshapes the Docker network: flannel_net.env holds the Docker options flannel generated, so the Docker unit must be adjusted to use them.

cat  /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS      # added $DOCKER_OPTS from the flannel-generated env file
EnvironmentFile=/etc/docker/flannel_net.env      # added
ExecReload=/bin/kill -s HUP $MAINPID
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT    # added
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Notes:
After: start Docker after flannel
EnvironmentFile: Docker startup options generated by flannel
ExecStart: pass the generated startup options ($DOCKER_OPTS) to dockerd
ExecStartPost: runs after Docker starts and adjusts the host's iptables forwarding rules

Start flannel

On all three machines

systemctl daemon-reload
systemctl start flannel.service
systemctl enable flannel.service
systemctl restart docker.service
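After the restart, docker0 should sit inside a flannel-assigned /24 of 172.18.0.0/16 (assuming the vxlan configuration set above). A quick way to check:

cat /etc/docker/flannel_net.env    # the generated DOCKER_OPTS
ip addr show flannel.1             # the flannel VXLAN interface
ip addr show docker0               # should be inside the flannel subnet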

Install CNI

On all three machines

mkdir -p /opt/cni/bin /etc/cni/net.d
tar xvf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
cat  /etc/cni/net.d/10-flannel.conflist
{
  "name":"cni0",
  "cniVersion":"0.3.1",
  "plugins":[
    {
      "type":"flannel",
      "delegate":{
        "forceAddress":true,
        "isDefaultGateway":true
      }
    },
    {
      "type":"portmap",
      "capabilities":{
        "portMappings":true
      }
    }
  ]
}

Install the Kubernetes cluster

CA certificates

On all three machines

mkdir -p /etc/kubernetes/ca
207
cd /etc/kubernetes/ca/

Generate the CA certificate and private key

[root@centos-test-ip-207-master ca]# openssl genrsa -out ca.key 2048
[root@centos-test-ip-207-master ca]# openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s" -days 5000 -out ca.crt
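Optionally inspect the result (the subject should be CN=k8s, valid for 5000 days):

openssl x509 -in ca.crt -noout -subject -dates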

Generate the kube-apiserver certificate and private key

[root@centos-test-ip-207-master ca]# cat master_ssl.conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s
IP.1 = 172.18.0.1
IP.2 = 192.168.11.207
[root@centos-test-ip-207-master ca]# openssl genrsa -out apiserver-key.pem 2048
[root@centos-test-ip-207-master ca]# openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=k8s" -config master_ssl.conf
[root@centos-test-ip-207-master ca]# openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile master_ssl.conf

Generate the kube-controller-manager/kube-scheduler client certificate and private key

[root@centos-test-ip-207-master ca]# openssl genrsa -out cs_client.key 2048
[root@centos-test-ip-207-master ca]# openssl req -new -key cs_client.key -subj "/CN=k8s" -out cs_client.csr
[root@centos-test-ip-207-master ca]# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000

Copy the CA to 208 and 209

[root@centos-test-ip-207-master ca]# scp ca.crt ca.key centos-test-ip-208:/etc/kubernetes/ca/
[root@centos-test-ip-207-master ca]# scp ca.crt ca.key centos-test-ip-209:/etc/kubernetes/ca/
Certificate setup on 208

/CN must be the machine's own IP
cd /etc/kubernetes/ca/

[root@centos-test-ip-208 ca]# openssl genrsa -out kubelet_client.key 2048
[root@centos-test-ip-208 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.11.208" -out kubelet_client.csr
[root@centos-test-ip-208 ca]# openssl x509 -req -in kubelet_client.csr  -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
Certificate setup on 209

/CN must be the machine's own IP
cd /etc/kubernetes/ca/

[root@centos-test-ip-209 ca]# openssl genrsa -out kubelet_client.key 2048
[root@centos-test-ip-209 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.11.209" -out kubelet_client.csr
[root@centos-test-ip-209 ca]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
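On each node you can confirm that the client certificate chains to the CA copied over earlier:

openssl verify -CAfile ca.crt kubelet_client.crt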

Install the Kubernetes binaries

207
[root@centos-test-ip-207-master ~]# tar xvf kubernetes-server-linux-amd64.tar.gz -C /opt
[root@centos-test-ip-207-master ~]# cd /opt/kubernetes/server/bin
[root@centos-test-ip-207-master bin]# cp -a `ls | egrep -v "\.tar|_tag"` /usr/bin
[root@centos-test-ip-207-master bin]# mkdir -p /var/log/kubernetes

Configure kube-apiserver

[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver.conf
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure apiserver.conf

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/apiserver.conf
KUBE_API_ARGS="\
    --storage-backend=etcd3 \
    --etcd-servers=http://192.168.11.207:2379,http://192.168.11.208:2379,http://192.168.11.209:2379 \
    --bind-address=0.0.0.0 \
    --secure-port=6443  \
    --service-cluster-ip-range=172.18.0.0/16 \
    --service-node-port-range=1-65535 \
    --kubelet-port=10250 \
    --advertise-address=192.168.11.207 \
    --allow-privileged=false \
    --anonymous-auth=false \
    --client-ca-file=/etc/kubernetes/ca/ca.crt \
    --tls-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
    --tls-cert-file=/etc/kubernetes/ca/apiserver.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,NamespaceExists,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Notes:
# explanations
--etcd-servers # connect to the etcd cluster
--secure-port # serve the secure port 6443
--client-ca-file, --tls-private-key-file, --tls-cert-file # configure the CA and server certificates
--enable-admission-plugins # enable admission control
--anonymous-auth=false # reject anonymous requests (true would accept them); set to false here with the dashboard access in mind

Configure kube-controller-manager

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/kube-controller-config.yaml
apiVersion: v1
kind: Config
users:
- name: controller
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: controller
  name: default-context
current-context: default-context

Configure kube-controller-manager.service

[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure controller-manager.conf

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="\
    --master=https://192.168.11.207:6443 \
    --service-account-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
    --root-ca-file=/etc/kubernetes/ca/ca.crt \
    --cluster-signing-cert-file=/etc/kubernetes/ca/ca.crt \
    --cluster-signing-key-file=/etc/kubernetes/ca/ca.key \
    --kubeconfig=/etc/kubernetes/kube-controller-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Notes:
--master # connect to the master node
--service-account-private-key-file, --root-ca-file, --cluster-signing-cert-file, --cluster-signing-key-file # configure the CA material
--kubeconfig # the config file above

Configure kube-scheduler

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/kube-scheduler-config.yaml
apiVersion: v1
kind: Config
users:
- name: scheduler
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: scheduler
  name: default-context
current-context: default-context
[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler.conf
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/scheduler.conf
KUBE_SCHEDULER_ARGS="\
    --master=https://192.168.11.207:6443 \
    --kubeconfig=/etc/kubernetes/kube-scheduler-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Start the master components

systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
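With all three services up, a quick health check from the master (with no kubeconfig, kubectl in this version defaults to the apiserver's local insecure port 8080; all components should report Healthy):

kubectl get componentstatuses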

Checking logs

journalctl -xeu kube-apiserver --no-pager
journalctl -xeu kube-controller-manager --no-pager
journalctl -xeu kube-scheduler --no-pager
# add -f to follow in real time

Deploy Kubernetes on the nodes

On both nodes

tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /opt
cd /opt/kubernetes/server/bin
cp -a kubectl kubelet kube-proxy /usr/bin/
mkdir -p /var/log/kubernetes
cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# kernel parameters so bridged traffic passes through iptables rules; skip if not needed
sysctl -p /etc/sysctl.d/k8s.conf    # apply (plain sysctl -p only reads /etc/sysctl.conf)

Configure kubelet on 208

[root@centos-test-ip-208 ~]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-208 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-208 ~]#  cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
    --kubeconfig=/etc/kubernetes/kubelet-config.yaml \
    --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
    --hostname-override=192.168.11.208 \
    --network-plugin=cni \
    --cni-conf-dir=/etc/cni/net.d \
    --cni-bin-dir=/opt/cni/bin \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Notes:
--hostname-override # the node name; using the node's own IP is recommended
#--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--pod-infra-container-image # the pod infrastructure (pause) image; the default lives on gcr.io, so use a domestic mirror or a proxy,
or pull it locally and retag it:
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
--kubeconfig # the config file

Configure kube-proxy

[root@centos-test-ip-208 ~]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-208 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-208 ~]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
    --master=https://192.168.11.207:6443 \
    --hostname-override=192.168.11.208 \
    --kubeconfig=/etc/kubernetes/proxy-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Configure kubelet on 209

[root@centos-test-ip-209 ~]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-209 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-209 ~]# cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
    --kubeconfig=/etc/kubernetes/kubelet-config.yaml \
    --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
    --hostname-override=192.168.11.209 \
    --network-plugin=cni \
    --cni-conf-dir=/etc/cni/net.d \
    --cni-bin-dir=/opt/cni/bin \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Notes:
--hostname-override # the node name; using the node's own IP is recommended
#--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--pod-infra-container-image # the pod infrastructure (pause) image; the default lives on gcr.io, so use a domestic mirror or a proxy,
or pull it locally and retag it:
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
--kubeconfig # the config file

Configure kube-proxy

 [root@centos-test-ip-209 ~]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
 [root@centos-test-ip-209 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
 [root@centos-test-ip-209 ~]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
    --master=https://192.168.11.207:6443 \
    --hostname-override=192.168.11.209 \
    --kubeconfig=/etc/kubernetes/proxy-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Notes:
--hostname-override # the node name; must match kubelet: if kubelet sets it, kube-proxy must set it too
--master # connect to the master
--kubeconfig # the config file

Start the node services and check logs # note: swap must be disabled

On both nodes

systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
journalctl -xeu kubelet --no-pager
journalctl -xeu kube-proxy --no-pager
# add -f to follow in real time

Check the nodes from the master

[root@centos-test-ip-207-master ~]# kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
192.168.11.208   Ready     <none>    1d        v1.10.7
192.168.11.209   Ready     <none>    1d        v1.10.7

Cluster test

nginx test manifests (on the master)

[root@centos-test-ip-207-master bin]# cat  nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx-pod
  template:
    metadata:
      labels: 
        name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@centos-test-ip-207-master bin]# cat nginx-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels: 
    name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30081
  selector:
    name: nginx-pod

Create the resources

On the master (207)

kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml

# check pod status

[root@centos-test-ip-207-master bin]# kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-rc-d9kkc   1/1       Running   0          1d        172.18.30.2    192.168.11.209
nginx-rc-l9ctn   1/1       Running   0          1d        172.18.101.2   192.168.11.208

Note: browse to http://<node>:30081/; when the nginx welcome page appears, the setup works.
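The same check from the command line, against either node and the nodePort configured above:

curl http://192.168.11.208:30081/
curl http://192.168.11.209:30081/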

Delete the service and the nginx deployment # if the manifests have problems, delete with these commands and start over

kubectl delete -f nginx-svc.yaml
kubectl delete -f nginx-rc.yaml

Deploy the dashboard UI (master)

On the master (207)

Download the dashboard manifest:
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Edit kubernetes-dashboard.yaml: the default image registry is blocked, so swap the image:

#image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort                                 # added: type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000                            # added: nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
Create the RBAC binding yaml
dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1  
kind: ClusterRoleBinding  
metadata:  
  name: kubernetes-dashboard  
  labels:  
    k8s-app: kubernetes-dashboard  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: cluster-admin  
subjects:  
- kind: ServiceAccount  
  name: kubernetes-dashboard  
  namespace: kube-system

Create and verify

kubectl create -f kubernetes-dashboard.yaml
kubectl create -f dashboard-admin.yaml
[root@centos-test-ip-207-master ~]#  kubectl  get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
default       nginx-rc-d9kkc                          1/1       Running   0          1d        172.18.30.2    192.168.11.209
default       nginx-rc-l9ctn                          1/1       Running   0          1d        172.18.101.2   192.168.11.208
kube-system   kubernetes-dashboard-66c9d98865-qgbgq   1/1       Running   0          20h       172.18.30.9    192.168.11.209

Access # use Firefox; in Chrome the certificate-exception page may not appear

Note: access over HTTPS
Browse directly to https://<node>:<configured nodePort>


The page prompts for a login; use token login:
[root@centos-test-ip-207-master ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token
Name:       default-token-t8hbl
Type:       kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9..(a long string)

Paste this token into the login form.
