What are the daily O&M operations for a Ceph cluster in Docker

This article shares the day-to-day operations and maintenance tasks for a Ceph cluster running in Docker. The editor found it quite practical, so it is shared here as a reference; let's go through it together.

成都創(chuàng)新互聯(lián)2013年開創(chuàng)至今,是專業(yè)互聯(lián)網(wǎng)技術(shù)服務(wù)公司,擁有項(xiàng)目成都網(wǎng)站建設(shè)、做網(wǎng)站網(wǎng)站策劃,項(xiàng)目實(shí)施與項(xiàng)目整合能力。我們以讓每一個(gè)夢(mèng)想脫穎而出為使命,1280元丹鳳做網(wǎng)站,已為上家服務(wù),為丹鳳各地企業(yè)和個(gè)人服務(wù),聯(lián)系電話:028-86922220

View all Ceph daemons

[root@k8s-node1 ceph]# systemctl list-unit-files |grep ceph
ceph-disk@.service                            static  
ceph-mds@.service                             disabled
ceph-mgr@.service                             disabled
ceph-mon@.service                             enabled 
ceph-osd@.service                             enabled 
ceph-radosgw@.service                         disabled
ceph-mds.target                               enabled 
ceph-mgr.target                               enabled 
ceph-mon.target                               enabled 
ceph-osd.target                               enabled 
ceph-radosgw.target                           enabled 
ceph.target                                   enabled

Start daemons of a particular type on a Ceph node

systemctl start ceph-osd.target
systemctl start ceph-mon.target
systemctl start ceph-mds.target
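
The same targets accept the other standard systemctl verbs, and ceph.target (listed above) groups every Ceph daemon on the node. A minimal sketch, acting on the local node only:

# stop all OSD daemons on this node
systemctl stop ceph-osd.target
# restart all mon daemons on this node
systemctl restart ceph-mon.target
# restart every Ceph daemon on this node
systemctl restart ceph.target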

ceph 節(jié)點(diǎn)上啟動(dòng)特定的守護(hù)進(jìn)程實(shí)例

systemctl start ceph-osd@{id}
systemctl start ceph-mon@{hostname}
systemctl start ceph-mds@{hostname}
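
As a concrete illustration, {id} is the numeric OSD id and {hostname} is the node's short name; the values below are simply the ones visible in this cluster, so substitute your own:

# start OSD 0 on the node that hosts it
systemctl start ceph-osd@0
# start the mon instance named after this host
systemctl start ceph-mon@k8s-node1
# check whether a particular instance is running
systemctl status ceph-osd@0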

mon 監(jiān)控狀態(tài)檢查

[root@k8s-node1 ceph]# ceph -s
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13640: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35913 MB used, 21812 MB / 57726 MB avail
                  64 active+clean
[root@k8s-node1 ceph]# ceph 
ceph> status
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13670: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35915 MB used, 21810 MB / 57726 MB avail
                  64 active+clean
ceph> health
HEALTH_OK
ceph> mon_status
{"name":"k8s-node1","rank":0,"state":"leader","election_epoch":26,"quorum":[0,1,2],"features":{"required_con":"9025616074522624","required_mon":["kraken"],"quorum_con":"1152921504336314367","quorum_mon":["kraken"]},"outside_quorum":[],"extra_probe_peers":["172.16.22.202:6789\/0","172.16.22.203:6789\/0"],"sync_provider":[],"monmap":{"epoch":4,"fsid":"2e6519d9-b733-446f-8a14-8622796f83ef","modified":"2018-10-28 21:30:09.197608","created":"2018-10-28 09:49:11.509071","features":{"persistent":["kraken"],"optional":[]},"mons":[{"rank":0,"name":"k8s-node1","addr":"172.16.22.201:6789\/0","public_addr":"172.16.22.201:6789\/0"},{"rank":1,"name":"k8s-node2","addr":"172.16.22.202:6789\/0","public_addr":"172.16.22.202:6789\/0"},{"rank":2,"name":"k8s-node3","addr":"172.16.22.203:6789\/0","public_addr":"172.16.22.203:6789\/0"}]}}

Ceph logging

Ceph logs are saved by default at /var/log/ceph/ceph.log on each node. You can use ceph -w to watch the cluster log in real time.

哪個(gè)節(jié)點(diǎn)報(bào)錯(cuò)了,就登錄到哪個(gè)節(jié)點(diǎn)上用下面的命令看日志。

[root@k8s-node1 ceph]# ceph -w
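
Alternatively, you can follow the log file itself on the affected node; a minimal sketch using the default log path mentioned above:

# follow the cluster log on the local node
tail -f /var/log/ceph/ceph.log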

The Ceph mons also continuously run various checks on their own state; when a check fails, they write that information to the cluster log.

[root@k8s-node1 ceph]# ceph mon stat
e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}, election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
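
For the full monitor map, including the epoch and each mon's address, ceph mon dump works as well:

# dump the current monmap
ceph mon dump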

Check the OSDs

[root@k8s-node1 ceph]# ceph osd stat
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
[root@k8s-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.05516 root default                                         
-2 0.01839     host k8s-node1                                   
 0 0.01839         osd.0           up  1.00000          1.00000 
-3 0.01839     host k8s-node2                                   
 1 0.01839         osd.1           up  1.00000          1.00000 
-4 0.01839     host k8s-node3                                   
 2 0.01839         osd.2           up  1.00000          1.00000
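
To see per-OSD utilization alongside the up/in state shown above, ceph osd df and ceph osd dump give a more detailed breakdown; a brief sketch:

# per-OSD usage, weight and PG count
ceph osd df
# full osdmap: flags, pools and per-OSD addresses
ceph osd dump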

Check the size and available capacity of the pools

[root@k8s-node1 ceph]#  ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    57726M     21811M       35914M         62.21 
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd      0         0         0         5817M           0
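
For more per-pool detail than the summary above, ceph df detail and rados df report object counts and I/O statistics; a brief sketch:

# per-pool detail (quotas, dirty objects, raw used)
ceph df detail
# per-pool object counts and read/write statistics at the rados layer
rados df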

Thank you for reading! That is all for "What are the daily O&M operations for a Ceph cluster in Docker". Hopefully the content above is of some help and lets you learn a bit more. If you found the article useful, feel free to share it so more people can see it!
