Distributed Storage Systems: Getting Ceph Cluster Status and Notes on the Ceph Configuration File

In the previous post we covered enabling Ceph's access interfaces; see https://www.cnblogs.com/qiuhom-1874/p/16727620.html for a refresher. This time we look at how to get the status of a Ceph cluster, and then walk through the Ceph configuration file.
Common commands for getting Ceph cluster status
1. ceph -s: prints the overall status of the Ceph cluster.

[Screenshot: ceph -s output]
Tip: ceph -s prints three kinds of information. The first is cluster information, such as the cluster id and its health status. The second is service information: how many mon, mgr and mds daemons the cluster runs, how many osd and rgw daemons, and what state each of these services is in; together this is the cluster's running status, and it lets us see the cluster's current condition at a glance. The third is data-storage information, such as the number of pools and PGs; the usage line shows used, remaining and total capacity. Note that the total disk size shown here is not the amount of object data the cluster can hold: every object is stored as multiple replicas, so the real usable capacity has to be worked out from the replica count. By default the pools we create are replicated pools with three replicas (one primary and two secondaries), i.e. every object is stored three times, so only about one third of the total raw space is actually available for object data.
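As a rough sketch of what that looks like on a Mimic cluster (the daemon names and the mon/mgr/mds/rgw counts below are placeholders, not taken from the screenshot; the cluster id and the data figures are copied from the outputs shown later in this post):

  cluster:
    id:     7fd4a619-9767-4b46-9cee-78b9dfe88f34
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum <mon01>,<mon02>,<mon03>
    mgr: <mgr01>(active), standbys: <mgr02>
    mds: <fsname>-1/1/1 up  {0=<mds01>=up:active}
    osd: 10 osds: 10 up, 10 in
    rgw: 1 daemon active

  data:
    pools:   8 pools, 304 pgs
    objects: 214 objects, 3.8 KiB
    usage:   10 GiB used, 890 GiB / 900 GiB avail
    pgs:     304 active+clean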
Getting the cluster's real-time status information
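A few related commands give the same information at different granularity or continuously (listed here as a sketch, without captured output):

ceph health          # just the health verdict: HEALTH_OK / HEALTH_WARN / HEALTH_ERR
ceph health detail   # expands each warning or error with the daemons, PGs or pools it affects
ceph -w              # prints the same summary as ceph -s, then keeps streaming cluster events until interrupted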
2. Get the PG status
[cephadm@ceph-admin ceph-cluster]$ ceph pg stat
304 pgs: 304 active+clean; 3.8 KiB data, 10 GiB used, 890 GiB / 900 GiB avail
[cephadm@ceph-admin ceph-cluster]$
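When the one-line summary reports PGs that are not active+clean, the following commands drill down to individual PGs (a sketch; output not captured here):

ceph pg dump pgs_brief        # one line per PG: pgid, state, up/acting OSD sets
ceph pg ls-by-pool rbdpool    # list only the PGs of one pool (rbdpool is one of the pools in this cluster)
ceph pg 2.0 query             # full JSON detail for a single PG; pgid 2.0 is just an illustrative example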
3. Get the pool status

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool stats
pool testpool id 1
  nothing is going on

pool rbdpool id 2
  nothing is going on

pool .rgw.root id 3
  nothing is going on

pool default.rgw.control id 4
  nothing is going on

pool default.rgw.meta id 5
  nothing is going on

pool default.rgw.log id 6
  nothing is going on

pool cephfs-metadatpool id 7
  nothing is going on

pool cephfs-datapool id 8
  nothing is going on

[cephadm@ceph-admin ceph-cluster]$

Tip: if no pool name is given after the command, it reports the status of every pool.
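For example, naming a pool restricts the report to that pool; on an idle cluster the output would look like the matching per-pool lines above (a sketch, not captured output):

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool stats rbdpool
pool rbdpool id 2
  nothing is going on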
4. Get pool sizes and space usage
[cephadm@ceph-admin ceph-cluster]$ ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    900 GiB     890 GiB     10 GiB       1.13
POOLS:
    NAME                    ID     USED        %USED     MAX AVAIL     OBJECTS
    testpool                1      0 B         0         281 GiB       0
    rbdpool                 2      389 B       0         281 GiB       5
    .rgw.root               3      1.1 KiB     0         281 GiB       4
    default.rgw.control     4      0 B         0         281 GiB       8
    default.rgw.meta        5      0 B         0         281 GiB       0
    default.rgw.log         6      0 B         0         281 GiB       175
    cephfs-metadatpool      7      2.2 KiB     0         281 GiB       22
    cephfs-datapool         8      0 B         0         281 GiB       0
[cephadm@ceph-admin ceph-cluster]$

Tip: the output of ceph df has two sections. The first, GLOBAL, shows cluster-wide space usage: SIZE is the total capacity, AVAIL the remaining capacity, RAW USED the raw space already consumed, and %RAW USED the share of total raw space that is used. The second, POOLS, shows per-pool usage: MAX AVAIL is the most data the pool can still store, and OBJECTS is the number of objects in the pool.
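To tie MAX AVAIL back to the raw figures: in a size=3 replicated pool every byte is stored three times, and Ceph keeps headroom below full_ratio (0.95, as shown by ceph osd dump later in this post), so a rough back-of-the-envelope estimate, assuming evenly filled OSDs, is:

# Rough estimate of MAX AVAIL for a 3-replica pool (the real calculation is done
# per OSD against the most-full OSD, so this is only an approximation):
#   raw AVAIL * full_ratio / replica count
#   890 GiB   * 0.95       / 3            = ~281 GiB, matching MAX AVAIL above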
Get detailed storage usage information
[cephadm@ceph-admin ceph-cluster]$ ceph df detail
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED     OBJECTS
    900 GiB     890 GiB     10 GiB       1.13          214
POOLS:
    NAME                    ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS     DIRTY     READ        WRITE       RAW USED
    testpool                1      N/A               N/A             0 B         0         281 GiB       0           0         2 B         2 B         0 B
    rbdpool                 2      N/A               N/A             389 B       0         281 GiB       5           5         75 B        19 B        1.1 KiB
    .rgw.root               3      N/A               N/A             1.1 KiB     0         281 GiB       4           4         66 B        4 B         3.4 KiB
    default.rgw.control     4      N/A               N/A             0 B         0         281 GiB       8           8         0 B         0 B         0 B
    default.rgw.meta        5      N/A               N/A             0 B         0         281 GiB       0           0         0 B         0 B         0 B
    default.rgw.log         6      N/A               N/A             0 B         0         281 GiB       175         175       7.2 KiB     4.8 KiB     0 B
    cephfs-metadatpool      7      N/A               N/A             2.2 KiB     0         281 GiB       22          22        0 B         45 B        6.7 KiB
    cephfs-datapool         8      N/A               N/A             0 B         0         281 GiB       0           0         0 B         0 B         0 B
[cephadm@ceph-admin ceph-cluster]$
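Cluster- and pool-level numbers can hide an unevenly filled OSD; the per-OSD view of the same space figures comes from the following commands (a sketch; output not captured here):

ceph osd df           # per-OSD size, raw use, %use, variance and PG count
ceph osd df tree      # the same figures arranged along the CRUSH hierarchy (root/host/osd)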
5. Check the status of OSDs and MONs

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
10 osds: 10 up, 10 in; epoch: e99
[cephadm@ceph-admin ceph-cluster]$ ceph osd dump
epoch 99
fsid 7fd4a619-9767-4b46-9cee-78b9dfe88f34
created 2022-09-24 00:36:13.639715
modified 2022-09-25 12:33:15.111283
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 25
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release mimic
pool 1 'testpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 42 flags hashpspool stripe_width 0
pool 2 'rbdpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 81 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
        removed_snaps [1~3]
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 84 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 87 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 89 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 91 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 7 'cephfs-metadatpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 99 flags hashpspool stripe_width 0 application cephfs
pool 8 'cephfs-datapool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 99 flags hashpspool stripe_width 0 application cephfs
max_osd 10
osd.0 up   in  weight 1 up_from 67 up_thru 96 down_at 66 last_clean_interval [64,65) 192.168.0.71:6802/1361 172.16.30.71:6802/1361 172.16.30.71:6803/1361 192.168.0.71:6803/1361 exists,up bf3649af-e3f4-41a2-a5ce-8f1a316d344e
osd.1 up   in  weight 1 up_from 68 up_thru 96 down_at 66 last_clean_interval [64,65) 192.168.0.71:6800/1346 172.16.30.71:6800/1346 172.16.30.71:6801/1346 192.168.0.71:6801/1346 exists,up 7293a12a-7b4e-4c86-82dc-0acc15c3349e
osd.2 up   in  weight 1 up_from 67 up_thru 96 down_at 66 last_clean_interval [60,65) 192.168.0.72:6800/1389 172.16.30.72:6800/1389 172.16.30.72:6801/1389 192.168.0.72:6801/1389 exists,up 96c437c5-8e82-4486-910f-9e98d195e4f9
osd.3 up   in  weight 1 up_from 67 up_thru 96 down_at 66 last_clean_interval [60,65) 192.168.0.72:6802/1406 172.16.30.72:6802/1406 172.16.30.72:6803/1406 192.168.0.72:6803/1406 exists,up 4659d2a9-09c7-49d5-bce0-4d2e65f5198c
osd.4 up   in  weight 1 up_from 71 up_thru 96 down_at 68 last_clean_interval [59,66) 192.168.0.73:6802/1332 172.16.30.73:6802/1332 172.16.30.73:6803/1332 192.168.0.73:6803/1332 exists,up de019aa8-3d2a-4079-a99e-ec2da2d4edb9
osd.5 up   in  weight 1 up_from 71 up_thru 96 down_at 68 last_clean_interval [58,66) 192.168.0.73:6800/1333 172.16.30.73:6800/1333 172.16.30.73:6801/1333 192.168.0.73:6801/1333 exists,up 119c8748-af3b-4ac4-ac74-6171c90c82cc
osd.6 up   in  weight 1 up_from 69 up_thru 96 down_at 68 last_clean_interval [59,66) 192.168.0.74:6800/1306 172.16.30.74:6800/1306 172.16.30.74:6801/1306 192.168.0.74:6801/1306 exists,up 08d8dd8b-cdfe-4338-83c0-b1e2b5c2a799
osd.7 up   in  weight 1 up_from 69 up_thru 96 down_at 68 last_clean_interval [60,65) 192.168.0.74:6802/1301 172.16.30.74:6802/1301 172.16.30.74:6803/1301 192.168.0.74:6803/1301 exists,up 9de6cbd0-bb1b-49e9-835c-3e714a867393
osd.8 up   in  weight 1 up_from 73 up_thru 96 down_at 66 last_clean_interval [59,65) 192.168.0.75:6800/1565 172.16.30.75:6800/1565 172.16.30.75:6801/1565 192.168.0.75:6801/1565 exists,up 63aaa0b8-4e52-4d74-82a8-fbbe7b48c837
osd.9 up   in  weight 1 up_from 73 up_thru 96 down_at 66 last_clean_interval [59,65) 192.168.0.75:6802/1558 172.16.30.75:6802/1558 172.16.30.75:6803/1558 192.168.0.75:6803/1558 exists,up 6bf3204a-b64c-4808-a782-434a93ac578c
[cephadm@ceph-admin ceph-cluster]$
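This step's heading also mentions the MONs; their counterparts to ceph osd stat / ceph osd dump are listed below, together with ceph osd tree for the OSD topology (a sketch; output not captured here):

ceph mon stat         # one-line summary: monitor count, quorum members, election epoch
ceph mon dump         # the monmap: fsid, monmap epoch and each monitor's address
ceph quorum_status    # JSON view of the current quorum and its leader
ceph osd tree         # OSDs laid out by CRUSH hierarchy, with up/down state and weight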
