Distributed Storage Systems: Ceph Cluster Deployment (Part 7)

Tip: osd create can also be given the data device, the journal device, the block-db device, the BlueStore WAL device, and so on.
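For illustration only, a hedged sketch of what such a call could look like when the DB and WAL are placed on separate devices (the flags below are from ceph-deploy 2.0.1's osd create help; /dev/sdc and /dev/sdd are hypothetical devices that are not part of this lab setup):

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph-mon01 --data /dev/sdb --block-db /dev/sdc --block-wal /dev/sdd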
Add the /dev/sdb disk on ceph-mon01 to the cluster as an OSD
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create ceph-mon01 --data /dev/sdb
Check the cluster status
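The status can be checked with ceph -s, for example on ceph-mon01 (where this article runs ceph commands as root later on), or on the admin host if the admin keyring has been copied there:

[root@ceph-mon01 ~]# ceph -s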

[Figure: cluster status output]
Tip: You can see that the cluster now has one OSD up and about 80G of storage, which means the OSD we just added was created successfully. The OSDs on the remaining hosts are added the same way: zap (wipe) the disk first, then add it as an OSD, as sketched below.
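A hedged sketch of that two-step procedure for each remaining host, where <host> is a placeholder for the actual host name and the device names depend on your hardware:

[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap <host> /dev/sdb
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create <host> --data /dev/sdb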
List the OSD information on a given host
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd list ceph-mon01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list ceph-mon01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f01148f9128>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-mon01']
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f011493d9b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-mon01...
[ceph-mon01][DEBUG ] find the location of an executable
[ceph-mon01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ] ====== osd.0 =======
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ]   [block]       /dev/ceph-56cdba71-749f-4c01-8364-f5bdad0b8f8d/osd-block-538baff0-ed25-4e3f-9ed7-f228a7ca0086
[ceph-mon01][DEBUG ]
[ceph-mon01][DEBUG ]       block device              /dev/ceph-56cdba71-749f-4c01-8364-f5bdad0b8f8d/osd-block-538baff0-ed25-4e3f-9ed7-f228a7ca0086
[ceph-mon01][DEBUG ]       block uuid                40cRBg-53ZO-Dbho-wWo6-gNcJ-ZJJi-eZC6Vt
[ceph-mon01][DEBUG ]       cephx lockbox secret
[ceph-mon01][DEBUG ]       cluster fsid              7fd4a619-9767-4b46-9cee-78b9dfe88f34
[ceph-mon01][DEBUG ]       cluster name              ceph
[ceph-mon01][DEBUG ]       crush device class        None
[ceph-mon01][DEBUG ]       encrypted                 0
[ceph-mon01][DEBUG ]       osd fsid                  538baff0-ed25-4e3f-9ed7-f228a7ca0086
[ceph-mon01][DEBUG ]       osd id                    0
[ceph-mon01][DEBUG ]       type                      block
[ceph-mon01][DEBUG ]       vdo                       0
[ceph-mon01][DEBUG ]       devices                   /dev/sdb
[cephadm@ceph-admin ceph-cluster]$
Tip: At this point, all of the RADOS cluster components have been deployed.
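As the "Running command: sudo /usr/sbin/ceph-volume lvm list" line above shows, ceph-deploy osd list simply wraps ceph-volume, so the same details can be read directly on the OSD node:

[root@ceph-mon01 ~]# ceph-volume lvm list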
Managing OSDs: viewing OSD information with the ceph command
1. View OSD status
[root@ceph-mon01 ~]# ceph osd stat
10 osds: 10 up, 10 in; epoch: e56
Tip: "osds" is the total number of OSDs in the cluster; "up" is the number of OSDs that are alive and online; "in" is the number of OSDs that are part of the cluster (participating in data placement).
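To see the same up/in state broken down per OSD, along with each OSD's host and CRUSH weight, ceph osd tree can be used (output omitted here):

[root@ceph-mon01 ~]# ceph osd tree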
2. View OSD IDs
[root@ceph-mon01 ~]# ceph osd ls
0
1
2
3
4
5
6
7
8
9
[root@ceph-mon01 ~]#
3. View the OSD map
[root@ceph-mon01 ~]# ceph osd dump
epoch 56
fsid 7fd4a619-9767-4b46-9cee-78b9dfe88f34
created 2022-09-24 00:36:13.639715
modified 2022-09-24 02:29:38.086464
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 25
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release mimic
pool 1 'testpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 42 flags hashpspool stripe_width 0
max_osd 10
osd.0 up   in  weight 1 up_from 55 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.0.71:6800/52355 172.16.30.71:6800/52355 172.16.30.71:6801/52355 192.168.0.71:6801/52355 exists,up bf3649af-e3f4-41a2-a5ce-8f1a316d344e
osd.1 up   in  weight 1 up_from 9 up_thru 42 down_at 0 last_clean_interval [0,0) 192.168.0.71:6802/49913 172.16.30.71:6802/49913 172.16.30.71:6803/49913 192.168.0.71:6803/49913 exists,up 7293a12a-7b4e-4c86-82dc-0acc15c3349e
osd.2 up   in  weight 1 up_from 13 up_thru 42 down_at 0 last_clean_interval [0,0) 192.168.0.72:6800/48196 172.16.30.72:6800/48196 172.16.30.72:6801/48196 192.168.0.72:6801/48196 exists,up 96c437c5-8e82-4486-910f-9e98d195e4f9
osd.3 up   in  weight 1 up_from 17 up_thru 55 down_at 0 last_clean_interval [0,0) 192.168.0.72:6802/48679 172.16.30.72:6802/48679 172.16.30.72:6803/48679 192.168.0.72:6803/48679 exists,up 4659d2a9-09c7-49d5-bce0-4d2e65f5198c
osd.4 up   in  weight 1 up_from 21 up_thru 55 down_at 0 last_clean_interval [0,0) 192.168.0.73:6800/48122 172.16.30.73:6800/48122 172.16.30.73:6801/48122 192.168.0.73:6801/48122 exists,up de019aa8-3d2a-4079-a99e-ec2da2d4edb9
osd.5 up   in  weight 1 up_from 25 up_thru 55 down_at 0 last_clean_interval [0,0) 192.168.0.73:6802/48601 172.16.30.73:6802/48601 172.16.30.73:6803/48601 192.168.0.73:6803/48601 exists,up 119c8748-af3b-4ac4-ac74-6171c90c82cc
osd.6 up   in  weight 1 up_from 29 up_thru 55 down_at 0 last_clean_interval [0,0) 192.168.0.74:6801/58248 172.16.30.74:6800/58248 172.16.30.74:6801/58248 192.168.0.74:6802/58248 exists,up 08d8dd8b-cdfe-4338-83c0-b1e2b5c2a799
osd.7 up   in  weight 1 up_from 33 up_thru 55 down_at 0 last_clean_interval [0,0) 192.168.0.74:6803/58727 172.16.30.74:6802/58727 172.16.30.74:6803/58727 192.168.0.74:6804/58727 exists,up 9de6cbd0-bb1b-49e9-835c-3e714a867393
osd.8 up   in  weight 1 up_from 37 up_thru 42 down_at 0 last_clean_interval [0,0) 192.168.0.75:6800/48268 172.16.30.75:6800/48268 172.16.30.75:6801/48268 192.168.0.75:6801/48268 exists,up 63aaa0b8-4e52-4d74-82a8-fbbe7b48c837
osd.9 up   in  weight 1 up_from 41 up_thru 42 down_at 0 last_clean_interval [0,0) 192.168.0.75:6802/48751 172.16.30.75:6802/48751 172.16.30.75:6803/48751 192.168.0.75:6803/48751 exists,up 6bf3204a-b64c-4808-a782-434a93ac578c
[root@ceph-mon01 ~]#
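When only a single OSD is of interest, its dump entry, location, and runtime metadata can be pulled out individually; osd.0 below is just an example ID:

[root@ceph-mon01 ~]# ceph osd dump | grep -w osd.0
[root@ceph-mon01 ~]# ceph osd find 0
[root@ceph-mon01 ~]# ceph osd metadata 0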
