Tip: for an RBD client to connect to the Ceph cluster, it first needs read (r) permission on the MONs; to store data on the OSDs it can be granted the * capability, meaning read and write, but that capability should be restricted to the corresponding storage pool.
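For reference, a user with exactly these capabilities could be created as follows (a minimal sketch; the client.test user is assumed to have been created this way in an earlier step of this series):

[root@ceph-admin ~]# ceph auth get-or-create client.test mon 'allow r' osd 'allow * pool=ceph-rbdpool'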
Export the keyring for the client.test user and copy it to the client
[root@ceph-admin ~]# ceph --user test -s
2022-10-04 01:31:24.776 7faddac3e700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.test.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-10-04 01:31:24.776 7faddac3e700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
[errno 2] error connecting to the cluster
[root@ceph-admin ~]# ceph auth get client.test
exported keyring for client.test
[client.test]
        key = AQB0Gztj63xwGhAAq7JFXnK2mQjBfhq0/kB5uA==
        caps mon = "allow r"
        caps osd = "allow * pool=ceph-rbdpool"
[root@ceph-admin ~]# ceph auth get client.test -o /etc/ceph/ceph.client.test.keyring
exported keyring for client.test
[root@ceph-admin ~]# ceph --user test -s
  cluster:
    id:     7fd4a619-9767-4b46-9cee-78b9dfe88f34
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03
    mgr: ceph-mgr01(active), standbys: ceph-mon01, ceph-mgr02
    mds: cephfs-1/1/1 up {0=ceph-mon02=up:active}
    osd: 10 osds: 10 up, 10 in
    rgw: 1 daemon active

  data:
    pools:   10 pools, 464 pgs
    objects: 250 objects, 3.8 KiB
    usage:   10 GiB used, 890 GiB / 900 GiB avail
    pgs:     464 active+clean

[root@ceph-admin ~]#

Tip: a note here — I am using the admin host as the client, and the cluster configuration file is already saved under /etc/ceph/ on this machine. A client host must have both the corresponding authorization keyring file and the cluster configuration file in order to connect to the Ceph cluster. If we can run ceph -s on the client host with the corresponding user and see the cluster status, the keyring and the configuration file are working correctly.
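If the client were a separate host rather than the admin node, the cluster configuration file and the exported keyring would first have to be copied to that host (and the ceph-common package installed there to provide the ceph and rbd commands). A minimal sketch, assuming a hypothetical client host named ceph-client:

[root@ceph-admin ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.test.keyring ceph-client:/etc/ceph/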
3. Map the image on the client
[root@ceph-admin ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a7984

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1050623      524288   83  Linux
/dev/sda2         1050624   104857599    51903488   8e  Linux LVM

Disk /dev/mapper/centos-root: 52.1 GB, 52072284160 bytes, 101703680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@ceph-admin ~]# rbd map --user test ceph-rbdpool/vol01
/dev/rbd0
[root@ceph-admin ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a7984

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1050623      524288   83  Linux
/dev/sda2         1050624   104857599    51903488   8e  Linux LVM

Disk /dev/mapper/centos-root: 52.1 GB, 52072284160 bytes, 101703680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/rbd0: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

[root@ceph-admin ~]#

Tip: with rbd map we specify the user, the storage pool, and the image; this connects to the Ceph cluster and maps the image in that pool as a local disk device.
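Once mapped, /dev/rbd0 behaves like any other local block device. A minimal sketch of putting a filesystem on it and mounting it (the XFS filesystem and the /mnt/rbd-vol01 mount point are illustrative assumptions, not part of the original walkthrough):

[root@ceph-admin ~]# mkfs.xfs /dev/rbd0              # create an XFS filesystem on the mapped RBD device (assumption: XFS)
[root@ceph-admin ~]# mkdir -p /mnt/rbd-vol01         # hypothetical mount point
[root@ceph-admin ~]# mount /dev/rbd0 /mnt/rbd-vol01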
View the mapped image
[root@ceph-admin ~]# rbd showmapped
id pool         image snap device
0  ceph-rbdpool vol01 -    /dev/rbd0
[root@ceph-admin ~]#

Tip: with this manual command-line approach, the mapping disappears as soon as the client reboots. If we want the client to reconnect to the Ceph cluster and map the RBD disk automatically at boot, we also need to write the corresponding command into /etc/rc.d/rc.local and make that file executable.
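A minimal sketch of what that could look like, reusing the map command from above:

[root@ceph-admin ~]# echo 'rbd map --user test ceph-rbdpool/vol01' >> /etc/rc.d/rc.local
[root@ceph-admin ~]# chmod +x /etc/rc.d/rc.local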
Manually unmap the image
[root@ceph-admin ~]# rbd unmap ceph-rbdpool/vol01
[root@ceph-admin ~]# rbd showmapped
[root@ceph-admin ~]#

Resize an image
Command format: rbd resize [--pool <pool>] [--image <image>] --size <size> [--allow-shrink] [--no-progress] <image-spec>
Grow an image: rbd resize [--pool <pool>] [--image <image>] --size <size>
Shrink an image: rbd resize [--pool <pool>] [--image <image>] --size <size> --allow-shrink (shrinking is refused without the --allow-shrink flag)
[root@ceph-admin ~]# rbd create --size 2G ceph-rbdpool/vol02
[root@ceph-admin ~]# rbd ls -p ceph-rbdpool
vol01
vol02
[root@ceph-admin ~]# rbd ls -p ceph-rbdpool -l
NAME   SIZE  PARENT FMT PROT LOCK
vol01  5 GiB        2
vol02  2 GiB        2
[root@ceph-admin ~]# rbd resize --size 10G ceph-rbdpool/vol02
Resizing image: 100% complete...done.
[root@ceph-admin ~]# rbd ls -p ceph-rbdpool -l
NAME   SIZE   PARENT FMT PROT LOCK
vol01   5 GiB        2
vol02  10 GiB        2
[root@ceph-admin ~]# rbd resize --size 8G ceph-rbdpool/vol02
Resizing image: 0% complete...failed.
rbd: shrinking an image is only allowed with the --allow-shrink flag
[root@ceph-admin ~]# rbd resize --size 8G ceph-rbdpool/vol02 --allow-shrink
Resizing image: 100% complete...done.
[root@ceph-admin ~]# rbd ls -p ceph-rbdpool -l
NAME   SIZE  PARENT FMT PROT LOCK
vol01  5 GiB        2
vol02  8 GiB        2
[root@ceph-admin ~]#
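Note that rbd resize only changes the size of the image (the block device); any filesystem already created on it has to be grown separately after the image is enlarged. A minimal sketch, assuming the image is mapped and carries an XFS filesystem mounted at a hypothetical /mnt/rbd-vol02:

[root@ceph-admin ~]# xfs_growfs /mnt/rbd-vol02       # grow the XFS filesystem to fill the enlarged image (assumption: XFS)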