Distributed Storage Systems: Ceph Cluster Deployment (Part 6)


[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap --help
usage: ceph-deploy disk zap [-h] [--debug] [HOST] DISK [DISK ...]

positional arguments:
  HOST        Remote HOST(s) to connect
  DISK        Disk(s) to zap

optional arguments:
  -h, --help  show this help message and exit
  --debug     Enable debug mode on remote ceph-volume calls
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy disk zap ceph-mon01 /dev/sdb /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap ceph-mon01 /dev/sdb /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f35f8500f80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph-mon01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f35f84d1a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb', '/dev/sdc']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph-mon01
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph-mon01][DEBUG ] zeroing last few blocks of device
[ceph-mon01][DEBUG ] find the location of an executable
[ceph-mon01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph-mon01][WARNIN] --> Zapping: /dev/sdb
[ceph-mon01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-mon01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[ceph-mon01][WARNIN]  stderr: 10+0 records in
[ceph-mon01][WARNIN] 10+0 records out
[ceph-mon01][WARNIN]  stderr: 10485760 bytes (10 MB) copied, 0.0721997 s, 145 MB/s
[ceph-mon01][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on ceph-mon01
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph-mon01][DEBUG ] zeroing last few blocks of device
[ceph-mon01][DEBUG ] find the location of an executable
[ceph-mon01][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdc
[ceph-mon01][WARNIN] --> Zapping: /dev/sdc
[ceph-mon01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-mon01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[ceph-mon01][WARNIN]  stderr: 10+0 records in
[ceph-mon01][WARNIN] 10+0 records out
[ceph-mon01][WARNIN] 10485760 bytes (10 MB) copied
[ceph-mon01][WARNIN]  stderr: , 0.0849861 s, 123 MB/s
[ceph-mon01][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
[cephadm@ceph-admin ceph-cluster]$
Tip: when zapping disks, the command must be followed by the corresponding host and the disk(s) to wipe; if a device previously held data, you may need to run "ceph-volume lvm zap --destroy {DEVICE}" as root on the corresponding node.
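For the destructive variant mentioned in the tip, a minimal sketch run directly on the OSD node (not through ceph-deploy), assuming the same example disks /dev/sdb and /dev/sdc used above:

# a minimal sketch; run as root (or via sudo) on the OSD node itself
# --destroy also removes any existing LVM metadata and partitions on the device
sudo ceph-volume lvm zap --destroy /dev/sdb
sudo ceph-volume lvm zap --destroy /dev/sdc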
Adding OSDs
View the ceph-deploy osd help
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd --help
usage: ceph-deploy osd [-h] {list,create} ...

Create OSDs from a data disk on a remote host:

    ceph-deploy osd create {node} --data /path/to/device

For bluestore, optional devices can be used::

    ceph-deploy osd create {node} --data /path/to/data --block-db /path/to/db-device
    ceph-deploy osd create {node} --data /path/to/data --block-wal /path/to/wal-device
    ceph-deploy osd create {node} --data /path/to/data --block-db /path/to/db-device --block-wal /path/to/wal-device

For filestore, the journal must be specified, as well as the objectstore::

    ceph-deploy osd create {node} --filestore --data /path/to/data --journal /path/to/journal

For data devices, it can be an existing logical volume in the format of:
vg/lv, or a device. For other OSD components like wal, db, and journal, it
can be logical volume (in vg/lv format) or it must be a GPT partition.

positional arguments:
  {list,create}
    list         List OSD info from remote host(s)
    create       Create new Ceph OSD daemon by preparing and activating a
                 device

optional arguments:
  -h, --help     show this help message and exit
[cephadm@ceph-admin ceph-cluster]$
Tip: ceph-deploy osd has two subcommands: list lists the OSDs on the remote host(s), and create creates a new Ceph OSD daemon on a device.
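As a quick check before creating anything, the list subcommand shown in the help can be used to see which OSDs (if any) already exist on a node; a minimal sketch against the example node used above:

# a minimal sketch: list OSD info on the example node ceph-mon01
ceph-deploy osd list ceph-mon01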
View the ceph-deploy osd create help
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy osd create --help
usage: ceph-deploy osd create [-h] [--data DATA] [--journal JOURNAL]
                              [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt]
                              [--dmcrypt-key-dir KEYDIR] [--filestore]
                              [--bluestore] [--block-db BLOCK_DB]
                              [--block-wal BLOCK_WAL] [--debug]
                              [HOST]

positional arguments:
  HOST                  Remote host to connect

optional arguments:
  -h, --help            show this help message and exit
  --data DATA           The OSD data logical volume (vg/lv) or absolute path
                        to device
  --journal JOURNAL     Logical Volume (vg/lv) or path to GPT partition
  --zap-disk            DEPRECATED - cannot zap when creating an OSD
  --fs-type FS_TYPE     filesystem to use to format DEVICE (xfs, btrfs)
  --dmcrypt             use dm-crypt on DEVICE
  --dmcrypt-key-dir KEYDIR
                        directory where dm-crypt keys are stored
  --filestore           filestore objectstore
  --bluestore           bluestore objectstore
  --block-db BLOCK_DB   bluestore block.db path
  --block-wal BLOCK_WAL
                        bluestore block.wal path
  --debug               Enable debug mode on remote ceph-volume calls
[cephadm@ceph-admin ceph-cluster]$
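Following the usage shown in the help, a minimal sketch of turning the two zapped example disks on ceph-mon01 into OSDs; this assumes the default bluestore objectstore, so --bluestore is omitted:

# a minimal sketch, run from the admin node in the ceph-cluster working directory
# each invocation prepares and activates one OSD on the named data device
ceph-deploy osd create ceph-mon01 --data /dev/sdb
ceph-deploy osd create ceph-mon01 --data /dev/sdc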
