Distributed Storage Systems: Getting Ceph Cluster Status and Explaining the ceph Configuration File (Part 2)

Besides the commands above for checking OSD status, we can also view OSDs by their position in the CRUSH map:
[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID  CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
 -1       0.87891 root default
 -9       0.17578     host ceph-mgr01
  6   hdd 0.07809         osd.6            up  1.00000 1.00000
  7   hdd 0.09769         osd.7            up  1.00000 1.00000
 -3       0.17578     host ceph-mon01
  0   hdd 0.07809         osd.0            up  1.00000 1.00000
  1   hdd 0.09769         osd.1            up  1.00000 1.00000
 -5       0.17578     host ceph-mon02
  2   hdd 0.07809         osd.2            up  1.00000 1.00000
  3   hdd 0.09769         osd.3            up  1.00000 1.00000
 -7       0.17578     host ceph-mon03
  4   hdd 0.07809         osd.4            up  1.00000 1.00000
  5   hdd 0.09769         osd.5            up  1.00000 1.00000
-11       0.17578     host node01
  8   hdd 0.07809         osd.8            up  1.00000 1.00000
  9   hdd 0.09769         osd.9            up  1.00000 1.00000
[cephadm@ceph-admin ceph-cluster]$
Tip: from the output above we can see which OSDs are located on each host, along with each OSD's weight.
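If you only need to locate one OSD rather than read the whole tree, a couple of related commands help; a minimal sketch, assuming a Luminous or later release (osd.6 is just an example ID taken from the tree above):

# Show only OSDs in a given state (state filters were added in Luminous):
ceph osd tree down

# Report the CRUSH location of a single OSD as JSON:
ceph osd find 6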
Check mon node status
[cephadm@ceph-admin ceph-cluster]$ ceph mon stat
e3: 3 mons at {ceph-mon01=192.168.0.71:6789/0,ceph-mon02=192.168.0.72:6789/0,ceph-mon03=192.168.0.73:6789/0}, election epoch 18, leader 0 ceph-mon01, quorum 0,1,2 ceph-mon01,ceph-mon02,ceph-mon03
[cephadm@ceph-admin ceph-cluster]$ ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 7fd4a619-9767-4b46-9cee-78b9dfe88f34
last_changed 2022-09-24 01:56:24.196075
created 2022-09-24 00:36:13.210155
0: 192.168.0.71:6789/0 mon.ceph-mon01
1: 192.168.0.72:6789/0 mon.ceph-mon02
2: 192.168.0.73:6789/0 mon.ceph-mon03
[cephadm@ceph-admin ceph-cluster]$
Tip: both commands show how many mon nodes the cluster has, along with each node's IP address, listening port, and mon rank; on top of that, ceph mon stat also shows the leader's rank and the election epoch.
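Like most ceph subcommands, these accept a --format option, which is convenient for scripting; a small sketch, assuming a release recent enough to support the json-pretty formatter:

# Machine-readable monmap:
ceph mon dump --format json-pretty

# The stat subcommand should accept the same option:
ceph mon stat --format json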
Check quorum status
[cephadm@ceph-admin ceph-cluster]$ ceph quorum_status
{"election_epoch":18,"quorum":[0,1,2],"quorum_names":["ceph-mon01","ceph-mon02","ceph-mon03"],"quorum_leader_name":"ceph-mon01","monmap":{"epoch":3,"fsid":"7fd4a619-9767-4b46-9cee-78b9dfe88f34","modified":"2022-09-24 01:56:24.196075","created":"2022-09-24 00:36:13.210155","features":{"persistent":["kraken","luminous","mimic","osdmap-prune"],"optional":[]},"mons":[{"rank":0,"name":"ceph-mon01","addr":"192.168.0.71:6789/0","public_addr":"192.168.0.71:6789/0"},{"rank":1,"name":"ceph-mon02","addr":"192.168.0.72:6789/0","public_addr":"192.168.0.72:6789/0"},{"rank":2,"name":"ceph-mon03","addr":"192.168.0.73:6789/0","public_addr":"192.168.0.73:6789/0"}]}}
[cephadm@ceph-admin ceph-cluster]$
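Because quorum_status already emits JSON, it pairs naturally with jq when you only need one field; a minimal sketch, assuming jq is installed on the admin node:

# Pretty-print the quorum report:
ceph quorum_status --format json-pretty

# Extract just the leader name and the quorum members:
ceph quorum_status | jq -r '.quorum_leader_name'
ceph quorum_status | jq -r '.quorum_names[]'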
Query cluster status via the admin socket
Ceph's admin socket interface is commonly used to query a daemon directly. The sockets are saved under /var/run/ceph by default. This interface cannot be used remotely; it can only be used on the node where the corresponding daemon runs.
Command format: ceph --admin-daemon /var/run/ceph/socket-name <command>; for example, to get help information: ceph --admin-daemon /var/run/ceph/socket-name help
[root@ceph-mon01 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
{
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "dump_blacklist": "dump blacklisted clients and times",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
    "dump_historic_slow_ops": "show slowest recent ops",
    "dump_mempools": "get mempool stats",
    "dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
    "dump_op_pq_state": "dump op priority queue state",
    "dump_ops_in_flight": "show the ops currently in flight",
    "dump_osd_network": "Dump osd heartbeat network ping times",
    "dump_pgstate_history": "show recent state history",
    "dump_reservations": "show recovery reservations",
    "dump_scrubs": "print scheduled scrubs",
    "dump_watchers": "show clients which have active watches, and on which objects",
    "flush_journal": "flush the journal to permanent store",
    "flush_store_cache": "Flush bluestore internal cache",
    "get_command_descriptions": "list available commands",
    "get_heap_property": "get malloc extension heap property",
    "get_latest_osdmap": "force osd to update the latest map from the mon",
    "get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
    "getomap": "output entire object map",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectdataerr": "inject data error to an object",
    "injectfull": "Inject a full disk (optional count times)",
    "injectmdataerr": "inject metadata error to an object",
    "list_devices": "list OSD devices.",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rmomapkey": "remove omap key",
    "set_heap_property": "update malloc extension heap property",
    "set_recovery_delay": "Delay osd recovery by specified seconds",
    "setomapheader": "set omap header",
    "setomapval": "set omap key",
    "smart": "probe OSD devices for SMART data.",
    "status": "high-level status of OSD",
    "trigger_deep_scrub": "Trigger a scheduled deep scrub ",
    "trigger_scrub": "Trigger a scheduled scrub ",
    "truncobj": "truncate object to length",
    "version": "get ceph version"
}
[root@ceph-mon01 ~]#
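Beyond help, a few of the commands listed above are worth knowing; a short sketch that uses only names from the help output (run on the node that hosts osd.0; osd_max_backfills is just an example field):

# Dump the daemon's current configuration:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show

# Read a single config value:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_backfills

# High-level status of this OSD:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status

# Dump performance counters:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump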
