
Deploying a Ceph Cluster with cephadm


1. Introduction to cephadm

Starting with Red Hat Ceph Storage 5, cephadm replaces ceph-ansible as the tool that manages the entire cluster lifecycle, including deployment, management, and monitoring.

The cephadm bootstrap process creates a small storage cluster on a single node (the bootstrap node), consisting of one Ceph Monitor, one Ceph Manager, and any required dependencies.


cephadm can log in to a container registry, pull the Ceph image, and use that image to deploy services on the corresponding Ceph nodes. The Ceph container image is required for deploying a Ceph cluster, because the deployed Ceph containers are based on it.
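
Optionally, the image can be pre-pulled on the bootstrap node so the bootstrap step does not have to download it; a minimal sketch, using the image tag that appears later in the bootstrap output:

# pre-fetch the Ceph Pacific image on the bootstrap node (optional)
docker pull quay.io/ceph/ceph:v16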

cephadm communicates with the Ceph cluster nodes over SSH. Through these SSH connections, cephadm can add hosts to the cluster, add storage, and monitor those hosts.

The packages a node needs in order to bring the cluster up are cephadm, podman or docker, python3, and chrony. This containerized approach reduces the complexity of, and the dependencies involved in, deploying a Ceph cluster.

1) python3

yum -y install python3

2) podman or docker, to run the containers

# Install docker-ce from the Aliyun mirror
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker --now
# Configure a registry mirror (accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bp1bh1ga.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker

3) Time synchronization (for example, chrony or NTP)
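
Rocky Linux 8 ships chrony by default; if it is not already running, a minimal sketch of enabling it (assuming the default time servers in /etc/chrony.conf are reachable):

yum -y install chrony
systemctl enable chronyd --now
# verify that time sources are reachable and the clock is synchronized
chronyc sources
chronyc tracking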


2. Preparation before deploying the Ceph cluster

2.1 Node preparation


Node name  OS                       IP address   Ceph roles                         Disks
node1      Rocky Linux release 8.6  172.24.1.6   mon, mgr, server side, admin node  /dev/vdb, /dev/vdc, /dev/vdd
node2      Rocky Linux release 8.6  172.24.1.7   mon, mgr                           /dev/vdb, /dev/vdc, /dev/vdd
node3      Rocky Linux release 8.6  172.24.1.8   mon, mgr                           /dev/vdb, /dev/vdc, /dev/vdd
node4      Rocky Linux release 8.6  172.24.1.9   client, admin node                 -


2.2 Modify /etc/hosts on every node

172.24.1.6 node1
172.24.1.7 node2
172.24.1.8 node3
172.24.1.9 node4
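
These four entries need to end up in /etc/hosts on all four nodes; a minimal sketch (run on each node):

# append the host entries to /etc/hosts
cat >> /etc/hosts <<'EOF'
172.24.1.6 node1
172.24.1.7 node2
172.24.1.8 node3
172.24.1.9 node4
EOF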


2.3 Set up passwordless SSH login from node1

[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id root@node2
[root@node1 ~]# ssh-copy-id root@node3
[root@node1 ~]# ssh-copy-id root@node4

3. Install cephadm on node1

1. Install the EPEL repository
[root@node1 ~]# yum -y install epel-release
2. Install the Ceph repository
[root@node1 ~]# yum search release-ceph
Last metadata expiration check: 0:57:14 ago on Tue 14 Feb 2023 14:22:00.
================= Name Matched: release-ceph ============================================
centos-release-ceph-nautilus.noarch : Ceph Nautilus packages from the CentOS Storage SIG repository
centos-release-ceph-octopus.noarch : Ceph Octopus packages from the CentOS Storage SIG repository
centos-release-ceph-pacific.noarch : Ceph Pacific packages from the CentOS Storage SIG repository
centos-release-ceph-quincy.noarch : Ceph Quincy packages from the CentOS Storage SIG repository
[root@node1 ~]# yum -y install centos-release-ceph-pacific.noarch
3. Install cephadm
[root@node1 ~]# yum -y install cephadm
4. Install ceph-common
[root@node1 ~]# yum -y install ceph-common

4. Install docker-ce and python3 on the other nodes

See Section 1 for the detailed steps; a condensed sketch follows.
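
A condensed sketch of the commands to repeat on node2, node3, and node4, using the same Aliyun repository and registry mirror as in Section 1:

# run on each of node2, node3 and node4
yum -y install python3
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker --now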


5. Deploy the Ceph cluster

5.1 Deploy the Ceph cluster, installing the dashboard (web GUI) along the way

[root@node1 ~]# cephadm bootstrap --mon-ip 172.24.1.6 --allow-fqdn-hostname --initial-dashboard-user admin --initial-dashboard-password redhat --dashboard-password-noupdate
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 0b565668-ace4-11ed-960c-5254000de7a0
Verifying IP 172.24.1.6 port 3300 ...
Verifying IP 172.24.1.6 port 6789 ...
Mon IP `172.24.1.6` is in CIDR network `172.24.1.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 172.24.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host node1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://node1.domain1.example.com:8443/
            User: admin
        Password: redhat
Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid 0b565668-ace4-11ed-960c-5254000de7a0 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.
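
Because ceph-common was already installed on node1 (Section 3), the cluster can also be checked directly from the host, without entering the cephadm shell. A quick sanity check right after bootstrap (at this point only node1 carries a mon and a mgr; output omitted):

[root@node1 ~]# ceph -s
[root@node1 ~]# ceph orch ps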

5.2 Copy the cluster's public key to the nodes that will become cluster members

[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node4

5.3 Add nodes node2, node3, and node4 (docker-ce and python3 must be installed on each node first)

[root@node1 ~]# ceph orch host add node2 172.24.1.7
Added host 'node2' with addr '172.24.1.7'
[root@node1 ~]# ceph orch host add node3 172.24.1.8
Added host 'node3' with addr '172.24.1.8'
[root@node1 ~]# ceph orch host add node4 172.24.1.9
Added host 'node4' with addr '172.24.1.9'


5.4 Label node1 and node4 as admin nodes, and copy the Ceph configuration file and keyring to node4

[root@node1 ~]# ceph orch host label add node1 _admin
Added label _admin to host node1
[root@node1 ~]# ceph orch host label add node4 _admin
Added label _admin to host node4
[root@node1 ~]# scp /etc/ceph/{*.conf,*.keyring} root@node4:/etc/ceph
[root@node1 ~]# ceph orch host ls
HOST   ADDR        LABELS  STATUS
node1  172.24.1.6  _admin
node2  172.24.1.7
node3  172.24.1.8
node4  172.24.1.9  _admin

5.5 Add MONs

[root@node1 ~]# ceph orch apply mon "node1,node2,node3"
Scheduled mon update...

5.6 Add MGRs

[root@node1 ~]# ceph orch apply mgr --placement="node1,node2,node3"
Scheduled mgr update...

5.7 Add OSDs

[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdd

Or, as a loop:

[root@node1 ~]# for i in node1 node2 node3; do for j in vdb vdc vdd; do ceph orch daemon add osd $i:/dev/$j; done; done
Created osd(s) 0 on host 'node1'
Created osd(s) 1 on host 'node1'
Created osd(s) 2 on host 'node1'
Created osd(s) 3 on host 'node2'
Created osd(s) 4 on host 'node2'
Created osd(s) 5 on host 'node2'
Created osd(s) 6 on host 'node3'
Created osd(s) 7 on host 'node3'
Created osd(s) 8 on host 'node3'
[root@node1 ~]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID  SIZE   AVAILABLE  REFRESHED  REJECT REASONS
node1  /dev/vdb  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdc  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdd  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdb  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdc  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdd  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdb  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdc  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdd  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
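
In the `ceph orch device ls` output above, the disks now show as "LVM detected, locked" because they are in use by the newly created OSDs. To double-check daemon placement and the CRUSH layout, two standard views can be queried (commands only; output omitted here):

[root@node1 ~]# ceph orch ps
[root@node1 ~]# ceph osd tree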

5.8 At this point, the Ceph cluster deployment is complete!

[root@node1 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean


5.9 Manage Ceph from node4

# The Ceph configuration file and keyring were already copied to node4 in step 5.4
[root@node4 ~]# ceph -s
-bash: ceph: command not found        # ceph-common still needs to be installed
# Install the Ceph repository
[root@node4 ~]# yum -y install centos-release-ceph-pacific.noarch
# Install ceph-common
[root@node4 ~]# yum -y install ceph-common
[root@node4 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean


Source: https://blog.51cto.com/zengyi/6059434
