Hi all,
I have a 4-node Proxmox cluster with Ceph; only three of the nodes are monitors.
Each node has 3 SSDs and 2 (spinning) HDDs, and there are two pools: one for the SSDs and one for the HDDs. I'm now adding one OSD per node to the existing HDD pool, but it's taking longer than I expected. I can't tell from the log below whether the OSD is already ready or still being created.
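For reference, I believe the OSD creation I triggered is equivalent to running something like this (the --crush-device-class option is my assumption, since the new disk should land in the HDD pool):

pveceph osd create /dev/sdg --crush-device-class hdd

This is the task log: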
create OSD on /dev/sdg (bluestore)
wiping block device /dev/sdg
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.901488 s, 233 MB/s
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 16fbdc99-cf4b-4c80-b330-1117b0177ade
Running command: /sbin/vgcreate --force --yes ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2 /dev/sdg
stdout: Physical volume "/dev/sdg" successfully created.
stdout: Volume group "ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2" successfully created
Running command: /sbin/lvcreate --yes -l 572317 -n osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2
stdout: Logical volume "osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-20
--> Executable selinuxenabled not in PATH: /sbin:/bin:/usr/sbin:/usr/bin
Running command: /bin/chown -h ceph:ceph /dev/ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2/osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade
Running command: /bin/chown -R ceph:ceph /dev/dm-10
Running command: /bin/ln -s /dev/ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2/osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade /var/lib/ceph/osd/ceph-20/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-20/activate.monmap
stderr: 2022-01-24T16:52:23.535+0100 7f511a933700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2022-01-24T16:52:23.535+0100 7f511a933700 -1 AuthRegistry(0x7f511405b2e8) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: got monmap epoch 5
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-20/keyring --create-keyring --name osd.20 --add-key AQA1y+5hSFrGOBAAb83LeZcEjVUwy3+4XOeCxw==
stdout: creating /var/lib/ceph/osd/ceph-20/keyring
added entity osd.20 auth(key=AQA1y+5hSFrGOBAAb83LeZcEjVUwy3+4XOeCxw==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-20/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-20/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 20 --monmap /var/lib/ceph/osd/ceph-20/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-20/ --osd-uuid 16fbdc99-cf4b-4c80-b330-1117b0177ade --setuser ceph --setgroup ceph
stderr: 2022-01-24T16:52:23.887+0100 7f50ff769f00 -1 bluestore(/var/lib/ceph/osd/ceph-20/) _read_fsid unparsable uuid
--> ceph-volume lvm prepare successful for: /dev/sdg
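If I read this right, only the "prepare" phase has finished here. I assume the activation step that should follow would look like this (OSD id 20 and the fsid are taken from the log above):

ceph-volume lvm activate 20 16fbdc99-cf4b-4c80-b330-1117b0177ade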
And if I check "ceph osd tree", I see these new OSDs marked "out".
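In case they are simply not marked in yet, I assume I could bring them in manually and then watch the rebalance, e.g.:

ceph osd in osd.20   # mark the new OSD as in (id taken from the log above)
ceph osd tree        # verify it now shows up/in
ceph -s              # watch backfill/rebalance progress

But I didn't want to touch anything while the creation might still be running.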
Thanks