Proxmox unable to create new ceph OSDs

jw6677

Accessing through the web dashboard on :8007, I am unable to create new OSDs at this time. Here's the output of the GUI task:
Code:
create OSD on /dev/sdad (bluestore)
creating block.db on '/dev/sdi'
  Physical volume "/dev/sdi" successfully created.
  Volume group "ceph-e57edb56-d9d3-4e27-ae29-9dcb092976d0" successfully created
  Logical volume "osd-db-d981354c-698f-448a-9abf-6158d71bcb4b" created.
using 'ceph-e57edb56-d9d3-4e27-ae29-9dcb092976d0/osd-db-d981354c-698f-448a-9abf-6158d71bcb4b' for block.db
wiping block device /dev/sdad
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.29278 s, 162 MB/s
usage: ceph-volume lvm create [-h] --data DATA [--data-size DATA_SIZE]
                              [--data-slots DATA_SLOTS] [--osd-id OSD_ID]
                              [--osd-fsid OSD_FSID]
                              [--cluster-fsid CLUSTER_FSID]
                              [--crush-device-class CRUSH_DEVICE_CLASS]
                              [--dmcrypt] [--no-systemd] [--bluestore]
                              [--block.db BLOCK_DB]
                              [--block.db-size BLOCK_DB_SIZE]
                              [--block.db-slots BLOCK_DB_SLOTS]
                              [--block.wal BLOCK_WAL]
                              [--block.wal-size BLOCK_WAL_SIZE]
                              [--block.wal-slots BLOCK_WAL_SLOTS]
                              [--filestore] [--journal JOURNAL]
                              [--journal-size JOURNAL_SIZE]
                              [--journal-slots JOURNAL_SLOTS]
ceph-volume lvm create: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidDataDevice object at 0x7f5ca278bbb0> value: '/dev/sdad'
TASK ERROR: command 'ceph-volume lvm create --cluster-fsid 587ad752-283b-4359-8b77-f52d25f10c25 --block.db ceph-e57edb56-d9d3-4e27-ae29-9dcb092976d0/osd-db-d981354c-698f-448a-9abf-6158d71bcb4b --data /dev/sdad' failed: exit code 2

Here is what I get when manually trying to run the last command in the CLI:
Try 1:
Code:
root@server:~# ceph-volume lvm create --cluster-fsid 587ad752-283b-4359-8b77-f52d25f10c25 --block.db ceph-e57edb56-d9d3-4e27-ae29-9dcb092976d0/osd-db-d981354c-698f-448a-9abf-6158d71bcb4b --data /dev/sdad
usage: ceph-volume lvm create [-h] --data DATA [--data-size DATA_SIZE] [--data-slots DATA_SLOTS] [--osd-id OSD_ID] [--osd-fsid OSD_FSID] [--cluster-fsid CLUSTER_FSID] [--crush-device-class CRUSH_DEVICE_CLASS]
                              [--dmcrypt] [--no-systemd] [--bluestore] [--block.db BLOCK_DB] [--block.db-size BLOCK_DB_SIZE] [--block.db-slots BLOCK_DB_SLOTS] [--block.wal BLOCK_WAL]
                              [--block.wal-size BLOCK_WAL_SIZE] [--block.wal-slots BLOCK_WAL_SLOTS] [--filestore] [--journal JOURNAL] [--journal-size JOURNAL_SIZE] [--journal-slots JOURNAL_SLOTS]
ceph-volume lvm create: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidDataDevice object at 0x7fdc51c57f70> value: '/dev/sdad'

Try 2:
Code:
root@server:~# ceph-volume lvm create --cluster-fsid 587ad752-283b-4359-8b77-f52d25f10c25 --block.db ceph-e57edb56-d9d3-4e27-ae29-9dcb092976d0/osd-db-d981354c-698f-448a-9abf-6158d71bcb4b --data '/dev/sdad'
usage: ceph-volume lvm create [-h] --data DATA [--data-size DATA_SIZE] [--data-slots DATA_SLOTS] [--osd-id OSD_ID] [--osd-fsid OSD_FSID] [--cluster-fsid CLUSTER_FSID] [--crush-device-class CRUSH_DEVICE_CLASS]
                              [--dmcrypt] [--no-systemd] [--bluestore] [--block.db BLOCK_DB] [--block.db-size BLOCK_DB_SIZE] [--block.db-slots BLOCK_DB_SLOTS] [--block.wal BLOCK_WAL]
                              [--block.wal-size BLOCK_WAL_SIZE] [--block.wal-slots BLOCK_WAL_SLOTS] [--filestore] [--journal JOURNAL] [--journal-size JOURNAL_SIZE] [--journal-slots JOURNAL_SLOTS]
ceph-volume lvm create: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidDataDevice object at 0x7fa9440fef40> value: '/dev/sdad'

pveversion:
pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve)
 
Edit: Solved on one node with:

Code:
apt install --reinstall ceph-volume

Not sure what happened there, but it seems to be fixed for that particular node.
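For anyone trying the same fix, a quick way to confirm ceph-volume is healthy again afterwards (a sketch using standard ceph-volume subcommands; these checks are not taken from the thread):

Code:
ceph-volume --help       # should print the subcommand overview without a Python traceback
ceph-volume lvm list     # should list the existing OSD LVs on the node
ceph-volume inventory    # should enumerate local disks; a validator error here reproduces the bug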


I previously thought this was totally resolved, but the saga continues.
 
Whoops, nope, this is reappearing for a second node in the cluster. Unable to create OSDs:

Code:
create OSD on /dev/nvme3n1 (bluestore)
wiping block device /dev/nvme3n1
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.270562 s, 775 MB/s
usage: ceph-volume lvm create [-h] --data DATA [--data-size DATA_SIZE]
                              [--data-slots DATA_SLOTS] [--osd-id OSD_ID]
                              [--osd-fsid OSD_FSID]
                              [--cluster-fsid CLUSTER_FSID]
                              [--crush-device-class CRUSH_DEVICE_CLASS]
                              [--dmcrypt] [--no-systemd] [--bluestore]
                              [--block.db BLOCK_DB]
                              [--block.db-size BLOCK_DB_SIZE]
                              [--block.db-slots BLOCK_DB_SLOTS]
                              [--block.wal BLOCK_WAL]
                              [--block.wal-size BLOCK_WAL_SIZE]
                              [--block.wal-slots BLOCK_WAL_SLOTS]
                              [--filestore] [--journal JOURNAL]
                              [--journal-size JOURNAL_SIZE]
                              [--journal-slots JOURNAL_SLOTS]
ceph-volume lvm create: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidDataDevice object at 0x7fbeeafca460> value: '/dev/nvme3n1'
TASK ERROR: command 'ceph-volume lvm create --cluster-fsid 587ad752-283b-4359-8b77-f52d25f10c25 --data /dev/nvme3n1' failed: exit code 2
 
If /dev/sdad, /dev/sdi, or /dev/nvme3n1 were configured as OSDs before, then you need to delete the old LVM volumes first.

You'll find that under your node -> Disks -> LVM -> choose the volume -> top right -> More -> Destroy.

Be careful and double-check that you choose the correct ones!
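Before destroying anything, it may be worth confirming that those devices really still carry LVM metadata; a small check (a sketch, using the device names mentioned above):

Code:
pvs --noheadings -o pv_name,vg_name | grep -E 'sdad|sdi|nvme3n1'   # any match means an old VG still sits on the disk
lsblk -o NAME,FSTYPE,TYPE /dev/sdad                                # FSTYPE "LVM2_member" indicates a leftover PV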
 
Hey folks,

The issue doesn't appear to be related to old disk state on the drive, as I was sure to vgremove, pvremove, and wipefs -a, and I even had to gdisk + wipefs to create and remove a partition and then wipe the partition table, just to get the changes recognized by the OS.
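For reference, the cleanup sequence described here looks roughly like this (a sketch; /dev/sdX and the VG name are placeholders, sgdisk --zap-all stands in for the interactive gdisk step, and partprobe is an assumed extra step to get the kernel to notice the changes):

Code:
vgremove <old-ceph-vg-name>   # remove the leftover Ceph volume group
pvremove /dev/sdX             # drop the LVM physical-volume label
wipefs -a /dev/sdX            # clear remaining filesystem/LVM/RAID signatures
sgdisk --zap-all /dev/sdX     # wipe GPT and MBR partition structures
partprobe /dev/sdX            # ask the kernel to re-read the (now empty) partition table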

I found a (janky) workaround: creating a new, empty volume group and passing that to ceph-volume instead of the /dev/$DISK path. The core functionality still seems broken, though.
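A minimal sketch of that workaround (the VG/LV names and sizes are illustrative, chosen to match the ceph--nvme--osd--2 LVs visible in the lsblk output further down; the cluster fsid is the one from the task log above, not a new value):

Code:
vgcreate ceph-nvme-osd-2 /dev/nvme3n1       # new, empty VG on the otherwise-empty NVMe disk
lvcreate -l 25%VG -n osd1 ceph-nvme-osd-2   # carve out one LV (four of these fit the disk)
ceph-volume lvm create --cluster-fsid 587ad752-283b-4359-8b77-f52d25f10c25 --data ceph-nvme-osd-2/osd1   # the VG/LV form is accepted, whereas the raw /dev path was rejected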

Here is lsblk, though I worked past the issue at hand, so you'll see a bunch of ceph LVs I manually created:

Code:
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0   4.9T  0 loop /var/lib/lxc/104/rootfs
sda                         8:0    1 465.8G  0 disk
├─sda1                      8:1    1  1007K  0 part
├─sda2                      8:2    1   512M  0 part  /boot/efi
└─sda3                      8:3    1 465.3G  0 part
  ├─pve-swap              253:0    0     8G  0 lvm  [SWAP]
  └─pve-root              253:1    0   197G  0 lvm  /
sdb                         8:16   1 232.9G  0 disk
sdc                         8:32   1 233.8G  0 disk
nvme3n1                   259:0    0 931.5G  0 disk
├─ceph--nvme--osd--2-osd1 253:7    0 223.6G  0 lvm 
├─ceph--nvme--osd--2-osd2 253:8    0 223.6G  0 lvm 
├─ceph--nvme--osd--2-osd3 253:9    0 223.6G  0 lvm 
└─ceph--nvme--osd--2-osd4 253:10   0 223.6G  0 lvm 
nvme1n1                   259:1    0 953.9G  0 disk
nvme0n1                   259:2    0 931.5G  0 disk
└─pve-server              253:2    0   500G  0 lvm 
nvme2n1                   259:3    0   1.1T  0 disk
├─ceph--nvme--osd--1-osd1 253:3    0 268.3G  0 lvm 
├─ceph--nvme--osd--1-osd2 253:4    0 268.3G  0 lvm 
├─ceph--nvme--osd--1-osd3 253:5    0 268.3G  0 lvm 
└─ceph--nvme--osd--1-osd4 253:6    0 268.3G  0 lvm
 
Thanks. Drives do move around and /dev naming is not persistent, but rest assured this error message was coming from within the Proxmox GUI, so even if that were the issue, it would mean the GUI was allowing the selection of a device that doesn't exist.


Here is the correct lsblk; sorry, the previous one was from the wrong node:

Code:
NAME                                                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                             8:0    0 931.5G  0 disk
├─sda1                                                          8:1    0  1007K  0 part
├─sda2                                                          8:2    0   512M  0 part
└─sda3                                                          8:3    0   931G  0 part
  ├─pve-swap                                                  253:33   0    50G  0 lvm  [SWAP]
  ├─pve-root                                                  253:39   0   400G  0 lvm  /
  ├─pve-data_tmeta                                            253:41   0     1G  0 lvm 
  │ └─pve-data                                                253:49   0    48G  0 lvm 
  └─pve-data_tdata                                            253:42   0    48G  0 lvm 
    └─pve-data                                                253:49   0    48G  0 lvm 
sdb                                                             8:16   0 894.3G  0 disk
├─ceph--1344304b--fcc7--4f47--89c2--2de143d77661-osd--db--295e429c--ae6e--4506--8fc5--3892eb77eb7d
│                                                             253:7    0   440G  0 lvm 
└─ceph--1344304b--fcc7--4f47--89c2--2de143d77661-osd--db--df8ab23a--e807--4064--bb7b--15a78c3a9e0c
                                                              253:53   0   440G  0 lvm 
sdc                                                             8:32   0 894.3G  0 disk
├─ceph--37c35d53--0e96--4253--a4f8--426a66e79f01-osd--db--d6896a89--dace--48a9--a371--c55d158d5936
│                                                             253:55   0   440G  0 lvm 
└─ceph--37c35d53--0e96--4253--a4f8--426a66e79f01-osd--db--52fbf53d--9d5b--456d--8915--8224e0569d45
                                                              253:57   0   440G  0 lvm 
sdd                                                             8:48   0 372.6G  0 disk
sde                                                             8:64   0 372.6G  0 disk
sdf                                                             8:80   0 372.6G  0 disk
├─ceph--48cce022--079c--4e89--86da--82a522e53c31-osd--db--0723eda9--cb1e--4638--a280--f1e6d403b322
│                                                             253:2    0   150G  0 lvm 
└─ceph--48cce022--079c--4e89--86da--82a522e53c31-osd--db--6f83b023--8d46--4fb7--a6fc--cf6fc895a2c8
                                                              253:3    0   150G  0 lvm 
sdg                                                             8:96   0 223.6G  0 disk
├─ceph--575e4ce4--e592--4dfa--a261--1956e715ef6c-osd--db--db0a830c--5237--42bb--9036--edcd846ab0c0
│                                                             253:9    0   100G  0 lvm 
└─ceph--575e4ce4--e592--4dfa--a261--1956e715ef6c-osd--db--4f597c46--485f--4c5c--bd21--07a2e2275e20
                                                              253:61   0   100G  0 lvm 
sdh                                                             8:112  0 223.6G  0 disk
├─ceph--8347d2d3--efd9--46a4--b7d6--522b00e98b50-osd--db--6e43138a--078c--4872--9c81--589536ed9921
│                                                             253:10   0   110G  0 lvm 
└─ceph--8347d2d3--efd9--46a4--b7d6--522b00e98b50-osd--db--2963fa73--8879--400e--818e--be20dbaa4469
                                                              253:32   0   110G  0 lvm 
sdi                                                             8:128  0  93.2G  0 disk
├─ceph--cfd370c2--0567--4ced--bf55--60a5c3cf52bc-osd--db--b3cfbddc--63b9--47f3--8fb7--d94e14e2aecb
│                                                             253:4    0    45G  0 lvm 
└─ceph--cfd370c2--0567--4ced--bf55--60a5c3cf52bc-osd--db--98a366e5--815e--4c2d--87a2--74b5c42fbe2a
                                                              253:6    0    45G  0 lvm 
sdj                                                             8:144  0  93.2G  0 disk
├─ceph--e8df32d2--5f10--45b3--83d3--5a5fedbb6235-osd--db--b8a14f93--0193--4cc1--8c84--b94d6adb6b32
│                                                             253:11   0    45G  0 lvm 
└─ceph--e8df32d2--5f10--45b3--83d3--5a5fedbb6235-osd--db--9ad20dd9--8489--4453--8524--9bb86a7e4398
                                                              253:15   0    45G  0 lvm 
sdk                                                             8:160  0  93.2G  0 disk
├─ceph--a9270579--db55--4574--be83--7abda6102f56-osd--db--bd23e939--0fc8--496c--a8a3--63aede1803a2
│                                                             253:17   0    45G  0 lvm 
└─ceph--a9270579--db55--4574--be83--7abda6102f56-osd--db--c3ad60af--3371--44e6--9529--2c262577d143
                                                              253:25   0    45G  0 lvm 
sdl                                                             8:176  0  93.2G  0 disk
├─ceph--edac1ffc--2cba--4720--bc46--f26718db0dfd-osd--db--4f18e29f--3cdf--45b7--99df--8b7d8fa3aa6f
│                                                             253:28   0    45G  0 lvm 
└─ceph--edac1ffc--2cba--4720--bc46--f26718db0dfd-osd--db--8205b3cc--0bf4--4dd7--871e--fb88e806bdf9
                                                              253:30   0    45G  0 lvm 
sdm                                                             8:192  0 186.3G  0 disk
├─ceph--4d507fb9--8b81--41ba--928e--e7a42b50bcd1-osd--db--4a3ce943--21e9--4db6--bcdd--90b15975a3dd
│                                                             253:34   0    90G  0 lvm 
└─ceph--4d507fb9--8b81--41ba--928e--e7a42b50bcd1-osd--db--7681804f--ebb9--4b0d--8d5d--2e41424e9463
                                                              253:38   0    90G  0 lvm 
sdn                                                             8:208  0  93.2G  0 disk
├─ceph--762ccf9f--8292--41fc--99d9--ac286504bb58-osd--db--33362760--67fa--4f53--8954--849dcdd23229
│                                                             253:37   0    45G  0 lvm 
└─ceph--762ccf9f--8292--41fc--99d9--ac286504bb58-osd--db--7202027c--3c6e--4c72--b047--0ac4833cb172
                                                              253:44   0    45G  0 lvm 
sdo                                                             8:224  0  93.2G  0 disk
├─ceph--6b996d22--4678--4060--9afb--ed39a9e376cc-osd--db--c9b7f559--b6f0--4450--b5be--fa6678d59994
│                                                             253:48   0    45G  0 lvm 
└─ceph--6b996d22--4678--4060--9afb--ed39a9e376cc-osd--db--b4435f8c--7030--4e78--935e--9e72cee0d22c
                                                              253:51   0    45G  0 lvm 
sdp                                                             8:240  0  12.7T  0 disk
└─ceph--31d1ab8d--e821--4824--a74c--9979f243829f-osd--block--db525392--d5b6--4683--ae98--55c20416b43c
                                                              253:12   0  12.7T  0 lvm 
sr0                                                            11:0    1  1024M  0 rom 
sdq                                                            65:0    0   9.1T  0 disk
└─ceph--f2ed64d4--3442--465c--b426--208b08b7b0c0-osd--block--aabf6d1d--33df--47e1--af49--88d07dd761db
                                                              253:58   0   9.1T  0 lvm 
sdr                                                            65:16   0   7.3T  0 disk
└─ceph--bf83e4c7--6681--468d--8b08--913e4f4a7fe5-osd--block--b3682731--3ad5--460b--ae58--106ec15c19f1
                                                              253:50   0   7.3T  0 lvm 
sds                                                            65:32   0   7.3T  0 disk
└─ceph--a5d3a26a--981c--4a37--9497--77e3427bd79c-osd--block--2777f2fc--c566--4a2e--8cdb--4d997f337349
                                                              253:52   0   7.3T  0 lvm 
sdt                                                            65:48   0   9.1T  0 disk
└─ceph--34723abb--49cc--4d26--8a34--e88ef26eb712-osd--block--ec25cd4d--6348--46ea--ba50--8e7183d10049
                                                              253:23   0   9.1T  0 lvm 
sdu                                                            65:64   0   9.1T  0 disk
└─ceph--2f1ad17b--b53c--4225--a2e9--7be80248d6bb-osd--block--6a43a912--225e--46ee--a6f9--c274118f5aef
                                                              253:26   0   9.1T  0 lvm 
sdv                                                            65:80   0   9.1T  0 disk
└─ceph--9199c4c6--3b66--4fce--8c15--d6ffa0ac5f73-osd--block--b533a975--6e9a--448c--9e86--31098af21101
                                                              253:43   0   9.1T  0 lvm 
sdw                                                            65:96   0   7.3T  0 disk
└─ceph--cfefeb28--b2bc--43df--b708--e348804e34ee-osd--block--6a3b6b69--6447--4e1a--9b48--181015ad403e
                                                              253:24   0   7.3T  0 lvm 
sdx                                                            65:112  0   7.3T  0 disk
└─ceph--2a3f449a--99c4--4499--ac64--bff48debf604-osd--block--875295db--78b0--4f53--a326--ad0f339053e2
                                                              253:72   0   7.3T  0 lvm 
sdy                                                            65:128  0   9.1T  0 disk
└─ceph--e8cd4d9c--a9e0--45ce--a220--86979fc3aa05-osd--block--ff838faa--d25c--494a--8666--3816fb808a65
                                                              253:74   0   9.1T  0 lvm 
sdz                                                            65:144  0   9.1T  0 disk
└─ceph--e802c28e--4da3--4c4d--8be1--48739423dc29-osd--block--11f4af1a--d68c--4102--9708--060103e649ad
                                                              253:59   0   9.1T  0 lvm 
sdaa                                                           65:160  0   7.3T  0 disk
└─ceph--ecbe6d86--b287--4255--aa9e--ad7de4ae91c8-osd--block--319507bb--84cc--4fee--ac55--12a4f48937e9
                                                              253:22   0   7.3T  0 lvm 
sdab                                                           65:176  0  12.7T  0 disk
└─ceph--c4ffb511--a275--4607--a33a--802afc761016-osd--block--85db1754--90a7--4410--9d64--929719d48641
                                                              253:54   0  12.7T  0 lvm 
sdac                                                           65:192  0   7.3T  0 disk
└─ceph--1e39fb5c--2432--42ad--9d5c--58969f664bdb-osd--block--d072deb5--8b82--411d--886f--568e2c92e5dd
                                                              253:47   0   7.3T  0 lvm 
sdad                                                           65:208  0   7.3T  0 disk
└─ceph--bb6dc662--a494--4d6b--9241--fdbaaca6900f-osd--block--ed34515a--46d5--46ec--bed1--5308118c9cc7
                                                              253:5    0   7.3T  0 lvm 
[REDACTED DUE TO CHARACTER LIMIT, but it goes down to sdbk]
As I said, I've worked around the issue now, but we should probably track down the cause for the next person.
 
A couple of things:
ceph-volume lvm create: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidDataDevice object at 0x7f5ca278bbb0> value: '/dev/sdad'

The errors the GUI was showing you were command output; the GUI is of no consequence in this case. It is interesting in any case: which versions of the Ceph packages are you on?
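A quick way to gather that information on a PVE node (a sketch; both are standard commands, and the package names are the usual Ceph/Debian ones):

Code:
pveversion -v | grep -i ceph    # Proxmox package overview, filtered to the Ceph packages
dpkg -l ceph-volume ceph-osd    # exact Debian package versions of the relevant tools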
 
We have the same issue on a freshly installed cluster.

Bash:
root@hsc-proxmox-01:~# pveversion
pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve)

ceph-volume lvm create --cluster-fsid e465e8be-5b26-4410-84ee-91339dade6e2 --data /dev/sdb
usage: ceph-volume lvm create [-h] --data DATA [--data-size DATA_SIZE] [--data-slots DATA_SLOTS] [--osd-id OSD_ID] [--osd-fsid OSD_FSID] [--cluster-fsid CLUSTER_FSID] [--crush-device-class CRUSH_DEVICE_CLASS]
                              [--dmcrypt] [--no-systemd] [--bluestore] [--block.db BLOCK_DB] [--block.db-size BLOCK_DB_SIZE] [--block.db-slots BLOCK_DB_SLOTS] [--block.wal BLOCK_WAL]
                              [--block.wal-size BLOCK_WAL_SIZE] [--block.wal-slots BLOCK_WAL_SLOTS] [--filestore] [--journal JOURNAL] [--journal-size JOURNAL_SIZE] [--journal-slots JOURNAL_SLOTS]
ceph-volume lvm create: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidDataDevice object at 0x7f7b42e249d0> value: '/dev/sdb'

Here is a list of our currently used versions:

Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
ceph: 17.2.5-pve1
ceph-fuse: 17.2.5-pve1
ceph-volume/stable,now 17.2.5-pve1 all [installed]

Any idea?
 
There are two possibilities: either this is a system-wide issue, or it is specific to /dev/sdb.

If it's a system-wide issue, there is a problem with either your Proxmox install or your storage bus that would need to be diagnosed.

However, if it's specific to sdb:
1. smartctl --test=long /dev/sdb
2. ceph-volume lvm zap /dev/sdb
and try again, assuming SMART shows no errors. You can also try to pre-create the LV as the OP did.
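Once the long self-test has finished, the result can be read back; a quick sketch (sdb is just the example device from above):

Code:
smartctl -l selftest /dev/sdb   # self-test log: shows whether the long test completed without errors
smartctl -H /dev/sdb            # overall SMART health verdict
smartctl -a /dev/sdb            # full attribute dump if anything looks suspicious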
 
We checked this.

It seems to be a cluster-wide issue; no SMART errors were found.

We tried different disks on different nodes:

Code:
ceph-volume lvm disk zap /dev/sdb
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                           [--osd-fsid OSD_FSID] [--no-systemd]
                           [DEVICES ...]
ceph-volume lvm zap: error: argument DEVICES: invalid <ceph_volume.util.arg_validators.ValidZapDevice object at 0x7fea72e967f0> value: '/dev/sdb'

Code:
ceph-volume lvm disk zap /dev/sdc
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                           [--osd-fsid OSD_FSID] [--no-systemd]
                           [DEVICES ...]
ceph-volume lvm zap: error: argument DEVICES: invalid <ceph_volume.util.arg_validators.ValidZapDevice object at 0x7fcd4a96a670> value: '/dev/sdc'
 
I found a reported bug in Ceph which could lead to the same result:
https://github.com/ceph/ceph/pull/49171
https://tracker.ceph.com/issues/58189#change-229068

Code:
[2023-01-01 11:30:14,243][ceph_volume.process][INFO ] Running command: lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdb -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-01-01 11:30:14,359][ceph_volume.process][INFO ] Running command: /bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2023-01-01 11:30:14,376][ceph_volume.process][INFO ] stdout NAME="sdb" KNAME="sdb" PKNAME="" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="ST2000VN004-2E41" SIZE="1.8T" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
...
[2023-01-01 11:47:07,988][ceph_volume.main][INFO ] Running command: ceph-volume lvm disk zap /dev/sdb
[2023-01-01 11:47:07,994][ceph_volume.util.system][WARNING] Executable pvs not found on the host, will return pvs as-is
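Given the "Executable pvs not found on the host" warning in that log, a quick check that the LVM tools are actually visible on the node might look like this (a diagnostic sketch, not a fix):

Code:
command -v pvs lvs vgs   # should print a path such as /sbin/pvs for each tool
echo "$PATH"             # confirm /sbin and /usr/sbin are in the PATH ceph-volume runs with
dpkg -l lvm2             # confirm the lvm2 package that provides pvs/lvs/vgs is installed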

I receive the following error when executing ceph-volume inventory:

Code:
cephadm ceph-volume inventory
Inferring fsid 16739c4e-f33d-4064-8903-ef659628b24a
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-17 23:41:41 +0000 UTC
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 -e NODE_NAME=hsc-proxmox-01 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/log/ceph/16739c4e-f33d-4064-8903-ef659628b24a:/var/log/ceph:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpzuhvfb88:/etc/ceph/ceph.conf:z quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 inventory
/usr/bin/podman: stderr Traceback (most recent call last):
/usr/bin/podman: stderr   File "/usr/sbin/ceph-volume", line 11, in <module>
/usr/bin/podman: stderr     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
/usr/bin/podman: stderr     self.main(self.argv)
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
/usr/bin/podman: stderr     return f(*a, **kw)
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
/usr/bin/podman: stderr     terminal.dispatch(self.mapper, subcommand_args)
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/podman: stderr     instance.main()
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 53, in main
/usr/bin/podman: stderr     with_lsm=self.args.with_lsm))
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 39, in __init__
/usr/bin/podman: stderr     all_devices_vgs = lvm.get_all_devices_vgs()
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 797, in get_all_devices_vgs
/usr/bin/podman: stderr     return [VolumeGroup(**vg) for vg in vgs]
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 797, in <listcomp>
/usr/bin/podman: stderr     return [VolumeGroup(**vg) for vg in vgs]
/usr/bin/podman: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 517, in __init__
/usr/bin/podman: stderr     raise ValueError('VolumeGroup must have a non-empty name')
/usr/bin/podman: stderr ValueError: VolumeGroup must have a non-empty name
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 9468, in <module>
    main()
  File "/usr/sbin/cephadm", line 9456, in main
    r = ctx.func(ctx)
  File "/usr/sbin/cephadm", line 2083, in _infer_config
    return func(ctx)
  File "/usr/sbin/cephadm", line 2028, in _infer_fsid
    return func(ctx)
  File "/usr/sbin/cephadm", line 2111, in _infer_image
    return func(ctx)
  File "/usr/sbin/cephadm", line 1986, in _validate_fsid
    return func(ctx)
  File "/usr/sbin/cephadm", line 6093, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd(), verbosity=CallVerbosity.QUIET_UNLESS_ERROR)
  File "/usr/sbin/cephadm", line 1788, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 -e NODE_NAME=hsc-proxmox-01 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/log/ceph/16739c4e-f33d-4064-8903-ef659628b24a:/var/log/ceph:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpzuhvfb88:/etc/ceph/ceph.conf:z quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 inventory
 
