Can't create OSDs with shared disk for DB/WAL (NVMe) on same host.

Jota V.

We're testing Proxmox 6 with ZFS and Ceph. We have three nodes, each with four 2 TB disks, one SSD for Proxmox, and a 250 GB NVMe disk for DB/WAL:

Code:
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0   1.8T  0 disk
└─ceph--30b94e73--f96c--4ee0--9da9--252782501fce-osd--block--b1c867b0--488c--4c87--b015--fa4e346f2bae 253:3    0   1.8T  0 lvm
sdb                                                                                                     8:16   0   1.8T  0 disk
sdc                                                                                                     8:32   0   1.8T  0 disk
├─sdc1                                                                                                  8:33   0   1.8T  0 part
└─sdc9                                                                                                  8:41   0     8M  0 part
sdd                                                                                                     8:48   0   1.8T  0 disk
├─sdd1                                                                                                  8:49   0   1.8T  0 part
└─sdd9                                                                                                  8:57   0     8M  0 part
sde                                                                                                     8:64   0 232.9G  0 disk
├─sde1                                                                                                  8:65   0  1007K  0 part
├─sde2                                                                                                  8:66   0   512M  0 part
└─sde3                                                                                                  8:67   0 232.4G  0 part
  ├─pve-swap                                                                                          253:0    0    32G  0 lvm  [SWAP]
  └─pve-root                                                                                          253:1    0 200.4G  0 lvm  /
nvme0n1                                                                                               259:0    0 232.9G  0 disk
└─ceph--1d699647--039c--45f6--8233--f7c09e1b51f8-osd--db--2216dcd3--0e73--4f68--9f56--6d2e37819862    253:2    0 186.3G  0 lvm

Disks:
- sda and sdb are for testing Ceph on all three nodes
- sdc and sdd are used by ZFS (production)
- sde is the Proxmox disk
- nvme0n1 is used for DB/WAL

From the GUI we created the first OSD with a 50 GB DB size, and it was created successfully.

[Screenshot: OSD creation dialog with DB size set to 50 GB]

But when we tried to create the second OSD the same way, we got:

Code:
lvcreate 'ceph-87c50c6e-8256-4958-a697-18c582a93766/osd-db-0c2ce102-9177-4365-97c5-95831a9a9014' error: Volume group "ceph-87c50c6e-8256-4958-a697-18c582a93766" has insufficient free space (11924 extents): 47694 required.

We think the system is not using the 50 GB size parameter, since the first DB LV fills the whole device, as we can see in lsblk:

Code:
nvme0n1                                                                                               259:0    0 232.9G  0 disk
└─ceph--1d699647--039c--45f6--8233--f7c09e1b51f8-osd--db--2216dcd3--0e73--4f68--9f56--6d2e37819862    253:2    0 186.3G  0 lvm
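The numbers in the error message line up with this: assuming LVM's default extent size of 4 MiB, the 47694 extents being requested correspond to the full 186.3 GiB LV shown above, while only 11924 extents (about 46 GiB) remain free in the VG. In other words, the first DB LV swallowed the whole NVMe volume group. A quick sanity check of the arithmetic:

```shell
# Assuming LVM's default 4 MiB physical extent size
# (verify with: vgdisplay | grep 'PE Size')
echo "$(( 47694 * 4 / 1024 )) GiB requested"   # the whole NVMe VG
echo "$(( 11924 * 4 / 1024 )) GiB free"        # what's left after the first DB LV
```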

How can we create two (and in a few days four) OSDs sharing DB/WAL on one NVMe disk?
 
This should already be fixed with pve-manager version 6.0-5 (going to update that bug report).
 
We have pve-manager 6.0-4. Can we patch manually or upgrade using the pve-no-subscription packages?

Or create the OSDs using command-line parameters?
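For reference, a CLI sketch of what the GUI does, using the device paths from this thread (the `--db_size` value is in GiB; treat the exact option names as an assumption and verify them with `pveceph help osd create` on your version):

```shell
# Create a BlueStore OSD on sdb with its DB (and WAL) on the shared NVMe,
# explicitly capping the DB LV at 50 GiB instead of the whole device
# (option names assumed from PVE 6 -- check `pveceph help osd create`)
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 50
```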
 
Thanks. Upgraded with the no-subscription packages and it works OK! :)

We're using spinning disks (4 x 2 TB) and a 250 GB NVMe as the DB disk, with BlueStore.

Can we use FileStore (via the command line) with Proxmox 6? We have read that FileStore performs better with non-SSD disks.
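As far as I know, pveceph itself only creates BlueStore OSDs on Proxmox 6, but FileStore OSDs can still be created with ceph-volume directly. A sketch only, with hypothetical device paths (the journal should be a partition or LV you have prepared; check the Ceph Nautilus ceph-volume documentation before running):

```shell
# FileStore OSD: data on a spinner, journal on an NVMe partition
# (paths are examples for illustration, not from this cluster)
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/nvme0n1p1
```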