Today I updated one of my 4 Proxmox nodes from PVE 7 to 8, and everything is mostly working well, but the 3 OSDs on the upgraded machine are missing.
lsblk still shows the disks present:
Code:
root@hypervisor-4:~# lsblk
NAME                       MAJ:MIN   RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   931G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0 930.5G  0 part
  ├─pve-swap                 252:2    0    16G  0 lvm  [SWAP]
  ├─pve-root                 252:3    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:4    0     8G  0 lvm
  │ └─pve-data-tpool         252:7    0 786.4G  0 lvm
  │   └─pve-data             252:8    0 786.4G  1 lvm
  └─pve-data_tdata           252:5    0 786.4G  0 lvm
    └─pve-data-tpool         252:7    0 786.4G  0 lvm
      └─pve-data             252:8    0 786.4G  1 lvm
sdb                            8:16    0   2.7T  0 disk
└─sdb1                         8:17    0   2.7T  0 part
  └─mpio--3tb-ceph--osd--2   252:6    0   2.7T  0 lvm
sdc                            8:32    0   5.5T  0 disk
└─sdc1                         8:33    0   5.5T  0 part
  └─mpio--6tb-ceph--osd--0   252:1    0   5.5T  0 lvm
sdd                            8:48    0   9.1T  0 disk
└─sdd1                         8:49    0   9.1T  0 part
  └─mpio--10tb-ceph--osd--11 252:0    0   9.1T  0 lvm
But when I run pvscan, lvscan, and vgscan, nothing is detected. My OSDs won't come online because they report missing keyrings, but I assume that's just because they can't find their associated disks. Running fdisk on the mpio--xxx devices shows no partition table and a crypto_luks signature, the same as on the nodes with working OSDs.
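For reference, this is roughly what I've been running on the upgraded node to try to work out why LVM skips these disks. My guess that the newer lvm2 filtering (or its devices file) in PVE 8 is involved is just that, a guess:

Code:
# does LVM see the multipathed partitions at all, even as filtered-out devices?
pvs -a -o +devices

# which filters / devices-file setting are actually in effect after the upgrade?
lvmconfig devices/filter devices/global_filter devices/use_devicesfile

# if the lvm2 devices file is enabled, list what's in it
lvmdevices

# rebuild the metadata cache and retry activation
pvscan --cache
vgchange -ay

# verbose scan of one OSD partition to see why it gets excluded (sdb1 from the lsblk above)
pvs -vvvv /dev/sdb1 2>&1 | grep -iE 'filter|exclude'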
I'm running hyper-converged on a Dell PowerEdge VRTX system with multipath. I've verified that my lvm.conf matches the other working nodes still on Proxmox 7, and my multipath config matches as well, so I'm at a loss for where to go next.
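In case it matters, this is roughly how I compared things between a still-working PVE 7 node and the upgraded one (hypervisor-3 as the working node is just an example name):

Code:
# diff the configs against a working node
diff <(ssh hypervisor-3 cat /etc/lvm/lvm.conf) /etc/lvm/lvm.conf
diff <(ssh hypervisor-3 cat /etc/multipath.conf) /etc/multipath.conf
diff <(ssh hypervisor-3 cat /etc/multipath/wwids) /etc/multipath/wwids

# check the daemon state and the effective (merged) multipath config on each node
systemctl status multipathd
multipathd show config | less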
multipath -ll gives no output on any of my nodes, even the working ones, which I don't really understand either. If I can't figure this out, I can probably complete the upgrade by zapping my OSDs, recreating them, and waiting for a rebalance, but I'm a little nervous that won't even work, since I don't understand why the disks aren't being seen by LVM in the first place.
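If it does come to that, the fallback I have in mind is roughly the following, one OSD at a time and only with the cluster otherwise healthy. OSD 2 on /dev/sdb is just the example from the lsblk output, and recreating would need whatever encryption option matches how the existing OSDs were built:

Code:
# take the OSD out, let the cluster re-replicate its PGs, then remove it
ceph osd out 2
systemctl stop ceph-osd@2
ceph osd purge 2 --yes-i-really-mean-it

# wipe the old LVM/LUKS signatures and recreate the OSD on the same disk
ceph-volume lvm zap /dev/sdb --destroy
pveceph osd create /dev/sdb

# wait for the rebalance/backfill to finish before touching the next OSD
ceph -s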