Hi,
I have a 4-node cluster with Ceph that has been working well for a few months now. One machine has somewhat older hardware (CPU: Xeon X5670, circa 2010).
Last week I upgraded to kernel 5.13.19-4 and rebooted, and Proxmox failed to find the boot drive. In the end I pulled out all the Ceph OSD disks, after which the reboot succeeded. However, once Proxmox was up again I plugged the OSD disks back in, but they show up with a mismatched version, and this node can no longer connect to the Ceph network storage for its VMs.
Here is my setup:
1. I have two RAID cards. Card #1 is an older RAID card with two virtual disks (RAID 0 each), which hold the zfs-1 pool with the Proxmox system on it.
2. Card #2 is an LSI card in IT mode with two SSDs connected to it, which serve as the two Ceph OSD disks.
If the two OSDs are plugged in, the system will not boot because it cannot find the zfs-1 disks. The first attachment is the output of "zpool status"; it shows the disk IDs of the two ZFS disks, which is why I do not understand how the system can confuse them with the OSD disks at boot.
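I can also post text output instead of the screenshot if that is more useful; I believe these are the relevant commands (assuming the pool is really named zfs-1, please correct me if there is a better way to check):

    zpool status -P zfs-1      # -P prints the full device paths the pool references
    ls -l /dev/disk/by-id/     # maps the by-id names to the current /dev/sdX nodes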
Once it booted up, I plugged the OSD disks back in. Ceph shows the two OSDs with a mismatched version, and this node lost its connection to the Ceph storage (it cannot run VMs).
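If the exact version numbers help, I can gather them from the CLI as well; as far as I know these are the standard commands for that (not output from my node):

    ceph versions         # version summary across mon/mgr/osd daemons
    ceph osd versions     # versions reported per OSD
    ceph osd tree         # which OSDs are up/down and on which host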
Please see the screenshot of the OSD screen.
My questions:
1. Why can Proxmox not find the bootable zfs-1 pool when the OSD disks are plugged in?
2. How do I fix the Ceph OSD version mismatch issue?
Many thanks.