Hello,
We have a cluster which we have upgraded to 6.4; everything went smoothly. Thank you for your continuous hard work.
We have been reading the known issues and are aware of the following instructions:
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
We have switched to proxmox-boot-tool on all hypervisors except the 3 oldest ones. Unfortunately, those were installed from a Proxmox VE ISO older than 5.4 and have a different disk layout:
Code:
# lsblk -o +FSTYPE
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE
sda        8:0    0 238.5G  0 disk
├─sda1     8:1    0  1007K  0 part
├─sda2     8:2    0 238.5G  0 part            zfs_member
└─sda9     8:9    0     8M  0 part
sdb        8:16   0 238.5G  0 disk
├─sdb1     8:17   0  1007K  0 part
├─sdb2     8:18   0 238.5G  0 part            zfs_member
└─sdb9     8:25   0     8M  0 part
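To make the problem concrete: a sketch of how one might confirm from that output that there is no vfat ESP on these disks, which (as we understand the wiki page) is what proxmox-boot-tool would need to initialise. The `lsblk_out` variable here is just a hypothetical transcription of the FSTYPE column above; on a live host we would inspect `lsblk -o NAME,FSTYPE` directly.

```shell
# Hypothetical transcription of the NAME/FSTYPE columns shown above;
# on a real host this would come from: lsblk -o NAME,FSTYPE /dev/sda
lsblk_out='sda1
sda2 zfs_member
sda9'

# Pre-5.4 installs have only a 1007K BIOS-boot partition, the ZFS
# partition, and the 8M reserved partition -- no vfat ESP anywhere.
if ! printf '%s\n' "$lsblk_out" | grep -q vfat; then
  echo "no ESP on this disk - nothing for proxmox-boot-tool to init"
fi
```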
We have Ceph active on our cluster, and those 3 hypervisors are actually the Ceph monitors, managers, and metadata servers. They also host 6 OSDs each, out of a total of 70 OSDs.
So what is the best way to switch those machines to proxmox-boot-tool? If a reinstallation is required, which steps should we take into consideration, and in which order?
Any help is appreciated.