Migration PVE 6.x to 7.x and Ceph Nautilus to Octopus

avbuuren

Member
Feb 3, 2020
Hello,

I'm currently running a PVE 6.2.15 cluster with 4 nodes and an external Ceph storage backed by a 3-node Ceph Nautilus 14.2.11 cluster, where the VM disks are stored.

I would like to upgrade my PVE cluster from 6.x to 7.x and my Ceph cluster nodes from Nautilus to Octopus. Here is what I plan to do (rough command sketch after the list):

Upgrade my PVE cluster and my Ceph cluster nodes to the latest Proxmox VE 6.4 release
Upgrade the Ceph cluster from Nautilus to Octopus
Upgrade the PVE cluster nodes and the Ceph cluster nodes from 6.x to 7.x
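To make sure I'm not missing anything, this is roughly the command sequence I expect for the first and last steps on each node; I left out the repository changes, and the exact ordering is only my assumption from the docs:

apt update
apt dist-upgrade              # bring the node to the latest 6.4 / Nautilus 14.2.x packages
pveversion -v                 # confirm 6.4 on every node before going further

pve6to7 --full                # readiness checker shipped with the latest 6.4 packages, run before the 6.x -> 7.x jump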

I've read the upgrade procedure and I have some questions and doubts, because there are major changes with ZFS 2.x / Debian 11 / the latest Nautilus release:

1- After upgrading to PVE 6.4, regarding ZFS and the known issues, I will be in the case described at https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool, because:

- I initially installed all my Proxmox nodes from the 5.x ISO
- I boot in legacy (BIOS) mode
- I use ZFS RAID1 as the root filesystem

Can you confirm that if I never run the "zpool upgrade" command after the upgrade to ZFS 2.0, there is no risk of breaking my pool?
And that I only need to run "zpool replace" / "zpool offline/online" if a disk fails (sketch below)?
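To check my understanding of the disk-replacement part on a legacy-boot ZFS RAID1 root, this is roughly what I would expect to run; the device names and partition numbers are just placeholders based on the standard PVE layout, please correct me if this is wrong:

zpool status rpool                   # pool health; also shows if new ZFS 2.0 features are pending (I would NOT run "zpool upgrade")
proxmox-boot-tool status             # boot setup after switching to proxmox-boot-tool

# hypothetical replacement of a failed mirror member (sdb failed, sdc is the new disk)
sgdisk /dev/sda -R /dev/sdc          # copy the partition layout from the healthy disk
sgdisk -G /dev/sdc                   # randomize the GUIDs on the new disk
zpool replace rpool /dev/sdb3 /dev/sdc3
proxmox-boot-tool format /dev/sdc2
proxmox-boot-tool init /dev/sdc2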


2- After upgrading my 3 Ceph nodes to the latest Nautilus release, I will be in the case described at https://forum.proxmox.com/threads/c..._id-reclaim-cve-2021-20288.88038/#post-385756
I don't really understand what I have to do. Do I have to restart all my VMs, or do I only have to run "ceph config set mon auth_allow_insecure_global_id_reclaim false" (see the sketch below)?
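My current understanding of the linked thread, which may well be wrong, is something like the following; the idea that all daemons and clients have to be on the fixed version first is my assumption:

ceph health detail       # shows the AUTH_INSECURE_GLOBAL_ID_RECLAIM* warnings, if any
ceph versions            # check that mons / mgrs / osds / clients all run the patched release
# only once everything (including the librbd clients used by the VMs) is on the fixed version:
ceph config set mon auth_allow_insecure_global_id_reclaim false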

3- Before upgrading from PVE 6.x to 7.x: I have read the chapter https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Check_Linux_Network_Bridge_MAC.

I'm currently using lots of vmbr bridges / bonds / VLANs in my /etc/network/interfaces. Same question: I don't really understand what I have to do.
I don't use ifupdown2, so do I have to add an "hwaddress MAC" line for all my physical NICs in every vmbr bridge section?

In the following example my vmbr is linked to a bond of 2 NICs (eno1, eno2):

auto bond2
iface bond2 inet manual
    slaves eno1 eno2
    bond_miimon 100
    bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 10.X.X.X
    netmask 255.255.255.0
    gateway 10.X.X.Y
    bridge-ports bond2
    bridge-stp off
    bridge-fd 0

So should the result be the following?

auto vmbr0
iface vmbr0 inet static
    address 10.X.X.X
    hwaddress "MAC ENO1"
    hwaddress "MAC ENO2"
    netmask 255.255.255.0
    gateway 10.X.X.Y
    bridge-ports bond2
    bridge-stp off
    bridge-fd 0
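And if only a single hwaddress line is needed, I assume I would simply read the MAC of the first bond slave like this and paste it in (eno1 used as an example):

cat /sys/class/net/eno1/address      # MAC of the first bond slave
ip -br link show eno1                # same information via iproute2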

Thanks for your reply.
 
