revert from proxmox 8 to 7

You cannot revert. Your only option is a full backup of both the bare-metal hypervisor OS and the virtual data.
You may not need to restore the VM data, as there are no disk-image changes between 7 and 8. However, the hypervisor OS upgrade (Debian 11 to 12) is not reversible.
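For the hypervisor side, a minimal sketch of what that host-level backup could look like before starting the upgrade (the backup path and storage name below are placeholders, not anything from this thread):

Code:
# archive host and cluster configuration; /etc/pve is the pmxcfs FUSE mount,
# /var/lib/pve-cluster holds its backing database
tar czf /mnt/backup/$(hostname)-pve-config-$(date +%F).tar.gz /etc /var/lib/pve-cluster

# the guests themselves are covered by the normal backup tooling, e.g.
vzdump --all --storage backup-nfs --mode snapshot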

Good luck


 
The cluster is installed on three Dell R750s with Ceph. Proxmox is installed on a Dell BOSS card, ZFS is configured on two local NVMe drives but not in use, all VMs are on Ceph storage, and the root file system is on LVM. In this architecture, what precautions should be taken for a smooth upgrade? All NICs are 10G fibre.
 
The process is the same with a cluster.

The better question is: unless you have the time and resources to properly lab the upgrade, why are you upgrading at all? Is there a feature or problem you have that would be solved by upgrading? PVE 7 is still supported and will remain so for another 9 months, so you have plenty of time to plan this out.
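If you do go ahead, the checklist script that ships with PVE 7 is worth running on every node first; clear any warnings it reports before the actual upgrade:

Code:
# read-only upgrade checks; repeat on each cluster node
pve7to8 --full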
 
@alexskysilk, I have already done practice runs in VMware Workstation, but I do not have spare bare metal to test the upgrade on at the moment. I tried to copy the /etc folder, but not all of the files copied, I think because of files held open in memory. So what is the way out now?
 
@alexskysilk, I am planning the upgrade now because of the holiday season here in India; there is not much load on the system at the moment. If I defer it, the next available window will be next November, which is far away, and version 7 will have reached end of support by then.
 
I tried to copy the /etc folder, but not all of the files copied, I think because of files held open in memory. So what is the way out now?
Don't do that. There's no need, and that's not at all what I meant.

Even if you don't have spare hardware, you can spin up a virtual PVE node, either in your infrastructure or in a cloud instance, restore a sampling of your virtual machines, and verify they work. It's not a perfect test, but it should be good enough for your purposes.
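A minimal sketch of such a test restore, assuming a vzdump archive already exists on the NFS storage and that VMID 9001 is free (the archive name, VMID and target storage are placeholders):

Code:
# restore one sampled guest onto the lab node under a spare VMID
qmrestore /mnt/pve/NFS/dump/vzdump-qemu-101-example.vma.zst 9001 --storage local-lvm
# boot it and check the services inside
qm start 9001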
 
Keep fingers crossed and try... hope for the best... leave the rest to destiny.

But one last question: should I move all the VMs from Ceph to local ZFS storage before upgrading? Would that help at all?
 
Today I upgraded one server to version 8, but Ceph failed to start and two of the OSDs on the node are down. After searching for a few hours, what I found is that the LACP bonds (balance-tcp) are not working after the upgrade. There are four bond interfaces on the node: one is on the mesh network for Ceph using the broadcast protocol, which is working, and the management interface on LACP is working, but the other two bonds are down. Because of that, the Ceph cluster network, which sits on an LACP bond, is not pinging and the Ceph cluster has gone down. I also found that "Apply Configuration" in the Network tab is not working. ifupdown2 was installed and working on version 7, but after the upgrade everything is dead. Any help?
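A few diagnostic commands that can show what state the bonds came up in after the upgrade (a sketch, assuming OVS bonds as in the config posted below; repeat the bond commands for each bond):

Code:
# link state of every interface at a glance
ip -br link
# LACP/bond member status as seen by Open vSwitch
ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0
# what ifupdown2 logged while bringing the interfaces up during boot
journalctl -b -u networking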
 
I found out that the interface names of the extra cards have changed. I am attaching the interfaces file; a sketch for matching the old and new names follows after it.

I tried to change the interface names and clicked Apply Configuration, but it gives a "cannot change" error.

Also, the local disk on the node is filling up by itself and is now at 90%. I do not keep any ISO images or other backups on that small 100 GB partition, and it is now deep RED.



Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno12399np0
iface eno12399np0 inet manual

auto enp177s0f2
iface enp177s0f2 inet manual

auto enp177s0f3
iface enp177s0f3 inet manual

auto eno8303
iface eno8303 inet manual

auto eno8403
iface eno8403 inet manual

auto eno12409np1
iface eno12409np1 inet manual

auto enp178s0f0np0
iface enp178s0f0np0 inet manual

auto enp178s0f1np1
iface enp178s0f1np1 inet manual

auto enp177s0f0
iface enp177s0f0 inet manual

auto enp177s0f1
iface enp177s0f1 inet manual

iface ens4f0np0 inet manual

iface ens4f1np1 inet manual

iface ens5f0 inet manual

iface ens5f1 inet manual

iface ens5f2 inet manual

iface ens5f3 inet manual

auto mgnt
iface mgnt inet static
address 192.168.137.112/24
gateway 192.168.137.1
ovs_type OVSIntPort
ovs_bridge vmbr0

auto data
iface data inet static
address 192.168.137.119/24
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_mtu 9000

auto pvecluster1
iface pvecluster1 inet static
address 192.168.30.112/24
ovs_type OVSIntPort
ovs_bridge vmbr0
ovs_mtu 9000
ovs_options tag=30
#pvecluster-1

auto pvecluster2
iface pvecluster2 inet static
address 10.10.30.112/24
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_mtu 9000
ovs_options tag=30
#pvecluster-2

auto pvecluster3
iface pvecluster3 inet static
address 192.168.30.12/24
ovs_type OVSIntPort
ovs_bridge vmbr2
ovs_mtu 9000
ovs_options tag=30
#pvecluster-3

auto pveceph
iface pveceph inet static
address 192.168.20.112/24
ovs_type OVSIntPort
ovs_bridge vmbr2
ovs_mtu 9000
ovs_options tag=20
#ceph-cluster

auto migration
iface migration inet static
address 10.10.50.112/24
ovs_type OVSIntPort
ovs_bridge vmbr2
ovs_mtu 9000
ovs_options tag=50
#HA-Migration

auto ups
iface ups inet static
address 192.168.50.112/24
ovs_type OVSIntPort
ovs_bridge vmbr3

auto wan
iface wan inet manual
ovs_type OVSIntPort
ovs_bridge vmbr3
ovs_options tag=10
#BSNL

auto airtel
iface airtel inet manual
ovs_type OVSIntPort
ovs_bridge vmbr3
#Airtel

auto nas
iface nas inet static
address 192.168.0.112/24
ovs_type OVSIntPort
ovs_bridge vmbr0
ovs_mtu 9000

auto bond0
iface bond0 inet manual
ovs_bonds eno12399np0 eno12409np1
ovs_type OVSBond
ovs_bridge vmbr0
ovs_mtu 9000
ovs_options bond_mode=balance-tcp lacp=active

auto bond1
iface bond1 inet manual
ovs_bonds enp177s0f0 enp177s0f1
ovs_type OVSBond
ovs_bridge vmbr1
ovs_mtu 9000
ovs_options lacp=active bond_mode=balance-tcp

auto bond2
iface bond2 inet manual
ovs_bonds enp178s0f0np0 enp178s0f1np1
ovs_type OVSBond
ovs_bridge vmbr2
ovs_mtu 9000
ovs_options lacp=active bond_mode=balance-tcp

auto bond3
iface bond3 inet static
address 10.10.20.112/24
bond-slaves enp177s0f2 enp177s0f3
bond-miimon 100
bond-mode broadcast
mtu 9000
#ceph-public

auto bond4
iface bond4 inet manual
ovs_bonds eno8303 eno8403
ovs_type OVSBond
ovs_bridge vmbr3
ovs_mtu 9000
ovs_options lacp=active bond_mode=balance-tcp
#1G

auto vmbr0
iface vmbr0 inet manual
ovs_type OVSBridge
ovs_ports bond0 mgnt pvecluster1 nas
ovs_mtu 9000

auto vmbr1
iface vmbr1 inet manual
ovs_type OVSBridge
ovs_ports bond1 data pvecluster2
ovs_mtu 9000

auto vmbr2
iface vmbr2 inet manual
ovs_type OVSBridge
ovs_ports bond2 pvecluster3 pveceph migration
ovs_mtu 9000

auto vmbr3
iface vmbr3 inet manual
ovs_type OVSBridge
ovs_ports bond4 ups wan airtel
ovs_mtu 9000
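One way to work out which new name corresponds to which old one is to match on MAC address and PCI path, since those do not change with the naming scheme (a sketch; the interface name in the last command is just one of the renamed ports on this node):

Code:
# current names with their MAC addresses and PCI device paths
for i in /sys/class/net/e*; do
    echo "$(basename $i)  $(cat $i/address)  $(readlink -f $i/device)"
done
# udev's naming properties for one of the renamed ports
udevadm info -p /sys/class/net/ens4f0np0 | grep ID_NET_NAME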
 
I'm having a hard time understanding all the intricacies of your config. I'd begin by simplifying it some.

I think your config uses enp177s0f2 and enp177s0f3 for Ceph public (subnet 10.10.20.0/24), but I can't figure out which ports (VLAN) you're using for the private traffic.

A few questions and a suggestion:
1. Do you intend to use dedicated interface(s) for Ceph public, private, or both?
2. Why are you using a vmbr for your Ceph private traffic?
3. I see you're using broadcast (I assume you're cabled for mesh mode) on your Ceph public interface. You should probably VLAN this off and use it for your private network as well, something like so:

Code:
auto bond3
iface bond3 inet manual
    bond-slaves enp177s0f2 enp177s0f3
    bond-miimon 100
    bond-mode broadcast
    mtu 9000

#ceph-public
auto bond3.120
iface bond3.120 inet static
    address 10.10.20.112/24
    
#ceph-private
auto bond3.121
iface bond3.121 inet static
    address 192.168.20.112/24

If you ARE using Ceph in your guests:

Code:
#ceph-public
auto vmbrx
iface vmbrx inet static
    bridge-ports bond3.120
    address 10.10.20.112/24
    bridge-stp off
    bridge-fd 0
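Whichever layout you pick, it's worth checking the VLAN sub-interface and the jumbo MTU end to end after applying (a sketch; the peer address is a placeholder for another node on that subnet):

Code:
ifreload -a                             # apply the config with ifupdown2
ip -d link show bond3.120               # confirm the VLAN sub-interface and its MTU
ping -M do -s 8972 -c 3 10.10.20.113    # 8972 bytes payload + headers = 9000, no fragmentation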
 
Hi,
This setup was working perfectly on version 7.x, and the other two nodes are still in production with Ceph still working. I am using two separate sets of NICs for Ceph public and private: 10.10.20.xx is on the mesh network and has no VLAN, while the private network is on a different OVS port on bond2, in VLAN 20, to isolate that traffic from migration and data. The problem is that (enp177s0f2, enp177s0f3) changed their names to (ens4f0np0, ens4f1np1), four more interfaces changed their names as well, and everything stopped. So many interfaces are now showing in this file that I don't know how to go forward. Should I put a # in front of those old interface names and reconfigure them?

But two days later the biggest problem is that I am unable to get a shell into the CLI; the disk is now full of logs and I am scared to restart. /dev/mapper/pve-root is 100% full. What do I do now? Please help if possible, or I will lose my job.

Code:
Filesystem                      Size  Used  Avail  Use%  Mounted on
udev                            126G     0   126G    0%  /dev
tmpfs                            26G  2.6G    23G   10%  /run
/dev/mapper/pve-root             94G   94G      0  100%  /
tmpfs                           126G   60M   126G    1%  /dev/shm
tmpfs                           5.0M     0   5.0M    0%  /run/lock
efivarfs                        304K   72K   228K   24%  /sys/firmware/efi/efivars
/dev/sda2                       511M  352K   511M    1%  /boot/efi
zpool-ha                        825G  128K   825G    1%  /zpool-ha
/dev/fuse                       128M   56K   128M    1%  /etc/pve
tmpfs                           126G   28K   126G    1%  /var/lib/ceph/osd/ceph-4
tmpfs                           126G   28K   126G    1%  /var/lib/ceph/osd/ceph-5
192.168.0.100:/mnt/ser-hdd/pve   14T  1.6T    13T   12%  /mnt/pve/NFS
192.168.0.44:/mnt/NFSShare      3.6T  1.6T   2.0T   45%  /mnt/pve/NFS-Storage
tmpfs                            26G     0    26G    0%  /run/user/0

The error message is:

connection failed (Error 500: closing file '/var/tmp/pve-reserved-ports.tmp.650163' failed - No space left on device)

What do I do now?
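To see where the space has gone and reclaim enough to work with, something along these lines usually helps (a sketch; after a rough upgrade the systemd journal and rotated logs under /var/log are the usual suspects):

Code:
# biggest directories on the root filesystem only (does not cross mount points)
du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -20
# if the journal is the culprit, cap its size
journalctl --vacuum-size=200M
# large rotated/compressed logs can usually be removed as well
ls -lhS /var/log | head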

These are the new interfaces that changed their names, but I am unable to figure out the corresponding names in v7 (one way to pin the names back is sketched after this list):

iface ens4f0np0 inet manual

iface ens4f1np1 inet manual

iface ens5f0 inet manual

iface ens5f1 inet manual

iface ens5f2 inet manual

iface ens5f3 inet manual
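If matching by MAC works out, one way to keep the existing /etc/network/interfaces unchanged is to pin each port back to its old name with a systemd .link file, one file per renamed NIC (a sketch; the MAC below is a placeholder, take the real one from `ip link`):

Code:
# /etc/systemd/network/10-enp177s0f2.link
[Match]
# placeholder - replace with the port's real MAC address
MACAddress=aa:bb:cc:dd:ee:f2

[Link]
Name=enp177s0f2

After creating the files, refresh the initramfs with `update-initramfs -u -k all` and reboot so the names are applied early in boot.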
 