How to roll back / remove updates from pve: 5.0-21 to pve: 5.0-18

Rais Ahmed

Active Member
Apr 14, 2017
Hi,
I want to revert the PVE updates back to a previous state, please guide.

Current version:
# pveversion -v
proxmox-ve: 5.0-21 (running kernel: 4.10.17-3-pve)
pve-manager: 5.0-31 (running version: 5.0-31/27769b1f)
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.10.17-3-pve: 4.10.17-21
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-5
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.11-pve17~bpo90
libpve-apiclient-perl: 2.0-2

I want to revert back to this:
proxmox-ve: 5.0-18 (running kernel: 4.10.17-1-pve)
pve-manager: 5.0-29 (running version: 5.0-29/6f01516)
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.10.17-1-pve: 4.10.17-18
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-14
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
libpve-apiclient-perl: 2.0-2
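
For reference, downgrading individual Debian packages to an explicit version is possible with apt, but only if the older .debs are still reachable (still listed in the configured repository, or still cached under /var/cache/apt/archives). The package names and versions below are taken from the two listings above; whether apt can actually still fetch those versions is an assumption, so treat this as a sketch rather than a supported rollback path.

Check which versions apt can still see for a package:
# apt-cache policy pve-manager

Install explicit older versions (repeat per package, dependency resolution permitting):
# apt install pve-manager=5.0-29 libpve-storage-perl=5.0-12 pve-qemu-kvm=2.9.0-2

Or install a cached .deb directly if it is still in the apt cache:
# dpkg -i /var/cache/apt/archives/pve-manager_5.0-29_amd64.deb

If the old versions are no longer available, switching back to the older kernel (discussed further down) is usually the easier test.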
 
What are you actually trying to achieve?
 
Actually, I am facing a problem when powering off 2 nodes in a 3-node cluster: the last node behaves abnormally and shows "rejecting I/O to offline device" (see the post below). After forcing quorum with the command "pvecm expected 1" and rebooting the last node, it works normally.
https://forum.proxmox.com/threads/pve-5-rejecting-i-o-to-offline-device.36889/

I have found that node1, which is causing the problem, has the newer running kernel 4.10.17-3-pve, while the other 2 nodes run kernel 4.10.17-1-pve.
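
For reference, this is how one might compare the running kernel and the quorum state on each node before changing anything; pveversion, pvecm status and pvecm expected are standard Proxmox VE commands, and the last one should only be used as a temporary workaround on the single remaining node.

Show the running kernel on each node:
# uname -r

Show the cluster/quorum state (Quorate: Yes/No, expected votes):
# pvecm status

Temporary workaround on the last remaining node only, to lower the expected votes:
# pvecm expected 1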
 
I doubt that the kernel is causing that issue. But you can test without reverting the packages, just by selecting the old kernel in the grub menu.
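
If you want the node to keep booting the older kernel after a reboot (instead of picking it by hand on the console each time), one way is to change the default GRUB entry. The exact menu entry title below is an assumption; copy the real one from /boot/grub/grub.cfg on the node.

List the menu entries GRUB knows about:
# grep "menuentry '" /boot/grub/grub.cfg

Then in /etc/default/grub set GRUB_DEFAULT to the matching submenu entry, for example:
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 4.10.17-1-pve"

Regenerate the GRUB config and reboot:
# update-grub
# reboot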
 
I switched to the old kernel, but now every time I turn off node2 or node3 and run the cluster on a single node, it reboots automatically. I don't understand what is going on :[
 
Have you got HA turned on? If yes, turning off 2 of 3 nodes will trigger the self-fencing.
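
For reference, one way to check whether HA is active and which resources it manages; both commands exist in PVE 5.x, and the status output also shows whether the node currently has quorum.

List HA-managed resources and their current state:
# ha-manager status

Show the HA resource configuration:
# ha-manager config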
 
Yes, HA is configured, and as you said, after turning off 2 nodes (node2 & node3), node1 reboots. Checking the logs, just before rebooting node1 shows
"client watchdog expired".
But when I shut down node1 & node2, node3 did not restart, and the same happened with node2. Only node1 reboots automatically.
Note: we created the cluster on node1 and added node2 & node3 to the cluster.
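
For what it's worth, when a node self-fences the reason is usually visible in the journal of the previous boot. "journalctl -b -1" and the pve-ha-lrm / watchdog-mux units are standard on PVE 5.x, but the exact log messages on your nodes are an assumption and may differ.

Logs from the previous boot (the one that ended in the fence/reboot):
# journalctl -b -1 | grep -Ei 'watchdog|quorum|fence'

HA local resource manager and watchdog multiplexer logs from the last hour:
# journalctl -u pve-ha-lrm -u watchdog-mux --since "-1 hour"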
 
