
How to roll back / remove updates from pve: 5.0-21 to pve: 5.0-18

Discussion in 'Proxmox VE: Installation and configuration' started by Rais Ahmed, Sep 13, 2017.

  1. Rais Ahmed

    Hi,
    I want to roll back the PVE updates on this node to their previous versions. Please guide me.

    Current version:
    # pveversion -v
    proxmox-ve: 5.0-21 (running kernel: 4.10.17-3-pve)
    pve-manager: 5.0-31 (running version: 5.0-31/27769b1f)
    pve-kernel-4.10.17-2-pve: 4.10.17-20
    pve-kernel-4.10.15-1-pve: 4.10.15-15
    pve-kernel-4.10.17-3-pve: 4.10.17-21
    libpve-http-server-perl: 2.0-6
    lvm2: 2.02.168-pve3
    corosync: 2.4.2-pve3
    libqb0: 1.0.1-1
    pve-cluster: 5.0-12
    qemu-server: 5.0-15
    pve-firmware: 2.0-2
    libpve-common-perl: 5.0-16
    libpve-guest-common-perl: 2.0-11
    libpve-access-control: 5.0-6
    libpve-storage-perl: 5.0-14
    pve-libspice-server1: 0.12.8-3
    vncterm: 1.5-2
    pve-docs: 5.0-9
    pve-qemu-kvm: 2.9.0-5
    pve-container: 2.0-15
    pve-firewall: 3.0-2
    pve-ha-manager: 2.0-2
    ksm-control-daemon: 1.2-2
    glusterfs-client: 3.8.8-1
    lxc-pve: 2.0.8-3
    lxcfs: 2.0.7-pve4
    criu: 2.11.1-1~bpo90
    novnc-pve: 0.6-4
    smartmontools: 6.5+svn4324-1
    zfsutils-linux: 0.6.5.11-pve17~bpo90
    libpve-apiclient-perl: 2.0-2

    I want to revert to this:
    proxmox-ve: 5.0-18 (running kernel: 4.10.17-1-pve)
    pve-manager: 5.0-29 (running version: 5.0-29/6f01516)
    pve-kernel-4.10.15-1-pve: 4.10.15-15
    pve-kernel-4.10.17-1-pve: 4.10.17-18
    libpve-http-server-perl: 2.0-5
    lvm2: 2.02.168-pve3
    corosync: 2.4.2-pve3
    libqb0: 1.0.1-1
    pve-cluster: 5.0-12
    qemu-server: 5.0-14
    pve-firmware: 2.0-2
    libpve-common-perl: 5.0-16
    libpve-guest-common-perl: 2.0-11
    libpve-access-control: 5.0-5
    libpve-storage-perl: 5.0-12
    pve-libspice-server1: 0.12.8-3
    vncterm: 1.5-2
    pve-docs: 5.0-9
    pve-qemu-kvm: 2.9.0-2
    pve-container: 2.0-15
    pve-firewall: 3.0-2
    pve-ha-manager: 2.0-2
    ksm-control-daemon: 1.2-2
    glusterfs-client: 3.8.8-1
    lxc-pve: 2.0.8-3
    lxcfs: 2.0.7-pve2
    criu: 2.11.1-1~bpo90
    novnc-pve: 0.6-4
    smartmontools: 6.5+svn4324-1
    zfsutils-linux: 0.6.5.9-pve16~bpo90
    libpve-apiclient-perl: 2.0-2
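
    A minimal sketch of one way to downgrade with apt, assuming the older package versions are still available in the configured Proxmox repository or in the local apt cache; the package list below is only an example taken from the two version listings above, adjust it to your node:

    # Sketch: downgrade individual packages by pinning apt to the older version.
    # apt-get will warn about the downgrade and ask for confirmation.
    apt-get update
    apt-get install \
        proxmox-ve=5.0-18 \
        pve-manager=5.0-29 \
        libpve-http-server-perl=2.0-5 \
        qemu-server=5.0-14 \
        libpve-access-control=5.0-5 \
        libpve-storage-perl=5.0-12 \
        pve-qemu-kvm=2.9.0-2

    # Optionally hold the packages so a later "apt-get dist-upgrade"
    # does not pull them forward again:
    apt-mark hold proxmox-ve pve-manager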
     
  2. pabernethy (Staff Member)

    What are you actually trying to achieve?
     
  3. Rais Ahmed

    Actually, I am facing a problem when I power off 2 nodes in a 3-node cluster: the last node behaves abnormally and shows "rejecting I/O to offline device" errors. See the post below. After restoring quorum with the command "pvecm expected 1" and rebooting the last node, it works normally.
    https://forum.proxmox.com/threads/pve-5-rejecting-i-o-to-offline-device.36889/

    I have found that node1, which is causing the problem, has the newer running kernel 4.10.17-3-pve, while the other 2 nodes have running kernel 4.10.17-1-pve.
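
    For reference, a short sketch of how the quorum state can be inspected before and after that workaround, run on the node that stays up:

    # Show cluster membership, votes and quorum state
    pvecm status

    # Workaround mentioned above: lower the expected votes so the single
    # remaining node becomes quorate again. Only use this while the other
    # nodes are intentionally offline.
    pvecm expected 1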
     
  4. pabernethy (Staff Member)

    I doubt that the kernel is causing that issue. But you can test without reverting the packages, just by selecting the old kernel in the grub menu.
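
    A sketch of how the old kernel can also be pinned as the default boot entry instead of being picked by hand on every boot, assuming a standard GRUB setup; the exact menu entry titles must be copied from your own grub.cfg:

    # List installed PVE kernels and the menu entries GRUB generated
    dpkg -l 'pve-kernel-*' | grep '^ii'
    grep "menuentry '" /boot/grub/grub.cfg

    # In /etc/default/grub, set GRUB_DEFAULT to "submenu-title>entry-title",
    # using the exact titles found above (the line below is a placeholder):
    #   GRUB_DEFAULT="Advanced options for ...>..., with Linux 4.10.17-1-pve"
    # Then regenerate the GRUB configuration and reboot:
    update-grub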
     
  5. Rais Ahmed

    I switched to the old kernel, but now every time I turn off node2 and node3 and run the cluster on the single remaining node, that node reboots automatically. I don't understand what is going on :[
     
  6. dcsapak (Proxmox Staff Member)

    Do you have HA turned on? If yes, turning off 2 of 3 nodes will trigger self-fencing.
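
    Whether HA is actually managing resources on this cluster can be checked from any node, a short sketch using the standard ha-manager CLI:

    # Quorum/master/LRM state per node plus the services under HA
    ha-manager status

    # Resources configured for HA; if this list is empty, HA self-fencing
    # should not be what reboots the node
    ha-manager config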
     
  7. Rais Ahmed

    Yes, HA is configured, and as you said, turning off 2 nodes (node2 & node3) makes node1 reboot. Checking the logs, just before rebooting node1 shows:
    client watchdog expired
    But when I bring down node1 & node2, node3 does not restart, and the same happens with node2; only node1 gets rebooted automatically.
    Note: we created the cluster on node1 and added node2 & node3 to it.
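
    A sketch of how the fencing can usually be confirmed after node1 comes back up, by looking at the previous boot's journal; the unit names are the standard PVE HA/watchdog services, adjust as needed:

    # On node1 after the reboot: previous boot's log for the HA services
    # and the watchdog multiplexer around the time of the reset
    journalctl -b -1 -u pve-ha-crm -u pve-ha-lrm -u watchdog-mux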
     
