4.4 to 5.2 upgrade. Why is pve-data corrupt?

Discussion in 'Proxmox VE: Installation and configuration' started by jimmyjoe, Jun 18, 2018.

  1. jimmyjoe

    jimmyjoe Member
    Proxmox Subscriber

    Joined:
    Jan 12, 2015
    Messages:
    80
    Likes Received:
    2
    I upgraded from 4.4 to 5.2 recently and noticed my /var/lib/vz ext4 filesystem didn't survive the upgrade. Is this a known issue with Debian 9 or something? On 4.4 this system was rebooted, and fsck'd, practically every weekend for either Proxmox or SAN upgrades, so I'm wondering if the error is due to some feature not present in 4.4 (kernel, ext4, etc.) or just... sunspots.

    # fsck -Cf /dev/pve/data
    fsck from util-linux 2.29.2
    e2fsck 1.43.4 (31-Jan-2017)
    ext2fs_open2: Bad magic number in super-block
    fsck.ext2: Superblock invalid, trying backup blocks...
    The filesystem size (according to the superblock) is 460950528 blocks
    The physical size of the device is 448367616 blocks
    Either the superblock or the partition table is likely to be corrupt!
    Abort<y>? cancelled!

    /dev/mapper/pve-data: ***** FILE SYSTEM WAS MODIFIED *****

    /dev/mapper/pve-data: ********** WARNING: Filesystem still has errors **********
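
    For anyone checking the numbers: the superblock claims 460950528 blocks but the device only has 448367616, so (at the usual 4K block size) the filesystem thinks it is about 1.72 TiB while the LV underneath is only about 1.67 TiB. A rough sketch for comparing both sizes, assuming the ext4 FS sits directly on /dev/pve/data with 4K blocks:

    # tune2fs -l /dev/pve/data | grep -E 'Block (count|size)'   <- size according to the superblock
    # blockdev --getsize64 /dev/pve/data                        <- real size of the LV in bytes
    # lvs -o lv_name,lv_size,lv_attr pve                        <- size according to LVM metadata
    # ls -lt /etc/lvm/archive/ | head                           <- metadata backups show any recent lvreduce/lvextend

    (If the primary superblock is unreadable, dumpe2fs -o superblock=32768 -o blocksize=4096 /dev/pve/data | head can read a backup copy instead.)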
     
  2. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    159
    Hi,
    that's not normal.

    Do you have hardware errors on the PV? What kind of storage is it?
    SAN upgrade? Are you sure that the LVM pve-data is used by one node only? It sounds a little bit like you mount the FS from more than one host.
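
    A rough sketch of what to check on each node (device and VG names assumed, adjust to your layout):

    # dmesg -T | grep -iE 'i/o error|medium error'   <- any block-layer errors from the controller?
    # pvs -o pv_name,vg_name,pv_size,pv_uuid         <- is the same PV UUID visible from more than one host?
    # lvs -o lv_name,lv_attr,devices pve             <- which PVs back pve/data, and is it active/open?
    # grep ' /var/lib/vz ' /proc/mounts              <- is the FS currently mounted, and from which device?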

    Udo
     
  3. jimmyjoe

    jimmyjoe Member
    Proxmox Subscriber

    Joined:
    Jan 12, 2015
    Messages:
    80
    Likes Received:
    2
    No hardware errors that I know of. pve-data is on Areca HW RAID1 with a hotspare and battery backup.

    pve-data is only used locally on each Proxmox node; it is not shared over any network filesystem. We have an Infiniband backend serving Gluster and iSCSI for virtual machines. When the storage must reboot for kernel upgrades, the Proxmox hosts have to power down all VMs and unmount the shared volumes. The Proxmox hosts are then rebooted by the storage server after it has finished upgrading its kernel -- that's when the fsck happens (touch /forcefsck or a grub parameter) on the Proxmox hosts.
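
    For completeness, the forced fsck is set up in one of these two ways (a sketch of the usual Debian 9 / systemd mechanisms; the exact repair behaviour depends on fsck.repair= and may differ from how our hosts are configured):

    # touch /forcefsck                 <- checked by systemd-fsck at the next boot
    or, via the kernel command line in /etc/default/grub:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet fsck.mode=force fsck.repair=preen"
    # update-grub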
     