Very bad update experience - NFS won't mount

Republicus

So I'm in a huge rut. I updated my nodes and everything seems to have broken.

ZFS won't mount encrypted datasets (separate post created for this)
NFS won't mount
NFS won't export
No syslog (in the GUI)
Local storage won't load (communication failure)

The Datacenter view shows quorum and active nodes -- all of which appear offline. All shared (and local) storage is broken, and I'm dead in the water...

Code:
root@pvesan:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 
One of my servers also hung on boot after running the updates.

IMG_20200525_103021.jpg

Try booting with the previous 5.4.34-1-pve kernel.

IMG_20200525_104024.jpg

After booting with the 5.4.34-1-pve kernel once, I tried 5.4.41-1-pve again and it booted successfully.

Edit:

5.4.41-1-pve is still hanging on multiple servers, so I will stick with 5.4.34-1-pve for now.
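In case it helps anyone stuck in the same place: one way to keep booting the older kernel by default is to point GRUB at its menu entry. A minimal sketch -- the exact menu entry strings are an assumption and should be copied from your own /boot/grub/grub.cfg:

Code:
# List the available menu entries to get the exact strings
grep -E "submenu|menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2

# In /etc/default/grub, set the default to the 5.4.34-1-pve entry, e.g.:
# GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.4.34-1-pve"

# Apply the change, then reboot
update-grub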
 
I also have a problem booting the newest kernel 5.4.41-1-pve. As a guess, maybe it's related to the encrypted LVM2 disk -- it hangs even before I am asked to enter the crypt passphrase.

Booting the previous kernel 5.4.34-1-pve works as expected again. PVE was installed on top of Debian 10.x on an encrypted LVM2 volume.
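For the pre-passphrase hang, one thing that may be worth checking is whether the new kernel's initramfs actually contains the cryptsetup/device-mapper bits. A minimal sketch, assuming the kernel version from this thread:

Code:
# List crypto-related files packed into the 5.4.41-1-pve initramfs
lsinitramfs /boot/initrd.img-5.4.41-1-pve | grep -Ei 'crypt|dm-mod'

# Rebuild the initramfs for that kernel if something looks missing
update-initramfs -u -k 5.4.41-1-pve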

# cat /etc/debian_version
10.4

# blkid
/dev/loop0: TYPE="squashfs"
/dev/sda1: UUID="xxxx-xxxx" TYPE="vfat" PARTUUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
/dev/sda2: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext2" PARTUUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
/dev/sda3: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="crypto_LUKS" PARTUUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
/dev/mapper/sda3_crypt: UUID="xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx" TYPE="LVM2_member"
/dev/mapper/wrk--debi--01--vg-root: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"
/dev/mapper/wrk--debi--01--vg-swap_1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="swap"

# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-network-perl: 0.4-4
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Attached a picture of trying to boot kernel 5.4.41-1-pve
 

Attachments

  • IMG_20200525_pve.jpg
Thanks for your replies. It may be related, but it's not exactly what I am experiencing. I tried the previous kernel with no joy.

All of my nodes boot. I can access each node's Summary page in the PVE GUI.

>>> This appears to be storage related.
I am having trouble accessing ALL storage (including local) from the PVE GUI.
Networking is reachable, I have quorum, etc.

1590411392318.png

>>> node02 and node03 are intentionally offline while I resolve this issue. Notice the cluster is communicating fine:

1590411556214.png

>>> The nodes see the NFS exports fine

pvesm nfsscan
Code:
root@node01:~# pvesm nfsscan 10.12.12.170
/raid50vol/plex-media         10.12.12.0/24
/raid50vol/pve/iso-images     10.12.12.0/24
/raid50vol/pve/nfs-storage    10.12.12.0/24
/raid50vol/pve/lxc-templates  10.12.12.0/24
/raid50vol/pve/lxc-storage    10.12.12.0/24
/raid50vol/encrypted_data/pve 10.12.12.0/24
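
To separate a storage-plugin problem from a plain kernel/NFS problem, the export can also be mounted by hand, outside of the PVE storage layer. A minimal sketch -- the export path is one of the ones listed above, the mount point and vers=3 option are assumptions:

Code:
mkdir -p /mnt/test-nfs
mount -t nfs -o vers=3,noatime 10.12.12.170:/raid50vol/pve/nfs-storage /mnt/test-nfs

# Clean up afterwards
umount /mnt/test-nfs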

mount.nfs is being blocked by AppArmor

syslog
Code:
May 25 08:28:36 node01 audit[29643]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/mnt/pve/nfs-async/" pid=29643 comm="mount.nfs" fstype="nfs" srcname="10.12.12.170:/raid50vol/pve/nfs-storage" flags="rw, noatime"
May 25 08:28:36 node01 kernel: audit: type=1400 audit(1590409716.523:1089): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/mnt/pve/nfs-async/" pid=29643 comm="mount.nfs" fstype="nfs" srcname="10.12.12.170:/raid50vol/pve/nfs-storage" flags="rw, noatime"
May 25 08:28:36 node01 audit[29646]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/mnt/pve/nfs-async/" pid=29646 comm="mount.nfs" fstype="nfs" srcname="10.12.12.170:/raid50vol/pve/nfs-storage" flags="rw, noatime"
May 25 08:28:36 node01 kernel: audit: type=1400 audit(1590409716.539:1090): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/mnt/pve/nfs-async/" pid=29646 comm="mount.nfs" fstype="nfs" srcname="10.12.12.170:/raid50vol/pve/nfs-storage" flags="rw, noatime"
May 25 08:28:36 node01 audit[29649]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/mnt/pve/nfs-async/" pid=29649 comm="mount.nfs" fstype="nfs" srcname="10.12.12.170:/raid50vol/pve/nfs-storage" flags="rw, noatime"
May 25 08:28:36 node01 kernel: audit: type=1400 audit(1590409716.555:1091): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/mnt/pve/nfs-async/" pid=29649 comm="mount.nfs" fstype="nfs" srcname="10.12.12.170:/raid50vol/pve/nfs-storage" flags="rw, noatime"
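
For reference, a few commands that can help confirm whether AppArmor really is what is rejecting the mount (a minimal sketch; the profile path is an assumption, it is where the lxc package normally ships it):

Code:
# Show which AppArmor profiles are loaded and whether they are enforcing
aa-status

# Pull recent AppArmor denials out of the kernel log
dmesg | grep -i 'apparmor="DENIED"'

# Inspect the profile named in the denial
cat /etc/apparmor.d/usr.bin.lxc-start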



 
@Alwin local storage is accessible from the console/CLI, but NFS is not

I am noticing strange things.
NFS remains unmounted, yet PVE claims HA is successfully starting HA containers whose resources sit on the NFS storage (that won't mount).

Timestamps in the picture are from minutes ago.

1590413635923.png
 
@Alwin local storage is accessible from the console/CLI, but NFS is not
What does pvesm status show? And are you sure /mnt/pve/<storage_id>/ is not mounted?
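For example, something along these lines (a minimal sketch; <storage_id> is whatever name the storage has in /etc/pve/storage.cfg):

Code:
# Storage status as PVE sees it
pvesm status

# Check whether the mount point is actually backed by an NFS mount
findmnt /mnt/pve/<storage_id>

# Or look for it directly in the mount table
grep nfs /proc/mounts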
 
pvesm status hangs.

NFS is not mounting on any of the PVE nodes.

Code:
cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=99011432k,nr_inodes=24752858,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=19813836k,mode=755 0 0
/dev/mapper/pve-root / ext4 rw,relatime,errors=remount-ro 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
none /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=2267 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
sunrpc /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
raid10bak /raid10bak zfs rw,xattr,noacl 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=19813832k,mode=700 0 0
raid10bak/encrypted_data /raid10bak/encrypted_data zfs rw,xattr,noacl 0 0
raid10bak/encrypted_data/pve /raid10bak/encrypted_data/pve zfs rw,xattr,noacl 0 0

I'm still not sure why the local and ZFS filesystems are not accessible either.

Further, nfs-kernel-server is working
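
For what it's worth, the exports can also be double-checked on the server itself (a minimal sketch):

Code:
# Show what the NFS kernel server is currently exporting, with options
exportfs -v

# Ask the server for its export list the way a client would
showmount -e 10.12.12.170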

I just tested a Windows NFS client:

1590416019546.png
 
raid10bak/encrypted_data /raid10bak/encrypted_data zfs rw,xattr,noacl 0 0
raid10bak/encrypted_data/pve /raid10bak/encrypted_data/pve zfs rw,xattr,noacl 0 0
So the NFS server is local? If so, we don't support this type of setup.

But anyhow, check whether the pve-cluster & corosync services are running and aren't showing any errors.
 
It's semi-local.
It's a PVE installation with ZFS exported over NFS.
Of course the other systems access it remotely. Having it join the cluster allows me to monitor the storage through PVE, like a PVE storage appliance.


Thanks for taking the time to respond @Alwin

>>> Both services are running

Code:
root@node05:~# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-05-25 08:48:39 EDT; 1h 34min ago
  Process: 1386 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
 Main PID: 1396 (pmxcfs)
    Tasks: 7 (limit: 9830)
   Memory: 67.7M
   CGroup: /system.slice/pve-cluster.service
           └─1396 /usr/bin/pmxcfs

May 25 10:17:29 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:29 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:29 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:29 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:29 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:29 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:39 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:39 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:39 node05 pmxcfs[1396]: [status] notice: received log
May 25 10:17:39 node05 pmxcfs[1396]: [status] notice: received log

Not sure whether pmxcfs[1396]: [status] notice: received log is indicative of anything or not. I don't see anything more than that notice in the logs, though.

Code:
root@node05:~# systemctl status corosync
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-05-25 08:48:40 EDT; 1h 34min ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
 Main PID: 1512 (corosync)
    Tasks: 9 (limit: 9830)
   Memory: 149.4M
   CGroup: /system.slice/corosync.service
           └─1512 /usr/sbin/corosync -f

May 25 08:51:42 node05 corosync[1512]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
May 25 08:51:42 node05 corosync[1512]:   [KNET  ] pmtud: PMTUD link change for host: 4 link: 0 from 469 to 1397
May 25 08:51:42 node05 corosync[1512]:   [TOTEM ] A new membership (1.327b) was formed. Members joined: 4
May 25 08:51:42 node05 corosync[1512]:   [CPG   ] downlist left_list: 0 received
May 25 08:51:42 node05 corosync[1512]:   [CPG   ] downlist left_list: 0 received
May 25 08:51:42 node05 corosync[1512]:   [CPG   ] downlist left_list: 0 received
May 25 08:51:42 node05 corosync[1512]:   [CPG   ] downlist left_list: 0 received
May 25 08:51:42 node05 corosync[1512]:   [QUORUM] This node is within the primary component and will provide service.
May 25 08:51:42 node05 corosync[1512]:   [QUORUM] Members[4]: 1 4 5 6
May 25 08:51:42 node05 corosync[1512]:   [MAIN  ] Completed service synchronization, ready to provide service.
 
The services seem OK. It might also be a network issue. If the storage isn't needed on node1, then you could limit the nodes (Datacenter -> Storage -> storage_id -> Nodes) and have it truly remote.
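If you go that route, the CLI equivalent would be roughly the following (a sketch; the storage ID and node names are placeholders):

Code:
# Restrict a storage definition to the nodes that should actually use it
pvesm set <storage_id> --nodes node02,node03,node04,node05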
 
Thanks for that suggestion :)

Because it's happening to local storage and not only to network-attached storage, it would seem to me that something else is at play?

I cannot see the contents of the local /var/lib/vz or of any other storage, be it local or networked.
After clicking Summary or Contents of a (local) storage, the GUI hangs on Loading... until I see Connection timed out (596).
 
@Alwin Okay, I am starting to get somewhere now.

It does appear to be a kernel issue -- or an issue between pvesm / corosync and the later kernels; I am not sure which.

What I did was bring up ONE node (node04) with pve-kernel-5.3.18-3-pve (all other nodes remained on the latest pve-kernel-5.4.41-1-pve).
Once node04 reached the PVE console, all of the storages came online on ALL OTHER nodes almost instantly.

I then shut down node04 (pve-kernel-5.3.18-3-pve).
All other nodes that remained online (pve-kernel-5.4.41-1-pve) could still access the shared storages.

I then started node02 normally with pve-kernel-5.4.41-1-pve and it would NOT mount storage again.
I then brought node04 back online with pve-kernel-5.3.18-3-pve -- now node04 and all other nodes (including node02) could mount storage again.

It seems something is wrong with the two newer kernels.

Oddly, right now, as long as at least ONE node is running pve-kernel-5.3.18-3-pve, the PVE services come online cluster-wide.

That would explain why it took me days to notice: my storage-only node, pvesan, was not restarted until yesterday, after I believed the other nodes had updated successfully (everything seemed fine only because pvesan was the one node still running pve-kernel-5.3.18-3-pve).
 
I cannot see the contents of the local /var/lib/vz or of any other storage, be it local or networked.
After clicking Summary or Contents of a (local) storage, the GUI hangs on Loading... until I see Connection timed out (596).
Please don't use the GUI to debug. The data in the GUI is provided by the pvedaemon and pveproxy services. If those hang for whatever reason, nothing will be updated in the GUI.
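If those daemons are what is stuck, restarting them from the CLI is usually harmless and often enough to get the GUI talking again (a minimal sketch; pvestatd is included because it feeds the status data):

Code:
systemctl restart pvedaemon pveproxy pvestatd
systemctl status pvedaemon pveproxy pvestatd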

Oddly, right now, as long as at least ONE node is running pve-kernel-5.3.18-3-pve, the PVE services come online cluster-wide.
What? This sounds very much like black magic. ;) But what is in the log files (syslog/journal)?
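For pulling those logs, something like this (a minimal sketch; the --since timestamp is taken from the log excerpts earlier in the thread):

Code:
# Everything since the last boot, warnings and worse
journalctl -b -p warning

# Just the PVE/corosync services around the time the storage hung
journalctl -u pve-cluster -u corosync -u pvestatd -u pvedaemon --since "2020-05-25 08:00"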
 
Haha! Yes, I know.

I ran top before the cluster came back online and noticed a kworker was using a lot of CPU.

kworker/u132:3-ib-comp-unb-wq

My systems are connected to storage via 40Gb InfiniBand.

After getting the cluster back online and storage mounted cluster-wide, that kworker is no longer listed in top.

I wonder if the single system coming online with the older kernel somehow made IB's subnet manager happy (mine is integrated on my IB switch) and that is why the other systems came online at the same time.

So my issue may not be experienced by anyone not using InfiniBand hardware.
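For anyone hitting the same thing, the IB link and subnet manager state can be checked from the node with the infiniband-diags tools (a minimal sketch, assuming that package is installed):

Code:
# Port state, rate and link layer of the local HCA
ibstat

# Ask the fabric which subnet manager is active
sminfo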

@Alwin can you tell me where best to report this issue?


PS: Using the GUI was sufficient to say something was wrong. Naturally, all of my troubleshooting has been done at the console ;)
 
I wonder if the single system coming online with the older kernel somehow made IB's subnet manager happy (mine is integrated on my IB switch) and that is why the other systems came online at the same time.
The IB kernel module in use may have changed, and that's why you're seeing the issue.

@Alwin can you tell me where best to report this issue?
That depends on what is broken. You may need to go through the kernel versions from Ubuntu (yes, the Proxmox kernel is based on Ubuntu's) to find the last version that works. Best to start with the 5.4 release candidates, as most of the changes between 5.3.18-3 and 5.4.41 landed there.
https://kernel.ubuntu.com/~kernel-ppa/mainline/
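For testing a mainline build, the rough procedure would be to download the image and modules .debs for the version under test and install them; the file names below are hypothetical and depend on the exact version picked from the page above:

Code:
# Hypothetical example: install a downloaded mainline test kernel
apt install ./linux-image-unsigned-5.4.0-*_amd64.deb ./linux-modules-5.4.0-*_amd64.deb

# Regenerate the boot menu, then reboot into the new entry
update-grub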

EDIT: and have a look here
https://forum.proxmox.com/threads/proxmox-ve-6-2-released.69647/post-316043
 
