Upgrading PVE Tries to Remove proxmox-ve package

The new kernel seems to have some sort of issue with Thunderbolt, or at least with my QNAP QNA-T310G1S on my Intel NUC 13. With 6.8.4-3-pve it works fine:

[ 1.740375] atlantic 0000:2e:00.0: enabling device (0000 -> 0002)
[ 2.148198] atlantic 0000:2e:00.0 enp46s0: renamed from eth0
[ 5.815641] atlantic 0000:2e:00.0 enp46s0: entered allmulticast mode
[ 8.189958] atlantic 0000:2e:00.0 enp46s0: atlantic: link change old 0 new 10000
[ 17.399990] atlantic 0000:2e:00.0 enp46s0: entered promiscuous mode

And it is alive and well too, with valid firmware:

root@pve-3:~# ethtool -i enp46s0
driver: atlantic
version: 6.8.4-3-pve
firmware-version: 3.1.57
expansion-rom-version:
bus-info: 0000:2e:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: yes
root@pve-3:~#

But with 6.8.8 it fails (keeping all other packages at the latest build as above, only switching the kernel version at boot):

[ 1.725221] atlantic 0000:2e:00.0: enabling device (0000 -> 0002)
[ 2.010209] atlantic: Bad FW version detected: ffffffff
[ 2.010379] atlantic: probe of 0000:2e:00.0 failed with error -95

No more network. Still trying to determine if this is an atlantic driver problem (given the .ko module file is the same size on both versions) or Thunderbolt related. Any ideas/hints welcome!
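For anyone who wants to stay on the known-good kernel until this is sorted out, a minimal sketch using proxmox-boot-tool (assuming 6.8.4-3-pve is still installed on the node):

Code:
# show the kernels proxmox-boot-tool knows about
proxmox-boot-tool kernel list
# pin the known-good kernel so it stays the default boot entry
proxmox-boot-tool kernel pin 6.8.4-3-pve
# once a fixed kernel is released, drop the pin again
proxmox-boot-tool kernel unpin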
Same here with an AQC107 over Thunderbolt. From what I've checked in the kernel git logs, there weren't any changes to the atlantic driver between 6.8.4 and 6.8.8.
 
Same here with an AQC107 over Thunderbolt

The new kernel seems to have some sort of issue with Thunderbolt

Seems similar to this.


Edit: Searching around, I found what appears to be the same Thunderbolt issue reported on an Arch Linux forum.
According to the advice given there, adding the kernel parameter thunderbolt.host_reset=false appears to fix the issue.
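For reference, a rough sketch of how that parameter could be added on a PVE host; which file applies depends on how the node boots (GRUB vs. systemd-boot managed by proxmox-boot-tool):

Code:
# GRUB-booted systems: append the parameter to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet thunderbolt.host_reset=false"
# then regenerate the bootloader config:
update-grub

# systemd-boot systems managed by proxmox-boot-tool: append the parameter
# to the single line in /etc/kernel/cmdline, then:
proxmox-boot-tool refresh

# after rebooting, verify the parameter is active:
cat /proc/cmdline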
 
Hello. I hit the original issue and 'touch'ed the file without realizing what it did. I was able to get things up again using the code below. However, in the WebUI my Proxmox server is now showing a grey state with a '?' icon and I can't migrate anything to it.

Code:
apt install --reinstall pve-manager pve-docs pve-cluster pve-qemu-kvm pve-container
systemctl status pveproxy
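One thing worth checking in that situation, as a sketch (it assumes the '/please-remove-proxmox-ve' flag file from the earlier 'touch' is still lying around, and that the grey '?' state comes from the status daemon rather than from pveproxy):

Code:
# re-arm the apt hook that protects the proxmox-ve meta-package
rm -f /please-remove-proxmox-ve
# make sure the meta-package itself is back
apt install proxmox-ve
# the grey '?' state is usually the status daemon, not the web proxy
systemctl status pvestatd pvedaemon
systemctl restart pvestatd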
 
After installing all updates from the repository, the problem still exists:
root@ZotacNew:~# apt update && apt upgrade
Hit:1 http://ftp.us.debian.org/debian bookworm InRelease
Hit:2 http://ftp.us.debian.org/debian bookworm-updates InRelease
Hit:3 http://security.debian.org bookworm-security InRelease
Hit:4 https://repo.zabbix.com/zabbix/6.0/debian bookworm InRelease
Hit:5 http://download.proxmox.com/debian/pve bookworm InRelease
Hit:6 http://download.proxmox.com/debian/ceph-quincy bookworm InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up initramfs-tools (0.142) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.8.8-1-pve
/etc/initramfs/post-update.d//proxmox-boot-sync: 10: /usr/sbin/proxmox-boot-tool: not found
run-parts: /etc/initramfs/post-update.d//proxmox-boot-sync exited with return code 127
dpkg: error processing package initramfs-tools (--configure):
installed initramfs-tools package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@ZotacNew:~#

Edit:
I solved this problem; just as info for others:
apt --reinstall install proxmox-kernel-helper
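In case someone else ends up in the same half-configured state: after reinstalling the helper, the initramfs-tools package that failed in the trigger still has to be configured. A short sketch of what that could look like:

Code:
# restore /usr/sbin/proxmox-boot-tool
apt --reinstall install proxmox-kernel-helper
# finish the configure step that failed earlier
dpkg --configure -a
# regenerate the initrds and confirm the post-update hook now succeeds
update-initramfs -u -k all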
 
Hello. I hit the original issue and 'touch'ed the file without realizing what it did. I was able to get things up again using the reinstall commands above. However, in the WebUI my Proxmox server is now showing a grey state with a '?' icon and I can't migrate anything to it. [...]
Same here. Ran upgrades on my test cluster this morning and saw all 4 nodes in that state after a bit.
Betting I got some of yesterday's mess in my upgrade because I didn't hit refresh before updating...
whoops, that's on me!

A reboot brings them back momentarily, then they go back to an all-grey state after about 5-10 minutes of running.
All normal VMs and LXCs keep running, though.

Code:
journalctl -u pvestatd


Jun 18 09:01:04 pve01 systemd[1]: Started pvestatd.service - PVE Status Daemon.
Jun 18 09:01:29 pve01 pvestatd[1077]: modified cpu set for lxc/150: 0-1
Jun 18 09:09:51 pve01 pvestatd[1077]: failed to spawn fuse mount, process exited wit>
Jun 18 09:09:52 pve01 pvestatd[1077]: status update time (518.585 seconds)
Jun 18 09:09:53 pve01 pvestatd[1077]: failed to spawn fuse mount, process exited wit>
Jun 18 09:10:02 pve01 pvestatd[1077]: failed to spawn fuse mount, process exited wit>
Jun 18 09:10:06 pve01 systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
Jun 18 09:10:06 pve01 pvestatd[1077]: received signal TERM
Jun 18 09:10:06 pve01 pvestatd[1077]: server closing
Jun 18 09:10:06 pve01 pvestatd[1077]: server stopped
Jun 18 09:10:07 pve01 systemd[1]: pvestatd.service: Deactivated successfully.
Jun 18 09:10:07 pve01 systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
Jun 18 09:10:07 pve01 systemd[1]: pvestatd.service: Consumed 3.930s CPU time.
-- Boot 450b8f43a7a744198ad77359273d9a92 --
Jun 18 09:11:11 pve01 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Jun 18 09:11:15 pve01 pvestatd[1104]: starting server
Jun 18 09:11:15 pve01 systemd[1]: Started pvestatd.service - PVE Status Daemon.
root@pve01:/# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.8-1
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph: 18.2.2-pve1
ceph-fuse: 18.2.2-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.5-1
proxmox-backup-file-restore: 3.2.5-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

Currently:
Code:
systemctl restart pvestatd.service

Brings them back - so I will be monitoring how long they stay online.
------ ~20m, still going good -------
I think my issue was different, but I did make the mistake of updating too soon after an update - that is what a test cluster is for!
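A small sketch of one way to keep an eye on whether pvestatd stalls again; the long "status update time" lines in the journal seem to be the giveaway:

Code:
# restart the status daemon when nodes go grey
systemctl restart pvestatd.service
# watch for slow status updates and the fuse mount failures from above
journalctl -u pvestatd -f | grep -E 'status update time|failed to spawn fuse mount'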
 
Still can't access http://server-pve:8006.
After downloading:
wget http://download.proxmox.com/debian/...amd64/proxmox-backup-client_3.2.5-1_amd64.deb
dpkg -i proxmox-backup-client_3.2.5-1_amd64.deb

Please let me know if anybody has solved this.
After that, I had to run:

apt-get install proxmox-ve

After a reboot, I was able to access the node through the web interface and install the remaining updates.
All VMs and containers on the node came back as normal afterward.

Just in case it helps: my error was that I purged the proxmox-ve package following the warning message.
 
Same here. Ran upgrades on my test cluster this morning and saw all 4 nodes in that state after a bit. [...] Currently, systemctl restart pvestatd.service brings them back - so I will be monitoring how long they stay online.

I performed an upgrade an hour ago and also saw the "grey" status. Running "systemctl restart pvestatd.service" fixed it for now. Thanks.
 
I performed an upgrade 1h ago and also saw the "grey" status. running "systemctl restart pvestatd.service" fixed it for now. Thanks.
FYI - after a reboot of one node (test cluster), the ZFS and PBS entries of that node go grey again after 10 minutes (status unknown).
 
I still have the problem. What should I do?

Code:
Do you want to continue? [Y/n] y
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook)       touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) run apt purge proxmox-ve to remove the meta-package
W: (pve-apt-hook) and repeat your apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook)       - your APT repository settings
W: (pve-apt-hook)       - that you are using 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook

System not fully up to date (found 24 new packages)
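In case it helps, the hook's own advice boils down to checking the configured repositories and running a full upgrade instead of a plain upgrade. A minimal sketch (do not touch the flag file unless you really want proxmox-ve gone):

Code:
# check which repositories are configured
cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# refresh the index and let apt resolve the new dependencies
# without removing proxmox-ve
apt update
apt full-upgrade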
 
After that, I had to run:

apt-get install proxmox-ve

After a reboot, I was able to access the node through the web interface and install the remaining updates. [...]
Rafael, thanks for the reply, but I am still getting an error after apt-get install proxmox-ve and a reboot:

root@pve:~# apt-get install proxmox-ve
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
fonts-font-logos libjs-sencha-touch proxmox-default-kernel proxmox-firewall proxmox-kernel-helper
proxmox-mail-forward proxmox-offline-mirror-docs proxmox-offline-mirror-helper pve-container pve-manager
Suggested packages:
systemd-boot
The following packages will be REMOVED:
pve-kernel-5.15.30-2-pve
The following NEW packages will be installed:
fonts-font-logos libjs-sencha-touch proxmox-default-kernel proxmox-firewall proxmox-kernel-helper
proxmox-mail-forward proxmox-offline-mirror-docs proxmox-offline-mirror-helper proxmox-ve pve-manager
The following packages will be upgraded:
pve-container
1 upgraded, 10 newly installed, 1 to remove and 1 not upgraded.
2 not fully installed or removed.
Need to get 0 B/5,880 kB of archives.
After this operation, 354 MB disk space will be freed.
Do you want to continue? [Y/n] y
Reading changelogs... Done
(Reading database ... 73713 files and directories currently installed.)
Removing pve-kernel-5.15.30-2-pve (5.15.30-3) ...
Examining /etc/kernel/postrm.d.
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 5.15.30-2-pve /boot/vmlinuz-5.15.30-2-pve
update-initramfs: Deleting /boot/initrd.img-5.15.30-2-pve
run-parts: executing /etc/kernel/postrm.d/proxmox-auto-removal 5.15.30-2-pve /boot/vmlinuz-5.15.30-2-pve
/etc/kernel/postrm.d/proxmox-auto-removal: 4: .: cannot open /usr/share/proxmox-kernel-helper/scripts/functions: No such file
run-parts: /etc/kernel/postrm.d/proxmox-auto-removal exited with return code 2
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/pve-kernel-5.15.30-2-pve.postrm line 14.
dpkg: error processing package pve-kernel-5.15.30-2-pve (--remove):
installed pve-kernel-5.15.30-2-pve package post-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
pve-kernel-5.15.30-2-pve
Processing was halted because there were too many errors.
E: Sub-process /usr/bin/dpkg returned an error code (1)

And I still can't access http://server-pve:8006.
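This looks like the same missing proxmox-kernel-helper files mentioned a few posts up, so one possible way out is a sketch along these lines (the "apt download" fallback is only needed if apt refuses to run because of the pending dpkg errors):

Code:
# put the kernel-helper scripts back so the old kernel's postrm can run
apt --reinstall install proxmox-kernel-helper
# fallback if apt refuses because of the pending errors:
#   apt download proxmox-kernel-helper && dpkg -i proxmox-kernel-helper_*.deb
# let dpkg finish the half-done removals and configures
dpkg --configure -a
# then retry
apt-get install proxmox-ve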
 
Same here. Ran upgrades on my test cluster this morning and saw all 4 nodes in that state after a bit. [...] Currently, systemctl restart pvestatd.service brings them back - so I will be monitoring how long they stay online.
Oddly, mine has been up for 25+ minutes after running "systemctl restart pvestatd.service".
 
I keep getting the following when I try to upgrade again...
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (127)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook

Yes, I did the touch command and uninstalled proxmox-ve.

It's not that big of a deal to reinstall, since this was a fresh install where I ran the update at the end (I should have noticed the odd behavior and stopped), but I'd like to find the non-reinstall option just to know for the future.
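For the record, a sketch of what the non-reinstall route could look like. It assumes the exit code 127 means the hook script was removed together with proxmox-ve while its apt configuration entry was left behind, and the filename below is a placeholder for whatever the grep reports:

Code:
# find the apt config snippet that still points at the missing hook script
grep -r pve-apt-hook /etc/apt/apt.conf.d/
# temporarily move that snippet aside so apt can run again
# (placeholder filename -- use the file reported by the grep above)
mv /etc/apt/apt.conf.d/FILE-REPORTED-BY-GREP /root/
# remove the flag file created by the earlier 'touch'
rm -f /please-remove-proxmox-ve
# reinstall the meta-package, which also restores the hook script
apt install proxmox-ve
# put the apt config snippet back afterwards
mv /root/FILE-REPORTED-BY-GREP /etc/apt/apt.conf.d/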
 
Seems similar to this.


Edit: Searching around, I found what appears to be the same Thunderbolt issue reported on an Arch Linux forum.
According to the advice given there, adding the kernel parameter thunderbolt.host_reset=false appears to fix the issue.
Just tried it; it didn't work. Same bad FW message.
 
