same here with an AQC107 over Thunderbolt. From what I've checked in the kernel git logs, there weren't any changes to the atlantic driver between 6.8.4 and 6.8.8.

The new kernel seems to have some sort of issue with Thunderbolt, or at least with my QNAP QNA-T310G1S on my Intel NUC 13. With 6.8.4-3-pve it works fine:
[ 1.740375] atlantic 0000:2e:00.0: enabling device (0000 -> 0002)
[ 2.148198] atlantic 0000:2e:00.0 enp46s0: renamed from eth0
[ 5.815641] atlantic 0000:2e:00.0 enp46s0: entered allmulticast mode
[ 8.189958] atlantic 0000:2e:00.0 enp46s0: atlantic: link change old 0 new 10000
[ 17.399990] atlantic 0000:2e:00.0 enp46s0: entered promiscuous mode
And it is alive and well too, with valid firmware:
root@pve-3:~# ethtool -i enp46s0
driver: atlantic
version: 6.8.4-3-pve
firmware-version: 3.1.57
expansion-rom-version:
bus-info: 0000:2e:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: yes
root@pve-3:~#
But with 6.8.8 it fails: (keeping all other packages on the latest build as above - flipping the kernel versions for boot only)
[ 1.725221] atlantic 0000:2e:00.0: enabling device (0000 -> 0002)
[ 2.010209] atlantic: Bad FW version detected: ffffffff
[ 2.010379] atlantic: probe of 0000:2e:00.0 failed with error -95
No more network. Still trying to determine whether this is an atlantic driver problem (the .ko module file is the same size in both versions) or Thunderbolt-related. Any ideas/hints welcome!
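If it helps anyone else digging: file size alone won't tell the builds apart, so here is a quick sketch (assuming the stock /lib/modules layout; adjust the version strings to whatever kernels you have installed) to compare the actual module binaries and build tags:

```shell
# Compare the atlantic module shipped by the two kernels; paths follow
# the standard /lib/modules/<version> layout
for v in 6.8.4-3-pve 6.8.8-1-pve; do
  find "/lib/modules/$v" -name 'atlantic.ko*' -exec sha256sum {} \; 2>/dev/null
done

# modinfo shows which file the running kernel would load and its build tags
modinfo atlantic 2>/dev/null | grep -E '^(filename|vermagic|srcversion)' || true
```

Identical checksums across both kernels would point away from the driver itself and toward the Thunderbolt side.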
same here with an AQC107 over thunderbolt
The new kernel seems to have some sort of issue with Thunderbolt
thunderbolt.host_reset=false
appears to fix the issue.

apt install --reinstall pve-manager pve-docs pve-cluster pve-qemu-kvm pve-container
systemctl status pveproxy
Same here. Ran upgrades on my Test Cluster this morning and am seeing all 4 nodes in that state after a bit...

Hello. I got the original issue and ran the 'touch' command without realizing what it did. I was able to get things up again using the code below. However, in the web UI my Proxmox server is showing a grey state with a '?' icon and I can't migrate anything to it now.
Code:
apt install --reinstall pve-manager pve-docs pve-cluster pve-qemu-kvm pve-container
systemctl status pveproxy
journalctl -u pvestatd
Jun 18 09:01:04 pve01 systemd[1]: Started pvestatd.service - PVE Status Daemon.
Jun 18 09:01:29 pve01 pvestatd[1077]: modified cpu set for lxc/150: 0-1
Jun 18 09:09:51 pve01 pvestatd[1077]: failed to spawn fuse mount, process exited wit>
Jun 18 09:09:52 pve01 pvestatd[1077]: status update time (518.585 seconds)
Jun 18 09:09:53 pve01 pvestatd[1077]: failed to spawn fuse mount, process exited wit>
Jun 18 09:10:02 pve01 pvestatd[1077]: failed to spawn fuse mount, process exited wit>
Jun 18 09:10:06 pve01 systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
Jun 18 09:10:06 pve01 pvestatd[1077]: received signal TERM
Jun 18 09:10:06 pve01 pvestatd[1077]: server closing
Jun 18 09:10:06 pve01 pvestatd[1077]: server stopped
Jun 18 09:10:07 pve01 systemd[1]: pvestatd.service: Deactivated successfully.
Jun 18 09:10:07 pve01 systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
Jun 18 09:10:07 pve01 systemd[1]: pvestatd.service: Consumed 3.930s CPU time.
-- Boot 450b8f43a7a744198ad77359273d9a92 --
Jun 18 09:11:11 pve01 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Jun 18 09:11:15 pve01 pvestatd[1104]: starting server
Jun 18 09:11:15 pve01 systemd[1]: Started pvestatd.service - PVE Status Daemon.
root@pve01:/# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.8-1
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph: 18.2.2-pve1
ceph-fuse: 18.2.2-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.5-1
proxmox-backup-file-restore: 3.2.5-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
systemctl restart pvestatd.service
After that, I had to run:

Still can't access http://server-pve:8006.
After downloading:
wget http://download.proxmox.com/debian/...amd64/proxmox-backup-client_3.2.5-1_amd64.deb
dpkg -i proxmox-backup-client_3.2.5-1_amd64.deb
Please let me know if anybody has solved this.
Same here. Ran upgrades on my Test Cluster this morning and am seeing all 4 nodes in that state after a bit.
Betting I got some of yesterday's mess in my upgrade because I didn't hit refresh before updating...
Whoops - on me!
A reboot brings them back momentarily, then they go back to the all-grey state after about 5-10 minutes of running.
All normal VMs and LXCs keep running, though.
Code:
journalctl -u pvestatd
pveversion -v
(full output quoted above)
Currently:
Code:
systemctl restart pvestatd.service
Brings them back - so will be monitoring for how long they're back online.
------ ~20m still going good -------
think my issue was different, but did make the mistake of updating too soon after an update, but that is what a test cluster is for!
FYI - After a reboot of one node (test cluster), the ZFS and PBS entries of that node go grey again after 10 minutes (status unknown).

I performed an upgrade 1h ago and also saw the "grey" status. Running "systemctl restart pvestatd.service" fixed it for now. Thanks.
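Since the nodes go grey again after roughly 10 minutes, a small check like the following (plain journalctl + grep, nothing Proxmox-specific beyond the unit name) can tell you whether the fuse-mount failure from the logs above has reappeared since the last restart:

```shell
# Check the last 10 minutes of the pvestatd journal for the fuse-mount
# failure quoted in this thread; prints which case was hit
if journalctl -u pvestatd --since "10 minutes ago" --no-pager 2>/dev/null \
     | grep -qi 'failed to spawn fuse mount'; then
  echo "pvestatd fuse failures detected at $(date)"
else
  echo "no fuse failures in the last 10 minutes"
fi
```

Running it from cron every few minutes would pin down how long a restart actually holds.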
Do you want to continue? [Y/n] y
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook) touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) run apt purge proxmox-ve to remove the meta-package
W: (pve-apt-hook) and repeat your apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook) - your APT repository settings
W: (pve-apt-hook) - that you are using 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook
System not fully up to date (found 24 new packages)
The systemctl restart pvestatd.service seemed to fix the issue for now.

I performed an upgrade 1h ago and also saw the "grey" status. Running "systemctl restart pvestatd.service" fixed it for now. Thanks.
Rafael, thanks for the reply, but I am still getting the error after apt-get install proxmox-ve and a reboot.

After that, I had to run:
apt-get install proxmox-ve
After a reboot, I was able to access the node through the web interface and install the remaining updates.
All VMs and containers on the node came back as normal afterward.
Just in case: my error was that I purged the proxmox-ve package following the warning message.
Oddly, mine's been up for 25m plus after running the 'systemctl restart pvestatd.service'.
In /etc/default/grub (GRUB_CMDLINE_LINUX), and then I ran update-grub to commit the change to the boot config.

How & where did you add the kernel parameter?
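For anyone else wondering, a sketch of that edit done on a copy first (standard GRUB paths assumed; systems booting via proxmox-boot-tool edit /etc/kernel/cmdline instead and run proxmox-boot-tool refresh):

```shell
# Work on a copy first so nothing is clobbered by accident
cp /etc/default/grub /tmp/grub.new 2>/dev/null || echo 'GRUB_CMDLINE_LINUX="quiet"' > /tmp/grub.new

# Append the parameter to the GRUB_CMDLINE_LINUX line
sed -i 's/^GRUB_CMDLINE_LINUX="\([^"]*\)"/GRUB_CMDLINE_LINUX="\1 thunderbolt.host_reset=false"/' /tmp/grub.new

# Inspect the result
grep '^GRUB_CMDLINE_LINUX' /tmp/grub.new

# Once it looks right, apply and regenerate the boot config:
#   cp /tmp/grub.new /etc/default/grub && update-grub
```

After the next reboot, `cat /proc/cmdline` should show the parameter active.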