Have you restarted the VM? For me, the entry solves the problem. I tested it on Server 2019 and Server 2025 VMs. However, the setting only takes effect after a VM reboot.
Compiled new versions here:
https://github.com/PwrBank/pve-esxi-import-tools/releases/tag/1.1.0
Here are the notable changes:
8 concurrent TCP connections to ESXi (up from 1 with HTTP/2 multiplexing)
Round-robin distribution of requests across...
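Not the tool's actual implementation, just a minimal Python sketch of the round-robin idea described above: open a fixed pool of connections once, then hand each request the next connection in rotation (the host name and pool size are placeholders, not values from the release):

```python
import itertools
import http.client

ESXI_HOST = "esxi.example.com"  # placeholder, not from the release
POOL_SIZE = 8                   # matches the 8 connections mentioned above

# Open a fixed pool of persistent connections up front.
pool = [http.client.HTTPSConnection(ESXI_HOST) for _ in range(POOL_SIZE)]

# itertools.cycle yields pool[0], pool[1], ..., pool[7], pool[0], ...
# forever, which is exactly a round-robin schedule.
next_conn = itertools.cycle(pool)

def fetch(path: str) -> bytes:
    """Issue one GET on whichever connection is next in the rotation."""
    conn = next(next_conn)
    conn.request("GET", path)
    return conn.getresponse().read()
```

For truly concurrent use you would additionally guard each connection with its own lock, since `http.client` connections are not thread-safe.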
I was able to trick it into working by putting a dummy record in the hosts file for shop.proxmox.com. It does not need to be the actual IP, just something for it to resolve to an IPv4 address, it appears. The subscription check succeeded afterwards.
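For reference, the dummy entry might look like this in `/etc/hosts` (the IP is a documentation-range placeholder; per the above, it does not need to be the real address):

```
# Dummy IPv4 record so name resolution succeeds; the IP is a placeholder
192.0.2.1   shop.proxmox.com
```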
Thanks Chris, I've attached them both here. FWIW, the journalctl output references a missing chunk (/mnt/dd04/.chunks/0c94/0c941ae1160a30872af57ac18dbae2a5b413c9450f8f50f6a87dee22abf3cf6d). I checked on the PBS, and the /mnt/dd04/.chunks/0c94/...
Thanks for the recommendation. Last night my backup at 0:30 did start, and I will see if this fixed it permanently. If it fails again, I might implement a hookscript that retries X times if the job fails.
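A rough sketch of what such a hookscript could look like, assuming the vzdump hookscript calling convention (phase as the first argument, with mode and VMID following for the per-VM phases); the retry count and storage name are made-up placeholders, and whether re-running vzdump from inside the hook is safe with respect to job locking should be verified first:

```python
#!/usr/bin/env python3
# Hypothetical vzdump hookscript: retry a failed VM backup a few times.
import subprocess
import sys

MAX_RETRIES = 3    # the "X times" from the post; pick your own limit
STORAGE = "local"  # placeholder backup storage name

phase = sys.argv[1]

if phase == "backup-abort":
    # Per-VM phases are invoked as: <script> <phase> <mode> <vmid>
    vmid = sys.argv[3]
    # CAUTION: if the original job still holds locks, this may fail;
    # deferring the retry (e.g. via systemd-run) would be safer.
    for _ in range(MAX_RETRIES):
        if subprocess.run(["vzdump", vmid, "--storage", STORAGE]).returncode == 0:
            break
```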
I ran into the same issue and fixed it by setting the VM's network interface to use the correct bridge and making sure the physical NIC is properly connected to that bridge in `/etc/network/interfaces`.
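For comparison, a typical bridge stanza in `/etc/network/interfaces` looks something like this (interface names and addresses are placeholders; yours will differ):

```
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports enp3s0   # the physical NIC must be enslaved here
        bridge-stp off
        bridge-fd 0
```

The VM's virtual NIC then needs to point at `vmbr0` (or whichever bridge carries the right physical port).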
8 to 9 involves an OS upgrade from Debian 12 to Debian 13. I would not recommend attempting to roll back; you will likely end up in bigger trouble than you are in now.
I recently had to move a cluster to a new physical location and tested a "naive" shutdown of the running 3-node cluster with some HA VMs (shutdown policy = migrate) plus Ceph.
I just ran "shutdown now" via ssh on all the nodes (within one...
It seems like your upgrade was not successful. Do you still have the output from the upgrade process?
What do these show:
pveversion -v
debsums -s
dmesg
I ran those two commands, but it's still the same:
Oct 10 09:46:24 pve04 systemd[1]: Failed to start pvedaemon.service - PVE API Daemon.
Oct 10 09:46:24 pve04 systemd[1]: pvedaemon.service: Failed with result 'exit-code'.
Oct 10 09:46:24 pve04 systemd[1]...
Maybe "hook scripts" could be used.
https://pve.proxmox.com/wiki/Backup_and_Restore#_hook_scripts
and e.g. https://forum.proxmox.com/threads/vzdump-hook-script-and-clustering.140684/
Hi,
Realtek NICs using the r8169 driver often trigger transmit queue timeout errors due to driver instability with newer kernels.
Try switching to the r8168 driver or disabling offloads.
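If you try the offload route, something along these lines should work (the NIC name is a placeholder; check which offload flags your NIC exposes with `ethtool -k <nic>` first):

```
# Disable common offloads on the Realtek NIC (replace enp2s0 with yours)
ethtool -K enp2s0 tso off gso off gro off

# Or blacklist r8169 so the r8168 driver can take over
# (assumes the r8168-dkms package is installed)
echo "blacklist r8169" > /etc/modprobe.d/blacklist-r8169.conf
update-initramfs -u
```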