Sorry for the mixup! Those posts were created yesterday, but I didn't see them in my feed. This morning I checked the forum again to make sure these posts were not visible before creating a new one.
Yes, please open a new thread (feel free to @-mention me) and include a journal from the working kernel and details about your setup and hardware, in particular where your rootfs is stored ;)
It seems that there is a race condition between systemd-tmpfiles and several ZFS-related services. We are using ZFS on LUKS with a systemd service to unlock the LUKS partitions. Curiously, this didn't happen before. I'll try to fix this.
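One common way to resolve this kind of race is a systemd drop-in that orders the ZFS import after the unlock service. This is only a sketch: the unit name unlock-luks.service is a placeholder for whatever custom unlock unit is in use, and whether zfs-import-cache.service is the right import unit depends on the setup.

```ini
# /etc/systemd/system/zfs-import-cache.service.d/override.conf
# Hypothetical drop-in: make the ZFS pool import wait for the LUKS
# unlock service. "unlock-luks.service" is a placeholder name.
[Unit]
After=unlock-luks.service
Requires=unlock-luks.service
```

After creating the drop-in, `systemctl daemon-reload` is needed for it to take effect.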
As discussed on the 7.0 kernel thread, I'm opening a new thread to investigate the issue.
Yesterday I updated PVE from 9.1.6 to 9.1.9, which installed a 7.0.x kernel. However, on reboot I got a kernel panic:
unable to mount root fs on...
The PBS is a standalone machine and, according to the statistics page (see image below), there are no memory constraints. The S3 garbage collection runs at 2:30 every day, and here is the system summary from the Server Administration page. Both CPU and...
Hi @r.wegmann,
the second orange line just means that the change cannot be applied immediately while the VM is running. It will be applied the next time the VM is started or rebooted from the UI (a reboot within the VM is not enough). If the...
Hi, I'm having the same behaviour on my machine:
Lenovo ThinkCentre 715
64 GB RAM
When running kernel Linux 6.8.12-13-pve it runs absolutely smoothly; when booting into any other kernel, the system freezes after some minutes or hours.
so what was...
We have an issue: the data transfer rate during a sync job is extremely slow, 20-50 MB/s.
The hard drives used are 3.5" Toshiba MG10-F 22 TB (MG10SFA22TE) SAS 12 Gb/s drives,
connected via a 12 Gb/s HBA.
There is no load on them during the data...
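To rule the raw disks in or out, a direct sequential read can be compared against the sync-job rate. This is a hedged sketch: /dev/sdX is a placeholder for one of the Toshiba drives, and the self-contained demo below reads a temp file (so it measures a cached read and will be optimistic; the real test is the commented direct-I/O line).

```shell
# Raw sequential read from the block device, bypassing the page cache
# (run against the real drive, e.g. one of the MG10s):
#
#   dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct
#
# Self-contained demonstration of the same measurement against a temp
# file (cached read, so the reported rate will be far higher than disk):
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=64 conv=fsync 2>/dev/null
dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -n 1   # last line shows the MB/s figure
rm -f "$testfile"
```

If the direct read from the drives reaches the expected ~250 MB/s class throughput while the sync job stays at 20-50 MB/s, the bottleneck is more likely the network path or the datastore workload than the disks themselves.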
Sure! To check some aspects, common problems, and possible showstoppers, read this forum ;-) and these wiki pages:
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE
https://pve.proxmox.com/wiki/Advanced_Migration_Techniques_to_Proxmox_VE
Good...
@celemine1gig Thank you for the update. I know that this problem occurs primarily with Intel NICs, but as you can see, it happens again and again - so this is why I am also looking for a permanent solution, even though I have no Intel NIC. Maybe the...
Hi,
please share the VM configuration of an affected VM (qm config ID, with ID being the numerical ID of the VM) as well as the output of pveversion -v and the full system journal from the source and target node, from boot until and including the problematic...
Is this node in a cluster?
Please share the journalctl log, including roughly the 4 hours before the observed reboot, for example:
root@pve ~# journalctl --since "2026-05-09 00:00" --until "2026-05-09 12:00" | gzip > /tmp/$(hostname)-syslog.txt.gz
For a cluster, it is...
Thanks for the reply. Changing the host bus is the obvious thing to try, as the same problem was resolved with a new Linux version, i.e. by changing the host bus to IDE, as I mentioned in my first post. But that solution is no longer working here. Actually...
3 nodes (HPE DL360 Gen10)
On-board 1GE link to oob/mgmt switch for "link 0".
Two 2-port 25/10GE NICs, with one port from each in an LACP (802.3ad) bond to Nexus switches (in a vPC setup), using 10GE DAC cables and 802.1q tagging. "link 1" is running on an SDN VLAN...
We are currently setting up two separate PVE clusters. Both are HCI clusters running Ceph. The servers are distributed across two colocation facilities in the same city, connected via a low-latency data centre interconnect.
One cluster consists...