Attempting to update 2 nodes to the latest PVE7. Both are running ZFS for their OS partition.
During the upgrade they fail with "no space left on device" messages, even though / has more than enough free space on both of them.
cp: error writing...
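On a ZFS root, the usual suspects when df still shows free space are a quota/refquota on the root dataset or snapshots holding blocks. Rough checks worth running first (the dataset name below is the PVE default, adjust to match):

zpool list rpool
zfs list -o space rpool/ROOT/pve-1
zfs get quota,refquota,reservation rpool/ROOT/pve-1
zfs list -t snapshot -r rpool/ROOT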
Well I figured it out.
The following file was causing the issue with vmbr1 and all the weirdness.
/etc/network/if-up.d/vzifup-post
What led me to the issue was this line in the ifupdown2 logs.
2024-02-21 07:08:58,290: MainThread: ifupdown: scheduler.py:331:run_iface_list(): error: vmbr1...
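In case it helps anyone else hitting this, a rough sketch of taking the hook out of play; the backup directory is just an example, and I'm assuming moving it aside (rather than patching it) is enough:

mkdir -p /root/disabled-hooks
mv /etc/network/if-up.d/vzifup-post /root/disabled-hooks/
ifreload -a    # ifupdown2: re-apply /etc/network/interfaces without the hook firing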
Pulling my hair out on a front end that has been through Proxmox4 -> Proxmox5 -> Proxmox6 -> Proxmox7 -> Proxmox8 upgrades.
This front end was using the old-style NIC naming scheme of eth0, eth1, eth2, etc.
Typically not a huge deal.
- Move the /etc/udev/rules.d/ file out of the way and reboot...
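Roughly what that looks like in practice; 70-persistent-net.rules is the typical name for the old eth0/eth1 pinning, so check the actual filename first:

ls /etc/udev/rules.d/
mv /etc/udev/rules.d/70-persistent-net.rules /root/    # park it somewhere outside rules.d
reboot

After the reboot the NICs come up with the predictable enoX/ensY names, so /etc/network/interfaces has to be updated to match.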
Good catch, but it's still acting much differently than before.
I will have to do more testing, but something seems different. Our hardware department has reported similar issues as well. I will keep digging.
It's happening on all nodes, so maybe it's not a "bug" per se. It's certainly different from what we're used to, though.
root@ccsprox1:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface ens5f0 inet manual
auto bond0
iface bond0 inet manual
slaves eno50 ens5f0...
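For comparison, the same bond written with the bond-* keywords the current PVE docs use; the interface names are copied from above, and the mode/miimon values are only placeholders for whatever the bond is actually running:

auto bond0
iface bond0 inet manual
        bond-slaves eno50 ens5f0
        bond-mode active-backup
        bond-miimon 100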
What's up with all these bugs in the GUI for Proxmox 8 and bonding/bridging?
Getting that just trying to remove the bridge.
We expect better than this from Proxmox.
This is a fresh 8.1 install from the ISO, updated against the non-enterprise repos.
Never had any of these issues with Proxmox3...
Hopefully it gets some traction; the workarounds don't seem to work when live-migrating the VM. I have to stop/start the VM or use prlimit to change the values manually.
This is a new one for us.
Jan 23 15:29:39 QEMU[284972]: kvm: virtio_bus_set_host_notifier: unable to init event notifier: Too many open files (-24)
Jan 23 15:29:39 QEMU[284972]: virtio-blk failed to set host notifier (-24)
Jan 23 15:29:39 QEMU[284972]: kvm: virtio_bus_start_ioeventfd...
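For reference, the prlimit workaround mentioned above looks roughly like this; VMID 100 and the limit values are just examples:

pid=$(cat /var/run/qemu-server/100.pid)
prlimit --pid "$pid" --nofile                      # show the current soft/hard open-file limits
prlimit --pid "$pid" --nofile=1048576:1048576      # raise them for the running QEMU process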
Another update. Hit lockups again last night, except this time the host had some interesting lines in the logs. These three lines appeared right before the VM hit all kinds of kernel panics.
[Thu Nov 9 02:47:05 2023] workqueue: blk_mq_run_work_fn hogged CPU for >10000us 4 times, consider...
Here is some more interesting stuff. This has been an issue since the 6.x kernel hit the streets for Proxmox.
Host: HP DL 560 Gen10
Storage: iSCSI Alletra 6000 NVMe
I have 2 very active CentOS 7 VMs running httpd/Java/Tomcat, typically seeing 2k-3k httpd sessions at any given time.
As...
We hit soft lockups again with the VM set to 192 cores and our compiles set to use 128 of those cores (NUMA on and CPU hotplug disabled).
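Roughly how that CPU layout would be set from the CLI; the VMID and the socket/core split are assumptions on my part (4 x 48 = 192):

qm set 100 --sockets 4 --cores 48 --numa 1    # 192 vCPUs total with NUMA enabled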
No benchmarks; these are production systems, and we don't have time for that kind of thing. Hence the reason we pay for the enterprise repos.
Just updated the...
Good questions.
Just checked Supermicro's site and the host is on the latest BIOS. They don't seem to offer any microcode updates as of yet for this model.
Did some testing without CPU hotplug enabled, and I found the following (rough commands for both configs are sketched after the list).
CPU Hotplug Enabled
- VM Boots aok with all the cores from the...
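Rough commands for the two configs being compared; the VMID and core count are examples:

qm set 100 --hotplug disk,network,usb,cpu --cores 192 --vcpus 192    # hotplug on: cpu in the list, vcpus = cores plugged at boot
qm set 100 --hotplug disk,network,usb --cores 192                    # hotplug off: cpu dropped from the list, all cores present from boot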