@fiona
Would it be possible to clarify this patch for us?
https://lists.proxmox.com/pipermail/pve-devel/2023-December/061043.html
We have an environment that is pushing the limits of NOFILE.
root@proxhost1:~# for pid in $(pidof kvm); do prlimit -p $pid | grep NOFILE; ls -1 /proc/$pid/fd/ |...
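For anyone wanting to run the same comparison, here is a rough sketch of that kind of check written out in full (standard /proc paths, nothing Proxmox-specific assumed):

# rough sketch: compare each kvm process's NOFILE soft/hard limit with its open fd count
for pid in $(pidof kvm); do
    read soft hard < <(awk '/Max open files/ {print $4, $5}' /proc/$pid/limits)
    fds=$(ls -1 /proc/$pid/fd/ | wc -l)
    echo "pid $pid: $fds open fds (soft limit $soft, hard limit $hard)"
done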
Hey guys, I'm working on setting up Authentik for Proxmox logins and running into an issue with http vs https.
If I utilize http for the issuer URL on the proxmox realm configuration everything works as expected.
If I utilize https for the issuer URL on the proxmox realm configuration I get the...
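Not sure this applies here, but when an OpenID realm only works over plain http, the first thing I would check from the PVE node is whether it can fetch and verify the issuer's discovery document over TLS; the issuer URL below is only a placeholder.

# run on the PVE node; substitute the issuer URL configured in the realm
ISSUER=https://authentik.example.com/application/o/proxmox
curl -v "$ISSUER/.well-known/openid-configuration" >/dev/null
# a certificate verification error here (self-signed cert, missing intermediate,
# hostname mismatch) would explain why the realm works with http but not https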
I can't debate that; at this point we are talking about very basic topics here.
With kernel 5.15+ we started having KSM issues (KSM became basically pointless), we stopped being able to live migrate between hosts of different generations, etc. Proxmox took their stance on all of this and I do feel they...
Sounds like the Proxmox devs need to make some decisions then. We need stability.
The last good kernel was 5.15, and at this point 6.5 has been the next solid kernel.
We are a real production environment (700 VMs running across 7 front ends); from KSM issues to stability to just basic live...
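For anyone wanting to reproduce the KSM comparison between kernels, the sysfs counters are the quickest sanity check (standard paths; the 4 KiB page size below is an assumption):

grep . /sys/kernel/mm/ksm/run /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing /sys/kernel/mm/ksm/pages_unshared
# rough saving: pages_sharing * page size
echo "$(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4096 / 1048576 )) MiB shared via KSM"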
Moved to Proxmox 8.2 yesterday and had a VM crash just now.
[Thu May 16 13:29:43 2024] mce: Uncorrected hardware memory error in user-access at 10ac2b6b880
[Thu May 16 13:29:43 2024] {1}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 0
[Thu May 16 13:29:43 2024]...
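For anyone else hitting an uncorrected MCE like this: it points at the host's RAM rather than PVE itself. A quick way to see whether the DIMMs have been logging errors over time is the EDAC counters, plus rasdaemon if it happens to be installed:

# corrected / uncorrected error counts per memory controller
grep -H . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count
# if rasdaemon is installed, the recorded memory/MCE events:
ras-mc-ctl --summary
ras-mc-ctl --errors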
Attempting to update 2 nodes to the latest PVE7. Both are running ZFS for their OS partition.
During the upgrade they fail with "no space left on device" messages. However, / has more than enough free space on both of them.
cp: error writing...
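For anyone else hitting the same symptom: "no space left on device" on a ZFS root with plenty of apparent free space is often a quota/reservation on a dataset or a small separate filesystem filling up rather than / itself. Rough checks, assuming the default rpool layout:

df -h / /boot /tmp /var/tmp                          # the mount that is actually full may not be /
zfs list -o space -r rpool                           # available space per dataset, incl. snapshot usage
zfs get -r quota,refquota,refreservation rpool/ROOT  # a hit quota also returns ENOSPC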
Well I figured it out.
The following file was causing the issue with vmbr1 and all the weirdness.
/etc/network/if-up.d/vzifup-post
What led me to the issue was this line in the ifupdown2 logs.
2024-02-21 07:08:58,290: MainThread: ifupdown: scheduler.py:331:run_iface_list(): error: vmbr1...
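For anyone else tripping over the same leftover OpenVZ hook, this is roughly how I confirmed and neutralized it (the .disabled destination is just my own convention):

# which package, if any, still owns the old hook?
dpkg -S /etc/network/if-up.d/vzifup-post
# move it out of ifupdown2's way and re-apply the network config
mv /etc/network/if-up.d/vzifup-post /root/vzifup-post.disabled
ifreload -a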
Pulling my hair out on a front end that has been through Proxmox4 -> Proxmox5 -> Proxmox6 -> Proxmox7 -> Proxmox8 upgrades.
This front end was using the old-style NIC naming scheme of eth0, eth1, eth2, etc.
Typically not a huge deal.
- Move the /etc/udev/rules.d/ file out of the way and reboot...
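Before rebooting it is worth checking which predictable names udev would hand out once the legacy rules file is gone, so the bridge/bond config can be updated in the same window; a rough check (eth0 is just the example interface):

# properties udev would use for the predictable name of this NIC
udevadm info -q property -p /sys/class/net/eth0 | grep ID_NET_NAME
ip -br link show   # current names, MACs and link state of all NICs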
Good catch, but it's still behaving very differently from before.
I will have to do more testing, but something seems different. Our hardware department has reported similar issues as well. I will keep digging.
It's happening on all nodes, so maybe it's not a "bug" per se. It's certainly different from what we're used to, though.
root@ccsprox1:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface ens5f0 inet manual
auto bond0
iface bond0 inet manual
slaves eno50 ens5f0...
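For reference, the same bond written in the bond-* syntax that ifupdown2 and the PVE 8 GUI use (the mode and miimon values here are only examples, since the original stanza is cut off), plus the commands I use to validate it before reloading:

auto bond0
iface bond0 inet manual
        bond-slaves eno50 ens5f0
        bond-miimon 100
        bond-mode 802.3ad   # example value, not necessarily what this node uses

ifquery -a --check   # compare running state against /etc/network/interfaces
ifreload -a          # apply the config without a full networking restart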
What's up with all these bugs in the Proxmox 8 GUI around bonding/bridging?
I'm getting that error just trying to remove the bridge.
We expect better than this from Proxmox.
This is a fresh 8.1 install from the ISO, updated against the non-enterprise repos.
Never had any of these issues with Proxmox3...
Hopefully it gets some traction; the workarounds don't seem to work when live migrating the VM. I have to stop/start the VM or use prlimit to change the values manually.
This is a new one for us.
Jan 23 15:29:39 QEMU[284972]: kvm: virtio_bus_set_host_notifier: unable to init event notifier: Too many open files (-24)
Jan 23 15:29:39 QEMU[284972]: virtio-blk failed to set host notifier (-24)
Jan 23 15:29:39 QEMU[284972]: kvm: virtio_bus_start_ioeventfd...
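Until the limit bump lands, the prlimit workaround mentioned above looks roughly like this on a running VM (VMID 100 and the 1048576 value are just examples; the pidfile path is the usual Proxmox location):

pid=$(cat /run/qemu-server/100.pid)            # kvm PID of VM 100
prlimit --pid "$pid" --nofile=1048576:1048576
prlimit --pid "$pid" | grep NOFILE             # verify the new soft/hard limits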
Another update: we hit lockups again last night, except this time the host had some interesting lines in the logs. These three lines appeared right before the VM hit all kinds of kernel panics.
[Thu Nov 9 02:47:05 2023] workqueue: blk_mq_run_work_fn hogged CPU for >10000us 4 times, consider...