Perhaps some helpful numbers ..
We have a Dell server with dual 18-core processors that normally runs at a load average of 18 to 22 at this time of the morning (8:40am EST), and right now it's only running at 14
We have a couple of other Dell servers with dual 12-core processors that usually run at...
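(For anyone comparing numbers on their own hardware, that's just plain load average against core count .. nothing fancier than:
nproc
uptime
nproc reports logical CPUs, so that dual 18-core box shows 72 with hyperthreading on .. which I'm assuming here .. and uptime gives the 1/5/15 minute load averages I'm quoting.)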
Honestly, I know everyone always wants to see some sort of metric, but without context that would be no better than standard anecdotal evidence. I've now taken all of our KVM VMs off ballooning and they are all noticeably faster. I know this is anecdotal and perceptual, but believe you me...
It appears no one has said anything about this yet, so I'll make a comment. If more people can back this up, even better.
I turned off memory ballooning on all VMs on our weakest node. After doing so, those VMs performed like they do on the stronger nodes. This may seem anecdotal...
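For anyone who wants to try the same thing, this is roughly how ballooning gets switched off per VM from the CLI (VMID 100 is just an example .. and I believe the guest needs a full stop/start afterwards for the balloon device to actually go away):
qm set 100 --balloon 0
Setting balloon to 0 disables the balloon device for that VM .. the same thing can also be done in the GUI under the VM's Hardware -> Memory settings.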
@midsize_erp - it would seem @Richard isn't answering .. perhaps because they already know about the regressions and are actively working on the issue? I'm really hoping so, because I'm not liking being stuck on the 5.13.19-6-pve kernel when there are already 2 newer kernel versions available in the...
To add to this .. since the Proxmox 7.2 update .. we've been seeing this weird issue .. after rebooting, servers come up and give an error when you try to view them through the WebGUI .. I can't remember the error exactly, maybe error 595? Anyway, you have to ssh to the server and do
systemctl...
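(The command got cut off above and I don't have the exact line handy .. purely as a guess, the WebGUI-facing services one would normally restart in that situation are something like:
systemctl restart pveproxy pvestatd
.. but treat that as a guess at the missing command, not a quote of what was actually run.)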
Something to add to this OSD heartbeat problem with the new kernel ... on nodes that only run VMs, the VMs will just shut down for no reason under the new kernel .. bluescreen sometimes .. backed those nodes down to kernel 5.13.19-6 and that weirdness stopped too
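(If anyone wants to see whether QEMU itself logged anything when a guest dies like that, the generic starting point .. nothing Proxmox-specific, just the journal .. would be something like:
journalctl -b | grep -iE 'qemu|kvm|oom'
That at least helps separate an OOM kill or a QEMU crash on the host from the guest bluescreening on its own.)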
Not happy to hear you're having the same issue, but glad to know someone else is having it. It's also interesting that you're running a slightly newer kernel version than we were when we had the issue, which tells me I made the right choice not to update to the latest kernel yet .. we are...
FYI .. I reverted all Ceph nodes to the 5.13.19-6 kernel and the OSD heartbeat warnings on both the front and back networks disappeared. Clearly something changed with the new 5.15 kernel. It'd be nice for anyone else having this issue to pipe up and say so, to make it clear we aren't the only ones.
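For anyone else needing to roll back .. the old kernel package (pve-kernel-5.13.19-6-pve) stays installed after the upgrade, so it's mostly a matter of making it the default boot entry. A rough sketch, assuming a recent proxmox-boot-tool that has the pin subcommand:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.13.19-6-pve
reboot
If your proxmox-boot-tool doesn't have "kernel pin" yet, pointing GRUB_DEFAULT in /etc/default/grub at the 5.13.19-6-pve entry and running update-grub gets you the same result.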
No, because there have been no network issues. I have also restarted various OSDs and that has done nothing. The issue didn't appear on any previous kernel or Proxmox version, and now it shows up with kernel 5.15
This is absolutely related to the new Kernel or new Proxmox binaries brought...
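(For anyone who wants to check the same thing on their own cluster, the heartbeat warnings and per-OSD latencies are visible with the standard Ceph commands:
ceph health detail
ceph osd perf
ceph health detail lists the slow front/back heartbeat ping warnings along with the affected OSDs, and ceph osd perf shows per-OSD commit/apply latency.)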
Bump .. also, I got the Ceph version wrong .. it's 16.2.7, not 16.2.6 .. and it has been since the Proxmox team released that version.
Also, all servers in question are on Proxmox Enterprise repo.
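(Quick way to double-check both of those, for anyone following along:
ceph versions
pveversion -v
ceph versions shows what each Ceph daemon is actually running, and pveversion -v lists the installed Proxmox packages, including the kernel.)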
This problem has arisen with our hyperconverged Ceph too .. when? Immediately after upgrading to Proxmox 7.2 and rebooting into the new 5.15 kernel. It's come up on all Ceph nodes (Pacific 16.2.6). They are Dell R740s running Mellanox 25Gb fiber cards and 100% NVMe drives. Everything was...
Is there any movement on this? I have the exact same error. I tried renaming the "acme" folder and that didn't do anything .. still getting the locking error. No network errors as everything else works as it should.
pveversion -v output is ---
I've never needed to use "exportfs -a" .. it's just a higher-level way of making the changes in /etc/exports active, which can also be done with
systemctl restart nfs-kernel-server
... just saying ..
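For completeness, either route ends up applying whatever is currently in /etc/exports, and showmount confirms the result afterwards (the hostname is just a placeholder):
exportfs -ra
systemctl restart nfs-kernel-server
showmount -e nfs-server.example.com
(the -r in exportfs -ra re-syncs the export table, so entries removed from the file get dropped as well)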
Friendodevil - did you get this working?
It seems that using SPICE with Proxmox is broken .. I tried it in the past (years ago) and it didn't work, and I just tried it now and it still doesn't work. I have port 3128 forwarded for IPv4 and an allow rule for IPv6, and still no dice. It would be helpful if there were some clear...
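(Not a fix, but for anyone debugging the same thing, the two basic checks would be whether spiceproxy is actually listening on the node and whether the port is reachable from the client side .. the hostname below is a placeholder:
ss -tlnp | grep 3128
nc -zv pve.example.com 3128
On a stock install the spiceproxy service listens on TCP 3128, so if either of those fails, the problem is the network path rather than SPICE itself.)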
No, until OFED is fully certified for Debian Bullseye, we can't fool with that .. all our servers are crucial to our customers' day-to-day operations .. we can't use them for testing ... and we don't have extra equipment to try it on.
If you decide to go that route, we'd be happy to...