I do appreciate your time; I'm not trying to beat you guys up. It was a bug we didn't expect, and after going a decade without live migration issues, it left a bad taste in our mouths.
Testing this one right now
http://download.proxmox.com/temp/pve-kernel-5.15.83-fpu-amx-revert/
I will report...
It's simply not what we were sold on with Proxmox, nor what I've experienced, seeing as I have been here since Proxmox 2.x. This is definitely a new position for you guys within the last couple of years, but it's not a position you were in 5-6-7 years ago. The idea was we could run without homogeneous...
Can confirm that both the 5.19.x and 6.1 kernels correct the issue 100%.
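For reference, this is roughly how I pulled the opt-in kernels onto a test node. The meta-package names below are the opt-in packages as I understand them for PVE 7.x, so double-check against your repos before installing.

```
# Opt-in kernel meta-packages (names assumed from the PVE 7.x opt-in kernels).
apt update
apt install pve-kernel-5.19    # or: apt install pve-kernel-6.1
reboot                         # node should come back up on the newer kernel
```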
I am a bit confused at this point.
@t.lamprecht
What is your stance on this? In another thread and in this one you indicate homogeneous hardware is the best bet, but the fact is some of my clusters are now a decade old and have never...
Well, I got a bit ahead of myself. Doing more testing with kvm64, it has the same issue as the other CPU types.
I can migrate from older CPU types to newer CPU types, but not from newer CPUs to older CPUs.
Working on testing 5.19.x now.
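For anyone following along, this is roughly the test I'm repeating from each source node; the VMID and target node name are just placeholders.

```
# Live-migrate a running VM to another node in the cluster (VMID/node are placeholders).
qm migrate 100 gen9-node --online
```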
Appreciate the input.
I am going to do some testing with this.
https://pve.proxmox.com/wiki/Manual:_cpu-models.conf
Mainly because kvm64 is working 100%, which makes me think it's a specific flag I might be able to narrow down and simply do away with.
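If that pans out, the rough idea would be a custom CPU model in /etc/pve/virtual-guest/cpu-models.conf with the suspect flag masked off. The entry below is only a sketch based on the wiki page above, and "some-flag" is a placeholder until I actually narrow the flag down; if I'm reading the docs right, the VM config would then reference it as cpu: custom-gen9-safe.

```
# /etc/pve/virtual-guest/cpu-models.conf (sketch; "some-flag" is a placeholder)
cpu-model: gen9-safe
    flags -some-flag
    reported-model kvm64
```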
Do you think this is something that will get resolved? I understand using homogeneous HW makes sense, but in the real world it doesn't at all. The Gen9s are still solid servers.
It's also worth mentioning:
DL 380 Gen9 = E5-2670
DL 560 Gen10 (2nd Gen Intel) = Xeon(R) Gold 6254
Supermicro (3rd Gen Intel) = Xeon(R) Gold 6348H
I can live migrate from the E5-2670 to both the 2nd and 3rd Gen Intels. I can live migrate between the 2nd and 3rd Gen Intels with no issue...
I like to check with the community as a first step; it's never led me wrong in the 10+ years we have been using Proxmox.
I did find those bugs as well. Getting chrony on all my nodes doesn't seem to help.
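For what it's worth, "getting chrony on all my nodes" amounted to roughly this on each node (standard Debian package/service names), plus checking that every node actually reports itself as synced:

```
# Install chrony and confirm the node is in sync (run on every cluster node).
apt install -y chrony
systemctl enable --now chrony
chronyc tracking      # offset and leap status; "Leap status : Normal" means synced
chronyc sources -v    # the NTP sources in use
```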
Hey guys. We have a 7-node cluster with roughly 600 VMs running. The cluster has been in place since Proxmox 5.
3x DL 380 Gen9s
1x DL 380 Gen10
2x DL 560 Gen10s
1x Supermicro quad-socket server with 3rd Gen Intel
We recently upgraded from Proxmox 7.1 to 7.3, which was also a move from the...
Appreciate the info.
This is pretty funny, tbh. Let's have buttons called "Stop" and "Shutdown", but then "Bulk Stop" does a shutdown. I'm not even sure what to say.
I'll mark this solved... I guess lol
Seems like this should be an easy one.
Is there an easy way to shut down all VMs on a given node gracefully? I see there is a bulk stop; ideally I am looking for a bulk shutdown without shutting down the host itself.
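In the meantime I can script it from the node's shell; something like the loop below is what I have in mind (one qm shutdown per running VM, the timeout value is arbitrary), but a proper bulk shutdown in the GUI would be nicer.

```
# Gracefully shut down every running VM on this node, in parallel, with a timeout.
for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
    qm shutdown "$vmid" --timeout 120 &
done
wait
```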
Hey guys, I have some HP DL 560 Gen10s with dual onboard Broadcom 10G copper network adapters that are throwing call traces on Proxmox 7.2 at every boot. Networking is still working, but I'm not sure if this could cause other issues. It seems to only happen at boot, and there are 3 of them...
I have also found that on Gen8 HP servers the only way I can get any 5.15.x kernel to boot is to disable VT-d in the BIOS. We have roughly 15 Gen8s out in the field, and so far I've had to do that on each one.
I've been around Proxmox since 2.x; so far this kernel is not giving me a very warm...
The host side logs look to be pretty clean.
I will say we run a non-standard mount option in our VMs with ext4, which is data=journal.
I partially suspect some odd issues with io_uring and using data=journal in the 5.15 kernel. We had some filesystem errors with io_uring and data=journal...
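For context, the guests mount ext4 with full data journaling, and one workaround that seems worth trying is forcing those VM disks off io_uring on the host side; the VMID and disk spec below are just illustrative.

```
# Guest side (inside the VM's /etc/fstab): ext4 root with full data journaling.
#   /dev/vda1  /  ext4  defaults,data=journal  0 1

# Host side (illustrative VMID/storage): switch the disk's async-IO mode away
# from io_uring to see whether the filesystem errors stop.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native
```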
As of late we have started having some odd crashes in our CentOS 7 VMs, which so far are only happening on hosts that are on 7.2 running one of the 5.15.x kernels.
I have seen it happen on 5.15.35-1-pve and 5.15.39-1-pve.
A lot of the time it will happen after the VM is rebooted from...