That was my thought as well. We'll be using "Broadwell-noTSX-IBRS" for now. There are significant performance gains over the default "kvm64" CPU type, and I can drop in a new cluster node without having to match the CPU down to the exact microcode.
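For anyone doing the same, setting this is a one-liner per VM; a minimal sketch, assuming a VM with ID 100 (adjust the VMID, or use the VM's Hardware > Processors tab in the GUI):
  qm set 100 --cpu Broadwell-noTSX-IBRS   # takes effect on the next full stop/start of the VM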
@bbgeek17, thank you. What you suggest is next on my list to understand. I stopped after seeing "If you care about live migration and security, and you have only Intel CPUs or only AMD CPUs, choose the lowest generation CPU model of your cluster." in the documentation here. Is this as simple as...
Hi All,
I want to use the newest processor type for our VMs. After some digging around on the Internet, reading this, and figuring out that our CPU ... an E5-2667 v4 ... is from the Broadwell family ... and running "kvm -cpu help" from the console, I see "x86 Broadwell-v4 Intel Core...
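For anyone hunting down their own CPU family, a quick way to narrow that list is a sketch like this, run on the PVE host:
  kvm -cpu help | grep -i broadwell   # show only the Broadwell models QEMU/KVM knows about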
When I upgraded my PVE 7 to 8 cluster, I read and re-read the instructions over several days. I've never felt the need to do this for any sub-version (7.x, for example) upgrade. I'll now be reading upgrade notes. I'm grateful to have found this thread, but I'm definitely frustrated.
Hello everyone,
This is just a bit of encouragement for first-time Ceph upgraders on PVE7. About a week ago, I upgraded our 3-node cluster per the official instructions here. It went smoothly with no issues. Just be sure to read everything carefully.
Oh, and the bug described here is, of...
Hi All,
For anyone who installed OMSA on PVE7, as I did here, this is just a note to let you know that an in-place upgrade seems to work. Our hyper-converged Ceph cluster has been running well for 24 hours. I'll post back if there are issues.
best,
James
Love it!
My thanks to all for keeping this alive.
Proxmox 7 support is nearing its end, so I'm looking into a direct upgrade to 8.
I see two choices.
1) Remove OMSA 10.1 (see how I did the install here) and move to SNMP monitoring of the iDRAC, or learn checkmk; a rough removal sketch follows below this list.
2) Update to OMSA 10.3 via the...
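If we end up going with option 1, the removal side should be straightforward; a rough sketch, assuming OMSA was installed from Dell's Linux repository as srvadmin-* packages:
  dpkg -l 'srvadmin-*'                                        # see which OMSA packages are installed
  apt purge $(dpkg-query -W -f='${Package} ' 'srvadmin-*')    # remove them (review the list first)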
Morning from PST, Shanreich,
Thank you for the response. We're running Ceph 16 / Pacific. I posted all of our versions below.
Looks like David / #7 on the bug URL (thank you for that) is reporting this issue with the exact version we are using.
I've spent several hours looking through...
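For anyone wanting to compare against their own setup, a quick sketch of the commands that gather that version information (run on any node of a hyper-converged PVE/Ceph cluster):
  pveversion -v     # full list of installed Proxmox VE packages and versions
  ceph versions     # Ceph daemon versions across the whole cluster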
Hi All,
I just patched our Proxmox 7 cluster to the latest version. After this, "ceph health detail" reports:
HEALTH_WARN 2 pools have too many placement groups
[WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
Pool device_health_metrics has 1 placement groups, should have 1...
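In case it helps anyone hitting the same warning after patching, this is roughly how I looked into it (a sketch; pool names will differ, and double-check the autoscaler's recommendation before changing anything):
  ceph osd pool autoscale-status                                  # current vs. recommended PG counts per pool
  ceph osd pool set device_health_metrics pg_autoscale_mode on    # let the autoscaler adjust the pool's PG count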
Thanks leesteken, that checks out. Here are updated numbers for column B:
For those who find this thread, I obtained the updated Exchange and DC VM numbers by opening Windows Task Manager on each and adding the RAM "In use" to "Cached". Now I can see that the numbers line up. The remaining...
Hi All,
Is there a command or two that illustrates where RAM is being consumed on our Proxmox systems that are using ZFS?
For example, here is the RAM usage on a new system:
Column B is my math showing expectations with my current understanding of what is using RAM ... This server has 128GB...
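For context, this is what I've looked at so far (a sketch; I suspect the ZFS ARC accounts for most of the "missing" RAM):
  free -h                                   # how Linux accounts for used / cached memory
  arc_summary                               # detailed ZFS ARC report (ARC is usually the big ZFS consumer)
  awk '$1 == "size" {printf "ARC size: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats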
Interesting posts. Mine is crashing within a couple of hours of a 3:15AM backup ... every time.
I'll likely move to Win 2019 for this install but will watch this thread.
Throwing my hat in the ring for this issue ... well, it looks like the same issue to me.
Let me know what I can contribute.
We're running a single Windows Server 2022 VM on a new Dell T350 and just experienced this issue early this morning.
A backup job finished at 3:39:52 and then the...
Hi All,
We just experienced a bug that caused us to switch to krbd.
Is there a good reason to switch back once the bug is resolved? It seems that krbd might be faster and I don't see any features that I'm giving up.
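For reference, the switch itself is just a storage option; a minimal sketch, assuming our RBD storage is called ceph-vm (substitute your storage ID; running VMs pick up the change after a full stop/start):
  pvesm set ceph-vm --krbd 1   # map disks via the kernel RBD driver
  pvesm set ceph-vm --krbd 0   # revert to librbd through QEMU if we switch back later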
best,
James
Morning from PST all,
Just a note to perhaps help someone else experiencing this frustrating issue.
We experienced our 1.5-hour multi-VM backup (Ceph and Proxmox's built-in backup, not PBS) suddenly changing to 12+ hours. On top of that, the VMs with the largest disks (750GB) would drop in...
Hi All,
To keep to our timeline, we're going to back up and restore from shared storage ... I'm not planning on troubleshooting this. Just an FYI to anyone trying to help.
best,
James
Hello all,
I'm trying to figure out why using "move disk" on a live VM causes one of our cluster nodes to reboot. We are looking to migrate a live VM from Ceph to LVM storage, since that will then let us live-migrate the VM to a node without Ceph storage attached.
When we do this...
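For context, the move itself is a single command; a minimal sketch, assuming VM 100 with its disk on scsi0 going to an LVM-thin storage named local-lvm (placeholders, not our real IDs):
  qm move_disk 100 scsi0 local-lvm   # live-move the disk; add --delete 1 to remove the old Ceph image afterwards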
Thanks avw,
Another backup triggered last night and swap is holding at about 6.5 of 8GB used. There is significantly more storage on the node with the 6.5GB of swap usage than on the other cluster nodes. My guess is that this is why swap is being used ... based on what you are saying...
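In case it helps anyone else watching swap on their nodes, these are the checks involved (a sketch; lowering vm.swappiness is only worth it if the workload actually suffers):
  free -h                 # swap used vs. total, alongside RAM
  swapon --show           # which devices back swap and how full they are
  sysctl vm.swappiness    # how eagerly the kernel swaps (default 60)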