Thank you. Unfortunately, I couldn't take the risk of running the 5.19 kernel in a production cluster, so I downgraded to pve-kernel-5.13.19-6-pve and, touch wood, everything is working smoothly now.
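In case it helps anyone, here's roughly what I ran; a minimal sketch, assuming proxmox-boot-tool manages your boot entries and is recent enough to have the pin subcommand:

# install the older kernel package (if it isn't already on disk)
apt install pve-kernel-5.13.19-6-pve
# pin it so it stays the default across reboots, then reboot into it
proxmox-boot-tool kernel pin 5.13.19-6-pve
reboot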
Hello everyone,
I'm currently experiencing freezes when migrating Linux and Windows VMs between two Xeon hypervisors:
Xeon E5-2698 v3 @ 2.30GHz
Xeon Silver 4108 @ 1.80GHz
I'm running Proxmox 7.2 with all the latest updates installed as of yesterday. The virtual CPU is set to the default (kvm64) on every VM, and the problem...
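For reference, this is how I check the CPU type on each VM (VMID 100 is just an example):

# show the configured CPU type; no 'cpu' line means the kvm64 default is in use
qm config 100 | grep '^cpu'
# set it explicitly if needed
qm set 100 --cpu kvm64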
I really can't explain it. Consider that 99% of the VMs are created via the API, and none of my templates use the legacy setting (all of them are less than a year old). I can't find a pattern.
Thank you for your help anyway. I'll update you if any clue comes up :)
Hi,
I read somewhere on the forum that disabling Ceph debug logs could improve my overall I/O wait.
I've been reading the official Ceph docs, but it's a little unclear whether I need to set the parameters per OSD (as it seems) or whether I can set them globally, and how. I can't see any of them in the...
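For reference, this is the kind of thing I'm experimenting with; a sketch assuming the centralized config database (Pacific), and the subsystem list here is just an example:

# set globally for all OSDs, no per-daemon edits needed
ceph config set osd debug_osd 0/0
ceph config set osd debug_ms 0/0
ceph config set osd debug_bluestore 0/0
# verify what a specific daemon actually runs with
ceph config show osd.0 | grep debug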
None of those, or at least so it seems.
It has happened on both very old and brand-new VMs, and the templates are always the same. Not a big issue anyway *shrugs*
Thank you so much!
Hi @fiona, thanks for your reply. No major upgrades (6 -> 7). I do have some cluster nodes on different minor versions (7.2.11 vs 7.2.8, for example).
Do you think that could be related?
Thanks!
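For completeness, this is how I'm comparing versions across nodes (run on each node):

# list the exact Proxmox package versions on this node
pveversion -v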
I'd be happy to do it any time; the problem is that, for some odd reason, the nodes don't see newly hotplugged disks, no matter what I do :confused: So I'm forced to reboot.
They're all HPE DL360 Gen9/Gen10. Any advice on that?
Anyway, no big issue: I'll start restarting (no pun intended) as soon as...
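For the record, this is what I've been trying before giving up and rebooting; the generic SCSI bus rescan, which may or may not help with the HPE controllers:

# force a rescan on every SCSI host adapter (run as root)
for h in /sys/class/scsi_host/host*/scan; do echo '- - -' > "$h"; done
# then check whether the new disk showed up
lsblk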
Hi,
all of a sudden, many of the VMs in my cluster show (no bootdisk) in the Boot Order options (see screenshot). If I open the configuration file in /etc/pve/qemu-server/, the "boot" line is set to boot: c.
If I open the option, everything is unset. Sure, I can select the right drive and...
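In case anyone else hits this, rewriting the legacy entry to the new explicit syntax seems to fix the display; the VMID and devices here are examples, use your actual boot disk:

# replace the legacy "boot: c" with an explicit boot order
qm set 100 --boot 'order=scsi0;net0'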
Yes, autorebalance and autofill are active, and I see some backfilling running from time to time.
Also, I see that the optimal number of PGs for the pool has doubled (1024, up from the current 512), so it should autoscale the PGs sometime soon, shouldn't it?
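For reference, this is where I'm reading those numbers from:

# show current vs optimal PG counts and the autoscaler mode per pool
ceph osd pool autoscale-status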
Hi @alexskysilk! Thanks for your advice. The growth rate is higher than usual lately because I'm moving VMs from the old virtualization system to Proxmox. It should stabilize soon.
Anyway, after the rebalance from osd.8 completed, the used percentage dropped to 87%. Now I've replaced osd.9 too and...
Hi @mira, thanks for your reply. Here's the df tree. You can safely ignore osd.8, as I've just replaced the 1 TB drive with a 2 TB one.
The drives are all 2 TB, except osd.0 to osd.3 and osd.9, which are 1 TB and waiting to be replaced.
I can add another node with empty disks in 2-3 days if needed. What do...
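For anyone following along, the df tree above is simply the output of:

# per-OSD usage, weights and PG counts, grouped by CRUSH tree
ceph osd df tree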
Hi,
I'm running a Proxmox 7.2-7 cluster with Ceph 16.2.9 "Pacific".
I can't tell the difference between Ceph > Usage and Ceph > Pools > Used (see screenshots).
Can someone please explain what the actual used space in my Ceph storage is? Do you think a 90% used pool is potentially dangerous...
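For reference, the CLI equivalent I'm comparing the panels against:

# raw cluster usage plus per-pool stored vs used (after replication)
ceph df detail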
Hi Matthias,
thanks for your reply. I see where I went wrong: I needed to repeat all the other current parameters as well, keeping them the same.
For example, I had these parameters in the UI. As you can see, 'cache' is explicitly set to 'none' in my config:
...but 'cache' was not included in my command line.
So...
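In short, a sketch of what worked for me; storage, VMID and size are just examples. Copy the current drive line first, then repeat it in full with the new values, since omitted sub-options fall back to their defaults:

# copy the current property string...
qm config 100 | grep '^scsi0'
# ...then repeat it in full, changing only what you need
qm set 100 --scsi0 'local-lvm:vm-100-disk-0,cache=none,size=32G,mbps_rd=50'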
Hi,
whenever I set disk limits using the pvesh or qm command-line tools, the setting is not applied immediately. Instead, the new value turns orange in the web UI, meaning it will only be applied at the next reboot.
If I set the parameters directly in the web UI, they are immediately applied with...
Hi,
here's my /etc/network/interfaces:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory'...
Hi Aaron,
thank you very much! I haven't installed any nodes (yet), but if I have any doubts I'll post /etc/network/interfaces for sure :)
Anyway here's what I'm going to do based on your suggestions:
VM public bridge: bond between two Gigabit interfaces, connected to distinct switches for redundancy...
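Roughly, the bond + bridge I have in mind would look like this in /etc/network/interfaces; interface names and addresses are placeholders, and I'm assuming active-backup mode since the two switches aren't stacked:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0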
Hi all,
pretty new to Proxmox and Ceph. We've been running a test cluster on three nodes, each on a Gigabit network (used for the Ceph network as well), and so far we're satisfied with the performance and resiliency. So we're planning to deploy a production cluster soon.
The new cluster is starting...