This is not a bug or an issue with Proxmox.
LXC does not currently support an individual load value per container.
The load averages shown in the container are node-wide. This has been discussed upstream in LXC multiple times, but a simple and accurate way to report a container-only load has not yet been found.
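You can confirm this yourself (assuming lxcfs is not virtualising loadavg) by comparing the same file in both places:

# inside the container
cat /proc/loadavg
# on the Proxmox node - the values match, as the container simply reports the node-wide figures
cat /proc/loadavg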
Is the VM itself sluggish or the Spice Console?
Have you tried RDP directly to the Windows VM? Is the performance the same then or does the slowness go away?
I would wait for all your data to move around and the repair to fully complete, then let a full set of deep scrubs rotate.
While an OSD is in a repair state it won't be deep scrubbed, so this could just be a false positive.
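Once the repair finishes, the deep scrubs can also be kicked off by hand rather than waiting for the schedule; a rough sketch (the OSD and PG IDs below are placeholders):

# deep scrub every PG on a given OSD
ceph osd deep-scrub osd.3
# or just the PG that was flagged (IDs come from "ceph health detail")
ceph pg deep-scrub 2.1f
# follow progress
ceph -w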
I would suggest marking the most full SSD OSD as out.
This will still allow read I/O to hit the SSD, but Ceph will start to move the data from this SSD to the other OSDs; just stopping the OSDs would make any data on these SSDs unavailable.
Once the first SSD is completed you can continue...
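Roughly, assuming the fullest SSD is osd.12 (placeholder ID), the process looks like this:

# mark the OSD out - it stays up for reads while its PGs backfill onto the other OSDs
ceph osd out osd.12
# watch the recovery/backfill progress
ceph -w
# confirm utilisation is dropping before moving on to the next SSD
ceph osd df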
From your ceph osd df output you have a couple of OSDs over the full limit; this will stop any further rebalancing, otherwise they may hit the 100% mark and stop I/O.
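If you need to relieve an over-full OSD without taking it out completely, dropping its reweight slightly also pushes some PGs elsewhere (osd.7 and 0.95 are just example values):

# per-OSD utilisation
ceph osd df
# temporarily lower the override reweight of an over-full OSD so some PGs move off it
ceph osd reweight osd.7 0.95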
Do you have a cache layer in front of the HDD pool using the 4 SSDs?
Any update on this?
I am struggling to see anywhere in the GUI to set this.
I have tried manually creating an RBD with the data pool set, and adding it to a KVM config, but it won't boot.
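The manual creation I'm referring to is along these lines (pool and image names are placeholders):

# RBD on an EC pool needs overwrites enabled on that pool first
ceph osd pool set ecpool allow_ec_overwrites true
# metadata lives in the replicated pool, data objects in the EC pool
rbd create --size 32768 --data-pool ecpool rbd/vm-100-disk-1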
Hello,
So I have been testing out raw EC (no change) with CephFS and RBD on the latest Ceph 12.x release.
It works fine outside of Proxmox; however, trying to use an RBD created on an EC pool within Proxmox fails.
1/ The RBD image does not show in the Proxmox Storage content view, however "rbd ls -p...
No HA resources, just a 4 node cluster for management.
However it has been fine the past 2 nights; I will continue to monitor.
"journalctl -u pve-ha-lrm" shows nothing since the last reboot
Sep 01 23:47:04 cn04 systemd[1]: Starting PVE Local HA Ressource Manager Daemon...
Sep 01 23:47:04 cn04...
Hello,
I have had a 3 node cluster running perfectly fine for months, and recently added a 4th node to this cluster (same hardware, DL160 G9, and same configuration).
For the last few nights, at around 11:40-50 every night, the server reboots itself; looking in the logs I am struggling to see...
You need to make sure within Proxmox discard is enabled on the disk in question.
Then within the VM run fstrim to release the unused space back to the storage; this will take a while for 1TB to clear.
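Roughly (VM ID, storage and disk names are placeholders, and this assumes a Linux guest):

# on the Proxmox host: enable discard on the virtual disk
qm set 100 --scsi0 local-lvm:vm-100-disk-1,discard=on
# inside the VM: trim all mounted filesystems that support it
fstrim -av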
I think you're forgetting that a hardware RAID card requires absolutely no support in the Proxmox installer, as the installer just manages it as if you were installing directly to one single disk. What you're asking is for them to make a system available to install and configure mdadm, which again is...