Thank you for that link @fiona
If I understand correctly, the proposed solution is to 're-enable notifications and kick the virt queue after draining'.
Would this not basically be what the qm suspend and resume workaround you posted does as well?
For a few of us on this thread, this does not...
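For reference, the suspend/resume workaround under discussion amounts to something like this (VM ID 100 is a placeholder):
qm suspend 100    # pause the guest
qm resume 100     # resume it; the idea is that resuming nudges the stalled I/O queue again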
@mnih: same here. We started having the issues with SSD pools as well. For now I have removed iothreads from all VMs and disabled fs-freeze in the qemu-agent; since then, all VMs have survived backup. It would be good if the Proxmox team could confirm that this is the recommended workaround until...
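In case it helps others, removing the iothread flag and disabling fs-freeze can be done roughly like this (VM ID, storage and volume names are placeholders; the freeze-fs-on-backup agent flag only exists in newer qemu-server versions, on older ones fs-freeze has to be disabled inside the guest instead):
qm config 100 | grep scsi0                               # check the current disk line first
qm set 100 --scsi0 ceph-ssd:vm-100-disk-0,iothread=0     # re-set the disk with iothread off, keeping your other options
qm set 100 --agent enabled=1,freeze-fs-on-backup=0       # skip fs-freeze/thaw during backup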
I tried a few more things; the results may be of interest:
1) I had a VM that was showing higher load and CPU wait after backup. I had recently migrated some of its disks from ceph HDD to ceph SSD. The old HDD images were still attached but unused. I detached them, and immediately load and wait time...
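The detach can be done in the GUI or along these lines on the CLI (assuming the old HDD image is attached as scsi1 on VM 100):
qm set 100 --delete scsi1     # detaches the disk; it should then show up as unusedN in the config
qm config 100 | grep unused   # verify before removing the volume for good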
I have this issue as well on several of my VMs. Sometimes VMs freeze after backup; some VMs show increased load without any actual load in the guest, CPU wait time becomes very large, and the VM then freezes later or during the next backup. Suspending and resuming the VM does nothing for me, only a...
I was able to figure it out. A few changes were required.
1) /etc/udev/rules.d/70-persistent-net.rules still had mappings of the interfaces to the old ethX names.
I removed the mappings and ran update-initramfs -u.
2) There were some keyword changes in /etc/network/interfaces.
When configuring interfaces, they need to...
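A minimal sketch of what the reworked config can look like, assuming a single bridge on eno1 (addresses are placeholders); note the dashed bridge-* keywords that ifupdown2 expects instead of the old underscore form:
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0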
Hi,
I upgraded the first node in my Proxmox cluster from 8.0 to 8.1, and the GbE interfaces changed names from ethX to enoX. The interfaces show as down and do not come up anymore:
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether...
Update:
I commented out the InfiniBand tuning parameters in /etc/sysctl.conf about a week and a half ago:
#net.ipv4.tcp_mem=1280000 1280000 1280000
#net.ipv4.tcp_wmem = 32768 131072 1280000
#net.ipv4.tcp_rmem = 32768 131072 1280000
#net.core.rmem_max=16777216
#net.core.wmem_max=16777216...
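For anyone doing the same: commenting the lines out only takes effect after a reload or reboot, and a reload does not reset values that are no longer listed, e.g.:
sysctl -p /etc/sysctl.conf    # re-applies the file; values removed from it keep their
                              # current runtime setting until they are set back with
                              # sysctl -w or the node is rebooted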
Hi Alwin,
It is a standard Proxmox hyperconverged cluster install. The hardware is three identical HP DL380 G9 servers:
Dual CPU Xeon E5-2690
192GB RAM
1 x P840ar RAID controller in HBA mode
14x 12Gbps SAS SSD
10x 12Gbps SAS 10k HDD
Dual port Mellanox ConnectX-3 QDR QSFP+ InfiniBand MCX354A-QCBT CX354A...
Hi,
I recently moved from Proxmox + iSCSI ZFS storage to a 3-node hyperconverged Proxmox cluster running Proxmox 6.3 and Ceph Octopus.
The cluster has 1GbE interfaces for VM traffic and uses a 40Gbps InfiniBand network for the Proxmox cluster and the Ceph cluster networks.
I have a redundant pair of...