Recent content by bbn

  1. VMs hung after backup

    Thank you for that link, @fiona. If I understand correctly, the proposed solution is to 're-enable notifications and kick the virt queue after draining'. Would this not basically be what the qm suspend and resume workaround you posted does as well? For a few of us on this thread this does not...
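    For reference, the suspend/resume workaround discussed above is run from the node shell; a minimal sketch, assuming the affected VM has ID 101:

      qm suspend 101   # pause the guest
      qm resume 101    # resume it; for some users this clears the hang, for others (as noted above) it does not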
  2. VMs hung after backup

    @mnih: same here. We started having the issues with SSD pools as well. For now I have removed iothreads from all VMs and disabled fs-freeze in the qemu-agent, and so far all VMs have survived backup. It would be good if the Proxmox team could confirm that this is the recommended workaround until...
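    As a rough illustration of that workaround (not an official recommendation), assuming VM ID 101, a disk on a storage named ceph-ssd, and a qemu-server version that supports the freeze-fs-on-backup agent flag:

      # in /etc/pve/qemu-server/101.conf, drop iothread=1 from the disk line, e.g. change
      #   scsi0: ceph-ssd:vm-101-disk-0,iothread=1
      # to
      #   scsi0: ceph-ssd:vm-101-disk-0
      # then keep the guest agent enabled but skip fs-freeze/fs-thaw during backup:
      qm set 101 --agent enabled=1,freeze-fs-on-backup=0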
  3. VMs hung after backup

    I tried a few more things; the results may be of interest. 1) I had a VM that was showing higher load and CPU wait after backup. I had recently migrated some of its disks from Ceph HDD to Ceph SSD; the HDD volumes were still attached but unused. I detached the HDDs, and immediately load and wait time...
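    A sketch of detaching an old, unused volume from the CLI, assuming VM ID 101 and that the HDD volume is attached as scsi1:

      qm set 101 --delete scsi1   # detach the disk; it reappears in the config as an unusedN entry
      # once confirmed unneeded, the unusedN volume can then be removed via the GUI (Hardware -> Remove)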
  4. VMs hung after backup

    I have this issue as well on several of my VMs. Sometimes VMs freeze after backup; some VMs show increased load without actual load on the VM, CPU wait time becomes very large, and the VM then freezes later or in the next backup. Suspending and resuming the VM does nothing for me, only a...
  5. [SOLVED] lost networking when upgrading from 8.0 to 8.1

    I was able to figure it out. A few changes were required. 1) /etc/udev/rules.d/70-persistent-net.rules had a mapping of the interfaces to ethX; I removed the mappings and ran update-initramfs -u. 2) There were some keyword changes in /etc/network/interfaces; when configuring interfaces they need to...
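    To make those steps concrete, a hedged sketch (interface names are placeholders):

      # 1) remove the ethX mapping lines (or the whole file, if it only contains them) and rebuild the initramfs
      rm /etc/udev/rules.d/70-persistent-net.rules
      update-initramfs -u
      # 2) update /etc/network/interfaces to use the new names (e.g. eno1 instead of eth0), then apply
      ifreload -a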
  6. [SOLVED] lost networking when upgrading from 8.0 to 8.1

    Hi, I upgraded the first node in my Proxmox cluster from 8.0 to 8.1, and the GbE interfaces changed names from ethX to enoX. The interfaces show as down and do not come up anymore: 2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether...
  7. Ceph OSD slow OPS on all OSD's

    Update: I commented out the /etc/sysctl.conf InfiniBand tuning parameters about a week and a half ago: #net.ipv4.tcp_mem=1280000 1280000 1280000 #net.ipv4.tcp_wmem = 32768 131072 1280000 #net.ipv4.tcp_rmem = 32768 131072 1280000 #net.core.rmem_max=16777216 #net.core.wmem_max=16777216...
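    A quick way to confirm which values the kernel is actually using after commenting the entries out (they only revert to the defaults after a reboot, or after being set back explicitly):

      sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max net.core.wmem_max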
  8. Ceph OSD slow OPS on all OSD's

    Hi Alwin, here is the Ceph config, if this helps: [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network = 10.2.21.111/24 fsid = 907b139a-1bf2-4010-a3f6-d89fda2347e4 mon_allow_pool_delete = true mon_host =...
  9. Ceph OSD slow OPS on all OSD's

    Hi Alwin, it is a standard Proxmox hyperconverged cluster install. The hardware is 3 identical HP DL380 G9: dual Xeon E5-2690 CPUs, 192GB RAM, 1x P840ar RAID controller in HBA mode, 14x 12Gbps SAS SSD, 10x 12Gbps SAS 10k HDD, dual-port Mellanox ConnectX-3 QDR QSFP+ InfiniBand MCX354A-QCBT CX354A...
  10. Ceph OSD slow OPS on all OSD's

    Hi, I recently moved from Proxmox + iSCSI ZFS storage to a 3-node hyper-converged Proxmox cluster running Proxmox 6.3 and Ceph Octopus. The cluster has 1GbE interfaces for VM traffic and leverages a 40Gbps InfiniBand network for the Proxmox cluster and Ceph cluster. I have a redundant pair of...
