Search results

  1. VzDump latest version ignores the notification setting "failure" and always sends notifications

    After updating the backup tool vzdump to the latest version on 19.11.2023, the backup jobs always send a notification after every backup. Config is vzdump 100 102 103 104 105 106 113 111 112 9998 101 109 9999 107 108 121 123 122 124 500 501 502 110 --notes-template '{{vmid}}.{{guestname}}' --mailnotification failure --mode...
  2. Docker in LXC container faster than in VM

    Hello, we ran Docker in an LXC container and in a VM on a Proxmox 8.x.x three-node cluster with NVMe Ceph storage (24 NVMes) on Dell R740XD servers. Docker runs on the latest Debian Bookworm. Hypervisor nesting is activated for the LXC container and the VM. We put our monitoring in a Docker container in both the LXC container and the VM...
  3. SFP+ Quad-Port Cards: Broadcom, QLogic or Intel

    Hello, we plan to upgrade the network from 10GbE Base-T to 10/25GbE SFP+/SFP28. In the past I preferred Intel because they are stable and run out of the box. But the Dell R740XD server offers only Broadcom, QLogic and Intel as quad-port daughter boards, and I read there are compatibility issues...
  4. Ceph Slow Ops if one node is rebooting (Proxmox 7.0-14, Ceph 16.2.6)

    Hello, I've upgraded a Proxmox 6.4-13 cluster with Ceph 15.2.x, which worked fine without any issues, to Proxmox 7.0-14 and Ceph 16.2.6. The cluster works fine without any issues until a node is rebooted. Which OSDs generate the slow ops for front and back is not predictable...
  5. Update PVE 6 to 7 with installed Mellanox ConnectX-6 DKMS drivers: Ceph not working

    Hello, I just made an in-place upgrade from PVE 6.4-13 to PVE 7 with the latest Mellanox OFED drivers (Debian 10.8). The Mellanox ConnectX-6 cards are used for a Ceph Nautilus cluster (latest version). The Mellanox cards run in Ethernet mode with RoCEv2. I test a virtual PVE cluster to...
  6. Mellanox ConnectX-6 100G is limited to a bitrate of ~34 Gbit/s

    We have 3 nodes (Proxmox 6.4-13, latest version) with Mellanox dual-port ConnectX-6 100G cards connected as a mesh network in Ethernet mode with RoCEv2, driver OFED 5.4-1.0.3. The cards use PCIe Gen 3.0 x16 (8 GT/s per lane). MTU is configured to 9000, so they should have more throughput. 3b:00.0 Ethernet controller...
  7. 3-node switchless InfiniBand setup with Mellanox

    Hello, I have installed dual-port ConnectX-6 cards (Mellanox) in each node (3 in the cluster). Each node is connected to every other node in a full mesh via DAC copper cable. I think I am missing something in the opensm configuration. Does anyone have a working configuration for this setup with IPoIB and can post...
  8. Mellanox MCX653106A-ECAT Support

    Hello, we have 3 nodes with 24 NVMes (8 drives per node) running Ceph and bonded 2x Intel 10GbE adapters, and we plan to buy the Mellanox MCX653106A-ECAT-SP (connected as a full mesh with DAC cables for 200 GbE). - Are these cards supported by Proxmox with the Debian MLNX_OFED driver? - So are there...
  9. Proxmox 6.2-10 Ceph cluster reboots if one node is shut down or rebooting

    Hello, after updating the Proxmox nodes a few days ago to the latest Ceph version, something strange happens. If a node is rebooted, all HA cluster nodes are rebooted. In the log of a node that was not rebooted I saw something like this: Jul 23 13:36:28 hyperx-01 ceph-mon[2793]: 2020-07-23...
  10. Proxmox 6.x: Nested VMX virtualization does not support live migration yet

    Hello, on a 3-node Ceph cluster with Proxmox 6, I got the following error while live-migrating a VM with CPU type host and nested virtualization turned on for the physical nodes. On Proxmox 5.4.x we had no problems. All physical servers are the same. On Proxmox 6.x we got the following error: start migrate...
  11. ERROR: online migrate failure - VM xx qmp command 'migrate' failed - Nested VMX virtualization does not support live migration yet

    Hello, we migrated a 3-node Ceph Proxmox cluster from 5.4.1 to 6.0. Everything works fine, but live migration no longer works with nested virtualization activated and VMs with CPU type host. The three nodes are identical physical machines: NVMes, CPUs, RAM and so on. When I migrate a VM...
