Hello,
we run Docker in an LXC container and in a VM on a Proxmox 8.x.x three-node cluster with NVMe Ceph storage (24 NVMes) on Dell R740XD servers. Docker runs on the latest Debian Bookworm. Hypervisor nesting is enabled for the LXC container and the VM. We put our monitoring in a Docker container in the LXC container and in the VM...
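In case it helps, a minimal sketch of how the nesting is enabled (container ID 101 and the Intel module name are just examples):
# enable nesting (and keyctl, which Docker needs in unprivileged containers) on the LXC container
pct set 101 --features nesting=1,keyctl=1
# equivalent line in /etc/pve/lxc/101.conf:  features: keyctl=1,nesting=1
# for the VM, nested virtualization depends on the host kvm module:
cat /sys/module/kvm_intel/parameters/nested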
Hello,
we plan to upgrade the network from 10GbE Base-T to 10/25GbE SFP+/SFP28. In the past I have preferred Intel because their NICs are stable and run out of the box. But the Dell R740XD server only offers Broadcom, QLogic and Intel as quad-port daughter boards, and I read there are compatibility issues...
Hello,
I've upgraded a Proxmox 6.4-13 cluster with Ceph 15.2.x, which worked fine without any issues, to Proxmox 7.0-14 and Ceph 16.2.6. The cluster works fine without any issues until a node is rebooted. The OSDs that generate the front and back slow ops are not predictable...
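For completeness, the flags we set around a planned reboot, as a sketch (standard Ceph commands, nothing specific to this cluster):
# keep Ceph from marking OSDs out / rebalancing while the node is down
ceph osd set noout
ceph osd set norebalance
# ... reboot the node, then afterwards ...
ceph osd unset norebalance
ceph osd unset noout
# see which OSDs are currently reporting slow ops
ceph health detail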
Hello,
I just made an in-place upgrade from PVE 6.4-13 to PVE 7 with the latest Mellanox OFED drivers (Debian 10.8). The Mellanox ConnectX-6 cards are used for a Ceph Nautilus cluster (latest version). The Mellanox cards are running in ethernet mode with RoCEv2.
I tested a virtual PVE cluster to...
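To check that the ports really are in ethernet mode I use mlxconfig from the MFT/OFED tools; the device path below is only an example, mst status prints the real one:
mst start
mst status
mlxconfig -d /dev/mst/mt4123_pciconf0 query | grep LINK_TYPE
# 1 = InfiniBand, 2 = Ethernet; if needed, switch both ports:
mlxconfig -d /dev/mst/mt4123_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
ibv_devinfo | grep link_layer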
We have 3 nodes (Proxmox 6.4-13, latest version) with Mellanox dual-port ConnectX-6 100G cards connected as a mesh network in ethernet mode with RoCEv2, driver OFED 5.4-1.0.3. The cards use PCIe x16 Gen 3.0 (8 GT/s per lane). MTU is configured to 9000, so they should have more throughput.
3b:00.0 Ethernet controller...
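The relevant part of /etc/network/interfaces looks roughly like this (interface name and addresses are placeholders):
auto enp59s0f0
iface enp59s0f0 inet static
        address 10.10.10.1/24
        mtu 9000
# verify jumbo frames actually pass between two nodes (8972 = 9000 minus IP/ICMP headers)
ping -M do -s 8972 10.10.10.2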
Hello,
I have installed dual-port ConnectX-6 cards (Mellanox) in each node (3 in the cluster). Each node is connected to each other node in a mesh via DAC copper cables. I think I am missing something in the opensm configuration. Does anyone have a working configuration for this setup with IPoIB and can post...
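What I have so far, as a sketch (interface name and addresses are placeholders, and I am not sure the opensm part is complete):
# /etc/network/interfaces
auto ibp59s0f0
iface ibp59s0f0 inet static
        address 10.10.10.1/24
        pre-up modprobe ib_ipoib
# subnet manager
apt install opensm
systemctl enable --now opensm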
Hello,
we have 3 nodes with 24 NVMes (8 drives per node) with Ceph and bonded 2x Intel 10GbE adapters, and we plan to buy the Mellanox MCX653106A-ECAT-SP (connected as a mesh with DAC cables for 200 GbE).
- Are these cards supported by Proxmox with the Debian MLNX_OFED driver?
- So are there...
Hello,
after updating the Proxmox nodes a few days ago to the latest Ceph version, something strange happens. If a node is rebooted, all HA cluster nodes are rebooted.
In the log I saw something like this on a node that was not rebooted:
Jul 23 13:36:28 hyperx-01 ceph-mon[2793]: 2020-07-23...
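What we looked at on the surviving node, as a sketch (standard Proxmox commands; the timestamp matches the log above):
pvecm status                      # quorum state at the time of the reboot
ha-manager status                 # HA manager / CRM state
journalctl -u corosync -u pve-ha-lrm -u pve-ha-crm --since "2020-07-23 13:30"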
Hello,
on a 3-node Ceph cluster with Proxmox 6, I get the following error while live migrating a VM with CPU type host and nested virtualization turned on for the physical nodes. On Proxmox 5.4.x we had no problems. All physical servers are the same. On Proxmox 6.x we get the following error:
start migrate...
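For reference, the relevant bits of the setup (VM ID 100 and the Intel module are just examples):
# nested virtualization enabled on the physical nodes
cat /etc/modprobe.d/kvm-intel.conf
# options kvm-intel nested=Y
# the VM uses the host CPU type
qm config 100 | grep ^cpu
# cpu: host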
Hello,
we migrated a 3-node Ceph Proxmox cluster from 5.4.1 to 6.0. Everything works fine, but live migration no longer works with nested virtualization activated and VMs with CPU type host. The three nodes are identical physical machines: NVMes, CPUs, RAM and so on.
When I migrate a VM...
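As an illustration of the workaround we are considering (Skylake-Server and VM ID 100 are just examples, pick the model closest to the real CPUs; we have not verified that this is the proper fix):
qm set 100 --cpu Skylake-Server
# the VM has to be powered off and started again before the new CPU type takes effect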