The difference is probably due to uneven distribution, yes.
I just finished the migration from DRBD to Ceph and the upgrade of all 'old' PVE 5.2 nodes to the latest PVE 5.4. The upgrade to PVE 6 and Nautilus is the next move, but it has to be carefully planned and tested as we're in a production...
In the past, I made a mistake: using the same 10G network to run the Proxmox cluster AND the Ceph cluster. I know, that's bad practice.
Now I'm trying to move corosync (aka pvecm) to a dedicated 1G link that was recently added.
All my hosts are configured on a new subnet (10.152.13.0/24) and ping...
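For the record, here is roughly the change I have in mind in /etc/pve/corosync.conf (node name and IPs below are just examples, and I'm assuming the corosync 2.x config format shipped with PVE 5.x):

nodelist {
  node {
    name: pve01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.152.13.1
  }
  # ... one node {} entry per cluster member, each moved to its 10.152.13.x address
}
totem {
  # don't forget to increment config_version before saving
  interface {
    bindnetaddr: 10.152.13.0
    ringnumber: 0
  }
}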
Sadly, the server is already installed in a datacenter. Perhaps we could try with a stock Ubuntu kernel; never tried this.
And of course, we don't have any IPMI access...
Thanks for your answer, will try this tomorrow.
We're trying to set up a Supermicro X11SSW-F with an X520-DA2 10Gbps NIC on an RSC-W-68-P riser card.
But the card isn't detected, not even shown in the lspci output.
The motherboard is based on the C236 chipset, but it doesn't seem to be detected properly. lspci shows:
00:00.0 Host bridge: Intel...
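In case it helps, here is what I plan to check next (nothing fancy, just standard tools; ixgbe is the driver I'd expect for an X520):

# full PCI listing, to confirm whether the slot behind the riser is really empty
lspci -vv
# look for PCIe / ixgbe messages at boot
dmesg | grep -iE 'pci|ixgbe'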
So I upgraded one of the old nodes to 4.4. The migration process still needs to connect to the IPv6 address, but once I added an IPv6 address, the migration worked flawlessly on the private subnet.
So if a node is identified by its IPv6 address, never mind the subnet used for migration, it...
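For reference, adding the IPv6 address was nothing more than an extra stanza in /etc/network/interfaces on that node (the 2001:db8:: prefix below is only a placeholder, not our real prefix):

iface vmbr0 inet6 static
        address 2001:db8:10::2
        netmask 64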
We just added 2 PVE 5.1 hosts into our 'old' 4.2 cluster in order to migrate and upgrade the old nodes.
These old nodes are IPv4 only and the new ones are dual-stacked (IPv4 & IPv6). We have set up a dedicated 10Gbps network between all nodes for DRBD replication. It would be great to use it for...
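If I read the docs correctly, on the newer nodes (PVE >= 4.3) the migration network can be pinned in /etc/pve/datacenter.cfg, with something like this (the subnet below is just an example, not our real one):

migration: secure,network=10.10.10.0/24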
We just activated the firewall on a Proxmox 4.4 installation.
We observed that Proxmox automatically adds a masquerade rule:
# iptables-save |grep MASQ
-A POSTROUTING -o vmbr0 -j MASQUERADE
So we need to delete this rule at every reboot, or the VMs see connections as coming from the hypervisor...
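For now we remove it by hand (the MASQUERADE rule lives in the nat table), with something like:

# iptables -t nat -D POSTROUTING -o vmbr0 -j MASQUERADE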