Hi All.
It feels like I am complaining with a golden spoon in my mouth, but these are our production servers...
When I migrate a VM from one node to another in the cluster, it seems to top out at around 800Mbps. Some information first...
Hardware I am using:
AMD EPYC 7702P 64-Core Processor
512GB DDR4 3200MHz
2x Micron 7300 480GB (ZFS RAID1 for boot)
6x Kioxia KCD6XLUL960G 960GB NVMe
Mellanox ConnectX-4 (25Gbps) NICs, connected with DAC cables.
The source node is running Proxmox 7.2 with kernel 5.15, and the destination node is running Proxmox 7.3 with kernel 6.1.

Disk setup for the 6x NVMe:
zpool create -f -o ashift=12 houmyvas mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1 mirror /dev/nvme6n1 /dev/nvme7n1
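In case it helps, this is roughly what I was planning to run on both nodes to rule out the pool itself as the bottleneck (houmyvas is the pool from the zpool create above; these are just standard ZFS commands, nothing Proxmox-specific):

# Watch per-vdev throughput on the pool, refreshing every second, while a migration runs
zpool iostat -v houmyvas 1
# Confirm the pool is healthy and not resilvering during the test
zpool status houmyvas
# Properties that can influence write throughput on the pool's root dataset
zfs get compression,recordsize,sync houmyvas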
Yes, the screenshot above shows the speed when migrating the memory, but the one above that should show that the speed is the same when migrating the HDD.
So my questions:
1. How can I figure out what the bottleneck is here? (See the rough diagnostic sketch below.)
2. What happens to the data that is written to the HDD while the migration takes place?
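For question 1, this is the kind of raw-network test I can run between the two nodes if that helps narrow it down (assuming iperf3 is installed on both nodes; 10.x.x.x is just a placeholder for the destination's address on the migration network):

# On the destination node: start an iperf3 server
iperf3 -s
# On the source node: single-stream test over the link the migration uses
iperf3 -c 10.x.x.x -t 30
# Same test with 4 parallel streams, to see whether one stream alone is the limit
iperf3 -c 10.x.x.x -t 30 -P 4
# Show which network and migration type (secure/insecure) the cluster is set to use
cat /etc/pve/datacenter.cfg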