I just tried to recreate osd.5 - after starting it, it received the full data set from the other hosts (it's now filled with 1.6 TB of data), so the NVMe itself should be fine. I'm unsure whether I've got a network problem between the two nodes pve004 and pve005.
I've got a small problem - I wanted to swap servers in my Proxmox 7.1 cluster. Removing node pve002 and adding the new pve005 worked fine, and Ceph was healthy.
But now, when I try to shut down pve004 and set its last NVMe OSD to out, I get 19 PGs in inactive state because of the new osd.5 on pve005...
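In case it's useful, these are roughly the commands I use to check the state after marking the OSD out (the pool name below is just a placeholder for my actual pool):

ceph osd out 5                         # mark osd.5 out so data rebalances away from it
ceph health detail                     # lists the inactive/undersized PGs and the OSDs involved
ceph pg dump_stuck inactive            # shows which PGs are stuck inactive and their acting OSDs
ceph osd pool get <poolname> min_size  # PGs go inactive once fewer than min_size replicas are up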
That's only the memory migration - the storage is on Ceph.
It's a 40GbE link without RDMA - currently also used by Ceph (I haven't activated the second port yet).
Here's the output from the migration:
2021-08-29 15:44:58 use dedicated network address for sending migration traffic (172.20.253.202)
2021-08-29 15:44:58...
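For reference, the dedicated migration network comes from /etc/pve/datacenter.cfg, roughly like this (the /24 subnet is my assumption based on the address in the log above):

# /etc/pve/datacenter.cfg
migration: secure,network=172.20.253.0/24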
After a small network upgrade, I'm trying to tune the live migration performance. With the secure migration mode I currently get 300-400 MiB/s; in insecure migration mode, I get around 1.6 GiB/s.
That's still only half the speed I get with iperf on a single parallel test transfer...
Are there...
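For context, this is the kind of iperf test I'm comparing against (the target address is just the migration IP from above, iperf3 syntax):

iperf3 -c 172.20.253.202 -t 30        # single stream
iperf3 -c 172.20.253.202 -t 30 -P 4   # four parallel streams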