Recent content by JDA

  1. Ceph 17.2 Quincy Available as Stable Release

    Stupid question - do we need to follow the hint from the Quincy release notes?
  2. Ceph - slow ops for nvme OSD

    I just tried to recreate osd.5 - after starting it, it backfilled all of its data from the other hosts (it now holds 1.6 TB), so the NVMe itself should be fine. I'm unsure whether I have a network problem between the two nodes pve004 and pve005
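Since the OSD backfilled cleanly, a quick way to rule out the link between pve004 and pve005 is a raw bandwidth test. A minimal diagnostic sketch, assuming iperf3 is installed on both nodes and run against the live cluster (hostnames taken from the post):

```shell
# On pve004: start an iperf3 server
iperf3 -s

# On pve005: measure throughput toward pve004 in both directions
iperf3 -c pve004        # pve005 -> pve004
iperf3 -c pve004 -R     # pve004 -> pve005 (reverse mode)

# Check per-OSD commit/apply latency for anything that stands out
ceph osd perf
```

If one direction is much slower than the other, that points at the network path rather than the NVMe.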
  3. Ceph - slow ops for nvme OSD

    I've got a small problem - I wanted to swap servers in my Proxmox 7.1 cluster. Removing node pve002 and adding the new pve005 worked fine, and Ceph was healthy. But now that I'm trying to shut down pve004 and mark its last NVMe OSD as out, 19 PGs go inactive because the new osd.5 on pve005...
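When PGs go inactive after marking an OSD out, the usual first step is to see which PGs are stuck and where they map. A short sketch (commands assume a live Ceph cluster; osd.5 comes from the post):

```shell
# Overall health, including the specific PGs involved
ceph health detail

# List PGs stuck in the inactive state
ceph pg dump_stuck inactive

# Confirm osd.5 is up/in and weighted as expected
ceph osd tree
```

Inactive PGs after an out typically mean the remaining OSDs cannot satisfy the pool's min_size, which is worth checking before taking the last node down.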
  4. Live migration network performance

    That's only the memory migration - storage is on Ceph. It's a 40GbE link without RDMA, currently also shared with Ceph (the second port isn't activated yet). Here's the output from the migration: 2021-08-29 15:44:58 use dedicated network address for sending migration traffic (172.20.253.202) 2021-08-29 15:44:58...
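The "dedicated network address" line in that log corresponds to the migration setting in /etc/pve/datacenter.cfg. A hedged sketch - the /24 subnet is an assumption, since only the single address 172.20.253.202 appears in the log:

```
# /etc/pve/datacenter.cfg
# type is "secure" (default, tunneled over SSH) or "insecure"
migration: type=secure,network=172.20.253.0/24
```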
  5. Live migration network performance

    After a small network upgrade, I'm trying to tune live migration performance. In secure migration mode I currently get 300-400 MiB/s; in insecure mode, around 1.6 GiB/s. That is still half the speed I get with iperf on a single parallel test transfer... Are there...
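As a sanity check on those numbers, a short sketch converting the 40 Gbit/s line rate into GiB/s and comparing it with the quoted rates. The ~3.2 GiB/s iperf figure is inferred from "half the speed" and is an assumption, not a measurement from the post:

```python
GIB = 2**30  # bytes per GiB

def gbit_to_gib_per_s(gbit_per_s: float) -> float:
    """Convert a link rate in Gbit/s to GiB/s of payload (ignoring protocol overhead)."""
    return gbit_per_s * 1e9 / 8 / GIB

line_rate = gbit_to_gib_per_s(40)  # theoretical maximum of a 40GbE link
migration = 1.6                    # GiB/s quoted for insecure migration mode
iperf_single = 3.2                 # GiB/s, assumed from "half the speed" with iperf

print(f"40GbE line rate ~ {line_rate:.2f} GiB/s")
print(f"insecure migration uses ~ {migration / line_rate:.0%} of line rate")
```

So even the single iperf stream sits below the 40GbE line rate, and the insecure migration reaches roughly a third of it, which leaves room for tuning on both layers.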