Search results

  1. Ceph 17.2 Quincy Available as Stable Release

    Stupid question - do we need to follow the hint from the Quincy release notes?
  2. Ceph - slow ops for nvme OSD

    I just tried to recreate osd.5 - after starting it, it received the full data from the other hosts (it now holds 1.6 TB of data). So the NVMe should be fine. I'm unsure whether I have a network problem between the two nodes pve004 and pve005 (a minimal recreate sketch follows after this list).
  3. Ceph - slow ops for nvme OSD

    I've got a small problem - I wanted to swap servers in my Proxmox 7.1 cluster. Removing node pve002 and adding the new pve005 worked fine, and Ceph was healthy. But now that I try to shut down pve004 and set the last NVMe OSD there to out, I get 19 PGs in an inactive state because the new osd.5 on pve005...
  4. Live migration network performance

    That's only memory migration - the storage is on Ceph. It's a 40GbE link without RDMA, currently also shared with Ceph (the second port isn't activated yet). Here's the output from the migration: 2021-08-29 15:44:58 use dedicated network address for sending migration traffic ( 2021-08-29 15:44:58...
  5. Live migration network performance

    After a small network upgrade, I'm trying to tune the performance of live migration. With the secure migration mode I currently get 300-400 MiB/s; in the insecure migration mode, I get around 1.6 GiB/s. That's still half the speed I get with iperf on a single parallel test transfer... Are there... (a migration-settings sketch follows after this list)
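
For context on the OSD recreate mentioned in result 2, this is a minimal sketch of the usual Proxmox/Ceph flow. The OSD id matches the thread, but the device path is an assumption:

    # Hypothetical example: osd.5 on pve005, NVMe at /dev/nvme0n1 (assumed device path)
    ceph osd out 5                     # stop mapping new data to the OSD
    systemctl stop ceph-osd@5          # stop the OSD daemon on its node
    pveceph osd destroy 5 --cleanup    # remove the OSD and wipe its volumes
    pveceph osd create /dev/nvme0n1    # recreate the OSD on the same device
    ceph -s                            # watch backfill until the cluster is HEALTH_OK again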
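
On the live-migration tuning in result 5: the secure/insecure mode and a dedicated migration network are cluster-wide settings in /etc/pve/datacenter.cfg. A minimal sketch, assuming a 40GbE migration subnet of 10.10.10.0/24 (the CIDR is an assumption, not taken from the thread):

    # /etc/pve/datacenter.cfg (hypothetical values)
    # insecure skips the SSH tunnel for the migration stream; use it only on a trusted network
    migration: type=insecure,network=10.10.10.0/24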

