I ran out of space on a Ceph pool, so I am moving data to a local partition. The data is a 512 GB virtio-based hard drive used by one of my virtual machines. The move has taken almost 40 minutes and is only partway complete. I was doing the same thing with the disks of some of my other virtual machines as well; in that case I was moving the data from an SSD Ceph pool to an NVMe-based one.
Those migrations were also oddly slow: even a 128 GB virtio disk took 1-2 hours to migrate. I am not sure exactly what performance to expect, but this seems far slower than it should be.
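To put numbers on it, here's my back-of-the-envelope math (a quick sketch; the disk sizes and times are the ones above, and 1.25 GB/s is just the theoretical line rate of the 10 GbE Ceph NIC I mention below):

```python
# Rough throughput check using the figures quoted in this post.

def effective_mb_per_s(size_gb: float, seconds: float) -> float:
    """Average throughput in MB/s for a transfer of size_gb taking seconds."""
    return size_gb * 1024 / seconds

# 128 GB virtio disk taking 1-2 hours:
print(effective_mb_per_s(128, 1 * 3600))  # ~36 MB/s if it took 1 hour
print(effective_mb_per_s(128, 2 * 3600))  # ~18 MB/s if it took 2 hours

# A 10 GbE link tops out around 1.25 GB/s raw; even at half line rate,
# 128 GB should move in a few minutes, not hours:
print(128 * 1024 / (1250 / 2) / 60)       # ~3.5 minutes
```

So these migrations are running at only a few percent of what the network alone should sustain, which is what makes me suspect a misconfiguration rather than a hardware limit.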
I currently have 9 machines in my cluster. They are Dell R730s, each with a dedicated 10 GbE NIC just for the Ceph backend. Each machine has 384 GB of RAM, and all the drives I am referencing are SSD or NVMe, so I am trying to work out what misconfiguration on my end could be making things so glacial.
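For what it's worth, the next sanity check I'm planning is to measure raw RADOS throughput, independent of the VM layer. Here's a minimal sketch (Python shelling out to `rados bench`); the pool name is just a placeholder for a throwaway scratch pool, since the benchmark writes real objects:

```python
# Minimal sketch: measure raw write and sequential read throughput
# on a Ceph pool via `rados bench`, driven from Python.
import subprocess

POOL = "bench-test"  # placeholder scratch pool; do NOT point this at production

# 30-second write benchmark; --no-cleanup keeps the objects for the read pass
subprocess.run(["rados", "bench", "-p", POOL, "30", "write", "--no-cleanup"], check=True)

# Sequential read pass over the objects written above
subprocess.run(["rados", "bench", "-p", POOL, "30", "seq"], check=True)

# Remove the leftover benchmark objects
subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)
```

If that shows the pool itself saturating the 10 GbE link, then the bottleneck is presumably somewhere in the migration path rather than in Ceph or the network.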