Hi Team,
We are seeing slow performance when cloning Windows VMs.
Let me explain our setup.
7-node Proxmox cluster with Ceph storage
Each node has 5 x 2 TB Crucial 500 SSDs.
2 x 40 Gb NICs for the Ceph public/cluster network.
2 x 10 Gb NICs for the VM/migration network.
2 x 1 Gb NICs for Corosync & Proxmox management.
Replica size & min_size is 2/2.
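For reference, the pool replication settings above can be confirmed with the commands below; the pool name "rbd" is just a placeholder, substitute the actual RBD pool Proxmox uses:
ceph osd pool get rbd size       # replica count
ceph osd pool get rbd min_size   # minimum replicas required for I/O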
When we clone multiple Windows VMs (100 GB each) at the same time (say 5 in parallel), it takes 20 to 25 minutes.
We also notice the clone sometimes reaches 100% quickly and then takes a long time to complete the final sync.
Are there any specific Ceph tuning recommendations to improve performance?
So far we have only done the following from the command line.
echo 2048 > /sys/block/sda/queue/read_ahead_kb
echo 2048 > /sys/block/sdb/queue/read_ahead_kb
echo 2048 > /sys/block/sdc/queue/read_ahead_kb
echo 2048 > /sys/block/sdd/queue/read_ahead_kb
cat /sys/block/sda/queue/read_ahead_kb
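A loop like the one below could apply the same read-ahead value to all of the OSD disks in one pass; the device names sda..sde are only an assumption based on five SSDs per node, and the sysfs setting does not survive a reboot, so a udev rule would be needed to make it persistent:
for dev in sda sdb sdc sdd sde; do
    # assumed device names for the five OSD SSDs in each node
    echo 2048 > /sys/block/$dev/queue/read_ahead_kb
done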
We cannot set the noop scheduler (which is said to deliver better performance for SSDs); we get an error when trying to set it.
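If the nodes run a recent kernel with the multi-queue block layer (blk-mq), the legacy noop scheduler no longer exists, which would explain the error; the blk-mq equivalent is "none". A quick check, assuming sda is one of the OSD disks:
cat /sys/block/sda/queue/scheduler      # lists available schedulers, e.g. [mq-deadline] kyber bfq none
echo none > /sys/block/sda/queue/scheduler   # "none" is the blk-mq equivalent of noop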
The link below has many recommendations, and we are not sure which parameters are really required to gain performance.
https://tracker.ceph.com/projects/c...l_Flash_Deployments#Ceph-Client-Configuration
Thanks,
Raj