ok - the NIC labeled "cluster" is the Proxmox cluster network, not an optional Ceph cluster network.
Right now it is on 10G but moving to 100G once I am 100% sure I have the 100G working properly.
later today adding cross connect cables between the two 100G switches...
Hmm. I will try that (I was already planning on moving the cluster to 100G), but the docs say the Ceph interface handles both replication and RADOS traffic, and they talk about moving replication to a different network for performance. If what you said is correct...
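For what it's worth, Ceph can split those two kinds of traffic: public_network carries client/RADOS traffic, cluster_network carries OSD replication and heartbeats. A minimal ceph.conf sketch (the subnets here are placeholders, not this cluster's actual addressing):

```ini
[global]
    # client / RADOS traffic (placeholder subnet)
    public_network  = 10.0.10.0/24
    # OSD-to-OSD replication and heartbeat (placeholder subnet)
    cluster_network = 10.0.20.0/24
```

If cluster_network is not set, everything rides on public_network, which matches the behavior described above.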
well, increasing pg_num to 4096 helped, but not enough
now getting
WRITE: bw=6598MiB/s (6918MB/s), 217MiB/s-708MiB/s (228MB/s-743MB/s), io=476GiB (511GB), run=73787-73845msec
Disk stats (read/write):
dm-0: ios=0/33834, merge=0/0...
may have found it. based on the install manual I had left pg_num=auto, which resulted in 48 PGs...for 66 850GB drives!
I just set it to 4096 and will test after it finishes rebuilding
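For scale: the usual rule of thumb from the Ceph placement-group docs is roughly 100 PGs per OSD, divided by the replica count, rounded to a power of two. A quick sketch of that arithmetic (assuming 3x replication, which the posts above don't state):

```python
def suggested_pg_num(osd_count: int, replicas: int = 3,
                     target_per_osd: int = 100) -> int:
    """Nearest power of two to (osd_count * target_per_osd) / replicas."""
    raw = osd_count * target_per_osd / replicas
    lower = 1 << (int(raw).bit_length() - 1)  # power of two just below raw
    upper = lower * 2                          # power of two just above raw
    return upper if (raw - lower) > (upper - raw) else lower

# 66 OSDs * 100 / 3 = 2200 -> nearest power of two is 2048
print(suggested_pg_num(66))  # -> 2048
```

So 48 PGs was off by roughly two orders of magnitude for 66 OSDs, and either 2048 or 4096 is in the right ballpark.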
one more note: after I moved the Ceph interface from one NIC to the other, Ceph seemed fine but the virtual bridges for the VMs stopped working. I ended up rebooting the servers one at a time, and then those came back.
All tests done after the server...
so...been using Proxmox for many years (almost 20), and have several existing server clusters.
Just building a new server stack: 6x Dell R760, 768G RAM, dual 32-core CPUs. Each server has 11x 850G 24Gbps SAS SSDs, 4x 1G NICs plus 2x dual-port 100G NICs...