Hi,
Some time ago I used Proxmox on a single server with a few VMs and it worked just fine, so when I recently saw the possibilities of clustering and Ceph integration I got excited. It looks awesome!
I have 3 servers with the following configuration:
AMD Epyc
256 GB memory
4 x 960 GB NVMe (datacenter edition)
1 Gbit LAN dedicated to the Proxmox cluster and Ceph
I tried this config:
2 disks in RAID1 for the system
2 disks as OSDs
but performance looks more like SATA than NVMe, and that is not good enough.
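One thing I'm considering, if the 1-OSD-per-NVMe limitation is real: ceph-volume can apparently carve a device into several OSDs in one go. A sketch (device names are guesses, not from my actual setup):

```shell
# Hypothetical: create 4 OSDs per NVMe so more BlueStore threads
# can drive each device in parallel
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1 /dev/nvme1n1
```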
I've read suggestions to create 4 partitions per NVMe disk, and another suggestion that setting bluestore_shard_finishers = true fixes the problem of running only 1 OSD per NVMe device, but I can't find where I could put that setting.
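If that option is valid on my Ceph version, I assume it would go into ceph.conf (on Proxmox that is /etc/pve/ceph.conf) under the [osd] section, something like:

```ini
[osd]
# assumption: option name as mentioned in the suggestion; the OSDs
# would need a restart for it to take effect
bluestore_shard_finishers = true
```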
The cluster usage would be: 1 VM for a ~200 GB database (with some heavy disk operations), and 2-3 more VMs for jobs and/or nginx+PHP, so that part could be cached entirely in memory and use the disks only for logs.
Maybe I should put a partition or two for OSDs on the system disks? Or maybe the journal? What would you recommend?
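From what I've read, BlueStore has no FileStore-style journal; the closest equivalent would be putting the RocksDB/WAL on a separate device, e.g. (hypothetical device names):

```shell
# Hypothetical: OSD data on one NVMe, its DB/WAL on a partition elsewhere
ceph-volume lvm create --data /dev/nvme2n1 --block.db /dev/nvme0n1p4
```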
I'm waiting to hear from my DC whether they can upgrade my LAN to 10 Gbit. I can also add some disks, so if it would have a big effect on performance I could add a separate pair of 240 GB SSDs just for the Proxmox system RAID1, and all 4 NVMe drives would go to Ceph.
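As a sanity check on the network side, a quick back-of-envelope calculation (ignoring protocol overhead and Ceph replication traffic) suggests the 1 Gbit link itself caps out near the throughput I'm seeing:

```shell
# Line-rate ceiling of the cluster network in decimal MB/s:
# N Gbit/s = N * 1000 Mbit/s, divided by 8 bits per byte
echo "$((1 * 1000 / 8)) MB/s ceiling on a 1 Gbit link"     # 125 MB/s
echo "$((10 * 1000 / 8)) MB/s ceiling on a 10 Gbit link"   # 1250 MB/s
```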
I'm looking for a way to boost performance right now, because I know ~140 MB/s will not suffice for my needs.
Code:
root@ ~ # rados bench -p cephfs_data 10 seq
hints = 1
sec  Cur ops  started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
  0        0        0         0         0         0            -          0
  1       16       66        50    199.94       200     0.144333   0.206032
  2       16       98        82   163.965       128     0.651008   0.311796
  3       16      127       111   147.972       116     0.347257   0.338726
  4       16      154       138   137.976       108     0.119017   0.373592
  5       16      194       178   142.376       160     0.112286   0.394227
  6       16      229       213   141.977       140    0.0113202   0.409985
  7       16      272       256   146.263       172     0.230196   0.412742
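As far as I understand, the seq test only reads objects left behind by an earlier write run with --no-cleanup, so the matching write benchmark on the same pool would be something like:

```shell
# Write benchmark on the same pool for 10 seconds with 16 concurrent ops;
# --no-cleanup keeps the objects so a later 'seq' pass has data to read
rados bench -p cephfs_data 10 write --no-cleanup -t 16
```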