Search results

  1. D

    my NVMEs suck

    @zedicus : I just tested a standard-grade Samsung NVMe M.2 SSD and it gives 270 fsyncs per second, and as it is nearly 60% full even the IOPS are far from impressive. Here is a fio result: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k...
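
    A minimal sketch of the kind of fsync-bound fio job described in the snippet above (file name, size and runtime are placeholders, not the exact command from the post); with --fsync=1 the reported write IOPS roughly corresponds to the fsyncs-per-second figure quoted:

      fio --name=fsync-test --filename=test --ioengine=libaio --direct=1 \
          --rw=randwrite --bs=4k --fsync=1 --size=1G --runtime=60 --time_based
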
  2. D

    my NVMEs suck

    You are using ZFS. The best results we had with ZFS were obtained with plain SSDs + HBA. May I ask why you based your perf test on fsync() performance? If you don't fsync(), what results do you get with fio, for example? If you are planning to host MySQL DBs and will have to rely on fsync()...
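
    For the "what do you get without fsync()" comparison suggested above, a hedged sketch is simply the same kind of job with the --fsync=1 option dropped, which shows the drive's buffered 4k random-write rate instead of its sync-write rate (names and sizes are placeholders):

      fio --name=nosync-test --filename=test --ioengine=libaio --direct=1 \
          --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based
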
  3. D

    Best RAID configuration for PVE

    If you have 8 disks and use RAID 10 (1+0), you will have 4 disks per sub-array, netting you 4 TB of data: (2+2) mirrored and (2+2) striped, so 4 TB usable. If you chose RAID 1 on an 8-HDD hardware RAID or a SAN, the aggregate disk distribution would probably be (1+1) (1+1) (1+1) (1+1), so 4 TB also.
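
    The arithmetic behind both layouts, assuming eight 1 TB drives (consistent with the 4 TB figure above): mirroring halves the raw capacity, so usable = 8 x 1 TB / 2 = 4 TB, whether the mirrors are arranged as (2+2) sub-arrays or as four (1+1) pairs.
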
  4. D

    Proxmox VE Ceph Benchmark 2018/02

    @Alwin, here are the raw perfs of the 64x SAS pool. Destroyed the pools, created an HDD-only pool, and issued a lot of write threads: rados bench -p testsas 180 write -b 4M -t 1024 --no-cleanup
      2018-07-06 14:51:45.695910 min lat: 3.33414 max lat: 4.06629 avg lat: 3.67097
      sec Cur ops started...
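
    Because the write run above uses --no-cleanup, the benchmark objects stay in the pool; a follow-up sketch (same pool name assumed) that reads them back sequentially and then removes them:

      rados bench -p testsas 180 seq -t 1024
      rados -p testsas cleanup
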
  5. D

    Proxmox VE Ceph Benchmark 2018/02

    Thanks @Alwin. I will read this carefully
  6. D

    [SOLVED] Cehp timeout and losted disks

    Hello @kaltsi, it's a tough one. Slow requests can be caused by network issues, disk issues, or even controller issues, as stated by @Alwin. Looks like you have 12x 7200 RPM SATA drives. Do you use filestore or bluestore? In the lab we tested filestore with 12 SATA drives too and it was not...
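
    One hedged way to answer the filestore-vs-bluestore question from the OSD side (osd id 0 is just an example):

      ceph osd metadata 0 | grep osd_objectstore
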
  7. D

    Proxmox VE Ceph Benchmark 2018/02

    Yes @Alwin, you are right, we will need to tweak this to get a more 'real life' scenario. In fact it is called a 'cache' in Ceph's documentation, but it behaves more like a tiering system. By default it seems that there are no dirty object evictions until the cache pool is full. So eventually...
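
    The flush/evict behaviour described above is governed by the cache-tier pool settings; a sketch of the relevant knobs from the Ceph documentation (the byte limit is a placeholder, and the ratios only take effect once target_max_bytes or target_max_objects is set):

      ceph osd pool set cache target_max_bytes 500000000000
      ceph osd pool set cache cache_target_dirty_ratio 0.4
      ceph osd pool set cache cache_target_full_ratio 0.8
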
  8. D

    Proxmox VE Ceph Benchmark 2018/02

    So here are the results. I/Os are issued on the SSD pool: rados bench -p cache 60 write -b 4M -t 16
      sec  Cur ops  started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
       60       16    16251     16235   1082.18      1084   0.0415562   0.0590952
      Total time run: 60.048231
      Total...
  9. D

    Proxmox VE Ceph Benchmark 2018/02

    @Alwin no, I just tested the VM. I will use the same commands you used in the official benchmark and post the results.
  10. D

    Proxmox VE Ceph Benchmark 2018/02

    I did it like this: one pool with all the SSDs, one pool with all the HDDs.
      Then assign the SSD pool named cache to the HDD pool named data: ceph osd tier add data cache
      Assign the cache policy: ceph osd tier cache-mode cache writeback
      To issue I/O from the SSD pool: ceph osd tier set-overlay...
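
    The truncated final command follows the documented form ceph osd tier set-overlay <storagepool> <cachepool>; with the pool names used above it would presumably read:

      ceph osd tier set-overlay data cache
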
  11. D

    Proxmox VE Ceph Benchmark 2018/02

    Hi guys, here's a short summary of some tests run in our lab. Hyperconverged setup. Server platform is 4x Lenovo SR650, each with: 2x Intel Silver 4114 (10 cores), 256 GB RAM @ 2666 MHz, 1x embedded 2x10Gbps Base-T LOM (Intel x722) #CEPH, 1x PCI-E 2x10Gbps Base-T adapter (Intel x550) #VMBR0. For each...
  12. D

    RDP performance

    OK, got it: the client is also a VM => the server is a Win 10 VM, and RDP from client to Win 10 is a disaster. I just fired up a Win 10 VM, stock, no updates, no drivers... it runs mostly smooth at 2560x1600 connecting from my laptop to the guest.
  13. D

    RDP performance

    I guess transferring to the hypervisor is not quite the same as loading the guest. As RDP is very sensitive to variations in network conditions, I suppose confirming that the client ==> VM path is OK might help.
  14. D

    RDP performance

    Have you tried running iperf from the client to the server? You might get some unexpected results there.
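
    A minimal sketch of that test, assuming iperf3 is available on both ends (<server-ip> is a placeholder):

      iperf3 -s                          # on the RDP server
      iperf3 -c <server-ip> -t 30        # on the client
      iperf3 -c <server-ip> -t 30 -R     # reverse direction
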
  15. D

    RDP performance

    Hello, may I ask what screen resolution you are using client-side, and what kind of RDP client is used? Best regards,
  16. D

    [SOLVED] Proxmox ZFS - Unusable

    Hi everyone, hi Darren, you are right, ZFS is not the issue, the hardware is. Maintaining an HCL is going to be tough due to all the setup options offered by PVE. Maybe a tips and tricks sticky post? I don't know. I have seen a lot of issues on the forum regarding ZFS and RAID adapters...
