Search results

  1.

    [SOLVED] Windows VM Poor Performance on Ceph

    Write-back cache, reduce network latency. And read the testing and optimization guides and tests that have been done by others and published in the forum.
  2.

    perfDHCP, odd issue

    Not a Proxmox issue, just a general DHCP issue. Running a Windows DC with DHCP, experiencing DHCP issues: clients not getting IPs, taking a long time, etc. Ran perfdhcp against the server and this is the result: Running: perfdhcp -x i -t 2 -r 10 -R 100 10.a.b.c Scenario: basic. Multi-thread mode...
  3.

    [Feature Request] GUI wizard for importing OVA files

    I really think Proxmox would gain a lot by having a GUI wizard for importing OVA files. I know it's easy in the CLI, but all the new customers are coming from VMware, and by not adding such a small and simple feature, lots of new customers are turning away. Considering the current state of VMware...
  4.

    [SOLVED] Windows VM Poor Performance on Ceph

    That is slow seq read/write. But Ceph is really slow; you need to adhere to the standards, i.e. five nodes with multiple OSDs per node. I'm new to Ceph myself, but my limited testing yields similar results, though not that bad. I have only used very old HGST 200GB SAS SSDs, 6 OSDs per node in a 3-node...
  5.

    Node maintenance mode UI

    I agree! Some basic functions need to be in place for operations.
  6.

    CEPH Recovery/Rebalance back and forth?

    Thank you for the information. You are correct. # ceph osd pool autoscale-status POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK .mgr 1985k 3.0 3353G 0.0000...
  7.

    CEPH Recovery/Rebalance back and forth?

    Recovery/rebalance: what could cause this behaviour? All disks are fine, no OSD failing. One empty VM is using Ceph as storage, so basically no usage. I have verified smartctl on all SSDs. Why is the cluster rebalancing now and then? Is that expected behaviour?
  8.

    [SOLVED] CEPH performance ok?

    lxc: Jobs: 16 (f=16): [m(16)][31.8%][r=2032KiB/s,w=368KiB/s][r=127,w=23 IOPS][eta 13m:51s] fio: terminating on signal 2 Jobs: 16 (f=16): [m(16)][31.8%][eta 13m:54s] randrw: (groupid=0, jobs=16): err= 0: pid=913: Wed Jan 10 22:15:24 2024 read: IOPS=3474...
  9.

    [SOLVED] CEPH performance ok?

    Hm, what happened to my Ceph cluster? ~# ceph -w cluster: id: 47a8ff1a-0599-4215-b268-b4c06ef9274e health: HEALTH_OK services: mon: 3 daemons, quorum pm3,pm1,pm2 (age 10h) mgr: pm3(active, since 10h), standbys: pm1, pm2 osd: 18 osds: 18 up (since 9h), 18 in...
  10.

    [SOLVED] CEPH performance ok?

    Another observation: starting more benchmarks doesn't affect the performance of a single VM that much, but I can see the increased IOPS under DC -> Ceph. There is high overhead in Proxmox/VM. I created a new VM with more vCPUs, but more importantly these settings: now the VM performance is...
  11.

    [SOLVED] CEPH performance ok?

    Running the tests in parallel yields a higher total, so it seems that the VM is the most limiting factor here!
  12.

    [SOLVED] CEPH performance ok?

    Just to compare, I set up an LXC with 16 vCPUs. Jobs: 3 (f=3): [_(1),m(2),_(3),m(1),_(1)][100.0%][r=73.2MiB/s,w=7872KiB/s][r=4683,w=492 IOPS][eta 00m:00s] randrw: (groupid=0, jobs=8): err= 0: pid=718: Wed Jan 10 21:17:07 2024 read: IOPS=5626, BW=87.9MiB/s (92.2MB/s)(14.4GiB/167758msec) slat...
  13.

    [SOLVED] CEPH performance ok?

    overview during testing
  14.

    [SOLVED] CEPH performance ok?

    Changing vCPUs doesn't matter much at these low speeds, it seems. Using sync=0 (well, no sync) yields ~800 write IOPS. @test:~$ fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=4 --rwmixread=75 --size=512MB --runtime=100 --group_reporting randrw: (g=0): rw=randrw...
  15.

    [SOLVED] CEPH performance ok?

    Test with 8 vCPUs: @test:~$ fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=75 --size=512MB --runtime=100 --sync=1 --group_reporting randrw: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=libaio, iodepth=1...
  16.

    [SOLVED] CEPH performance ok?

    Nexus 5548 LACP. I'll try writeback! And I upped the cores to 8, although CPU util was 50-75% during testing. Nexus 5548UP, see the iperf benchmark above; performance is 10Gbit, LACP/vPC.
  17.

    [SOLVED] CEPH performance ok?

    VM settings for the VM that runs fio.
  18.

    [SOLVED] CEPH performance ok?

    Mostly different DBs in different VMs, more IO work; it's a lab for servers/networking etc., but decent performance is preferred. I'll try and get more local storage for when performance is really needed. With the current setup of 6 SAS SSDs on each node, that can push (peak IOPS) 50k/25k IOPS...
  19.

    [SOLVED] CEPH performance ok?

    Network benchmark: @pm2:~# iperf -c 172.16.50.10 -i 2 -e ------------------------------------------------------------ Client connecting to 172.16.50.10, TCP port 5001 with pid 89725 (1 flows) Write buffer size: 131072 Byte TOS set to 0x0 (Nagle on) TCP window size: 16.0 KByte (default)...
  20.

    [SOLVED] CEPH performance ok?

    Running a lab (so no real prod use case, except nice to have): a 3-node Ceph cluster with 6 HGST SAS 200GB SSDs per node, standard setup with 3 replicas, on a 2x10Gbps network shared with VMs, but there are no VMs here yet, so very little other traffic. Ran this fio benchmark in an Ubuntu VM; is...
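Several of these results quote the same fio random read/write benchmark. As a convenience, it can be collected into a small shell sketch; the flags are copied from the snippets above, while the `FIO_CMD` variable and the final `echo` (printing the command instead of running it, so the sketch is safe to paste anywhere) are my own additions.

```shell
#!/bin/sh
# Sketch of the fio benchmark quoted in the results above. The flags come from
# the forum snippets; this prints the command rather than running it, since a
# real run writes 8x512MB of test data and takes up to 100 seconds.
FIO_CMD="fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio \
--bs=16k --numjobs=8 --rwmixread=75 --size=512MB --runtime=100 \
--sync=1 --group_reporting"

# --sync=1 forces synchronous writes, which pushes every write through to the
# Ceph OSDs before fio moves on; that is what makes the write IOPS in the
# thread so much lower than the sync=0 run.
echo "$FIO_CMD"
```

To reproduce the thread's numbers, run the printed command inside the test VM; `--numjobs` and `--sync` are the knobs the poster varied between results 14 and 15.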
