Search results

  1. IO delay improving

    IO delay just means you're sending more requests than the underlying storage can complete in real time; the higher the value, the greater the percentage of requests pending IO availability. Low values such as 0.06 are really nothing to worry about; however, you are running consumer-grade SSDs, so having some IO wait... (see the iostat sketch after this list)
  2. (Too?) Big IO Delay peaks

    Well, migrating a VM involves a large amount of data. At the end of the day, think of it like a hosepipe and a bucket of water: the hose can only transfer so much water at once. You can limit the speed at which you pour the bucket so the water can travel through the pipe and not overflow, or you can pour the whole... (see the migration bandwidth-limit sketch after this list)
  3. (Too?) Big IO Delay peaks

    The graph shows you have huge I/O wait. If a process running in a VM requires more IO than your underlying storage can provide, then yes, without any limits it will use as much IO as possible and cause a backlog of requests.
  4. (Too?) Big IO Delay peaks

    The VM isn't hanging the other VMs; it's saturating the disks, meaning the other VMs become slow due to slow / delayed IO requests. In Proxmox you can edit the VM in question and apply disk I/O limits, which may help make sure that particular VM never fully saturates 100% of the available... (see the disk I/O limit sketch after this list)
  5. Poor SSD performance in

    As tom said, cheap SSDs will have a small amount of fast NAND as a cache layer and then the rest will be cheap NAND; hence the issue you're facing: when writing a large file over 1GB, you fill the faster layer, move into the slow area, and get large I/O wait.
  6. (Too?) Big IO Delay peaks

    Have you checked the per-VM graphs during the same peak, to see if you can locate a particular VM that was completing a large amount of IO at the time? You could then use this information to investigate anything running on that VM.
  7. Node View - Network Traffic Graph

    The network traffic graph at node level is handy for a quick overview of any peaks in traffic; however, on setups with multiple NICs / networks (Ceph / cluster networks) it shows all the traffic as one single graph, which makes it difficult to view just the pure traffic used on the external...
  8. perf. issue with LACP (2+3) : ceph poor performance (with powerfull hardware).

    The cluster network is used for one OSD to send its data to the other replicated OSDs for your 3-way replication; the public network is where the primary OSD receives the data from the client. As you have multiple 1Gbps NICs, if you put 3 in the public network and 2 in the cluster network it will allow you to... (see the ceph.conf sketch after this list)
  9. How to migrate a ceph RBD pool to a new cluster

    How big is each VM? What kind of Ceph cluster size/performance do you have on each side? With such a big difference in versions, I really think you would be limited to powering off, migrating the disk manually, and then powering the VM back on, or doing an upgrade on cluster1 on both PVE and...
  10. Degraded Windows VM-Performance on SSD with Ceph-Cluster

    What network do you have connecting each of the hosts? 1Gbps / 10Gbps?
  11. Linux Kernel 5.4 for Proxmox VE

    Are there any big headliner features that we should look out for / benefit from when this comes to PVE? I know I can check the upstream 5.4 changelog; I was more asking if there is anything particular from 5.3.x to 5.4.x, or if it's just the move to an LTS kernel.
  12. [SOLVED] Expand Cluster or keep seperate

    Then yes, you could nuke and add all the nodes/OSDs to one Ceph cluster and just run two separate pools, one for SSD and one for HDD. That would mean you only need one set of mons and one dashboard/monitoring view. (See the device-class pool sketch after this list.)
  13. [SOLVED] Expand Cluster or keep seperate

    I guess the best question is: do you need the data that is currently in these two Ceph environments, or can you nuke the environments fully and start fresh?
  14. Common KVM processor

    In the VM wizard, change the CPU type to host; you can also do this by selecting the VM's Hardware tab -> Processor and then Edit. You need to stop and start the VM for this to take effect. (See the qm set sketch after this list.)
  15. New All flash Ceph Cluster

    Currently it goes 3/30/300GB max, but it depends on the size of the OSD, so 2 OSDs should be more than fine.
  16. New All flash Ceph Cluster

    Does the MAX not come in smaller sizes? 1.6TB is massive overkill for WAL/DB, especially for 2 OSDs.
  17. Dell R920 VM Performance issues

    How many cores are you passing through to the VM? What CPU type have you selected?
  18. Feature request : CPU temperature for each node

    My personal view: as much as it would be nice to add the features you mention, it kind of bloats Proxmox out from being a VM management solution into something that tries to cover areas where other solutions should be used; there is plenty of free open-source software that can be used to...
  19. ceph gone near full and cant start any vm now

    Quite the same or exactly the same?
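Result 1 touches on what IO delay actually measures. A minimal way to confirm which device is behind the wait on the PVE host, assuming the sysstat package is installed (an assumption, not something the post states):

    # Watch extended per-device stats once per second
    iostat -x 1
    # A device sitting near 100 %util with rising await values is where the pending requests are queuing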
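Result 2's hosepipe analogy is about throttling a migration so it can't flood the storage/network. A hedged sketch of one way to do that, assuming a roughly 100 MB/s cap and VM ID 100 (both placeholders, not values from the post):

    # Per-job: limit a single live migration (bwlimit is in KiB/s)
    qm migrate 100 targetnode --online --bwlimit 102400

    # Or a cluster-wide default in /etc/pve/datacenter.cfg (also KiB/s)
    bwlimit: migration=102400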
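Results 3 and 4 recommend per-disk I/O limits so one busy VM cannot saturate the shared storage. A minimal CLI sketch, assuming VM 100 with a scsi0 disk on local-lvm (IDs, storage name, and limit values are all illustrative); the same fields are editable in the GUI under Hardware -> Hard Disk:

    # Cap the disk at ~100 MB/s and 500 IOPS in each direction
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,mbps_rd=100,mbps_wr=100,iops_rd=500,iops_wr=500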
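Result 8 describes splitting Ceph client traffic (public network) from OSD replication traffic (cluster network). That split is expressed in ceph.conf roughly like this; the subnets are made up for illustration:

    # /etc/pve/ceph.conf
    [global]
        public_network  = 10.10.10.0/24   # clients -> primary OSD
        cluster_network = 10.10.20.0/24   # OSD -> OSD replication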
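Result 12 suggests one Ceph cluster with separate SSD and HDD pools. One common way to do that is with CRUSH device classes; a hedged sketch, with pool names and PG counts chosen purely for illustration:

    # Replicated CRUSH rules restricted to one device class each
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    ceph osd crush rule create-replicated replicated_hdd default host hdd

    # One pool per rule
    ceph osd pool create ssd-pool 128 128 replicated replicated_ssd
    ceph osd pool create hdd-pool 128 128 replicated replicated_hdd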
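Result 14 covers switching the CPU type to host. The GUI path is given in the post; the equivalent CLI call (VM ID 100 is a placeholder) would be:

    # Expose the host CPU model to the guest, then stop/start the VM for it to apply
    qm set 100 --cpu host
    qm stop 100 && qm start 100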