Recent content by quanto11

  1. Q

    Inconsistent Disk Usage - VM Crashing

    10% is spare; you should never overprovision storage space, as that can result in data loss. I think you can do this while the server is running: take the volume offline in Windows, detach it via Proxmox, and re-add it with the needed settings, such as discard enabled. Not 100% sure
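
    A minimal sketch of the detach/re-add step with discard enabled, assuming a hypothetical VM ID 100, disk slot scsi1 and a storage named local-lvm:

        # detach the disk from the VM (it shows up again as an "unused" disk)
        qm set 100 --delete scsi1
        # re-attach the same volume with discard enabled
        qm set 100 --scsi1 local-lvm:vm-100-disk-1,discard=on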
  2. Q

    [SOLVED] VM slow since switching to host CPU type

    Do you have a link for me to where this problem is discussed? I was able to reproduce similar behavior on a very slow Ceph storage that is based on HDDs, where the queue for the HDDs rises to an unbearable level, so that the Ceph volume in Windows...
  3. Q

    [SOLVED] VM slow since switching to host CPU type

    In which setup does v266 act up that badly? Nearly 100 VMs have been running here with iSCSI since December without any problems.
  4. Q

    Inconsistent Disk Usage - VM Crashing

    You should never set a VM's disk size higher than or equal to the size of the underlying storage; always leave at least 10% free. In addition, your VM does not have discard enabled, so deleted data is not reported back to the host, which is why your storage is now shown as full although a quarter of it is actually free...
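
    A quick way to check both points, assuming a hypothetical VM ID 100 and LVM-thin storage (standard commands, not quoted from the post):

        # list the VM's disks; each scsiX/virtioX line should contain discard=on
        qm config 100 | grep -E 'scsi|virtio|sata'
        # on LVM-thin storage, Data% shows how much of the thin pool is really allocated
        lvs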
  5. Q

    Planning Proxmox Cluster + Storage

    In my opinion that is completely absurd. You might be able to implement something like that in very small companies with small VMs, but for anything larger than 200 GB you have to expect a considerable loss of time, not to mention what happens when hardware problems occur.
  6. Q

    Ceph fails after power loss: SLOW_OPS, OSDs flip between down and up

    Are the time settings on every cluster member correct, and are the mons and managers up? Can you provide the following details: ceph status, ceph osd tree, ceph osd df, ceph pg dump pgs_brief. You can try: ceph osd set norecover, ceph osd set nobackfill, ceph osd set noup, ceph osd set nodown. After that ceph...
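
    A hedged sketch of that sequence: collect the state first, then set the flags to stop the flapping while you investigate (the unset commands at the end are an assumption about the follow-up, not quoted from the post):

        # collect the current cluster state
        ceph status
        ceph osd tree
        ceph osd df
        ceph pg dump pgs_brief
        # pause recovery/backfill and stop OSDs from being marked up/down
        ceph osd set norecover
        ceph osd set nobackfill
        ceph osd set noup
        ceph osd set nodown
        # once the OSDs stay stable, remove the flags so recovery can resume
        ceph osd unset noup
        ceph osd unset nodown
        ceph osd unset nobackfill
        ceph osd unset norecover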
  7. Q

    [SOLVED] CEPH OSDs Full, Unbalanced PGs, and Rebalancing Issues in Proxmox VE 8

    I think we have several problems here: 1. uneven distribution of the HDDs, which sit on only 2 nodes; 2. 83 remapped PGs (which are being worked off right now?). pool 4 'hdd-pool' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 120 pgp_num_target 128 autoscale_mode on last_change 4488...
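
    A hedged way to look at both points, i.e. the per-node imbalance and the remap progress implied by pgp_num 120 still catching up to pgp_num_target 128 (standard Ceph commands, not quoted from the thread):

        # per-host and per-OSD usage; widely differing %USE/VAR values indicate imbalance
        ceph osd df tree
        # watch the remapped PGs drain while pgp_num catches up
        ceph status
        ceph balancer status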
  8. Q

    [SOLVED] CEPH OSDs Full, Unbalanced PGs, and Rebalancing Issues in Proxmox VE 8

    I think he does, because of: "data: pools: 3 pools, 289 pgs" (1. .mgr, 2. hdd, 3. ssd). Correct me if I'm wrong. This could help you, but it should really only be the very last resort, and only with complete backups: https://forum.proxmox.com/threads/pve-ceph-issues-full-and-recovery.131257/
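
    To confirm which pools exist and which one is actually filling up, a quick check (standard Ceph commands, not part of the quoted post):

        # list all pools with their replication and PG settings
        ceph osd pool ls detail
        # per-pool usage
        ceph df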
  9. Q

    Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    Yes, it works flawlessly. The only thing I noticed with v266 is that HDD-backed Ceph with a very high queue kills the specific volume, and it becomes unresponsive and stuck forever. The best way to trigger that is to have a file server with deduplication, start a garbage collection, and now comes the most...
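
    A hedged way to check whether the HDD OSDs' queues and latencies actually spike during such a workload (run on a Ceph node; these commands are a suggestion, not quoted from the post):

        # per-OSD commit/apply latency; HDD OSDs under a deep queue show large values here
        ceph osd perf
        # per-device queue size and utilisation on the OSD host
        iostat -x 5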
  10. Q

    Proxmox (Ceph) Cluster with MS-01 - Thunderbolt vs SFP+

    Someone here has implemented something similar, and it apparently runs quite well. Maybe it helps you. https://gist.github.com/scyto/76e94832927a89d977ea989da157e9dc
  11. Q

    Ceph 19.2 adding storage for CephFS 'cephfs' failed

    Hey @gurubert, I tried proxmox-kernel-6.11 as well, but still ran into the same issue.
  12. Q

    Ceph 19.2 adding storage for CephFS 'cephfs' failed

    Hi all, I am currently testing Ceph 19.2 on a 3/2 test cluster, which has worked properly so far. Now I wanted to try CephFS, but failed because the pool cannot be mounted. I followed the documentation: https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pveceph_fs The metadata servers are...
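
    For reference, the linked chapter boils down to roughly these steps (a sketch of the standard pveceph workflow, not the exact commands that were run):

        # create a metadata server on the node
        pveceph mds create
        # create the CephFS data/metadata pools and add them as a PVE storage
        pveceph fs create --name cephfs --add-storage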
  13. Q

    Poor WinServer 2022/2019 performance on Proxmox 8.2.4 / Ceph 18.2.2

    The read benchmark indicates that the bandwidth is being throttled. 10Gbit across 6 hosts is insufficient; I believe you need at least 25Gbit. As Wuwzy mentioned, you need at least 100 PGs per OSD to obtain reliable results. The variation will automatically improve, and the benchmark values for...
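
    If you want to reproduce the numbers directly against the pool, a common approach (hypothetical pool name vm-pool; standard rados bench usage, not taken from the thread):

        # 60-second write benchmark, keeping the objects for the read test
        rados bench -p vm-pool 60 write --no-cleanup
        # sequential read benchmark on the objects written above, then clean up
        rados bench -p vm-pool 60 seq
        rados -p vm-pool cleanup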