Search results

  1. Guest has not initialized display

    We see the same "Guest has not initialized display ..." message if we set "freeze CPU at startup". However, freeze=1 does not show up in your qm config, so maybe you have a problem with the CPU type of the VM. Did you try "host" already? (See the qm sketch after this list.)
  2. vm id and vm name

    In our cluster the VMs are renamed often. So "most of us administrators" minus one. Please do not put any part of the VM name in the disk names.
  3. IO performance of VMs on Ceph is extremely bad

    Hey, if Ceph / rados bench is "o.k." and the performance inside the VM is not o.k., then the way the VM is attached needs to be looked at as well: which controller the virtual disk uses, which cache settings, that kind of thing. (See the rados bench sketch after this list.)
  4. OSD reweight

    You should be able to edit your post.
  5. OSD reweight

    I think that would make it worse. In Ceph it is best to use nearly identical capacities on the nodes. If you grow your "big nodes" further, Ceph cannot distribute data "its way". I think the best is to exchange the small SSDs one by one for 2 TB SSDs, beginning in the node with the lowest...
  6. OSD reweight

    Hey, please use code tags around the output. I can't see any big mistake. Your pool seems to be nearly full. In Ceph you can't (by normal means) equalize all OSDs; that's because of PG placement. It's not on byte level or anything similar, but it doesn't hurt. You can try to increase the PG count i... (see the pg_num sketch after this list)
  7. OSD reweight

    Maybe your nodes are weighted unevenly (regarding disk capacity); you have HDDs on your nodes as OSDs too. What does "ceph osd df tree" say?
  8. [SOLVED] OSD down and in on one of 3 hosts

    I don't know exactly where to look (maybe /var/log/ceph), but OSDs being down is not an error in itself. Any messages when you start them? (See the OSD log sketch after this list.)
  9. Proxmox VE Ceph/RBD Usage

    You're right, it's not clustered. I didn't get the need for a shared / clustered setup. I'm sorry.
  10. Proxmox VE Ceph/RBD Usage

    It may also be possible to use ZFS with iSCSI-backed vdevs, but only for the features, not for performance, I think.
  11. Windows server 2022 crash

    I found a hint on a German news portal; maybe it is related. https://www.golem.de/news/secure-boot-virtuelle-maschinen-mit-windows-server-2022-booten-nicht-2302-172076.html
  12. Windows 11 Virtual Machine very slow performance

    When it comes to performance, consider using VirtIO. First of all, choose SCSI for the hard disk and not SATA. The same goes for the network: choose virtio instead of e1000. (See the VirtIO sketch after this list.)
  13. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    I think .mgr is a Ceph-internal pool. Maybe its contents have been erased by the update.
  14. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    Hey cyp, as alyarb mentioned, I think you are searching in the wrong pool? Or rather, using .mgr for VM disks may lead to problems.
  15. [SOLVED] slow migrations

    I think HA and replication (ZFS) are two different things here. As far as I know, you have to use shared storage for this.
  16. Low disk subsystem performance

    Although not generally recommended, we use the HP 840 in RAID mode with JBOD-RAID0, so we can use the cache settings and the battery/capacitor-backed cache. You must know what you are doing in this case and check it against your use case (e.g. data security). Using the HP controller without its features...
  17. Low disk subsystem performance

    We use HP G9 also, but with another HBA (P840), and we use RAID mode. As far as I remember, we did not have these problems with locally attached storage (e.g. ZFS). But we used the battery-backed write cache of the P840. Without that, most performance was "ugly". I do not remember details...
  18. Low disk subsystem performance

    So you are benchmarking Ceph? Then there are thousands of parameters to look at: network settings for Ceph, the Ceph version, Ceph storage / OSD parameters, pool redundancy, number of nodes, switch settings, and many more aspects.
  19. Low disk subsystem performance

    If I see it right, you are testing the Ceph behavior (fsync=1) inside the VM. That will be "nested" in some way, I think. Normal processes in the VM will usually not use fsync. A DBMS may use fsync=1, but then mostly sequentially (e.g. redo logs). (See the fio sketch after this list.)
  20. Ceph HDD pool shows different sizes

    By the way, regarding the monitoring tool it is actually exactly right that the free value is taken there, and that enough is free. In my opinion you will not get them "equal".
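
A qm sketch for item 1, assuming the VM in question has VMID 100 (the ID is only an example):

    # set the CPU type of VM 100 to "host" (VMID is an assumption)
    qm set 100 --cpu host
    # verify the setting
    qm config 100 | grep ^cpu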
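
A rados bench sketch for item 3, showing the kind of baseline the post refers to; the pool name "testpool" is an assumption and should be a pool you are allowed to fill with benchmark objects:

    # 60-second write benchmark, keep the objects for the read test
    rados bench -p testpool 60 write --no-cleanup
    # sequential read benchmark against the objects written above
    rados bench -p testpool 60 seq
    # remove the benchmark objects again
    rados -p testpool cleanup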
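
A pg_num sketch for item 6; the pool name "rbd-pool" and the target of 128 PGs are assumptions, and "ceph osd df tree" (item 7) is worth checking before and after:

    # show usage and weight per OSD and node
    ceph osd df tree
    # raise the PG count of the pool (name and value are assumptions)
    ceph osd pool set rbd-pool pg_num 128
    ceph osd pool set rbd-pool pgp_num 128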
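
An OSD log sketch for item 8, assuming OSD id 3 as an example:

    # systemd status and recent journal of the OSD service
    systemctl status ceph-osd@3
    journalctl -u ceph-osd@3 --since "-1h"
    # the classic log file location mentioned in the post
    less /var/log/ceph/ceph-osd.3.log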
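
A VirtIO sketch for item 12; VMID 100 and the bridge vmbr0 are assumptions, and an existing SATA disk would still have to be detached and re-attached as a SCSI disk:

    # use the VirtIO SCSI controller for VM 100
    qm set 100 --scsihw virtio-scsi-pci
    # replace the NIC with a VirtIO one (a new MAC is generated unless you pass the old one)
    qm set 100 --net0 virtio,bridge=vmbr0
    # check the resulting config
    qm config 100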
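
A fio sketch for item 19, roughly matching the fsync-after-every-write pattern of DBMS redo logs mentioned in the post; the file path and size are assumptions and the test should run inside the VM on the disk being judged:

    # 4k sequential writes, fsync after every write
    fio --name=synctest --filename=/mnt/testfile --size=1G \
        --rw=write --bs=4k --ioengine=psync --fsync=1 \
        --runtime=60 --time_based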
