Search results

  1. Increasing machine version in VM after major PVE upgrade

    Just wondering, is there any reason to increase the VM machine version to the highest one after a major PVE upgrade, in terms of VM stability/performance? Thanks in advance,
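
    E.g., bumping or pinning the machine version per VM (a minimal sketch; VMID 100 and the exact machine type string are example values):

      qm set 100 --machine pc-i440fx-7.2   # pin a specific machine version
      qm set 100 --delete machine          # unpin, so the newest available version is used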
  2. CEPH-Log DBG messages - why?

    Same story in our Ceph installation, v17.
  3. QEMU/KVM + Ceph Librbd Performance tuning

    Just wondering whether changing the allocator would have an impact on QEMU performance or not.
  4. QEMU/KVM + Ceph Librbd Performance tuning

    With respect to the blog post at ceph.io, https://ceph.io/en/news/blog/2022/qemu-kvm-tuning/ : which memory allocator and librbd are used in Proxmox? Are the optimizations suggested in the article above suitable for PVE+Ceph?
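
    For what it's worth, one way to check which allocator the QEMU binary is linked against (a minimal sketch; the binary path may differ on your system):

      ldd /usr/bin/qemu-system-x86_64 | grep -Ei 'tcmalloc|jemalloc'
      # no output usually means plain glibc malloc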
  5. Proxmox 7.0 on HP Gen8 DMAR error

    Same on my HP Gen8: iLO is crashing, and disabling intel_iommu does not help.
  6. W

    [SOLVED] Upgrade PVE 6.4-13 to PVE 7 Failure

    Paste the content of /etc/apt/sources.list. By the way, do you have anything in /etc/apt/sources.list.d/?
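
    E.g., to dump all of it in one go (a minimal sketch):

      cat /etc/apt/sources.list
      ls -l /etc/apt/sources.list.d/
      cat /etc/apt/sources.list.d/*.list 2>/dev/null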
  7. [SOLVED] Upgrade PVE 6.4-13 to PVE 7 Failure

    Have you checked the Troubleshooting section of the migration tutorial: https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Troubleshooting ?
  8. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    Totally agree. If the PVE maintainers managed to reproduce the issue, they should at least have gathered some more details on it and could provide them to the PVE community. It's not a rare issue: lots of PVE users are facing it, and it has already become a nightmare for dozens of them.
  9. Windows VM Swap

    In one of our installations we use a small enterprise-class SSD (or NVMe) in each server to store VM swap. Each VM (Windows) uses common shared storage (NFS) for its system and data disks and local storage (ext4) for an additional swap disk.
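
    For illustration, attaching such a local swap disk could look like this (a minimal sketch; VMID 101, the storage name local-ssd and the 16 GB size are hypothetical):

      qm set 101 --scsi1 local-ssd:16,discard=on   # allocate a new 16 GB disk on local storage
      # then initialise it as the page file / swap volume inside the guest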
  10. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    Thanks! I will try to set up mirroring one more time, following your notes.
  11. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    Did you follow the PVE wiki or another tutorial? From my perspective, the PVE wiki is not 100% suitable for the current Ceph version when setting up one-way mirroring. I tried several times and always got the errors described in this post...
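
    For reference, the upstream Ceph docs' pool-mode steps reduce to something like this (a minimal sketch; the pool name rbd is an example, and the peering/bootstrap part is omitted):

      rbd mirror pool enable rbd pool
      rbd mirror pool status rbd --verbose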
  12. update to 7.2, to use VirGL

    Did you use Linux as the guest? Am I correct that Windows guests aren't supported at the moment?
  13. update to 7.2, to use VirGL

    You were right - some modules had been blacklisted. Thanks for the hint.
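
    For anyone hitting the same thing, a quick way to spot blacklisted modules (a minimal sketch):

      grep -rn blacklist /etc/modprobe.d/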
  14. update to 7.2, to use VirGL

    Same story. Any ideas?
  15. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    All VMs in my cluster have cpu: host, on Xeon(R) CPU E5-26xx (v2) hosts.
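
    To collect the same details (a minimal sketch; VMID 100 is an example):

      qm config 100 | grep ^cpu
      grep -m1 'model name' /proc/cpuinfo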
  16. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    Same story on many Windows VMs in our cluster (Windows Server 2012/2016/2019), with NFS storage and SCSI disks.
  17. Error : 4 data errors, use '-v' for a list

    Try starting a zpool scrub and cancelling it 2-3 times, e.g.:
    zpool scrub HDD4TB
    zpool scrub -s HDD4TB
    zpool scrub HDD4TB
    zpool scrub -s HDD4TB
    ...
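
    The same cycle as a small loop (a minimal sketch; the sleep length is arbitrary):

      for i in 1 2 3; do
        zpool scrub HDD4TB
        sleep 10
        zpool scrub -s HDD4TB
      done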
  18. fstrim with NFS

    Even so, fstrim can be run on classic rotational disks. In your case the problems are:
    - NFS (if I'm not mistaken, it does not support discard so far; NFS 4.2 with sparse files/hole punching could be a solution, but I'm not sure)
    - (mainly) the hardware RAID controller (only a few models really support discard)
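
    To check whether a block device exposes discard at all (a minimal sketch; /dev/sda is an example): non-zero DISC-GRAN/DISC-MAX in the output means discard is supported.

      lsblk --discard /dev/sda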
  19. fstrim with NFS

    1. fstrim has nothing to do with changing the VM disk size. 2. An SSD in a RAID behind a hardware controller usually does not expose any DISCARD capabilities.