Search results

  1. update to 7.2, to use VirGL

    Same story. Any ideas?
  2. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    All VMs in my cluster have: cpu: host Xeon(R) CPU E5-26xx (v2)
  3. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    Same story on many Windows VMs in our cluster (Windows Server 2012/2016/2019), NFS storage and SCSI disks.
  4. Error: 4 data errors, use '-v' for a list

    Try starting a zpool scrub and cancelling it 2-3 times, e.g.:
    zpool scrub HDD4TB
    zpool scrub -s HDD4TB
    zpool scrub HDD4TB
    zpool scrub -s HDD4TB
    ...
  5. fstrim with NFS

    Even though fstrim can be run on classic rotational disks, in your case the problems are: - NFS (if I'm not mistaken it does not support discard so far; NFS 4.2 with sparse files/hole punching could be a solution, but I'm not sure) - (mainly) the hardware RAID controller (only a few models really support discard)
  6. fstrim with NFS

    1. Fstrim has nothing to do with VM disk size changes. 2. An SSD in a RAID behind a hardware controller usually does not expose any DISCARD capabilities (see the discard check sketch after this list).
  7. After enabling CEPH pool one-way mirroring, pool usage grows constantly and the pool could overfill shortly

    Unfortunately not. I've tried to switch from journaling to snapshot replication mode but was unable to set it up with the current wiki manual: https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring
  8. zfs 2.1 roadmap

    Thomas, thanks for the clarification. Are they (user-space 2.1.1 and kernel 2.0.6) 100% compatible?
  9. Very Slow Performance after Upgrade to Proxmox 7 and Ceph Pacific

    Enabling rbd_cache should help a lot. For example, in my setup:
    [client]
    keyring = /etc/pve/priv/$cluster.$name.keyring
    rbd_cache_size = 134217728
  10. zfs 2.1 roadmap

    Are you sure that the pve-kernel 5.11 test build ships with ZFS 2.0.6? On my test node I see: (see the version check sketch after this list)
  11. Two-way mirroring CEPH cluster how to?

    Dominik, thanks for the hint. I've already tried to do so but was facing errors and warnings in syslog related to rbd-mirror. From my perspective, the RBD mirroring solution from the PVE Wiki is suitable only for journal mirroring and not for the image-based one. It would be extremely useful if the Proxmox team would extend...
  12. Ceph replication setup via GUI?

    Any plans to integrate Ceph replication (RBD mirroring) functionality into the GUI, with both snapshot and journaling modes? The current wiki tutorial (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) covers only the journaling one and is not fully suitable for the recent Pacific Ceph release.
  13. Ceph rbd mirroring Snapshot-based not working :'(

    Did you manage to get snapshot mirroring working with the PVE wiki howto? I'm facing the same issue and result: 1 starting_replay (see the mirroring sketch after this list)
  14. Two-way mirroring CEPH cluster how to?

    In the PVE Wiki (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) it is written ... Could anyone advise how to extend one-way mirroring to two-way with respect to the original PVE Wiki howto? Is it enough to install rbd-mirror on the master (source)? If so, is it enough to install it on one node in the source CEPH...
  15. After enabling CEPH pool one-way mirroring, pool usage grows constantly and the pool could overfill shortly

    Do I understand you correctly that two-way mirroring requires installing the rbd-mirror daemon on both sides (master and backup cluster)? However, the PVE Wiki clearly states: rbd-mirror installed on the backup cluster ONLY (apt install rbd-mirror). With PVE 6.4 I still get health: WARNING...
  16. [SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

    Yes, I made some progress indeed. Unfortunately, I didn't manage to find out what caused "Device busy" - my assumption is that it is somehow related to the ZFS import (scan?) procedure that occurs on PVE (OS) startup (all the disks were part of another ZFS pool from a different storage without...
  17. [SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

    I'm facing an issue creating a ZFS pool on dm-multipath devices (clean PVE 6.3). I have an HP Gen8 server with a dual-port HBA connected by two SAS cables to an HP D3700, and dual-port SAS SSD disks (SAMSUNG 1649a). I've installed multipath-tools and changed multipath.conf accordingly ... (see the multipath/ZFS sketch after this list)
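
A quick way to sanity-check the discard situation from items 5 and 6 (a sketch; the mount point is a placeholder for whatever filesystem sits on the virtual disk):

    # Non-zero DISC-GRAN/DISC-MAX columns mean the device advertises discard:
    lsblk --discard

    # A verbose trim attempt; it reports an error when the layer underneath
    # (NFS export, hardware RAID volume, ...) does not pass discard through:
    fstrim -v /mnt/data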
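For the user-space vs. kernel-module question in items 8 and 10, the two versions can be compared directly (assuming OpenZFS 0.8+, which provides the zfs version subcommand; the output lines are illustrative):

    # Prints the user-space tools version and the loaded kernel module version:
    zfs version
    # e.g. zfs-2.1.1-pve1
    #      zfs-kmod-2.0.6-pve1

    # The kernel module version alone:
    cat /sys/module/zfs/version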
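The snapshot-based mirroring asked about in items 13-15 roughly follows this shape (a sketch with placeholder pool/site/image names, Octopus/Pacific-era syntax; verify against the Ceph documentation rather than treating it as the PVE-blessed procedure):

    # On both clusters: enable per-image mirroring for the pool
    rbd mirror pool enable mypool image

    # On the source cluster: create a bootstrap token
    rbd mirror pool peer bootstrap create --site-name site-a mypool > token

    # On the backup cluster (where the rbd-mirror daemon runs): import it
    rbd mirror pool peer bootstrap import --site-name site-b mypool token

    # Per image: switch to snapshot mode and schedule mirror snapshots
    rbd mirror image enable mypool/vm-100-disk-0 snapshot
    rbd mirror snapshot schedule add --pool mypool 15m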
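For the dm-multipath setup in item 17, the pool is created on the /dev/mapper devices rather than the underlying /dev/sdX paths (a sketch; pool and mapper names are placeholders):

    # Confirm multipathd has assembled both SAS paths per disk:
    multipath -ll

    # Build the pool on the multipath device nodes so each disk
    # is referenced through a single mapper device:
    zpool create tank mirror /dev/mapper/mpatha /dev/mapper/mpathb
    zpool status tank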
