Search results

  1. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    Do I understand you correctly that two-way mirroring requires installing the rbd-mirror daemon on both sides (master and backup cluster)? However, the PVE wiki clearly states: rbd-mirror installed on the backup cluster ONLY (apt install rbd-mirror). With PVE 6.4 I still get health: WARNING...
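
    For context, a minimal sketch of the one-way setup the wiki describes, with hypothetical pool ("rbd") and cluster ("master"/"backup") names; for two-way mirroring the same daemon and peer registration would also be needed on the master side. This is not the wiki's exact procedure, just the general shape of it:

        # On the backup cluster only (one-way mirroring):
        apt install rbd-mirror

        # Enable pool-mode mirroring on BOTH clusters:
        rbd mirror pool enable rbd pool

        # On the backup cluster, register the master cluster as a peer
        # (assumes the master's ceph.conf and keyring were copied over under the name "master"):
        rbd mirror pool peer add rbd client.rbd-mirror-peer@master

        # Check replication health:
        rbd mirror pool status rbd --verbose
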
  2. [SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

    Yep, I did make some progress indeed. Unfortunately, I didn't manage to find out what caused "Device busy" - my assumption is that it is somehow related to the ZFS import (scan?) procedure that occurs at PVE (OS) startup (all the disks were part of another ZFS pool from different storage without...
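
    For context, a hedged sketch of how one might check whether the ZFS import machinery grabbed the disks at boot; the service names are the standard OpenZFS systemd units and the device path is only an example:

        # See whether the import-at-boot units ran and what they picked up:
        systemctl status zfs-import-cache.service zfs-import-scan.service

        # List pools that are visible for import (including foreign/old ones):
        zpool import

        # If a disk still carries labels from a previous pool, they can be cleared
        # (destructive - only on disks that are really meant to be reused):
        zpool labelclear -f /dev/mapper/mpatha
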
  3. [SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

    I'm facing an issue with creating a ZFS pool with dm-mappers (clean PVE 6.3). I have an HP Gen8 server with a dual-port HBA connected with two SAS cables to an HP D3700, and dual-port SAS SSD disks (SAMSUNG 1649a). I've installed multipath-tools and changed multipath.conf accordingly ...
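
    For reference, a rough sketch of creating a ZFS pool on top of dm-multipath maps rather than the raw sd* paths; the map names (mpatha/mpathb) are examples, not the poster's actual devices, and wiping leftover signatures is just one common way to clear a "device busy"-style complaint:

        # Show the multipath maps assembled by multipath-tools:
        multipath -ll

        # Optionally clear old partition/RAID/LVM signatures from a previous life of the disks:
        wipefs -a /dev/mapper/mpatha /dev/mapper/mpathb

        # Build the pool on the multipath devices:
        zpool create -o ashift=12 tank mirror /dev/mapper/mpatha /dev/mapper/mpathb
        zpool status tank
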
  4. [SOLVED] PVE 6.3-4 and ZFS 2.0 ignores zfs_arc_max

    It's not actually correct! If you set zfs_arc_min equal to zfs_arc_max, it does not use zfs_arc_min as zfs_arc_max! It sets zfs_arc_min to the desired value and ignores the value for zfs_arc_max (so it stays at the default - half of RAM)
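
    As a point of reference, this is roughly how the ARC cap is usually set on PVE; the 8 GiB / 1 GiB values are just example numbers, and (per the behaviour described above) zfs_arc_min is kept strictly below zfs_arc_max:

        # /etc/modprobe.d/zfs.conf
        options zfs zfs_arc_max=8589934592
        options zfs zfs_arc_min=1073741824

        # Rebuild the initramfs so the options apply at boot, then verify:
        update-initramfs -u -k all
        cat /sys/module/zfs/parameters/zfs_arc_max

        # The limit can also be changed at runtime:
        echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
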
  5. [SOLVED] PVE 6.3-4 and ZFS 2.0 ignores zfs_arc_max

    I can confirm that setting zfs_arc_min equal to zfs_arc_max breaks the old behavior and resets the upper limit to the default of half of RAM. Very painful - this has been my default setup for years.
  6. [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    It seems you ignored the quoted text and cut my response to your colleague out of its context. I was replying to him, saying that I was not able to check his suggestion and workaround at that moment
  7. [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    Unfortunately, all the affected VMs were from production environments and had to be fixed ASAP. Luckily, I've managed to reproduce this issue on one of our client's test systems, and here is the information I've collected so far: PVE host: root@pve:~# dumpe2fs $(mount | grep 'on \/ ' | awk...
  8. [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    This package is only available in the test and no-subscription repos - so nothing to complain about
  9. Proxmox VE ZFS Benchmark with NVMe

    Has anybody checked the results after the ZFS upgrade to 2.x?
  10. [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    On the third PVE host (after the PVE upgrade), Windows 10 Pro booted without any issues (no missing devices observed)
  11. [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    If I'm not mistaken, even if the old PCI device IDs are restored, the newly detected devices will remain as missing. Correct?
  12. [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    Same story on 2 different PVE hosts with Windows Server 2016 Standard and Windows Server 2019 Standard.

    Windows Server 2016 Standard:

        root@pve-node2:~# cat /etc/pve/qemu-server/204.conf
        agent: 1
        boot: c
        bootdisk: scsi0
        cores: 12
        cpu: host,flags=+pcid;+spec-ctrl;+pdpe1gb;+hv-tlbflush
        ide2...
  13. Packet loss on some guests

    Do you have the ifupdown2 package installed?
  14. [PVE 5.4.15] Ceph 12.2 - Redunced Data Availibility: 1 pg inactive, 1 pg down

    I have the same error, but after a recent upgrade to Ceph 15.2.6
  15. VLAN tagging

    We have been facing strange behavior with VLAN-tagged connections inside VMs with the ifupdown2 package installed on the host. After removing this package from PVE and rebooting the host, everything started working as before (and as expected)
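
    For context, a hypothetical example of the kind of VLAN-aware bridge setup this refers to; the interface names, addresses and VLAN range are made up, and ifreload is the ifupdown2 command for applying changes live:

        # /etc/network/interfaces (excerpt)
        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

        # With ifupdown2 installed, apply the configuration without a reboot:
        ifreload -a
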
  16. Proxmox VE 6.3 available

    Shouldn't the following wiki tutorial be updated with respect to Ceph 15.x? After upgrading the main and backup PVE and Ceph clusters to 6.3/15.2.6, mirroring stopped working: https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring
  17. Configure Proxmox to allow for 2 minutes of shared storage downtime?

    How about the NFS hard mount option, plus pausing all VMs before the NFS server upgrade/reboot, and resuming them all after the NFS server boots up?
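
    A rough sketch of that idea, assuming a hypothetical NFS storage entry named "nfs-store" and that the relevant guests are the currently running VMs; with the "hard" mount option, guest I/O simply blocks until the export returns:

        # /etc/pve/storage.cfg (excerpt; storage name, server and export are examples)
        nfs: nfs-store
            server 192.0.2.50
            export /export/vms
            path /mnt/pve/nfs-store
            content images
            options vers=4.2,hard

        # Suspend the running VMs (simplistic match on the status column),
        # do the NFS server maintenance, then resume the same set:
        vmids=$(qm list | awk '/running/ {print $1}')
        for vmid in $vmids; do qm suspend "$vmid"; done
        # ... upgrade/reboot the NFS server and wait for the export to come back ...
        for vmid in $vmids; do qm resume "$vmid"; done
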
