Search results

  1.

    Proxmox VE ZFS Benchmark with NVMe

    Has anybody checked the results after the ZFS upgrade to 2.x?
  2.

    [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    On the third PVE host (after the PVE upgrade) Windows 10 Pro booted without any issues (no missing devices observed).
  3.

    [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    If I'm not mistaken, even if the old PCI device IDs are restored, the new devices will remain marked as missing. Correct?
  4.

    [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    Same story on 2 different PVE hosts, with Windows Server 2016 Standard and Windows Server 2019 Standard. Windows Server 2016 Standard: root@pve-node2:~# cat /etc/pve/qemu-server/204.conf agent: 1 boot: c bootdisk: scsi0 cores: 12 cpu: host,flags=+pcid;+spec-ctrl;+pdpe1gb;+hv-tlbflush ide2...
  5.

    Packet loss on some guests

    Do you have the ifupdown2 package installed?
  6.

    [PVE 5.4.15] Ceph 12.2 - Reduced Data Availability: 1 pg inactive, 1 pg down

    I have the same error, but only after a recent upgrade to Ceph 15.2.6.
  7.

    VLAN tagging

    We have been seeing strange behavior with VLAN-tagged connections inside VMs when the ifupdown2 package is installed on the host. After removing this package from PVE and rebooting the host, everything started working as before (and as expected).
  8.

    Proxmox VE 6.3 available

    Shouldn't the following wiki tutorial be updated for Ceph 15.x? After upgrading the main and backup PVE/Ceph clusters to 6.3/15.2.6, mirroring stopped working: https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring
  9.

    Configure Proxmox to allow for 2 minutes of shared storage downtime?

    How about the NFS hard mount option, plus pausing all VMs before the NFS server upgrade/reboot? After the NFS server boots back up, resume all VMs. (See the sketch after this list.)
  10.

    dmesg shows many: fuse: Bad value for 'source'

    Same story. root@pve-node3:~# pveversion -v proxmox-ve: 6.2-1 (running kernel: 5.4.44-1-pve) pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754) pve-kernel-5.4: 6.2-3 pve-kernel-helper: 6.2-3 pve-kernel-5.3: 6.1-6 pve-kernel-5.0: 6.0-11 pve-kernel-5.4.44-1-pve: 5.4.44-1 pve-kernel-5.3.18-3-pve...
  11.

    Proxmox VE 6.2 released!

    Do you mind moving this package to the pvetest repo?
  12.

    After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill shortly

    Tomas, could you please check/confirm that with one-way mirroring the following command: rbd mirror pool status rbd --verbose gives normal output when run on the backup node: root@pve-backup:~# rbd mirror pool status rbd health: OK images: 18 total 18 replaying and a warning when run on the main cluster...
  13.

    After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill shortly

    I had to lower Replica/min from 3/2 to 2/1 to get some "extra space". Any idea why the journaling data is not wiped after being pulled by the backup node? If I'm not mistaken, I set up a one-way mirror. How could I check that? Thanks
  14.

    After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill shortly

    After an upgrade to PVE 6 and Ceph 14.2.4 I enabled pool mirroring to an independent node (following the PVE wiki). Since then my pool usage has been growing constantly, even though no VM disk changes are made. Could anybody help me sort out where my space is going? (See the sketch after this list.) Pool usage size is going to...
  15.

    Web UI cannot create CEPH monitor when multiple public nets are defined

    Well, that report is mine :) This post is mainly about asking for advice on a workaround.
  16.

    Web UI cannot create CEPH monitor when multiple public nets are defined

    According to the Ceph docs (https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#id1), several public networks can be defined (see the sketch after this list); this is useful for RBD mirroring when the slave Ceph cluster is located at a separate site and/or monitors need to be created on a different network...
  17.

    [SOLVED] Ghost monitor in CEPH cluster

    Thanks. I managed to remove Ceph and reinstall it.
  18.

    [SOLVED] Ghost monitor in CEPH cluster

    Alwin, thanks. Will give it a try.
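
A minimal sketch of the pause/resume workflow mentioned in result 9, assuming the standard qm CLI on the PVE host and the default qm list column layout; the idea is that with the NFS hard mount option guest I/O simply blocks (instead of erroring) while the NFS server is down, and the VMs are paused around the maintenance window:

    # collect the VMIDs of all currently running VMs (skip the header line)
    running=$(qm list | awk 'NR > 1 && $3 == "running" {print $1}')

    # pause every running VM before touching the NFS server
    for vmid in $running; do
        qm suspend "$vmid"
    done

    # ... upgrade/reboot the NFS server here ...

    # resume the same VMs once the NFS export is reachable again
    for vmid in $running; do
        qm resume "$vmid"
    done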
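For result 14, a few commands that could help narrow down where the space is going, assuming the pool is named rbd; the image name vm-100-disk-0 is a placeholder. With journal-based mirroring each primary image keeps a journal that is trimmed only after the remote rbd-mirror daemon has replayed it, so inspecting the journals is a reasonable first step:

    # overall pool usage vs. what the images themselves account for
    ceph df detail
    rbd du -p rbd

    # per-image journal state (journal objects accumulate until the
    # backup side has replayed them)
    rbd journal info --pool rbd --image vm-100-disk-0
    rbd journal status --pool rbd --image vm-100-disk-0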
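For result 16, the Ceph network configuration reference linked there allows public network to be a comma-delimited list of subnets. A minimal ceph.conf excerpt, with made-up subnets for illustration:

    [global]
        # monitors/OSDs may use addresses from any of these subnets
        public network = 10.10.10.0/24, 192.168.100.0/24
        cluster network = 10.10.20.0/24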