Recent content by iwik

  1. [TUTORIAL] Understanding QCOW2 Risks with QEMU cache=none in Proxmox

    The PVE dev team knows about this, right? Will there be some change to the default settings when QCOW2 on LVM is used?
  2. live migration: ram_save_setup failed: Input/output error

    Same here :) EDIT: Also, when you are using Veeam, the backup fails with a general error: "Failed to connect the NBD server to the hypervisor host". Let me write it here so others can find it, since I did not find it anywhere on Google; this can be the root cause.
  3. proxmox-backup-client notification on failed/successful backup

    That is because backups are "pushed" into PBS, not scheduled and managed by PBS. PBS works just like a storage space; it does not know whether a job was run or not. (A client-side notification sketch is after this list.)
  4. LDAP Sync with nested Groups

    Inspired by a forum solution for FreeIPA, I created a patch for AD using the Claude AI. The code is attached to this bug report.
  5. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    So, now I know why GFS2 is unsupported on the Proxmox side... It works and the performance was also very good, but it breaks after some time (more than half a year). Then it is a pain: everything is stuck, all PVE nodes need to be restarted, and then fsck has to be run. This is not what you want for VMs...
  6. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    Hi, using GFS2 I hit a kernel bug today:
    [2479853.036266] ------------[ cut here ]------------
    [2479853.036509] kernel BUG at fs/gfs2/inode.h:58!
    [2479853.036721] Oops: invalid opcode: 0000 [#1] PREEMPT SMP PTI
    I need to restart the PVE node. :-/
  7. Monitoring ceph with Zabbix 6.4

    The latest Ceph release (20) removes both: https://ceph.io/en/news/blog/2025/v20-2-0-tentacle-released/#changes. From the changes: "MGR: Users now have the ability to force-disable always-on modules. The restful and zabbix modules (deprecated since 2020) have been officially removed."
  8. What is the default migration network?

    When a 3-node Ceph cluster is set up using full mesh in routed mode, is it possible to use this Ceph network for migration as well? In the GUI I have to select an interface, but there are actually two interfaces in this case, each going to a different node. (See the datacenter.cfg sketch after this list.)
  9. [SOLVED] Question about running Proxmox on a single consumer SSD

    I think it is fine if you do not use ZFS. Personally I am using mdadm RAID 1 (yes, I know...), 2x MX500 2 TB, with LVM on top for the VMs. It has been running fine for 3 years. (A sketch of that layout is after this list.)
  10. Fibre Channel SAN connectivity.

    You can do it by disk passthrough (CLI): https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM). Since FC SAN devices usually have multiple paths, you have to configure multipath (https://pve.proxmox.com/wiki/Multipath#Set_up_multipath-tools) and then pass to the VM this dm... (see the sketch after this list).
  11. Monitoring ceph with Zabbix 6.4

    The official doc https://docs.ceph.com/en/reef/mgr/zabbix/ can be used, but the instructions are not correct. In particular, that is not an optional but a required step. Also, Plugins.Ceph.InsecureSkipVerify=true in zabbix_agent2.conf is required (a config sketch is after this list). The guide is here...
  12. Debian update with open-vswitch stopped networking

    It seems we hit something like this when upgrading from PVE 8 to PVE 9... Anyone else?
  13. TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled

    I've been hitting this on multiple disks, not just the small one (EFI). It seems the root cause was that the running VM had CPU type 'host' while the CPUs in the cluster were not identical. Fixed by setting a different CPU profile for the VM (x86-64-v4); see the sketch after this list. This error makes it very confusing to find the root cause! mirror-scsi0: Completing...
  14. [SOLVED] Snapshots as volume chains problem

    I have manually cleared (lvremove) the invalid snapshots and updated to the latest packages from pve-test. Then I had to set "10.0+pve1" as the machine version. After this, creating snapshots is working again. If they break it again in current, I will open a Bugzilla ticket. (A cleanup sketch is after this list.)
  15. [SOLVED] Snapshots as volume chains problem

    Nobody? Should I open a ticket in Bugzilla?
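
Sketch for item 3 (proxmox-backup-client notification): since PBS only stores what the client pushes and does not track client-side jobs, the notification has to come from the client. A minimal wrapper, assuming the repository string, log path and mail address are placeholders and that authentication (e.g. via PBS_PASSWORD) is already handled:

    #!/bin/sh
    # Push the backup from the client and mail the result;
    # PBS itself never knows whether this job ran.
    REPO="backup@pbs@pbs.example.com:datastore1"   # placeholder repository
    LOG=/var/log/pbs-client-backup.log             # placeholder log path
    if proxmox-backup-client backup root.pxar:/ --repository "$REPO" > "$LOG" 2>&1; then
        mail -s "PBS backup OK on $(hostname)" admin@example.com < "$LOG"
    else
        mail -s "PBS backup FAILED on $(hostname)" admin@example.com < "$LOG"
    fi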
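
Sketch for item 8 (migration network): the cluster-wide migration network is not picked per interface but as a CIDR in /etc/pve/datacenter.cfg; each node then uses whichever of its local addresses falls into that network. Assuming the full-mesh Ceph addresses live in 10.10.10.0/24 (placeholder):

    # /etc/pve/datacenter.cfg
    # Each node uses its address inside this network for migration traffic.
    migration: secure,network=10.10.10.0/24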
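
Sketch for item 9 (single consumer SSD thread): the layout described in the post, mdadm RAID 1 with an LVM volume group for VM disks on top. The disk names /dev/sda and /dev/sdb, the VG name vg_vm and the storage name vm-lvm are placeholders:

    # Mirror the two SSDs, then put an LVM volume group for VM disks on top.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    pvcreate /dev/md0
    vgcreate vg_vm /dev/md0
    # Register the VG as LVM storage in Proxmox VE.
    pvesm add lvm vm-lvm --vgname vg_vm --content images,rootdir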
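
Sketch for item 10 (FC SAN connectivity): once multipath-tools is configured, the disk is passed to the VM by its stable device-mapper path rather than a single /dev/sdX. VM ID 101 and the mpatha alias are placeholders:

    # Check which dm device multipath created for the FC LUN.
    multipath -ll
    # Pass the stable /dev/mapper path to the VM as an extra SCSI disk.
    qm set 101 --scsi1 /dev/mapper/mpatha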
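
Sketch for item 11 (Ceph via zabbix_agent2): the agent-side part mentioned in the post. Only the InsecureSkipVerify line comes from the post itself; the session name, URI, user and API key are assumptions showing where the credentials for the Ceph RESTful endpoint would go:

    # /etc/zabbix/zabbix_agent2.conf
    Plugins.Ceph.InsecureSkipVerify=true
    # Placeholder session pointing at the Ceph RESTful API endpoint:
    Plugins.Ceph.Sessions.ceph1.Uri=https://127.0.0.1:8003
    Plugins.Ceph.Sessions.ceph1.User=zabbix-monitor
    Plugins.Ceph.Sessions.ceph1.ApiKey=REPLACE_WITH_API_KEY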
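
Sketch for item 13 (cancelled mirror job): the fix described there, moving the VM off CPU type 'host' to a baseline model every node can provide, so the mirror/migration no longer aborts. VM ID 102 is a placeholder:

    # Replace CPU type 'host' with a common baseline supported by all cluster nodes.
    qm set 102 --cpu x86-64-v4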
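
Sketch for item 14 (volume-chain snapshots): the manual cleanup described there. The VG and LV names are placeholders, and while the "10.0+pve1" machine version is quoted from the post, the pc-i440fx prefix is an assumption, so check the VM's existing machine type first:

    # Remove a broken snapshot volume left behind on the LVM storage (names are placeholders).
    lvremove /dev/vg_vm/snap_vm-101-disk-0_broken
    # Pin the QEMU machine version for the VM.
    qm set 101 --machine pc-i440fx-10.0+pve1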