[SOLVED] Problem during Migration with gluster filesystem.

Hi,
Could you please tell me how I can get the patch?
The fix is included in pve-qemu-kvm >= 6.2.0-6. You can check with pveversion -v whether a newer version is installed. Otherwise, check for updates. If there are no updates, check your package repository configuration (for the currently installed version of Proxmox VE).
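For example, a quick check and update could look like this (standard apt commands; the grep filter is just for convenience):

Code:
pveversion -v | grep pve-qemu-kvm   # show the installed pve-qemu-kvm version
apt update && apt full-upgrade      # pull in the fixed package, if available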
 
Hi,
Hi, I would just like to add that the problem is also reproducible with "ZFS over iSCSI" LUNs.
Please post the full migration log, any messages from /var/log/syslog around the time the issue happens, and the output of pveversion -v and qm config <ID> for the affected VM.
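For reference, the requested information could be collected roughly like this (the VM ID 100 is only a placeholder):

Code:
grep -i qemu /var/log/syslog   # messages around the time of the issue
pveversion -v                  # installed package versions
qm config 100                  # configuration of the affected VM (placeholder ID)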
 
Code:
Dec 12 14:14:20 node1 pmxcfs[2381]: [status] notice: received log
Dec 12 14:14:30 node1 QEMU[38582]: kvm: ../block/io.c:2847: bdrv_co_pdiscard: Assertion `num < max_pdiscard' failed.

This is happening on a "zpool trim" command inside a VM, where the VM has a "ZFS over iSCSI" disk attached.
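For reference, the trigger inside the guest is simply a manual TRIM of the pool (the pool name rpool is a placeholder):

Code:
zpool trim rpool        # start a manual TRIM; crashes the VM on affected host versions
zpool status -t rpool   # check TRIM progress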

pve-qemu-kvm: 5.2.0-6

With pve-qemu-kvm: 7.1.0-4 I cannot reproduce the issue, so it seems to be fixed in versions > 6.2.
 
pve-qemu-kvm: 5.2.0-6

With pve-qemu-kvm: 7.1.0-4 I cannot reproduce the issue, so it seems to be fixed in versions > 6.2.
Well, that's good then :)
Proxmox VE 6.x has been end-of-life for about half a year now, so I'm afraid no fix is getting backported there.
 
