Hello,
Everything had been working fine for almost two years, but recently (since the last update?) I have a problem with my Gluster storage.
The other day I tried to update a VM (dist-upgrade inside the VM), and while it was writing files the VM shut down.
The same thing happened after a restoration from a PBS backup.
I decided to move the VM disk to local storage (not Gluster), ran the upgrade, and everything was fine.
But when I try to move it back to the Gluster storage, no way:
create full clone of drive scsi1 (P1_SSDinterne:170/vm-170-disk-1.qcow2)
Formatting 'gluster://10.10.5.92/SSDinterne/images/170/vm-170-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=5368709120 lazy_refcounts=off refcount_bits=16
[2022-04-30 18:09:33.815373 +0000] I [io-stats.c:3706:ios_sample_buf_size_configure] 0-SSDinterne: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0
[2022-04-30 18:09:33.943578 +0000] E [MSGID: 108006] [afr-common.c:6140:__afr_handle_child_down_event] 0-SSDinterne-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2022-04-30 18:09:43.821021 +0000] I [io-stats.c:4038:fini] 0-SSDinterne: io-stats translator unloaded
[2022-04-30 18:09:44.835240 +0000] I [io-stats.c:3706:ios_sample_buf_size_configure] 0-SSDinterne: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0
[2022-04-30 18:09:45.505523 +0000] E [MSGID: 108006] [afr-common.c:6140:__afr_handle_child_down_event] 0-SSDinterne-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2022-04-30 18:09:54.841835 +0000] I [io-stats.c:4038:fini] 0-SSDinterne: io-stats translator unloaded
transferred 0.0 B of 5.0 GiB (0.00%)
[2022-04-30 18:09:55.997707 +0000] I [io-stats.c:3706:ios_sample_buf_size_configure] 0-SSDinterne: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0
qemu-img: ../block/io.c:3118: bdrv_co_pdiscard: Assertion `max_pdiscard >= bs->bl.request_alignment' failed.
TASK ERROR: storage migration failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f qcow2 -O qcow2 /Data/SSDinterne/P1_SSDinterne/images/170/vm-170-disk-1.qcow2 zeroinit:gluster://10.10.5.92/SSDinterne/images/170/vm-170-disk-0.qcow2' failed: got signal 6
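For reference, the assertion that aborts qemu-img checks that the block layer's computed maximum discard size is at least the backend's request alignment; when the limits a driver reports round down below the alignment, the check trips and the process gets SIGABRT (the "got signal 6" above). A minimal sketch of that invariant, in plain Python rather than QEMU's actual code, with made-up limit values purely for illustration:

```python
# Illustration of the invariant behind the abort:
#   qemu-img asserts max_pdiscard >= bs->bl.request_alignment
# before issuing a discard. If the advertised maximum discard size,
# aligned down to request_alignment, is smaller than the alignment
# itself, the assertion fails and the process aborts (signal 6).

def check_discard_limits(max_pdiscard: int, request_alignment: int) -> bool:
    """Mimic the sanity check: align the advertised limit down,
    then require it to still cover at least one aligned request."""
    aligned_max = (max_pdiscard // request_alignment) * request_alignment
    return aligned_max >= request_alignment

# Hypothetical numbers: a 4 KiB alignment with a 64 MiB limit passes,
# while a limit smaller than the alignment (e.g. 512 bytes) fails.
print(check_discard_limits(64 * 1024 * 1024, 4096))  # True
print(check_discard_limits(512, 4096))               # False
```

This is only meant to show why the crash is a limits/alignment mismatch between QEMU's block layer and the gluster backend, not a problem with the qcow2 image itself.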
After some investigation, I found the same error in the log as when I tried to upgrade the VM directly on the GlusterFS storage (bl.request_alignment' failed).
I suspect a problem with the latest update of the QEMU package.
Any idea?
[TABLE]
[TR]
[TD]Kernel Version
Linux 5.13.19-6-pve #1 SMP PVE 5.13.19-15 (Tue, 29 Mar 2022 15:59:50 +0200)
[/TD]
[/TR]
[TR]
[TD]PVE Manager Version
pve-manager/7.1-12/b3c09de3[/TD]
[/TR]
[/TABLE]
Have a good day.
Dark26