This is true. As soon as you have attached a disk to a VM, the content type of the storage changes from 'none' to 'images', at which point the storage will show as available for 'images'.
It is this feature: https://www.illumos.org/issues/1701, implemented in this commit: https://github.com/openzfs/zfs/commit/1b939560be5c51deecf875af9dada9d094633bf7
But as the documentation states, this feature is only relevant for SSDs, so if the pool does not contain SSDs this...
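For reference, the feature in question is TRIM support for ZFS. On an SSD-backed pool it can be enabled per pool; a minimal sketch, assuming a pool named `tank` (the pool name is hypothetical):

```shell
# Check whether automatic TRIM is enabled (it defaults to off)
zpool get autotrim tank

# Enable automatic TRIM so freed blocks are discarded as they are released
zpool set autotrim=on tank

# Alternatively, run a one-off manual TRIM and watch its progress
zpool trim tank
zpool status -t tank
```

On a pool without SSDs these commands are accepted but have no effect on devices that do not support TRIM/discard.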
Another thing to take into consideration when choosing between (striped) mirrors and (striped) raidz[x] is the overhead of calculating parity for raidz[x], especially when resilvering the pool. Calculating parity tends to require a CPU with higher clock speeds since parity...
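The capacity side of that trade-off can be sketched with simple arithmetic (the numbers below are illustrative and ignore metadata and slop space; all disks are assumed equal-sized):

```python
def usable_disks(total_disks: int, layout: str) -> float:
    """Rough usable-disk count for a pool layout.

    'mirror'          : striped 2-way mirrors -> half the disks hold data
    'raidz1/2/3'      : a single vdev, with 1/2/3 parity disks subtracted
    """
    if layout == "mirror":
        return total_disks / 2
    parity = {"raidz1": 1, "raidz2": 2, "raidz3": 3}[layout]
    return total_disks - parity

# Six 4 TB disks: striped mirrors give ~12 TB usable, raidz2 gives ~16 TB,
# but raidz2 pays for the extra space with parity maths on every write
# and on every resilver.
print(usable_disks(6, "mirror") * 4)   # 12.0
print(usable_disks(6, "raidz2") * 4)   # 16
```

Mirrors also resilver by plain copying from the surviving side, while raidz must reconstruct every block from parity, which is where the CPU cost shows up.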
Use stackable switches and create a LACP bond with connections to more than one switch (obviously the storage box should likewise have connections to more than one switch) and you should be resilient to a single switch failure.
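On the Proxmox/Debian side such a bond is configured in /etc/network/interfaces; a minimal sketch, assuming NICs eno1 and eno2 cabled to two different switches in the stack (interface names and addresses are hypothetical):

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```

Note that 802.3ad across two physical switches only works when the switches present themselves as one logical switch (stacking/MLAG); otherwise fall back to active-backup mode.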
Yes, it uses the same migration features as any other supported storage in Proxmox.
Replication is handled on the storage server, not by Proxmox.
What do you mean by 'shared storage controller fails over to another controller'?
I cannot reproduce it now. It was most likely a combination of a VM started with different versions of the kernel and various packages which, for some reason, was not able to start again ;-(
No, same error:
lvchange -ay qnap/vm-153-disk-0
device-mapper: create ioctl on qnap-vm--153--disk--0 LVM-RCevIXI8i5huDYro1QZ0fdlcZWWqYxm7DIe1JfkZcJK5iskK4TCa7rX8g5Kvwi3c failed: Device or resource busy
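When lvchange fails with 'Device or resource busy', something usually still holds the device-mapper node open. A few commands that can help find the holder (a sketch; the VG/LV names follow the output above):

```shell
# Does the device-mapper node still exist, and is it open?
dmsetup info qnap-vm--153--disk--0

# Any kernel-level holders (stale dm device, multipath map, kpartx)?
ls /sys/block/"$(basename "$(readlink -f /dev/mapper/qnap-vm--153--disk--0)")"/holders

# Any processes keeping the volume open?
fuser -v /dev/mapper/qnap-vm--153--disk--0
```

In a shared-LVM setup the holder can also be the other node still having the LV active, which is exactly what the `o` in the lvs attributes on esx2 below suggests.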
This specific setup has been working unchanged since Proxmox 1.9. Did you test with an LVM marked 'Shared'?
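For comparison, a shared LVM storage definition in /etc/pve/storage.cfg looks like this (the storage ID here matches the VG name `qnap` used above; it is only illustrative):

```
lvm: qnap
        vgname qnap
        content images
        shared 1
```

Without `shared 1` Proxmox treats the VG as local to each node and migration will not hand the LV over correctly.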
esx1:~# lvs |grep qnap
vm-153-disk-0 qnap -wi------- 8.00g
esx2:~# lvs |grep qnap
vm-153-disk-0 qnap -wi-ao---- 8.00g
esx1:~# lvdisplay qnap
--- Logical volume ---
LV Path...
Hi all,
The previous upgrade seems to have broken online and offline migration with shared lvm storage on iscsi. This is the upgrade:
Start-Date: 2020-02-11 01:24:26
Commandline: apt upgrade
Requested-By: mir (1000)
Install: pve-kernel-5.3.18-1-pve:amd64 (5.3.18-1, automatic)
Upgrade...
Hi all,
Today I took the plunge and upgraded my cluster from 5.4 to 6.1 using apt update, apt dist-upgrade. The upgrade in itself was painless; however, there was one small annoyance when upgrading the corosync qdevice package, as explained here...