We're getting this same error sometimes (on a multi-VM PVE 6.2 setup with NUMA) when starting VMs via the GUI, but a manual start usually succeeds if we run the command shown by qm showcmd <id>.
I don't quite follow the explanation above of why this happens, but if you can work around it by...
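For anyone else hitting this, our workaround boils down to something like the following (117 is just a placeholder VMID):

qm showcmd 117
# copy the /usr/bin/kvm ... line it prints and run it manually as root;
# that is what has worked for us when the GUI start failed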
Is this (ovs_extra) now a sufficient and recommended way of enabling RSTP?
https://pve.proxmox.com/wiki/Open_vSwitch states that: 'In order to configure a bridge for RSTP support, you must use an "up" script as the "ovs_options" and "ovs_extras" options do not emit the proper commands'
...and...
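For context, the kind of stanza I'm asking about looks roughly like this (vmbr0/eno1 are placeholders; I'm assuming the openvswitch ifupdown scripts pass ovs_extra through to ovs-vsctl):

auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1
    ovs_extra set Bridge vmbr0 rstp_enable=true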
Yes, I've had it happen without forcing a migration. Not sure when exactly it occurs; it seems a bit arbitrary so far. Maybe only with VMs that have multiple disks..?
Anyway, I'd like to know how to recover manually when this occurs. Any idea what that "already exists" actually refers to? Is...
Does anyone know what exactly the "already exists" error is complaining about, btw?
It seems nonsensical -- of course vm-117-disk-0 already exists; otherwise incremental sync wouldn't be possible at all, right? Or what am I missing here?
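In case it helps with debugging: what I do now is compare the replication snapshots on both nodes, since the incremental send needs a common one (dataset name is from my setup, adjust accordingly):

# run on the source node and on the target node, then compare
zfs list -r -t snapshot -o name,creation ssdtank/vmdata/vm-117-disk-0
# pvesr snapshots are named like __replicate_<job>_<timestamp>__; if the
# target has the zvol but shares none of these snapshots with the source,
# an incremental send is impossible and the job aborts instead of
# overwriting the existing volume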
Often, especially when migrating VMs back and forth (A -> B and then back, B -> A) during maintenance, my ZFS replication gets into a state where it fails with errors like:
"volume 'ssdtank/vmdata/vm-117-disk-0' already exists"
or claims target and source don't have a common ancestor version...
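The way I've recovered so far is roughly this (a rough sketch, double-check the names before destroying anything, and it forces a full resync):

# on the replication target, i.e. the node NOT currently running the VM:
zfs list -r ssdtank/vmdata | grep vm-117      # confirm which copy is the stale replica
zfs destroy -r ssdtank/vmdata/vm-117-disk-0   # drop the stale replica and its snapshots
# then trigger the job again ("Schedule now" on the replication job in the
# GUI, or pvesr schedule-now <jobid> on the CLI) and it does a full send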
Progress report: replicas apparently always use the same block size as the original ZFS volume, so the original volume needs to be (re)created with the desired volblocksize.
Simply changing /etc/pve/storage.cfg to this...
zfspool: local-zfs-hdd-images
        pool hddtank/vmdata...
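For reference, something like this is what I mean; the zfspool plugin's blocksize option only affects newly created zvols, so existing volumes would still need to be moved or recreated (16k here is just an example):

zfspool: local-zfs-hdd-images
        pool hddtank/vmdata
        blocksize 16k
        sparse 1
        content images,rootdir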
Sorry, not sure I understood @guletz . You mean, in my hddtank/vmdata/vm-117-nas-files-0 case, create a new vmdata with a 16/32k volblocksize and migrate the Proxmox VM volumes under it? Or create a new vm-117-nas-files-0 manually and edit the qemu config files?
If the former, which property should I...
Yeah, this does sort of seem to be a Proxmox problem after all: since replica volumes are created automatically by Proxmox, the ZFS storage plugin would apparently need to calculate and set the block size itself. I tried setting volblocksize on the parent volume "vmdata" manually, but it failed:
# zfs...
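As far as I can tell, volblocksize is a zvol-only property that can only be set at creation time, so it can't be set on the parent dataset or changed afterwards. The workaround sketch I'm looking at now is to create a fresh zvol and copy the data over (size and names are placeholders):

# create a new zvol with the desired block size
zfs create -V 100G -o volblocksize=16k hddtank/vmdata/vm-117-disk-1
# copy the data block for block, then point the VM config at the new
# volume and destroy the old one once everything checks out
dd if=/dev/zvol/hddtank/vmdata/vm-117-nas-files-0 \
   of=/dev/zvol/hddtank/vmdata/vm-117-disk-1 bs=1M status=progress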
Thank you @guletz !
For future reference, this happens with all zvols, so neither replication nor Proxmox is at fault here.
An in-depth discussion of the problem: https://github.com/zfsonlinux/zfs/issues/548
The outcome of the GitHub thread is that the ZoL/ZFS devs have no plans to fix this issue...
Volblocksize reports 8k and ashift 12 on all the pools:
NAME                               PROPERTY      VALUE  SOURCE
hddtank/vmdata/vm-117-nas-files-0  volblocksize  8K     default

NAME     PROPERTY  VALUE  SOURCE
hddtank  ashift    12     local
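(Those values come from the following, in case anyone wants to compare on their own pools:)

zfs get volblocksize hddtank/vmdata/vm-117-nas-files-0
zpool get ashift hddtank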
The raidz2 pool has 6 x 6T disks, which should yield about 66.6% storage efficiency and roughly 24T of usable storage (36T raw minus 12T of parity).
This is what I actually see:
NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
hddtank  32.5T  29.5T  3.02T  -         0%...
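Worth keeping in mind when reading these numbers: zpool list reports raw capacity including parity, while zfs list reports usable space after parity, so the two look quite different on raidz. This is how I compare them:

zpool list hddtank                          # raw size/alloc, parity included
zfs list -o name,used,avail,refer hddtank   # usable space after parity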
On my setup, a VM that takes 9.86T of ZFS pool space on the active host grows to 19.6T when replicated to one of the other hosts. On yet another host it takes 9.86T, as expected. Every one of them has ZFS "copies=1" set. The data is already compressed (it's a nested zpool inside a zvol).
The only thing...
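If anyone wants to check the same thing on their setup, comparing logical vs. allocated space per zvol makes the padding overhead visible (dataset name is from my pools):

zfs get used,logicalused,referenced,logicalreferenced,volblocksize \
    hddtank/vmdata/vm-117-nas-files-0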