I will travel to and set up a PVE host in another country. In the meantime, I have set up its storage (ZFS pool) on my PVE host in the current country. So there are two ZFS "storages" defined in storage.cfg on my PVE host, one [current-host-data] and one [future-host-data].
Both contain a mix of VM disks...
I migrated my containers from an old host to a new one, and they wouldn't even restore, failing with uid-mapping errors:
lxc 20240303081721.668 ERROR conf - ../src/lxc/conf.c:lxc_map_ids:3701 - newuidmap failed to write mapping "newuidmap: uid range [1000-1001) -> [1000-1001) not allowed"...
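This error usually means the host's /etc/subuid and /etc/subgid do not delegate the uid range the container's idmap requests. A minimal sketch of the usual fix, assuming root starts the container and host uid/gid 1000 should be passed through:

```
# /etc/subuid and /etc/subgid on the PVE host (add the same lines to both files)
root:1000:1
root:100000:65536
```

With those entries in place, an `lxc.idmap` entry such as `u 1000 1000 1` in the container config is permitted, and newuidmap can write the mapping.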
I temporarily set up a PVE host on my home network and afterwards physically moved it to a different target network. Both networks use /24 subnets, but with different address ranges.
In the new network, I changed the static IP in /etc/hosts and the static IP and gateway in...
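For reference, on a default PVE install the static address and gateway live on the vmbr0 stanza in /etc/network/interfaces; the values below are placeholders for the new network, and the physical port name (eno1) is an assumption:

```
# /etc/network/interfaces (example values for the new network)
auto vmbr0
iface vmbr0 inet static
    address 192.168.20.10/24
    gateway 192.168.20.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

The hostname's entry in /etc/hosts must point at the same address, and `systemctl restart networking` (or a reboot) applies the change.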
I set up a Windows VM with an EFI disk and a TPM state disk. After setup, the disk order was:
1. Boot HDD (vm-XXX-disk-0)
2. EFI Disk (vm-XXX-disk-1)
3. TPM Disk (vm-XXX-disk-2)
During cloning, the EFI disk is always cloned first according to the task viewer, so the order becomes:
1. EFI Disk (vm-XXX-disk-0)
2. Boot...
I wanted a redundant ZFS rpool so that the system still boots if either drive fails. However, my boot volumes are not identically sized, and apparently the installer does not allow this. There is an older thread where a user describes installing Proxmox's ZFS rpool on differently sized...
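One common workaround, assuming the install went onto the smaller disk so the second (larger) disk can hold the same partition layout: install single-disk, then attach the other disk as a mirror by hand. The device names and partition numbers below are examples based on the default PVE layout (partition 2 = ESP, partition 3 = ZFS):

```shell
# Copy the partition table from the installed disk (sda) to the new disk (sdb),
# then give sdb fresh GUIDs; this works when sdb is at least as large as sda.
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Attach sdb's ZFS partition to the existing rpool vdev, turning it into a mirror.
zpool attach rpool /dev/sda3 /dev/sdb3

# Make the second disk bootable as well.
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
```

`zpool status rpool` should then show the mirror resilvering; the extra space on the larger disk simply goes unused.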