To follow up on this, I've settled on the following:
[If running PBS in a VM] Set USB passthrough to use a specific USB port (use the same port for all backup drives)
Create separate data stores (one for each external drive)
Update the following in /etc/default/zfs if you have physical control...
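For the datastore-per-drive step, a minimal sketch using the PBS CLI (the datastore names and mount paths `/mnt/usb-a` and `/mnt/usb-b` are just examples, adjust to your own mounts):

```shell
# Create one PBS datastore per external drive (names/paths are examples)
proxmox-backup-manager datastore create usb-a /mnt/usb-a
proxmox-backup-manager datastore create usb-b /mnt/usb-b

# Confirm both datastores are configured
proxmox-backup-manager datastore list
```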
Just to follow up on this, pve-container 4.1-5 is now available for upgrade via the Proxmox web UI.
I can confirm that upgrading to this version fixed the `no storage ID specified` container migration issue.
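In case it helps anyone else, the same upgrade can be done from the node's shell with the standard apt workflow (package name taken from the post above):

```shell
# Refresh package lists and pull in the fixed pve-container package
apt update
apt install pve-container

# Confirm the installed version is 4.1-5 or later
pveversion -v | grep pve-container
```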
Would someone be willing to install the latest Ubuntu in a VM (no special options) and try running the following script?
wget https://raw.githubusercontent.com/nextcloud/vm/master/nextcloud_install_production.sh
sudo bash nextcloud_install_production.sh
The script is tested to work in KVM...
It's probably a hack, but this usually works for me:
# Make sure VM is disabled:
ha-manager set vm:<VMID> --state disabled
# Open GDISK to modify disk partition map
gdisk /dev/zvol/rpool/vm-<VMID>-disk-<DISK#>
# Once GDISK opens, just use the W command to rewrite the partition map
#...
I am also seeing this after a power failure, and sometimes after a reboot that follows a backup.
Thanks for your input! This makes sense and might be an option, since I do have hot-swappable drives in my host machines. If I reconfigure, I could make it work like you said, but for now USB is the most accessible in case of a failure, since I don't have another server to accept the hot...
I searched around on the forums but haven't seen too much relevant information about rotating backup drives. My current situation is this:
- I would like to keep a backup offsite but have almost no upload bandwidth to run a remote PBS host
- I have two 4TB portable USB 3 drives that I rotate off...
Yes, that's a problem that resulted from me forgetting to set up verification tasks when I set up the backups (now I'm trying to catch up by getting all backups verified).
From what I can tell, the garbage collection task hung about 3.5 hours before the other running verification tasks even...
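For anyone else catching up on unverified backups, verification can also be triggered manually from the PBS CLI (the datastore name `backup1` below is just a placeholder for your own datastore):

```shell
# Verify all snapshots in a datastore (datastore name is an example)
proxmox-backup-manager verify backup1
```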
@t.lamprecht & @vikozo
I'm also seeing this on my proxmox backup server:
Some background information:
- PBS is running in a VM on one of the nodes in my 3 node cluster (I know this isn't ideal)
- Node running PBS VM is a Dell T410 Tower server
- Storage at all points is ZFS (On VM, Host node...
@Alwin Thanks for the link. I had upgraded to 6.2 and was having issues with live migrations as well as offline migrations. The migration would start and then hang after it said it had found the local disk to be migrated. I let it sit for hours/days with no progress being made on even the...
I have a 3-node cluster with two nodes running on ZFS and one on EXT4. I'd like to change the third node to ZFS, but that node is currently running my VMs/containers, and I'm not sure of the best path to take. I've tried migrating my VMs/containers to the other nodes, but I only have local storage on...
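One option that has worked in similar local-storage setups is migrating while copying the local disks along (a sketch only; the VMID `100` and target node `pve2` are examples, and you should confirm matching storage exists on the target first):

```shell
# Migrate VM 100 to node pve2, copying its local disks to the target node
qm migrate 100 pve2 --online --with-local-disks
```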
@x307 Thank you so much! I was banging my head against this issue for a few weeks. This should definitely be better documented, but for now I've saved this thread in the Wayback Machine for future reference!