It occurred to me today that you need different VM/CT numbering schemes
to back up two separate Proxmox clusters to the same PBS box.
On PBS, backups are grouped by VM/CT number only.
This could cause a lot of confusion:
you can end up with backups of two entirely different VMs under the same ID...
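If both sides are on recent versions, PBS namespaces sidestep the ID clash without renumbering guests. A sketch, assuming PBS 2.2+ and hypothetical repository/storage names:

```
# On PBS (or any host with proxmox-backup-client), create one
# namespace per cluster -- names here are examples:
proxmox-backup-client namespace create clusterA \
    --repository root@pam@pbs.example:store1

# On each PVE cluster, point its PBS storage entry at its own
# namespace, so vm/100 from cluster A and vm/100 from cluster B
# land in separate backup groups:
pvesm set pbs-store1 --namespace clusterA
```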
It seems to happen to all CTs that have a bind mount pointing at the CephFS mount on the host.
After removing the bind mount, the issue doesn't appear.
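One workaround worth testing before removing the bind mount entirely: bind mounts are not included in backups anyway, so the snapshot step is the only part that trips over them, and a non-snapshot backup mode may succeed. A sketch, with example paths and VMID:

```
# Example CT config line (/etc/pve/lxc/100.conf) binding a CephFS
# path into the container:
# mp0: /mnt/pve/cephfs/share,mp=/mnt/share

# Try a suspend-mode backup of that CT instead of snapshot mode:
vzdump 100 --mode suspend --storage cephfs
```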
I had it working until last week; why is this happening now?
I have this happening too. CT snapshot backups fail and leave behind a "preparing" snapshot
that can only be deleted from the CT conf file...
My error is:
Use of uninitialized value in numeric eq (==) at /usr/share/perl5/PVE/VZDump.pm line 478.
INFO: starting new backup job: vzdump 100 --storage cephfs...
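Before hand-editing the conf file, it may be worth trying to remove the stuck snapshot through the normal tooling; `pct delsnapshot` has a `--force` flag that drops the snapshot from the config even if deleting the underlying disk snapshot fails. A sketch, assuming CT 100 and the default `vzdump` snapshot name:

```
# Force-remove the leftover vzdump snapshot from CT 100's config,
# even if the storage-level snapshot deletion fails:
pct delsnapshot 100 vzdump --force
```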
Without Ceph, my current environment as described works without a hitch (Proxmox-wise);
I can fully manage the VMs and CTs on each site without a problem.
So the question remains: can a Proxmox cluster manage multiple Ceph clusters?
Or should I isolate the Proxmox clusters per site?
I have managed to successfully deploy a test cluster over three sites
connected with 100 Mbit/s fiber.
All three nodes have 3 OSDs each, and there are the default cephfs-data and cephfs-metadata pools.
The performance of this one pool stretched across the 100 Mbit/s links is low but acceptable for...
After another hard disk failure, I shut down, replaced the faulty disk with a new one, and after boot:
root@pve:~# sgdisk -Z /dev/sdg
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
root@pve:~# zpool replace -f tank 4952218975371802621...
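For the replacement itself, referencing the new disk by a stable `/dev/disk/by-id/` path (rather than `/dev/sdg`, which can change between boots) tends to be the safer habit. A sketch, assuming whole-disk vdevs; the GUID is the one from the transcript above and the by-id path is an example:

```
# Replace the failed vdev (identified by its GUID) with the new
# disk, using a stable by-id path:
zpool replace -f tank 4952218975371802621 \
    /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL

# Then watch the resilver progress:
zpool status tank
```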