I have exactly the same question... moreover the archives are encrypted. I would like to avoid having to decrypt, then restore, then move each archive to the new PBS server, then back up / encrypt each of them... Is there a way to do that without writing long shell scripts, or at least to simplify the...
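For what it is worth, a pull-style sync between the two PBS instances may avoid the decrypt/restore round trip, since the chunks are copied in their encrypted form. A minimal sketch, assuming the new server can reach the old one, that the datastore is called "store1" on both sides and that a sync user "sync@pbs" exists (all names here are placeholders):

# on the NEW PBS server: declare the old server as a remote
proxmox-backup-manager remote create old-pbs \
    --host old-pbs.example.com \
    --auth-id sync@pbs \
    --password 'secret' \
    --fingerprint '<certificate fingerprint of the old server>'

# pull everything from the old datastore into the local one
proxmox-backup-manager sync-job create pull-old \
    --store store1 --remote old-pbs --remote-store store1 \
    --schedule daily

Since the encryption is done client-side, the synced snapshots stay encrypted on the new server and the original clients keep using their existing key, so nothing has to be re-encrypted.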
Here is my installation procedure for Proxmox 7.1 with ZFS on OVH/SYS:
Use the PM 6.4 template from OVH with 3 partitions:
- swap set to 1/4 of the RAM (can be sized differently)
- / (root) set to 20GB (can be sized differently) on ext4 (xfs does not work)
- /var/lib/vz on ext4 for the remaining space
Then change:
# ip -c link to see...
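For illustration only, here is a minimal sketch of the ZFS part of the procedure once the template is installed: turning the /var/lib/vz ext4 partition into a ZFS pool and registering it in Proxmox. The device name /dev/sda4 and the pool name zp_data are assumptions, adjust them to your own layout:

# free the ext4 data partition created by the template
umount /var/lib/vz
# (also remove or comment out its line in /etc/fstab)

# create a ZFS pool on that partition (assumed to be /dev/sda4)
zpool create -f -o ashift=12 zp_data /dev/sda4
zfs set compression=lz4 zp_data

# register it as a Proxmox storage for VM disks and containers
pvesm add zfspool zp_data --pool zp_data --content images,rootdir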
Hi,
I had the same problem on PBS and I simply solved it by destroying the incomplete backup.
But the real problem is that PBS did not say anything about this in the log nor in the report mails: I only discovered it by manually checking the PBS backup list. Is there a way to have an alert...
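In case it helps, this is roughly how such a snapshot can be checked and removed from the client side; the repository string and the snapshot path are only examples, and depending on the PBS version the subcommand may be "snapshots" instead of "snapshot list":

# list the backup groups and the snapshots of the affected guest
proxmox-backup-client list --repository backup@pbs@pbs-host:store1
proxmox-backup-client snapshot list vm/101 --repository backup@pbs@pbs-host:store1

# drop the incomplete snapshot
proxmox-backup-client forget vm/101/2022-03-01T02:00:00Z --repository backup@pbs@pbs-host:store1

# on the PBS server itself, a verify run (or a scheduled verify job) reports broken snapshots
proxmox-backup-manager verify store1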
I suggest that PM display a more precise alert in this case (saying, for instance, that replication snapshots remain, together with a short procedure to solve it?)
On a ZFS system: "TASK ERROR: can't rollback, more recent snapshots exist", even when only one snapshot remains in the PM GUI.
Most of the time it is because a replication snapshot named "__replicate_<VMID>..." is still present on the ZFS side, and unfortunately the PM GUI does not show it...
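A quick sketch of the manual cleanup (the dataset and snapshot names below are only examples, check what zfs list actually reports on your pool):

# show every leftover replication snapshot on the pool
zfs list -t snapshot -o name,creation | grep __replicate_

# destroy the stale one that blocks the rollback (example name)
zfs destroy rpool/data/vm-101-disk-0@__replicate_101-0_1650000000__

# the rollback from the PM GUI (or zfs rollback) should then work again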
After more than 2 years "living" with ZFS, I think this system is great for devs & experts and has some "miraculous" features like instant snapshots.
BUT you have to be VERY careful in everything you do... I had, and still have, so many bad surprises (around replication, snapshots and...
Thank you for your answer.
But replication is useful precisely in case of a node failure. AFAIK the "/nodes/{node}/replication/{id}/status" API does not work if {node} is down. It would be very useful to get this information either on any working node of the cluster, or at least on the replication...
Hello,
"pvesh get /nodes/{node}/replication" gives the current configuration for the replication.
But is there a way to get the working status (working or not) and the date of the last valid replication?
Thanks
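For reference, here is how the status can be queried today, assuming a job id of 101-0 and a node called pve1 (both placeholders); the status call only works while the source node is up, which is exactly the limitation discussed above:

# cluster-wide replication job configuration
pvesh get /cluster/replication

# per-job status on the node that currently runs the guest
pvesh get /nodes/pve1/replication/101-0/status --output-format json
# the returned JSON contains fields such as last_sync and fail_count, which give the last valid replication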
zfs get snapdir rpool:
NAME   PROPERTY  VALUE    SOURCE
rpool  snapdir   visible  local
By the way, for those who use this nice patch from Ayufan (differential backup):
I solved the problem by adding another --exclude './.zfs/.*' to the tar xpf command (in Create.pm).
I did not find any...
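For completeness, the other option (instead of patching the exclude) is simply to hide the snapshot directory again, which is the ZFS default; a quick sketch on the same pool:

# check how the snapshot directory is currently exposed
zfs get snapdir rpool

# hide .zfs again so that tar-based backups do not walk into it (this is the ZFS default)
zfs set snapdir=hidden rpool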
Hi,
On PM 4.4-18 I had to restore a CT from backups scheduled with the GUI:
- stop the CT
- choose the good archive file
- launch restore
Then the CT totally disappeared from the GUI and does not work anymore, with the following message:
extracting archive...
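In case someone hits the same thing: restoring the container from the command line at least shows the full error output; a minimal sketch, where the archive path and the storage name are examples only:

# restore CT 101 from a scheduled vzdump archive; --force overwrites an existing CT with that ID
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2017_01_01-00_00_00.tar.gz \
    --storage local --force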
I had the same problem (no fencing!). But PVE let me add the HA capability to my VM without first testing whether fencing exists and whether RGManager was started on the cluster.
May I suggest:
1/ test fencing and RGManager before adding the HA capability (the HA status is even reported as OK while it...
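Something like the checks below, run automatically before HA is enabled, would already help (PVE 3.x / cman era, command names from memory, so please verify on your own cluster):

# overall cluster and rgmanager view
clustat

# is the node a member of a fence domain?
fence_tool ls

# is rgmanager actually running?
service rgmanager status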
I use both, like many people here: OpenVZ wherever possible (performance) and KVM when there is no other choice. My question is: what is the future of OpenVZ vs LXC in a few months (years?)
Hello,
I had the same problem (I had set the "shared" flag on local) when migrating the VZ container "101".
But the situation now is that I still physically have the VZ container on server1 (/var/lib/vz/private/101), while 101 is declared on server2 without any "101" in server2's /var/lib/vz/private/...
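What I am considering as a manual fix, assuming the container is stopped and server1/server2 are the real node names, is to copy the private area to the node where 101 is now declared; a hedged sketch:

# make sure CT 101 is stopped, then push its private area from server1 to server2
rsync -a /var/lib/vz/private/101/ root@server2:/var/lib/vz/private/101/

# after checking that the CT starts fine on server2, clean up on server1
# rm -rf /var/lib/vz/private/101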