My response was still in draft; I must have forgotten to press post.
I added the driver via the Rancher web GUI under Global > Tools > Drivers > Node Drivers, using the Add Node Driver button.
There you can paste the link to the binary.
Just got another error and catted every file in /var/log/pve/replicate/*.
It's not a conclusive error, but I still wanted to include it.
The affected replication is 107-0.
2021-04-21 17:00:14 101-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_101-0_1619016309__' on...
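For anyone checking the same thing, this is how I pull the job states and logs; I'm assuming the per-job files under /var/log/pve/replicate/ are named after the job ID:

$ pvesr status
# lists the replication jobs on this node with their last sync result
$ cat /var/log/pve/replicate/107-0
# the log of the affected job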
I wanted to update this thread again.
The cluster just got updated to the newest pve enterprise repo version (6.3-6 as of writing), including ZFS 2.0.
Even though this did not fix the issue, it did give me some more information.
Instead of the general "no tunnel IP received" error it now spits...
In my case the repair didn't help, and the metadata didn't seem to be corrupted at all.
Many hours later I found this:
Changing the transaction_id for pve/data fixed the issue for me.
WARNING: This is a pretty...
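Since that fix was only described briefly, here is roughly what I did; treat it as a sketch. The VG name pve and thin pool data are from my setup, and keep a copy of the metadata before editing anything:

$ vgcfgbackup pve
# dumps the current VG metadata to /etc/lvm/backup/pve
$ vim /etc/lvm/backup/pve
# change the transaction_id of the data thin pool to the value
# the activation error reports
$ vgcfgrestore --force pve
# writes the edited metadata back; --force is needed for VGs containing thin pools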
Unfortunately I haven't been able to resolve the issue so far.
As these are mostly PBX VMs, they don't generate a lot of load (especially not on the storage), and I don't run backups on them either.
The systems are connected using a dedicated 10G backend link for replication, so I don't suspect the...
Just wanted to note I'm having the same issue on pve 6.3-1:
proxmox-ve: 6.3-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
Recently I upgraded a 3-node cluster running pve-5.3 to pve-6.3-3.
These nodes all have ZFS replication running to the other two nodes.
Since the upgrade I've been getting random errors about the Replication failing.
I'm receiving an e-mail with "Replication Job: 127-2 failed - no tunnel IP...
You're probably using x-vga=on in your config; this redirects the video output to your M6000, so plugging a monitor into your card should give you the console of your VM.
To get a console in Proxmox, however, you will have to disable x-vga=on; this should give you two screens (one from console...
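For reference, the passthrough line in the VM config looks something like this; the PCI address is only an example, and the exact option syntax can differ between PVE versions:

# in /etc/pve/qemu-server/<vmid>.conf
hostpci0: 01:00,pcie=1,x-vga=on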
You can limit the ARC live without a reboot (non-persistent) or make the change persistent:
Set ARC min and max persistently across reboots:
$ vim /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=34359738368   # 32 GiB
options zfs zfs_arc_min=17179869184   # 16 GiB
Set ARC min and max at runtime (non-persistent):
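The runtime values go through the ZFS module parameters in sysfs; a minimal sketch using the same sizes as above (this assumes the zfs module is loaded, and needs root):

$ echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max   # 32 GiB
$ echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_min   # 16 GiB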
Thank you! I did dive into the man pages but didn't find the .pxarexclude solution.
This issue is fixed for me as I can simply exclude this folder.
I created an enhancement request in case someone ever runs into this issue with a more valuable folder.
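For anyone finding this later, the exclude file is just a plain text file inside the backed-up filesystem; a minimal sketch, where the session directory path is only an example (not my actual folder):

# .pxarexclude at the root of the container's filesystem
/var/lib/php/sessions/*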
The folder appeared to have over 2 million files; since these were old session files, they could simply be deleted.
Is there any way to exclude a single folder or increase the file limit in case something like this happens again?
I've been testing pbs for a while now, and I'm backing up production VMs and LXC containers to really test pbs in our environment.
One container has been giving errors regarding a file entries limit. This seems to be specific to pbs.
I just upgraded to pbs 0.9 (client and server) but had...
Thank you! That's what I wanted to know, glad to hear this is already a known issue.
I already restored the backup using our other backup strategy; I try to use pbs as much as possible for testing, but it's not our only solution.
I was able to browse through the filesystem using the GUI, the...
I've been using pbs as a test for a while now.
I tried to restore an lxc container using pbs, but it keeps failing with this error:
Error: error extracting archive - error at entry "cpdavd_error_log": failed to restore mtime attribute on "cpdavd_error_log": Operation not permitted (os error...
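For context, the restore was started roughly like this; the VMID, snapshot, and storage names below are placeholders, not my actual values:

$ pct restore <vmid> <pbs-storage>:backup/ct/<vmid>/<timestamp> --storage <target-storage>
# pulls the container archive from the pbs storage and restores it to the target storage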
I've been using this cloud provider for a while now:
Tested and working with the RancherOS ISO and a prepared Debian 10 image + cloud-init.
Because of the massive amount of docker images being pulled by customers on their dev/test...