Before kernel 6.1, with an LXC on ZFS, Docker's default storage driver would be the terrible vfs, so the only option was changing Docker's config to use fuse-overlayfs. I installed the 6.1 kernel, uninstalled fuse-overlayfs in the LXC, rebooted the PVE node, and then, to my surprise, inside the Docker LXC I see this:
❯...
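(For anyone who wants to verify the same thing: the standard check is docker info, nothing specific to my setup. On a working overlay2 setup the relevant line looks like this:

❯ docker info | grep -i "storage driver"
 Storage Driver: overlay2
)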
I upgraded my 2 PVE nodes to the 6.1.10 kernel and noticed that my Docker v23 LXC on ZFS finally works with the overlay2 storage driver. No more fuse-overlayfs. Really nice.
I wonder if someone could point me to what changed at the kernel level to finally allow support for this. Thanks in advance.
I had Docker v23 on a 5.x kernel, and it never worked; it always reverted to vfs, so I used fuse-overlayfs.
I upgraded the PVE nodes' kernel to 6.1.10 (it's opt-in for now), removed fuse-overlayfs, and after rebooting the nodes, the Docker LXC container on ZFS finally showed proper support for...
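For anyone undoing the same workaround: a sketch of the steps, assuming the override lives in Docker's default config path inside the CT:

❯ cat /etc/docker/daemon.json
{
  "storage-driver": "fuse-overlayfs"
}
❯ rm /etc/docker/daemon.json    # or just delete the storage-driver key
❯ systemctl restart docker      # Docker then auto-selects overlay2 on kernel 6.1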
@Matthias. It would be interesting to know which update addressed the issue. Where can I see a changelog of the updates to the proxmox backup client (and maybe also the server)?
I can't see a dropdown near the title or the first post. Where is it? Maybe in the threads view?
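In the meantime, in case it helps: the packaged changelogs can also be pulled with plain apt on each host (standard Debian tooling, nothing Proxmox-specific):

❯ apt changelog proxmox-backup-client    # on the PVE nodes
❯ apt changelog proxmox-backup-server    # on the PBS host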
@Matthias. Just to let you know that I tried again today, with all PVE nodes updated and PBS too, and finally all backups completed without errors. :)
I didn't change anything at the hardware level, and I don't know which of last month's updates did the trick, but it worked. Here's the current versions of...
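For reference, these are the commands I'd use to dump those version lists (standard Proxmox CLI tools):

❯ pveversion -v                     # on each PVE node
❯ proxmox-backup-manager versions   # on the PBS host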
This Intel NUC was previously a PVE node, but I converted it to PBS. It never had any hardware issues.
I switched cables and changed switches (I have 2 switches). No TX/RX errors on the switch ports. I also tried a high-quality CAT6 cable.
That's why I'm frustrated... I don't know what else I can try. The only...
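For anyone wanting to rule out the NIC side as well, the error counters can also be read on the hosts (eno1 is a placeholder interface name, substitute your own):

❯ ip -s link show eno1                     # RX/TX errors and drops as the kernel sees them
❯ ethtool -S eno1 | grep -iE "err|drop"    # driver-level counters, if ethtool is installed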
Hi Matthias. I regenerated the keys as instructed, removed the datastore and the backup job, recreated the datastore and the job, ran it manually, and got the same issue. :(
Since backups to the Tuxis datastore are working fine, I thought the issue was my PBS installation, so I decided to reinstall PBS from scratch on the...
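For the record, a sketch of the datastore remove/recreate step via the CLI (store1 and the path are placeholders; by default removing a datastore this way only drops the config, not the data on disk):

❯ proxmox-backup-manager datastore remove store1
❯ proxmox-backup-manager datastore create store1 /mnt/datastore/store1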
One other strange thing is that I have two backups: the scheduled one that gives errors, and a remote one to Tuxis. The remote one is perfect, never an error, and it backs up everything just like the failing local one. Does that give you any more info about what the issue could be?
Hi Matthias,
I replaced the log in the OP with one from this morning so you can see the details of the error. This is a backup of node 2 (pve2), which runs VMs 100, 101, 107, and 109. They all failed with that SSL error, except for VM 101, which seems to have worked.
Thanks for the support.
I tried to solve this for 3 days straight, but I surrendered. I read all the threads/posts on the forum and tried all the suggestions, but I didn't solve anything.
Here's some contextual info: PBS runs on a dedicated Intel NUC. I have 2 PVE nodes running on separate physical servers, and all three are on the...
Thanks for all the support you gave me via email. I can safely say that you have a great service, and I can really recommend it to anybody for backing up their Proxmox environment. It's absolutely amazing how well it works. Plus, with the free account, I can safely back up everything...
I had the same issue on other CTs that don't have the fuse-overlayfs driver installed, so I guess it's not related to that.
But on that specific CT (101) the error always comes up, and I have to exclude it from the backup schedule.
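In case the CLI form of that exclusion is useful to anyone, it's just a vzdump option (a sketch; 101 is the problem CT in my case):

❯ vzdump --all 1 --exclude 101 --mode snapshot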
Thank you for the answer. I guess that's the "problem" (it actually isn't one, from what I've read), since I'm using ZFS. I'm still at the beginning of the learning curve and haven't optimized anything yet, as I'm still reading the docs.
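What I did pick up from the docs so far: on PVE the ZFS ARC shows up as used RAM, but it's cache and gets released under memory pressure. If you want to cap it anyway, the knob is zfs_arc_max; a sketch with an 8 GiB cap as an arbitrary example:

❯ echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max                 # takes effect immediately
❯ echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf    # persists across reboots
❯ update-initramfs -u                                                      # apply the option at boot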
Thanks for the answer, Dominik. The problem is that I'm having the RAM issue on the PVE node, not on the PBS one. Would the reload also affect the PVE node? Right now the only way I've found to release that memory is restarting the node, but obviously I'd like to avoid that.
I just issued the...
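If anyone wants to check whether it's really the ARC holding that memory rather than a leak, the current ARC size can be read directly (both are standard ZFS-on-Linux tools):

❯ grep -E "^size" /proc/spl/kstat/zfs/arcstats   # current ARC size in bytes
❯ arc_summary | head -n 30                       # friendlier summary, if installed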
You have the same setup as me; the only difference is that my container is privileged. Are you using suspend or snapshot mode? The strange thing is that I also tried with STOP, but the error is always the same... if a container is stopped, I thought there wouldn't be snapshot issues... that's the only thing I don't...
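For anyone comparing modes: since it's just a vzdump flag, all three are easy to test against the problem CT directly (101 stands in for whatever ID fails for you):

❯ vzdump 101 --mode snapshot
❯ vzdump 101 --mode suspend
❯ vzdump 101 --mode stop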