From what I researched at the time, the issue is specific to the e1000e driver, and it only came up because my PBS backups were failing with strange communication errors. If you don't have any issues with those HP mini-PCs, don't bother.
On the bridge:
> cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.1.10.10/23
        gateway 10.1.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        post-up /sbin/ethtool -K $IFACE tso off gso off...
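For what it's worth, after a change like this you can confirm the offload features really ended up disabled; a minimal check, assuming the bridge port is still eno1:

# show the current offload settings on the physical port (eno1 is an assumption)
ethtool -k eno1 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'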
Just implemented this, and it works as described, thanks for confirming it's actually the same thing.
Out of curiosity: if it's technically the same, i.e. just a different way of doing a bind mount, why does this one allow snapshots while the other doesn't? I mean, it's just some code in...
Works for me, but exactly as described by @leesteken in post #8: without the leading slash on the second path.
Initially I thought it was a mistake, but with the leading slash it doesn't work. It works perfectly without it, and I can finally enable snapshots.
Here's what I added to ALL...
Ok, cleaned everything on this node... I did this for all labels (hourly, daily, frequent, etc.):
❯ zfs-auto-snapshot --destroy-only --verbose --label=weekly --keep=1 -r pve-data
Then removed zfs-auto-snapshot debian package. All good now:
❯ zfs list -t all -r pve-data
NAME...
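In case it helps someone else, the per-label cleanup can be scripted; a minimal sketch, assuming the same pool name (pve-data) and the standard labels:

# hypothetical loop over the usual zfs-auto-snapshot labels;
# --keep=1 leaves only the most recent snapshot of each label
for label in frequent hourly daily weekly monthly; do
    zfs-auto-snapshot --destroy-only --verbose --label="$label" --keep=1 -r pve-data
done
# then remove the package so no new snapshots get created
apt remove zfs-auto-snapshot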
Looks like zfs-auto-snapshot, but I never used/installed it. I'm really confused now. :(
pve-data/subvol-107-disk-0@zfs-auto-snap_daily-2024-01-22-0525 44.2M - 27.4G -
pve-data/subvol-107-disk-0@zfs-auto-snap_hourly-2024-01-22-0617 48.5M - 27.4G -...
Digging into the man pages is always helpful:
❯ zfs list pve-data/subvol-107-disk-0 -o usedbysnapshots
USEDSNAP
66.6G
❯ zfs list pve-data/subvol-107-disk-0 -o usedbydataset
USEDDS
27.4G
So it's the snapshots... could it be PBS? I didn't take any snapshots.
I see no snapshots or related descendants that could justify that USED value. But I'm only looking at the UI; I think I'll have to start digging deeper into the ZFS CLI.
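For anyone following along, this is roughly what that digging looks like; a minimal sketch, assuming the same dataset name as in the other posts:

# list every snapshot of the subvol and the space it pins
zfs list -t snapshot -r pve-data/subvol-107-disk-0
# show where the USED space actually goes
zfs list -o name,used,usedbydataset,usedbysnapshots pve-data/subvol-107-disk-0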
I had already tried quota, but it gave me the following error:
❯ zfs set quota=50G pve-data/subvol-107-disk-0
cannot set...
Thanks a lot for this. I am optimizing the storage size of a few LXCs because I initially set them too large.
Using refquota worked for all of them, except for one that was initially configured at around 90-100 GB:
❯ zfs list pve-data/subvol-107-disk-0
NAME USED AVAIL REFER...
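For reference, the difference matters here: quota counts snapshot and descendant space too, so it can't be set below what snapshots already consume, while refquota only limits the data the dataset itself references. A minimal sketch of the resize, reusing the 50G value from the quota attempt above:

# refquota ignores snapshot/descendant usage, so this can succeed even when
# snapshots push USED above the new limit
zfs set refquota=50G pve-data/subvol-107-disk-0
zfs get refquota,quota,usedbysnapshots pve-data/subvol-107-disk-0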
I spent some hours debugging this issue, I was going crazy, and I solved it on my 3-node cluster this way: https://forum.proxmox.com/threads/cant-connect-to-destination-address-using-public-key-task-error-migration-aborted.42390/post-619486
Hope it doesn't trigger again.
Thanks for your post. It worked, except for a GUI problem (connection errors when managing other nodes), so I had to add a service restart.
In case someone else has the same issue, I ran these commands on all 3 of my nodes:
cd /root/.ssh
mv id_rsa id_rsa.old
mv id_rsa.pub id_rsa.pub.old
mv config...
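The quoted commands get cut off above, but the service restart I mentioned was along these lines; a hedged sketch, assuming the standard Proxmox cluster tooling:

# redistribute the cluster keys/certs (merges authorized_keys, updates known_hosts and certificates)
pvecm updatecerts
# restart the services behind the web GUI so they pick up the new keys
systemctl restart pveproxy pvedaemon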
No need to replace it, it's an environment variable that contains the interface name for the scripts in /etc/network/if-*.d.
Every time an interface comes up or goes down, the scripts in those dirs are executed and that variable is set to the interface name.
So now I'm using this, and it works perfectly:
auto...
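To illustrate the mechanism (not part of my config): ifupdown exports IFACE to those hook scripts, so a hypothetical /etc/network/if-up.d/ script would see the interface that just came up:

#!/bin/sh
# hypothetical hook, e.g. /etc/network/if-up.d/disable-offload;
# ifupdown sets IFACE before running scripts in if-*.d
[ "$IFACE" = "eno1" ] || exit 0
/sbin/ethtool -K "$IFACE" tso off gso off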
You were right, it was the segment offload bug of the e1000 drivers. I used Thomas' recommendation to disable it completely: https://forum.proxmox.com/threads/e1000e-reset-adapter-unexpectedly.87769/post-384609
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address...
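The snippet gets cut off, so to be clear about the idea (a hedged sketch, not a verbatim copy of the linked post): the fix is a post-up line that turns the offload features off on the NIC, for example:

# hypothetical example of disabling the common offload features;
# the exact flag set in Thomas' post may differ
post-up /sbin/ethtool -K $IFACE tso off gso off gro off tx off rx off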
But I don't have any reset events for the NIC in the dmesg log. Anyway, the offload issue seems plausible; it would explain the apparent randomness of the issue.
There's also this post from Thomas about disabling all offload...
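For completeness, this is how I looked for the reset events; a quick check, assuming the usual e1000e log strings:

# look for the typical e1000e reset/hang messages
dmesg | grep -iE 'e1000e|hardware unit hang|reset adapter'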
That's one of the old workarounds, it's not what I wanted to achieve. You need to check why it worked with full ZFS on the other 2 Docker hosts and not on this one.
You removed the whole folder, subfolders included?? I think you broke your Docker installation. :)
You only had to remove the driver entry from the JSON file, not the entire folder. I don't know how Docker is even starting without that folder.
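To be explicit about what I meant (a hypothetical example, your values may differ):

# edit /etc/docker/daemon.json and drop only the "storage-driver" entry,
# e.g. change { "storage-driver": "vfs" } to { }; keep the file and folder
nano /etc/docker/daemon.json
systemctl restart docker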
Looks like you have a driver configured, remove it. Clean up all the customizations you've done; the Docker installation has to be as clean as possible.
If you can't do it, reinstall Docker.
Don't specify the driver (remove the daemon.json setting) and remove fuse=1. If you use the latest kernels, Docker should finally choose the overlay2 driver instead of the obsolete and inefficient vfs driver.
I use privileged containers; you seem to use both privileged and unprivileged ones, you need...
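Once the config is clean, it's easy to check which driver Docker actually picked; a minimal check, assuming only a working docker CLI:

# should report "Storage Driver: overlay2" on recent kernels with a clean config
docker info 2>/dev/null | grep -i 'storage driver'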