Any news about fixing this issue? Right now PBS is useless for unprivileged CTs.
Current workaround:
Restore as privileged from PBS
Backup to local storage
Restore from local storage as unprivileged with --ignore-unpack-errors:
pct restore 803 /hddpool/vz/dump/vzdump-lxc-803-2023_08_01-09_44_49.tar.zst...
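The three steps above can be sketched as follows. This is a hedged example, not a verified recipe: the CT ID 803 and the dump path come from this setup, and the exact storage names will differ on your system.

```shell
# 1. Restore the PBS backup as a *privileged* CT (GUI or pct restore).
# 2. Back the privileged CT up to local storage:
vzdump 803 --storage local --mode stop --compress zstd
# 3. Restore from local storage as unprivileged, ignoring ownership
#    errors caused by the privileged/unprivileged UID shift:
pct restore 803 /var/lib/vz/dump/vzdump-lxc-803-*.tar.zst \
    --unprivileged 1 --ignore-unpack-errors 1
```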
VFS is just a workaround to test where the issue is. It is completely unusable for production due to the lack of a union FS (simply put: a kind of layer deduplication). It is described here: How the vfs storage driver works.
When an LXC container is created with defaults, it uses the host's filesystem via bind mount. I.e. for...
Thanks for this thread.
I don't have a fast SSD/NVMe for metadata yet. I just added a consumer SSD as L2ARC. I found that switching the L2ARC policy to MFU-only also helps a lot (the cache is not flooded by every new backup):
Please add ZFS module parameters to /etc/modprobe.d/zfs.conf: options zfs...
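A sketch of the MFU-only setting, assuming OpenZFS 2.0 or newer (where the `l2arc_mfuonly` parameter exists); the file path follows the standard modprobe convention:

```shell
# Persist across reboots: only MFU (frequently used) data goes to
# L2ARC, so one-shot streaming reads (e.g. backups) don't flood it.
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs l2arc_mfuonly=1
EOF

# Apply immediately without a reboot via the live module parameter:
echo 1 > /sys/module/zfs/parameters/l2arc_mfuonly
```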
The same issue here. Dell R720 with SATA disks (HBA/IT mode).
Newly downloaded ISO 7.0-2. Installation went smoothly with ZFS RAID1 on 2x 2TB SATA HDDs.
Then I decided to reinstall it on 2x 128GB SSDs and the problem appeared.
My findings:
There is no tool to repair ZFS. One is planned sometime in the future.
Scrub only validates checksums. In this case the incorrect data was stored correctly on the VDEVs, so scrub cannot help.
Sometimes a read error appears during the zdb check: db_blkptr_cb: Got error 52 reading <259, 75932, 0, 17>...
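For reference, a hedged sketch of the commands behind these findings; the pool name `rpool` is assumed from the default PVE installation:

```shell
# Scrub validates checksums only -- it cannot repair data that was
# wrong before being written, since such data checksums correctly:
zpool scrub rpool
zpool status -v rpool

# Traverse and verify all block pointers; read errors like the
# db_blkptr_cb one quoted above may surface here:
zdb -bb rpool
```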
Hello.
I reported ZFS issue here: PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288 #12019
The IO delay on the node rises from minute to minute. After some hours the node stops responding completely. Services in RAM (like Ceph) are still running.
After a long time the cluster shows...
I got the same issue.
With a weekly backup set of LXCs on one node, this issue breaks all LXCs on that node (they remain frozen). It started happening after adding one LXC with snapd installed inside. This LXC cannot be frozen (Proxmox waits for the freeze, but snapd keeps hold of its own cgroup and...
Over the last few days I decided to improve the performance of my experimental Ceph cluster (4 x PVE = 4 x OSD = 4 x 2TB HDD) by adding a DB on a small partition of an NVMe.
To do this I needed to cut some space from the existing NVMe L2ARC partition.
Every PVE host has 2 x HDD for rpool, and rpool's ZIL and rpool's L2ARC are...
To clarify:
It is safe to specify an already-used device.
With PVE 6.3-3, pveceph osd create cannot handle plain free disk space (even with GPT). It expects the given disk to be empty, or to carry LVM with enough free space to create a new LV.
As a workaround I have to use the ceph CLI directly:
ceph-volume lvm...
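A hedged sketch of the direct ceph-volume call; the device names below are illustrative only (replace them with your actual HDD and the NVMe partition carved out for the DB):

```shell
# Create the OSD on the HDD, placing its RocksDB (block.db) on a
# small NVMe partition -- ceph-volume accepts a raw partition here
# even when pveceph refuses plain free space:
ceph-volume lvm create \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p3
```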
This is not a Proxmox issue, but a well-known NFS/CIFS issue in Linux. I remember this kind of problem since kernel 2.0, and all the problems still exist!
It seems that CIFS storage should be "forbidden" for production.
In my case the remote CIFS storage got full and problems started accumulating.
Every...
Big thanks for the fast fix (I noticed it was available as early as yesterday evening). Indeed, it was somehow related to systemd dependencies. On slower machines everything started correctly, while on faster machines it was random (multiple reboots helped).