Recent content by niziak

  1.

    [SOLVED] Pct restore lxc container with PBS

    Any news about fixing this issue? Right now PBS is useless for unprivileged CTs. Current workaround: restore as privileged from the PBS backup to local storage, then restore from local storage as unprivileged with ignore-unpack-errors: pct restore 803 /hddpool/vz/dump/vzdump-lxc-803-2023_08_01-09_44_49.tar.zst...
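    One plausible reading of those two steps, as a minimal sketch (the intermediate vzdump step and the storage names are my assumptions; the CT ID and archive path are the ones from the post):

      # 1) restore the PBS backup as a privileged CT first
      pct restore 803 pbs-store:backup/ct/803/2023-08-01T09:44:49Z --unprivileged 0 --storage local-zfs
      # 2) dump it to a plain local archive
      vzdump 803 --dumpdir /hddpool/vz/dump --compress zstd
      # 3) restore that archive as unprivileged, ignoring ownership/unpack errors
      pct restore 803 /hddpool/vz/dump/vzdump-lxc-803-2023_08_01-09_44_49.tar.zst --unprivileged 1 --ignore-unpack-errors 1 --force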
  2.

    docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /var/log/apt: invalid argument.

    VFS is just a workaround to test where the issue is. It is completely unusable for production due to the lack of a union FS (simply put: a kind of layer deduplication). It is described here: How the vfs storage driver works. When an LXC container is created with defaults, it uses the host's filesystem via a bind mount. I.e. for...
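    As a hedged sketch of that debugging step (standard Docker daemon.json mechanics, shown only as a temporary test, not a production setting):

      echo '{ "storage-driver": "vfs" }' > /etc/docker/daemon.json   # overwrites any existing daemon.json
      systemctl restart docker
      docker info | grep -i 'storage driver'                         # should now report vfs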
  3.

    Verify jobs - Terrible IO performance

    Thanks for this thread. I don't have a fast SSD/NVMe for metadata yet. I just added a consumer SSD as L2ARC. I found that switching the L2ARC policy to MFU only also helps a lot (the cache is not flooded by every new backup). Please add the ZFS module parameters to /etc/modprobe.d/zfs.conf: options zfs...
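    A sketch of what that setting could look like; l2arc_mfuonly is my assumption for the "MFU only" policy mentioned above (the parameter exists since OpenZFS 2.0):

      echo "options zfs l2arc_mfuonly=1" >> /etc/modprobe.d/zfs.conf
      update-initramfs -u -k all                          # so it also applies when the module loads at boot
      echo 1 > /sys/module/zfs/parameters/l2arc_mfuonly   # or switch it at runtime without a reboot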
  4.

    iSCSI, 10GbE Bond and Synology

    Hi. What is the OC11 firmware version? (Try ethtool -i <iface>.)
  5.

    Adding CephFS as storage via GUI times out; FUSE mount via the shell works.

    Hi, run ceph mon dump and locate the monitor whose IP address does not match the current global config. Then remove that monitor and recreate it.
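    A rough command sketch of that procedure (the monitor name pve2 is a placeholder):

      ceph mon dump                # compare the listed mon addresses with the current node IPs / ceph.conf
      pveceph mon destroy pve2     # remove the monitor whose address no longer matches
      pveceph mon create           # recreate it on that node with the current address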
  6.

    Proxmox VE 7.0 Installation Error

    The same issue here: Dell R720, SATA disks (HBA/IT mode), newly downloaded ISO 7.0-2. Installation went smoothly with ZFS RAID1 on 2x 2 TB SATA HDDs. Then I decided to reinstall on 2x 128 GB SSDs and the problem appeared.
  7.

    PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288

    My findings: there is no tool to repair ZFS; it is planned for some time in the future. Scrub only validates checksums, and in this case the incorrect data was stored correctly on the VDEVs, so scrub cannot help. Sometimes a read error appears during a zdb check: db_blkptr_cb: Got error 52 reading <259, 75932, 0, 17>...
  8.

    PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288

    Hello. I reported the ZFS issue here: PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288 #12019. The IO delay on the node is rising from minute to minute. After some hours the node stops responding completely. Services in RAM (like Ceph) are still running. After a long time the cluster shows...
  9.

    [SOLVED] Can't install snap in LXC container

    Be aware that you can introduce a very serious problem on your node: Storage replication regularly hangs after upgrade
  10.

    Storage replication regularly hangs after upgrade

    I got the same issue. With a weekly backup set of LXCs on one node, this issue breaks all LXCs on that node (they remain frozen). It started happening after adding one LXC with snapd installed inside. This LXC cannot be frozen (Proxmox waits for the freeze, but snapd keeps its hands on its own cgroup and...
  11.

    Warning: do not remove ZFS cache device remotely (machine may hang)

    In the last days I decided to improve the performance of my experimental Ceph cluster (4x PVE = 4x OSD = 4x 2 TB HDD) by adding the DB on a small NVMe partition. To do this I needed to cut some space from the existing NVMe L2ARC partition. Every PVE host has 2x HDD for rpool, and rpool's ZIL and rpool's L2ARC are...
  12.

    PVE6 pveceph create osd: unable to get device info

    To clarify: it is safe to specify an already used device. With PVE 6.3-3, pveceph osd create cannot handle plain free disk space (even with GPT). It expects the given disk to be either empty or already set up with LVM and some free space to create a new LV. As a workaround I had to use the ceph CLI directly: ceph-volume lvm...
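    A hedged sketch of such a direct ceph-volume call (the device paths are placeholders, not the original ones):

      ceph-volume lvm create --data /dev/sdX                             # whole-device OSD
      ceph-volume lvm create --data /dev/sdX --block.db /dev/nvme0n1p4   # with a separate DB device/partition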
  13.

    Soft / interruptible mounts for backup targets

    It is not a Proxmox issue, but a well-known NFS/CIFS issue in Linux. I remember this kind of problem since kernel 2.0, and all of the problems still exist! It seems that CIFS storage should be "forbidden" for production. In my case the remote CIFS storage got full and the problems started accumulating. Every...
  14.

    Upgrade to ProxMox 6.3 failure

    No GUI nor SSH after upgrade 6.1 -> 6.3 - needs manual restart of services
  15.

    [SOLVED] No GUI nor SSH after upgrade 6.1 -> 6.3 - needs manual restart of services

    Big thanks for the fast fix (I noticed it was already available yesterday evening). Indeed it was somehow related to systemd dependencies. On slower machines everything started correctly, while on faster machines it was random (multiple reboots helped).
