Search results

  1.

    Best (config/storage) strategy for multiple lxc with docker?

    Nowadays an unprivileged LXC can handle the docker daemon smoothly. It is the best option for memory-constrained hosts. Everything depends on the scenario you need. LXC pros: light on the host, no VM overhead (we are using them as Gitlab runners with docker executors), device pass-through (possible to run i.e...
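
    A minimal sketch of the container config that usually makes this work, assuming a hypothetical CT ID 101; unprivileged mode plus the nesting/keyctl features is what lets the docker daemon start inside the LXC:

      # /etc/pve/lxc/101.conf (hypothetical CT ID) - relevant lines only
      unprivileged: 1
      features: nesting=1,keyctl=1
      # optional device pass-through, e.g.:
      # dev0: /dev/ttyUSB0
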
  2.

    Proxmox upgrade 8 to 9 on zfs boot: upgrade fails with grub errors

    Same here. I think this is my fault: I initially created the zpool using scsi- links for the ATA disks instead of wwn-. I have a mixed array of SATA and SAS disks and noticed that only the SAS disks are linked using scsi- names. So with Debian Trixie they made a cleanup in the udev rules and the wrong...
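
    A quick, read-only way to check which link names the pool is actually using, assuming the pool is rpool:

      # show the full device paths the pool was imported with
      zpool status -P rpool
      # compare with the stable wwn- links available for the same disks
      ls -l /dev/disk/by-id/ | grep -E 'wwn-|scsi-|ata-'
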
  3.

    feat req: add "global blacklist" button in Quarantine

    Hello. During the first setup we often look into the "Spam Quarantine" and manually tune the "Whitelist" and "Blacklist" for our needs. The current "Blacklist" button adds the sender to the recipient's blacklist. It would be very convenient to add another button like "Global Blacklist" and choose to add the email...
  4.

    Adding a domain name to the exclusion of spam filter rules

    Simply add no-reply@domain.bitrix24.ru to the Whitelist?
  5.

    Greylisting feature unreliable

    We also had to disable it. We observed that all further deliveries within the first 30 minutes are greylisted. This time window is too big, and it is not possible to receive security codes within their expiration window. Also, a simple option to specify one netmask for IPv4 is not enough. Huge providers...
  6.

    Help !!! Question about remove Datastore (free up space?)

    PBS stores data in simple files. Most of the space is occupied by data stored in the "hidden" .chunks directory. So you can temporarily move some chunks to other storage, perform prune and GC, and then restore the chunks back.
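
    A rough sketch of that juggling, assuming a datastore named store1 at /mnt/datastore/store1 and a temporary disk at /mnt/tmp (both names hypothetical); pause backup and verify jobs while the chunks are parked elsewhere:

      # temporarily park part of the chunk store on another disk
      mv /mnt/datastore/store1/.chunks/00* /mnt/tmp/
      # prune old snapshots, then reclaim the freed space
      proxmox-backup-manager garbage-collection start store1
      # once there is enough free space, move the chunks back
      mv /mnt/tmp/00* /mnt/datastore/store1/.chunks/
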
  7.

    only one io scheduler available

    I have one old home PC (ZFS on 2 x HDD + SSD log & cache) turned into PVE. Every backup to PBS causes big delays and slow response even on the PVE host. Switching to bfq saves this machine's life (the kernel's ZFS processes run at a lower priority than anything else).
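
    For reference, a sketch of the scheduler switch, assuming the HDDs are sda/sdb (hypothetical device names); the udev rule makes it persistent across reboots:

      # one-off, per disk
      echo bfq > /sys/block/sda/queue/scheduler
      # persistent, e.g. in /etc/udev/rules.d/60-ioschedulers.rules:
      # ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
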
  8.

    CR: ignore errors from cd-rom storage for backups

    Hello. I think the backup process should completely ignore the state of the cd-rom drive. Today one backup failed because of it: 709: 2024-07-21 01:50:12 INFO: creating Proxmox Backup Server archive 'vm/709/2024-07-20T23:50:12Z' 709: 2024-07-21 01:50:12 INFO: starting kvm to execute backup task 709: 2024-07-21...
  9.

    [SOLVED] Pct restore lxc container with PBS

    Any news about fixing this issue? Right now PBS is useless for unprivileged CTs. Current workaround: restore as privileged from PBS, back up to local storage, then restore from local storage as unprivileged with ignore-unpack-errors: pct restore 803 /hddpool/vz/dump/vzdump-lxc-803-2023_08_01-09_44_49.tar.zst...
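
    Spelled out, the three steps look roughly like this (CT ID 803 and the dump path are taken from the post; the PBS storage name pbs and the snapshot timestamp are hypothetical):

      # 1) restore from PBS as a privileged container (this works)
      pct restore 803 pbs:backup/ct/803/2023-08-01T07:44:49Z --unprivileged 0
      # 2) back it up again to a plain local directory
      vzdump 803 --dumpdir /hddpool/vz/dump --compress zstd
      # 3) restore that archive as unprivileged, ignoring ownership errors
      pct restore 803 /hddpool/vz/dump/vzdump-lxc-803-2023_08_01-09_44_49.tar.zst --unprivileged 1 --ignore-unpack-errors 1
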
  10.

    docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /var/log/apt: invalid argument.

    VFS is just a workaround to test where the issue is. It is completely unusable for production due to the lack of a union FS (simply put: a kind of layer deduplication). It is described here: How the vfs storage driver works. When an LXC is created with defaults, it uses the host's filesystem via bind mount. I.e. for...
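
    For the record, switching docker to vfs just for such a test is a one-line daemon config (standard docker path):

      # /etc/docker/daemon.json  -  test only, vfs stores every layer as a full copy
      {
        "storage-driver": "vfs"
      }
      # then restart the daemon: systemctl restart docker
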
  11.

    Verify jobs - Terrible IO performance

    Thanks for this thread. I don't have a fast SSD/NVMe for metadata yet; I just added a consumer SSD as L2ARC. I found that switching the L2ARC policy to MFU only also helps a lot (the cache is not flooded by every new backup). Please add the ZFS module parameters to /etc/modprobe.d/zfs.conf: options zfs...
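
    The truncated option is presumably l2arc_mfuonly (my assumption, the snippet cuts off before it); a minimal sketch of the modprobe file:

      # /etc/modprobe.d/zfs.conf
      # cache only MFU data in L2ARC so sequential backup reads don't flush it
      options zfs l2arc_mfuonly=1
      # apply without reboot:
      # echo 1 > /sys/module/zfs/parameters/l2arc_mfuonly
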
  12.

    iSCSI, 10GbE Bond and Synology

    Hi. What is the OC11 firmware version? (Try ethtool -i <iface>.)
  13.

    Adding CephFS as storage via the GUI times out; FUSE mount via shell works.

    Hi, run ceph mon dump and locate the monitor whose IP address does not match the current global config. Then remove that monitor and recreate it.
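
    A sketch of that procedure, assuming the stale monitor is named pve2 (hypothetical) and the commands run on the affected node:

      # find the monitor whose address no longer matches the config
      ceph mon dump
      # remove it and create it again on this node
      pveceph mon destroy pve2
      pveceph mon create
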
  14.

    Proxmox VE 7.0 Installation Error

    The same issue here. Dell R720, SATA disks (HBA/IT mode), newly downloaded ISO 7.0-2. Installation went smoothly with ZFS RAID1 on 2x 2TB SATA HDD. Then I decided to reinstall it on 2x 128GB SSD and the problem appeared.
  15.

    PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288

    My findings: there is no tool to repair ZFS; it is planned for somewhere in the future. Scrub only validates checksums, and in this case the incorrect data was stored correctly on the VDEVs, so scrub cannot help. Sometimes, during a zdb check, a read error appears: db_blkptr_cb: Got error 52 reading <259, 75932, 0, 17>...
  16.

    PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288

    Hello. I reported the ZFS issue here: PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288 #12019. The IO delay on the node rises from minute to minute. After some hours the node stops responding completely. Services in RAM (like ceph) are still running. After a long time the cluster shows...
  17.

    [SOLVED] Can't install snap in LXC container

    Be aware that you can introduce a very serious problem on your node: Storage replication regularly hangs after upgrade
  18.

    Storage replication regularly hangs after upgrade

    I got the same issue. With a weekly backup of a set of LXCs on one node, this issue breaks all LXCs on that node (they remain frozen). It started happening after adding one LXC with snapd installed inside. This LXC cannot be frozen (Proxmox waits for the freeze, but snapd keeps its hands on its own cgroup and...
  19.

    Warning: do not remove ZFS cache device remotely (machine may hang)

    In the last few days I decided to improve the performance of my experimental Ceph cluster (4 x PVE = 4 x OSD = 4 x 2TB HDD) by adding the DB on a small NVMe partition. To do this I needed to cut some space from the existing NVMe L2ARC partition. Every PVE host has 2 x HDD for rpool, and rpool's ZIL and rpool's L2ARC are...
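
    For context, the operation in question is just the following, assuming the L2ARC sits on a hypothetical /dev/nvme0n1p4 partition; the warning is that doing this over a remote session can hang the box:

      # detach the existing L2ARC partition from rpool
      zpool remove rpool /dev/nvme0n1p4
      # later, re-add a smaller cache partition
      zpool add rpool cache /dev/nvme0n1p4
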
  20.

    PVE6 pveceph create osd: unable to get device info

    To clarify: it is safe to specify an already used device. With PVE 6.3-3, pveceph osd create cannot handle pure free disk space (even with GPT). It expects that the given disk is either empty or has LVM with some free space to create a new LV. As a workaround I had to use the ceph CLI directly: ceph-volume lvm...
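
    The direct call looks roughly like this, assuming the data disk is /dev/sdc and the DB goes on a pre-created LV ceph-db/db-sdc (both names hypothetical):

      # create the OSD directly, bypassing pveceph
      ceph-volume lvm create --data /dev/sdc --block.db ceph-db/db-sdc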