Search results

  1.

    [SOLVED] ENOSPC No space left on device during garbage collect

    I found the problem and it was my fault! It was pure chance that the space started to shrink when I started the GC: at the same time, a still-active replication from another Proxmox host was itself writing onto the other file system on the same pool. I tested it three times, but it...
  2.

    [SOLVED] ENOSPC No space left on device during garbage collect

    Does this help? lsof | grep data1 proxmox-b 1864 backup 19u REG 0,54 0 3 /data1/pvebackup/.lock proxmox-b 1864 1905 tokio-run backup 19u REG 0,54 0 3 /data1/pvebackup/.lock...
  3.

    [SOLVED] ENOSPC No space left on device during garbage collect

    The space of changed timestamps would not be freed up after a process failure, but it does get freed up. How do you explain this? proxmox-backup unknown running kernel: 5.4.128-1-pve proxmox-backup-server 1.1.12-1 running version: 1.1.12 pve-kernel-5.4...
  4.

    [SOLVED] ENOSPC No space left on device during garbage collect

    root@xxx:~# zfs list -t all
    NAME             USED  AVAIL  REFER  MOUNTPOINT
    data1           7.02T  5.89G   112K  /data1
    data1/pvebackup 4.52T  5.89G  4.52T  /data1/pvebackup...
  5.

    [SOLVED] ENOSPC No space left on device during garbage collect

    It is ZFS. Here you see the log while starting the GC phase. You can also see that after failing, there are 5.9 GB free again.
    data1/pvebackup 4.6T 4.6T 5.9G 100% /data1/pvebackup
    data1/pvebackup 4.6T 4.6T 5.9G 100% /data1/pvebackup
    data1/pvebackup 4.6T 4.6T 5.9G 100% /data1/pvebackup...
  6.

    [SOLVED] ENOSPC No space left on device during garbage collect

    There are NO snapshots on the volume, and it definitely consumes 50-100 MB/s directly after starting garbage collection until the garbage collection fails. Should I make a video to prove this?
  7.

    [SOLVED] ENOSPC No space left on device during garbage collect

    When I start the garbage collection and look at "watch df -h", it counts down by 100 MB/s until the 6 GB of free disk space is exhausted and the job fails. After failing, the free disk space is 6 GB again. What does it use the 6 GB for? How do I solve this problem?
  8.

    [SOLVED] Are HTML links possible in the Notes field?

    Great, I'm still on PVE 6. So it will work after the update.
  9.

    [SOLVED] Are HTML links possible in the Notes field?

    It would be very practical if clickable HTML links could be inserted in the Notes field, opening a new window or a new tab with the URL when clicked. Specifically, the admin or service URLs provided by this machine. URLs could be detected automatically...
  10.

    [SOLVED] ENOSPC No space left on device during garbage collect

    How much free space does the garbage collect need? I got the following error during garbage collect, while having 16 GB free on root and 6 GB free on the backup drive:
    2021-07-31T14:34:18+02:00: starting garbage collection on store pvebackup
    2021-07-31T14:34:18+02:00: Start GC phase1 (mark used...
  11.

    ZFS pool detection problem with cryptsetup

    Where do I have to put this udev rule? In the meantime I discovered that using partitions on a LUKS-encrypted drive seems to be the problem. After the system boots, lsblk gives the following picture:
    nvme0n1       259:0  0  953.9G  0  disk
    └─nvme0n1p1   259:1  0    400G  0  part
      └─nvme-c1...
  12.

    ZFS pool detection problem with cryptsetup

    I use full-disk encryption with cryptsetup for the rpool disks and separate vmdata disks. The disks are decrypted at boot time with the initramfs option in crypttab. The boot works fine, but the data pool is not recognized and imported reliably. Only after a "partprobe" command is it imported. It...
  13.

    Unprivileged LXC access on mount points

    arch: amd64
    cores: 2
    hostname: xxxx
    memory: 2048
    mp0: vmdata1zfs:subvol-103-disk-2,mp=/DXUSWP,backup=1
    net0: name=eth0,bridge=vmbr0,gw=192.xx.xx.xx,hwaddr=yyyyyy,ip=xxxxxxxxx/24,type=veth
    onboot: 1
    ostype: debian
    rootfs: vmdata1zfs:subvol-103-disk-1,size=32G
    startup: order=21
    swap: 2048...
  14.

    Reactivate replication with existing synchronized snapshot

    I stopped pvesr.timer, then created a snapshot '__replicate_101-0_1617281220__' for both disks, used 'zfs send -I' to delta-transfer it to the other location, and started pvesr.timer again. However, the first thing the replication job does is delete the "stale" snapshot. Here is the log...
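    The snapshot mentioned in this post follows a fixed naming pattern, __replicate_&lt;vmid&gt;-&lt;job&gt;_&lt;unix timestamp&gt;__. A minimal sketch of parsing that name (the pattern is inferred from the single example shown here, not from the replication source code):

    ```python
    import re

    # Assumed naming convention for PVE replication snapshots, based on the
    # example '__replicate_101-0_1617281220__' from the post:
    # __replicate_<vmid>-<job number>_<unix timestamp>__
    SNAP_RE = re.compile(r"^__replicate_(\d+)-(\d+)_(\d+)__$")

    def parse_replication_snapshot(name):
        """Return (vmid, job, timestamp) for a replication snapshot, else None."""
        m = SNAP_RE.match(name)
        if m is None:
            return None
        vmid, job, ts = m.groups()
        return int(vmid), int(job), int(ts)

    print(parse_replication_snapshot("__replicate_101-0_1617281220__"))
    # (101, 0, 1617281220)
    ```

    Distinguishing replication snapshots from manual ones this way also explains why the job treats a hand-made snapshot as "stale": anything not matching its expected name/timestamp is a candidate for cleanup.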
  15.

    Unprivileged LXC access on mount points

    Proxmox side, current directory is the container disk:
    chown 100000:101001 DISK
    chown 100000:101001 y
    Then ls -ln. Output:
    drwxr-xr-x 2 100000 101001 2 Jan 5 2020 DISK
    drwxr-xr-x 2 100000 101001 2 Apr 1 14:40 y
    Then pct enter <CTID>, ls -ln in the root directory. Output:
    drwxr-xr-x 11 0 65534...
  16.

    Unprivileged LXC access on mount points

    I have an unprivileged container and mounted a second dxu under /MYDISK. On the Proxmox side it got the ownership 100000:100000, but inside LXC it shows up as root:nogroup. Why is it not shown as root:root like other directories which have 100000:100000? Is this a bug? I also tried to change the group...
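    The ownership values in these posts follow the usual unprivileged-container id mapping: container uid/gid N appears on the host as 100000+N, and any host id outside the mapped range shows up inside the container as 65534 (nobody/nogroup). A small sketch of that arithmetic (the 100000 offset and 65536 range size are the common defaults, not read from this particular container's config):

    ```python
    OFFSET = 100000   # assumed default lxc.idmap offset for unprivileged containers
    RANGE = 65536     # assumed default size of the mapped id range
    UNMAPPED = 65534  # host ids outside the map appear as nobody/nogroup

    def container_to_host(cid, offset=OFFSET, size=RANGE):
        """Map a container uid/gid to the host uid/gid that owns the files."""
        if 0 <= cid < size:
            return offset + cid
        raise ValueError("container id outside mapped range")

    def host_to_container(hid, offset=OFFSET, size=RANGE):
        """Map a host uid/gid into the container; unmapped ids show as 65534."""
        if offset <= hid < offset + size:
            return hid - offset
        return UNMAPPED

    # Container root (0) owns files as 100000 on the host; group 1001 as 101001.
    # A host file owned by real root (0) shows as 65534 inside the container,
    # which is exactly the root:nogroup symptom described in the post.
    ```

    So seeing root:nogroup usually means the directory's gid on the host is not inside the 100000-165535 window the container maps, rather than a bug.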
  17.

    Reactivate replication with existing synchronized snapshot

    I tried, but the snapshot got deleted very quickly. Do I have to halt the replication service (how? what is the service name?)? How does Proxmox find the replication snapshot? Does it use the latest one, or is the last snapshot registered anywhere?
  18.

    Storage Replication - use existing replicas on a new server?

    Similar question from me, but unfortunately no answer so far. https://forum.proxmox.com/threads/reactivate-replication-with-existing-synchronized-snapshot.86613/
  19.

    Reactivate replication with existing synchronized snapshot

    I had a replication of a 2 TB VM running. For some unknown reason the special replication snapshots *__replicate* are gone. But I have an old manual non-replication snapshot and was able to synchronize the ZFS dataset to the second server. I then tried to create a *__replicate* snapshot hoping...