Search results

  1. *.pxar vs *.fidx

    There is a large (3 TB) file store of mixed content on a non-virtual host (text files, archives, videos, etc.). From the documentation it is clear that I can back it up as a .pxar archive or as a whole .img device (after starting it in the console, it says that an .img.fidx is being created). But from the...
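    The two backup modes mentioned map to different archive specs in the client invocation; a minimal sketch, assuming a placeholder repository string and device path:

        # File-level archive (.pxar): per-file access on restore
        proxmox-backup-client backup storage.pxar:/mnt/storage \
            --repository 'user@pbs@pbs-host:datastore'

        # Block-level image (stored as .img.fidx): whole device, fixed-size chunks
        proxmox-backup-client backup storage.img:/dev/sdb \
            --repository 'user@pbs@pbs-host:datastore'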
  2. Memory leak after backup jobs

    Hi, @dcsapak! Thanks, "reload" is a much better workaround.
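    For reference, the "reload" workaround discussed in this thread presumably comes down to reloading the proxy service in place, so it starts fresh without dropping active connections; a minimal sketch, assuming a stock PBS systemd setup:

        # Reload proxmox-backup-proxy in place instead of a full restart
        systemctl reload proxmox-backup-proxy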
  3. Memory leak after backup jobs

    This is a backup of files from the host, not of a VM or CT. If you do not reset the memory by reloading the process, then the next day the consumption increases by a few percent, and so on until it takes up all the free...
  4. Memory leak after backup jobs

    After the backup (see the htop and Prometheus node exporter screenshots): the other tasks (daily GC and prune) are currently completed, but the process has not released the memory.
  5. Memory leak after backup jobs

    PBS 2.1-2: there is a memory leak in the proxmox-backup-proxy process. During a backup run by the client (total size about 3 TB), consumption grows by 5 gigabytes but is not released on completion. Initially this became noticeable when I transferred the storage to...
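    A simple way to watch the reported growth between runs (nothing PBS-specific, just plain procps; the interval is arbitrary):

        # Sample the proxy's resident set size and uptime once a minute
        watch -n 60 'ps -C proxmox-backup-proxy -o pid,rss,etime,cmd'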
  6. proxmox api

    Hello, @dietmar! Are 9 years not enough of a future? ;)
  7. [SOLVED] Ceph Pacific. RADOS. Objects are not deleted, but only orphaned

    This is not a bug. It takes 2 hours for the pool to clear (the default can be changed in the config: https://docs.ceph.com/en/latest/radosgw/config-ref/#garbage-collection-settings). To clean up forcibly, you need to run the garbage collection with the --include-all parameter: radosgw-admin...
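    A sketch of the forced cleanup described above; both the command and the 2-hour default (rgw_gc_obj_min_wait) are covered by the linked config reference:

        # Run RGW garbage collection now, including objects still inside
        # the rgw_gc_obj_min_wait grace period (2 hours by default)
        radosgw-admin gc process --include-all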
  8. [SOLVED] Ceph Pacific. RADOS. Objects are not deleted, but only orphaned

    RADOS ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
    added file to bucket
    radosgw-admin --bucket=support-files bucket radoslist | wc -l
    96
    ceph df
    --- RAW STORAGE ---
    CLASS    SIZE    AVAIL   USED     RAW USED  %RAW USED
    hdd      44 TiB  44 TiB  4.7 GiB  4.7 GiB...
  9. CIFS: storage is not online (500)

    PVE 7.0-11, the same problem here. The timeout option didn't help.
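    When PVE reports a CIFS storage as offline, one way to narrow things down is to take PVE out of the loop and probe the share by hand (server name, share, and mount point below are placeholders; assumes /mnt/test exists):

        # What does PVE itself think of the storage?
        pvesm status
        # Can the node reach and mount the share at all?
        mount -t cifs //nas/share /mnt/test -o guest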
  10. How to log and debug a proxmox-backup-client job?

    host - Ubuntu 20.04.2 LTS
    pbc - 1.0.13
    pbs - 1.0.13-1
    I ran a test job:
    proxmox-backup-client backup nextcloud.pxar:/storage --repository 'proxmox@pbs!clientbackup@backup.e:storage'
    Three hours later the job failed. The only trace found in the logs was in the PBS syslog:
    Apr 13 20:34:01 backup...
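    Absent better logging, the client's own output can at least be captured to a file (plain shell redirection, nothing PBS-specific; newer client versions also honor a PBS_LOG environment variable for verbosity, though that may not apply to 1.0.13):

        # Keep a dated copy of everything the client prints
        proxmox-backup-client backup nextcloud.pxar:/storage \
            --repository 'proxmox@pbs!clientbackup@backup.e:storage' \
            2>&1 | tee "/var/log/pbs-client-$(date +%F).log"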
  11. PVE 6.3-3: discard not working

    @Fabian_E Maybe you know: does CIFS support discard?
  12. PVE 6.3-3: discard not working

    o_O It looks like this is a quirk of the NFS implementation on the SOHO NAS: if I mount <server>:/nfs/proxmox, it comes up as vers=3 regardless of the options; if I mount <server>:/proxmox, it comes up as vers=4.
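    The negotiated version can be confirmed per mount, which makes quirks like this easy to spot (standard nfs-common tooling):

        # Show mount options, including vers=, for every active NFS mount
        nfsstat -m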
  13. PVE 6.3-3: discard not working

    cat /etc/pve/storage.cfg
    dir: local
        path /var/lib/vz
        content backup,vztmpl,iso,images
        shared 0
    lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
    dir: ssd
        path /mnt/windows/virt
        content images
        shared 0...
  14. PVE 6.3-3: discard not working

    To my regret, updating the NFS server to version 4.2 is not a trivial task, because it is a SOHO NAS whose software support has expired, and only version 4 is available. I recreated the connection and rebooted Proxmox, but it still connects with vers=3. When mounted via fstab with the...
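    If the server genuinely offers v4 on the second export path, the PVE storage definition can pin the version explicitly; a sketch of a hypothetical storage.cfg entry (server address and paths are placeholders; the options line is passed through to mount):

        nfs: nas
            server 192.168.1.10
            export /proxmox
            path /mnt/pve/nas
            content images
            options vers=4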
  15. PVE 6.3-3: discard not working

    root@pve0:~# pveversion -v
    proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
    pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
    pve-kernel-5.4: 6.3-3
    pve-kernel-helper: 6.3-3
    pve-kernel-5.4.78-2-pve: 5.4.78-2
    pve-kernel-5.4.65-1-pve: 5.4.65-1
    pve-kernel-5.4.34-1-pve: 5.4.34-2
    ceph-fuse...
  16. PVE 6.3-3: discard not working

    VM config before:
    du -h /mnt/windows/virt/images/100/vm-100-disk-0.qcow2
    6,1G    /mnt/windows/virt/images/100/vm-100-disk-0.qcow2
    qemu-img info /mnt/windows/virt/images/100/vm-100-disk-0.qcow2
    image: /mnt/windows/virt/images/100/vm-100-disk-0.qcow2
    file format: qcow2
    virtual size: 32 GiB...
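    For completeness: when discard cannot shrink the file in place, a qcow2 image can be compacted offline by rewriting it, which drops unallocated clusters (destination filename is a placeholder; the VM must be stopped first):

        # Rewrite the image without the freed clusters, then swap it in
        qemu-img convert -O qcow2 \
            /mnt/windows/virt/images/100/vm-100-disk-0.qcow2 \
            /mnt/windows/virt/images/100/vm-100-disk-0-compact.qcow2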
  17. PVE 6.3-3: discard not working

    Hi! I use du and ncdu. If du does not report it correctly, then why is the image size not immediately at its maximum, as it is after moving? Will monitoring systems like Zabbix correctly determine the available disk space?
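    The distinction being asked about can be made visible directly: du reports allocated blocks, while the apparent size is the file's nominal length (paths taken from the earlier posts in this thread):

        # Allocated (sparse-aware) size vs apparent size of the same image
        du -h /mnt/windows/virt/images/100/vm-100-disk-0.qcow2
        du -h --apparent-size /mnt/windows/virt/images/100/vm-100-disk-0.qcow2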
  18. remote storage for home lab

    iSCSI or NFS. I use NFS because it is the simplest to set up.
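    A minimal sketch of adding such an NFS storage from the PVE shell (storage ID, server address, and export path are placeholders):

        # Register an NFS share as a storage backend on the node
        pvesm add nfs nas-storage --server 192.168.1.10 \
            --export /export/proxmox --content images,backup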
  19. PVE 6.3-3: discard not working

    VMs with Debian 10 or Ubuntu 20.04, configured according to the instructions at https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files#Linux_Guest_Configuration. fstrim -av reports that it successfully released about 10 GiB. But the qcow2 image size has not changed (VM fs size 7.7G, qcow2 size 20G). After poweroff...
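    For fstrim in the guest to reach the qcow2 file at all, the virtual disk has to advertise discard support; a sketch of the relevant VM config lines, using the disk from the earlier posts (VM ID and storage name are placeholders):

        # /etc/pve/qemu-server/100.conf
        scsihw: virtio-scsi-pci
        scsi0: ssd:100/vm-100-disk-0.qcow2,discard=on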
