Search results

  1.

    Feature Request: Display "real backup size" in PBS

    Aahhh, I see ... interesting challenge :) In fact the only option is to get these details on backup creation/storage on the PBS side and then persist them, so that you do not need to gather them from scratch - I also understand that the value will probably change on pruning, but I think the info was...
  2.

    Feature Request: Display "real backup size" in PBS

    Hey, after doing my first backups I was a bit confused because it always showed e.g. 10GB, which is the VM image size of the affected machine. But according to the logs it only transferred some 200 MB ... so it would be awesome to see this as an extra column in the PBS datastore overview
  3.

    [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    @Dominic I will hopefully manage to try NFS over the weekend. But one question: How can I mount glusterfs directly in fstab, without doing it via Proxmox storage, but still have it available for VMs as usual with the better-performing gluster library and so on?
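For reference, a direct fstab mount of a GlusterFS volume might look like the sketch below; `gluster1`, `gluster2`, the volume name `gv0`, and the mount point are placeholders, not values from the thread:

```
# /etc/fstab - mount a GlusterFS volume via the native FUSE client,
# bypassing the Proxmox storage layer (hostnames and volume are examples)
gluster1:/gv0  /mnt/gluster-gv0  glusterfs  defaults,_netdev,backup-volfile-servers=gluster2  0  0
```

The `_netdev` option defers the mount until networking is up, and `backup-volfile-servers` lets the client fall back to a second brick host if the first is unreachable.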
  4.

    [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    What would that parameter do differently? From my point of view the idea would be to not have gluster mounted by PVE but to have it mounted by fstab directly, and so to remove the storage from PVE itself, or?! What effect would this have? I might not have the statistics any longer, but what...
  5.

    [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    Hey Dominic, and thank you for supporting me on this. fstab is very normal; the standard stuff plus two glusterfs bricks in this case. The relevant parts of storage.cfg are the NFS share I talked about, then the two glusterfs storages themselves, and the directory mounts to get containers running on...
  6.

    [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    @Dominic I know you are short on time, but it would help me to continue my own research if you could answer me this question: Which process/service is initiating the mount of all the "storage.cfg" defined storage locations? Is it pve_guest? Or something else?
  7.

    [SOLVED] High Load because of PVE Storage checks

    Ok, then "solved". Sorry for the stress :-)
  8.

    [SOLVED] High Load because of PVE Storage checks

    Ok, I researched more and I have news. I basically found the reason ... I had one script running which collected some statistics, and it seems that this script checks shared storages "per node" :-) BUT in the end, after disabling this, I still see TWO calls per host every 10s ... Mon Aug 3...
  9.

    EMLINK: Too many links

    Ok, I now used a BTRFS volume and it is working fine
  10.

    NFS Datastore: EINVAL: Invalid argument

    I now used a volume with BTRFS as the file system and it worked :-)
  11.

    [SOLVED] High Load because of PVE Storage checks

    Ok, here we go :) I changed that on one host. I enabled the storage via PVE ... and immediately it started flooding again. I did nothing else ... no backups, nothing. Fri Jul 31 13:38:10 CEST 2020 - status --output-format=json --repository backup@pbs@192.168.178.131:backup-store1 Fri Jul 31...
  12.

    [SOLVED] High Load because of PVE Storage checks

    Could you provide me with such a shell script? Then I can try it on one host
  13.

    [SOLVED] High Load because of PVE Storage checks

    It is a PVE HA cluster with 7 hosts. No migrations or replications are running. But I can reproduce it completely: As soon as I enable the PBS storage in PVE it starts ... when I disable the PBS storage it stops
  14.

    syslog question

    @RobFantini Do not get me wrong: It is not the backup itself that uses up the resources ... it is just when I add the PBS to PVE as storage ... then it starts
  15.

    NFS Datastore: EINVAL: Invalid argument

    For me it was in the end working with "map to "nothing"" ... but then I ran into the next problem with ext4 and a too-low limit on the number of directories
  16.

    syslog question

    Also see https://forum.proxmox.com/threads/high-load-because-of-pve-storage-checks.73756/#post-329300 ... in fact, yes, it is ok for PBS for me (besides the log spam) ... but I have more problems with the PVE performance as soon as I activate a PBS storage there
  17.

    [SOLVED] High Load because of PVE Storage checks

    And when this runs, the PVE API itself (so NOT the PBS one!) behaves very slowly ... so it seems that this high frequency of calls is more of a problem for the PVE API than for the PBS API!! So maybe it is also more a "backup integration in PVE" issue than a pure PBS issue. The PBS API was behaving...
  18.

    [SOLVED] High Load because of PVE Storage checks

    PS: I also checked the syslog: > Jul 30 08:31:16 pbs proxmox-backup-api[612]: successful auth for user 'backup@pbs' comes 14 times per second ... so basically 2 per second from each PVE host ...
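The 14 auths per second quoted above line up with 2 calls per second from each of the 7 PVE hosts. A small sketch of how one might confirm such a rate from a syslog excerpt; the sample lines here are illustrative, not taken from the actual logs:

```python
from collections import Counter

# Illustrative syslog excerpt: syslog timestamp prefix, then the auth message.
sample = """\
Jul 30 08:31:16 pbs proxmox-backup-api[612]: successful auth for user 'backup@pbs'
Jul 30 08:31:16 pbs proxmox-backup-api[612]: successful auth for user 'backup@pbs'
Jul 30 08:31:17 pbs proxmox-backup-api[612]: successful auth for user 'backup@pbs'
"""

def auths_per_second(log_text):
    """Count 'successful auth' lines per one-second timestamp bucket."""
    counts = Counter()
    for line in log_text.splitlines():
        if "successful auth" in line:
            # The first three whitespace-separated fields are the syslog timestamp.
            timestamp = " ".join(line.split()[:3])
            counts[timestamp] += 1
    return counts

print(auths_per_second(sample))
# Counter({'Jul 30 08:31:16': 2, 'Jul 30 08:31:17': 1})
```

On a real system the same counting could be run over `/var/log/syslog` to see whether the per-second rate scales with the number of hosts that have the PBS storage enabled.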