Search results

  1. Single SAS Port Passthrough (Dual Port HBA)

    What is the cause if one card is in multiple IOMMU groups and another is only in one? Proxmox / board BIOS/EFI / driver?
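As a hedged note on item 1: IOMMU groups are formed by the kernel from the board's PCIe topology and ACS capabilities, so BIOS/EFI settings and slot wiring usually matter more than Proxmox itself. A minimal sketch for inspecting the groups on the host (standard sysfs paths, no Proxmox-specific tooling):

```shell
# List which PCI devices share an IOMMU group on this host.
# Requires IOMMU enabled in firmware and on the kernel command line
# (intel_iommu=on or amd_iommu=on); otherwise the listing is empty.
found=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  [ -e "$dev" ] || continue
  group=$(basename "$(dirname "$(dirname "$dev")")")
  printf 'group %s: %s\n' "$group" "$(basename "$dev")"
  found=1
done
[ "$found" -eq 1 ] || echo "no IOMMU groups found (IOMMU disabled?)"
```

Devices that appear in the same group can only be passed through together, which is why a dual-port HBA reported as a single PCI device (or grouped with its sibling) cannot be split across two VMs.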
  2. Single SAS Port Passthrough (Dual Port HBA)

    For the sake of completeness: I wasn't able to pass the HBA mentioned above through to two different VMs. I solved it by attaching another HBA (HP) that was lying around to the host ;-)
  3. Garbage Collection on synced targets necessary if "remove vanished" is checked? (EOM)

    One last (?) question: in this case I never have to run a prune job on the target, because the pruned snapshots are "deleted" by the sync job (or rather by the GC afterwards on the target)...
  4. Garbage Collection on synced targets necessary if "remove vanished" is checked? (EOM)

    OK, this is what I thought... but if I prune snapshots on the source (without GC) BEFORE the sync runs, they aren't synced to the target, right?
  5. Garbage Collection on synced targets necessary if "remove vanished" is checked? (EOM)

    As I understand it, there's no 1:1 sync to a remote target (without processing the data afterwards) as known from "normal" backups (sync new files and delete old ones)? So a sync copies the content (chunks) of (new) snapshots, and without a garbage collection on the remote it runs out of memory at a...
  6. Visibility! Feature request for backup reports (or more details in email reports)

    But I have to take a deep dive into the logs to see this... it should be visible in the dashboard. ... Like in other backup programs... oops... did I say that out loud? :p
  7. Visibility! Feature request for backup reports (or more details in email reports)

    What I can't see (neither in PBS nor in the email reports) is detailed information on which VM has backed up how many gigabytes per day / week / month... Is this planned for future versions?
  8. Long time span with no update in log while gc is running...

    Is it normal that there are huge time spans where no update seems to happen in the log file (last three lines):

        2024-10-25T08:21:40+02:00: starting garbage collection on store remote_backupstorage_nas07
        2024-10-25T08:21:40+02:00: Start GC phase1 (mark used chunks)
        2024-10-25T08:23:12+02:00: marked...
  9. Adding NFS Share as Datastore in Proxmox Backup Server

    Configuration of the sync job looks like this: https://forum.proxmox.com/threads/push-sync-job-from-pbs-to-nas-nfs.156464/
  10. Adding NFS Share as Datastore in Proxmox Backup Server

    I've added an NFS share as sync target (so no direct backup). It's running well so far. Garbage collection is slow, of course. Just mount the share via fstab and go with it...
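Item 10 mentions mounting the share via fstab before using it as a datastore. A minimal sketch of such an entry, with the NAS hostname, export path, and mount point as placeholders (not taken from the posts):

```
nas07:/export/pbs  /mnt/nas-backup  nfs  defaults,_netdev  0  0
```

The `_netdev` option defers mounting until the network is up; the datastore can then be created on top of the mounted path.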
  11. Push sync job from PBS to NAS (nfs)

    My backup plan syncs the backups on the local storage of the PBS server to a remote NFS share on a NAS. If I set up a sync job for this, I think this scenario isn't envisaged by PBS, as I can only do this by turning the remote storage (NAS) into local storage by mounting the NFS share on the PBS. So...
  12. Deactivate Sync Job

    Is it possible to add a checkbox to deactivate a scheduled sync job, like the one already available for prune jobs? It would make testing easier (or emergency tasks ;-) ). Thanks in advance...
  13. Single SAS Port Passthrough (Dual Port HBA)

    That's in fact the status quo. I have already passed the whole controller through to a VM in my current setup. But after partitioning the tape library (into 2 partitions), I want to pass each partition to a different VM...
  14. Single SAS Port Passthrough (Dual Port HBA)

    Thank you for the reply. There is only one device reported (19:00.0), but as I saw in the meantime it's an eight-port controller (with two external connectors). I want to attach a dual-partition tape library, connected to the server with two SAS cables, to two different VMs (no disks).