Search results

  1. Visibility! Feature request for backup reports (or more details in email reports)

    But I have to take a deep dive into the logs to see this... it should be visible in the dashboard. ... Like in other backup programs... oops... did I say that out loud? :p
  2. Visibility! Feature request for backup reports (or more details in email reports)

    What I can't see (neither in pbs nor in the email reports) is detailed information on which vm has backed up how many gigs per day / week / month... Is this planned for future versions?
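
    Until such a report exists, a rough workaround is to read the per-snapshot sizes from the client CLI. A sketch, assuming a current proxmox-backup-client where the snapshot subcommand lists sizes; the repository string is a hypothetical example:

        # List snapshots (with sizes) for one datastore; repository is hypothetical
        proxmox-backup-client snapshot list --repository root@pam@pbs.example:store1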
  3. Long time span with no update in log while gc is running...

    Is it normal that there are huge time spans where no update seems to happen in the logfile (last three lines):

        2024-10-25T08:21:40+02:00: starting garbage collection on store remote_backupstorage_nas07
        2024-10-25T08:21:40+02:00: Start GC phase1 (mark used chunks)
        2024-10-25T08:23:12+02:00: marked...
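
    A hedged note: phase 1 ("mark used chunks") can go a long time between log lines, so the task log alone looks stalled; instead of waiting for output, the server CLI can report on the running collection. A sketch using the datastore name from the post:

        # Query the status of garbage collection on that datastore
        proxmox-backup-manager garbage-collection status remote_backupstorage_nas07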
  4. Adding NFS Share as Datastore in Proxmox Backup Server

    The configuration of the sync job looks like this: https://forum.proxmox.com/threads/push-sync-job-from-pbs-to-nas-nfs.156464/
  5. Adding NFS Share as Datastore in Proxmox Backup Server

    I've added an nfs share as a sync target (so no direct backup). It's running well so far. Garbage collection is slow, of course. Just mount the share via fstab and go with it...
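
    A minimal sketch of that setup; the NAS hostname, export, and mount point are hypothetical, while the datastore command follows the documented proxmox-backup-manager syntax:

        # /etc/fstab -- mount the NAS export on the PBS host (paths hypothetical)
        nas07:/export/pbs-sync  /mnt/nas07-sync  nfs  defaults,_netdev  0  0

        # register the mounted path as a regular datastore
        proxmox-backup-manager datastore create nas07-sync /mnt/nas07-sync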
  6. Push sync job from PBS to NAS (nfs)

    My backup plan syncs the backups on the local storage of the pbs server to a remote nfs share on a nas. If I set up a sync job for this, I think this scenario isn't envisaged by pbs, as I can only do this if I turn the remote storage (nas) into a local storage by mounting the nfs share on pbs. So...
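
    With the share mounted as a local datastore (as in the posts above), the "push" becomes an ordinary local-to-local sync. A sketch, assuming a PBS version that accepts sync jobs without a --remote (local sync); the job id, store names, and schedule are hypothetical:

        # Pull snapshots from the local datastore 'store1' into the NFS-backed 'nas07-sync'
        proxmox-backup-manager sync-job create nas-sync \
            --store nas07-sync \
            --remote-store store1 \
            --schedule daily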
  7. Deactivate Sync Job

    Is it possible to add a checkbox to deactivate a scheduled sync job, like the one already available for prune jobs? It would make testing easier (or emergency tasks ;-) ) Thanks in advance...
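
    In the meantime, one workaround sketch: clear the job's schedule so it never fires on its own but can still be run manually. This assumes sync-job update supports the usual --delete flag for unsetting a property, as other proxmox-backup-manager update commands do; the job id is hypothetical:

        # Unset the schedule; the job stays configured but no longer runs automatically
        proxmox-backup-manager sync-job update nas-sync --delete schedule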
  8. Single SAS Port Passthrough (Dual Port HBA)

    That's in fact the status quo. I have already passed the whole controller to a vm in my current setup. But after partitioning the tape library (2 partitions) I want to pass each partition to a different vm...
  9. Single SAS Port Passthrough (Dual Port HBA)

    Thank you for the reply. There is only one device reported (19:00.0), but as I saw in the meantime, it's an eight-port controller (with two external connectors). I want to attach a dual-partition tape library, connected to the server with two sas cables, to two different vm's (no disks).
  10. Single SAS Port Passthrough (Dual Port HBA)

    Hello guys. Is it possible to pass the ports of a dual sas hba through to two different vm's?

        root@prox11:~# lspci -s 19:00.0 -v
        19:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
                Subsystem: Broadcom / LSI SAS9300-8e
                Flags: bus...
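
    A hedged aside on why this hinges on PCI topology rather than ports: passthrough assigns whole PCI functions (by IOMMU group), so a controller that shows up only as function 19:00.0 is a single assignable unit no matter how many connectors it has. The groups can be inspected through standard sysfs, nothing Proxmox-specific assumed:

        # Print every device in each IOMMU group; one group is the smallest
        # unit that can be passed through to a vm
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            ls "$g/devices"
        done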
  11. Auto add new VM to HA resource

    Any update on this? Maybe an option in the creation wizard of a new vm (and in the recovery)?
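
    Until something like that exists, registering a fresh vm for HA stays a manual step. The Proxmox VE CLI equivalent of the requested checkbox (vmid 100 is a hypothetical example):

        # Add a newly created vm as an HA-managed resource
        ha-manager add vm:100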
  12. Two separated full meshes for cluster / corosync in a 3-node-cluster

    I solved the problem by changing the order of the commands:

    not OK:

        source /etc/network/interfaces.d/*
        post-up /usr/bin/systemctl restart frr.service

    OK:

        post-up /usr/bin/systemctl restart frr.service
        source /etc/network/interfaces.d/*

    P.S. I didn't add the line "source ..."...
  13. Two separated full meshes for cluster / corosync in a 3-node-cluster

    When I fire up "ifreload -a" in the shell I get the same error as mentioned above (nothing more). But when I execute "/usr/bin/systemctl restart frr.service" everything seems to be ok. Didn't you add the line to your config?
  14. Two separated full meshes for cluster / corosync in a 3-node-cluster

    I reverted the "lo1 thing". This could not be the problem. As mentioned in the manual, you have to add the line "post-up /usr/bin/systemctl restart frr.service" to /etc/network/interfaces to reload the service after config changes in the gui. And this throws an error ("ifreload -a" is...
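
    For context, a sketch of how those pieces could sit in /etc/network/interfaces; only the post-up line and the source line come from this thread, the loopback stanza around them is a hypothetical placement:

        auto lo
        iface lo inet loopback
            # restart FRR after networking changes so the mesh routing comes back up
            post-up /usr/bin/systemctl restart frr.service

        source /etc/network/interfaces.d/*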
  15. Two separated full meshes for cluster / corosync in a 3-node-cluster

    By the way: can someone tell me which traffic goes through which connection in a cluster? Through which network does the traffic of (builtin) backup / corosync / cluster (same as corosync?) / migration go out of the box? Is there a useful network diagram of a proxmox cluster with ceph?
