Search results

  1. Adding NFS Share as Datastore in Proxmox Backup Server

    Configuration of the sync job looks like this: https://forum.proxmox.com/threads/push-sync-job-from-pbs-to-nas-nfs.156464/
  2. Adding NFS Share as Datastore in Proxmox Backup Server

    I've added an NFS share as a sync target (so no direct backups). It's running well so far. Garbage collection is slow, of course. Just mount the share via fstab and go with it... [see the fstab sketch after this list]
  3. Push sync job from PBS to NAS (nfs)

    My backup plan syncs the backups on the local storage of the PBS server to a remote NFS share on a NAS. If I set up a sync job for this, I think this scenario isn't envisaged by PBS, as I can only do it if I turn the remote storage (the NAS) into local storage by mounting the NFS share on PBS. So...
  4. Deactivate Sync Job

    Is it possible to add a checkbox to deactivate a scheduled sync job, like the one already available for prune jobs? It would make testing easier (or emergency tasks ;-) ). Thanks in advance...
  5. Single SAS Port Passthrough (Dual Port HBA)

    That's in fact the status quo. I have already passed the whole controller through to a VM in my current setup. But after partitioning the tape library (into 2 partitions) I want to pass each partition to a different VM...
  6. Single SAS Port Passthrough (Dual Port HBA)

    Thank you for the reply. There is only one device reported (19:00.0), but as I saw in the meantime it's an eight-port controller (with two external connectors). I want to attach a dual-partition tape library, connected to the server with two SAS cables, to two different VMs (no disks).
  7. Single SAS Port Passthrough (Dual Port HBA)

    Hello guys. Is it possible to pass through the ports of a dual-port SAS HBA to two different VMs? root@prox11:~# lspci -s 19:00.0 -v 19:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02) Subsystem: Broadcom / LSI SAS9300-8e Flags: bus...
  8. Auto add new VM to HA resource

    Any update on this? Maybe an option in the creation wizard for a new VM (and in the recovery)?
  9. Two separated full meshs for cluster / corosync in a 3-node-cluster

    I solved the problem by changing the order of the lines. Not OK: "source /etc/network/interfaces.d/*" followed by "post-up /usr/bin/systemctl restart frr.service". OK: "post-up /usr/bin/systemctl restart frr.service" followed by "source /etc/network/interfaces.d/*". P.S. I didn't add the line "source ..."... [see the interfaces ordering sketch after this list]
  10. Two separated full meshs for cluster / corosync in a 3-node-cluster

    When I fire up "ifreload -a" in the shell I get the same error as mentioned above (nothing more). But when I execute "/usr/bin/systemctl restart frr.service", everything seems to be OK. Didn't you add the line in your config?
  11. Two separated full meshs for cluster / corosync in a 3-node-cluster

    I reverted the "lo1 thing"; that could not be the problem. As mentioned in the manual, you have to add the line "post-up /usr/bin/systemctl restart frr.service" to /etc/network/interfaces to reload the service after config changes in the GUI. And this throws an error ("ifreload -a" is...
  12. Two separated full meshs for cluster / corosync in a 3-node-cluster

    By the way: can someone tell me which traffic goes through which connection in a cluster? Through which network does the traffic of the (built-in) backup / corosync / cluster (same as corosync?) / migration go out of the box? Is there a useful network diagram of a Proxmox cluster with Ceph?
  13. Two separated full meshs for cluster / corosync in a 3-node-cluster

    @admartinator Did you read my question above?
  14. Two separated full meshs for cluster / corosync in a 3-node-cluster

    @alexskysilk I have 8 interfaces per node (2x 25G / 2x 10G / 2x 1G / 2x 1G) and I want to avoid using a switch for Ceph and cluster/corosync, as that reduces the points of failure (and there is no need for an external connection). So I want 2 separate frr routers, for Ceph (25G) and...
  15. Two separated full meshs for cluster / corosync in a 3-node-cluster

    I've tested every possible variation but I can't get it to work...
  16. Two separated full meshs for cluster / corosync in a 3-node-cluster

    Maybe we can find a solution together :) I've added a second configuration (openfabric) to the nodes. Now it looks like this (node1): root@prox01:~# cat /etc/frr/frr.conf # default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in # /var/log/frr/frr.log # # Note: # FRR's...
  17. Two separated full meshs for cluster / corosync in a 3-node-cluster

    I have no clue how to modify the config file I posted above to create a second (separate) fabric, e.g. for IP 10.10.12.101/32... [see the frr.conf sketch after this list]
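
Results 1-3 come down to mounting the NFS export on the PBS host and pointing a datastore at the mount point. A minimal sketch of what that could look like; the NAS address, export path, mount point and datastore name are placeholders, not values from the threads:

    # /etc/fstab on the PBS host -- NAS address and export path are placeholders
    192.168.1.50:/export/pbs-sync  /mnt/nas-pbs  nfs  rw,hard,noatime,vers=4.1  0  0

    # mount it, then create a datastore on top of the mount point
    mount /mnt/nas-pbs
    proxmox-backup-manager datastore create nas-sync /mnt/nas-pbs

The sync job that copies from the local datastore to this NFS-backed one is what the thread linked in result 1 describes.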
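
Result 9 reports that the fix was purely the order of two lines in /etc/network/interfaces: the frr restart hook has to come before the source directive. Sketched out, with the interface stanzas omitted and only the relative order of the last two lines taken from the thread:

    # /etc/network/interfaces -- ordering reported as working in result 9
    auto lo
    iface lo inet loopback

    # ... physical / bridge stanzas ...

    post-up /usr/bin/systemctl restart frr.service
    source /etc/network/interfaces.d/*

Whether the post-up hook should instead sit inside an interface stanza is not settled by the snippets above; the poster's point is only the ordering relative to the source line.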
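
Result 17 asks how to extend the frr.conf from result 16 with a second, separate fabric (e.g. for 10.10.12.101/32). One way this is commonly laid out with FRR's fabricd is a second router openfabric instance with its own NET, with each mesh interface assigned to exactly one instance. The interface names, the dummy interface carrying the second /32 and the NET values below are assumptions for illustration, not the poster's configuration, and the sketch is untested:

    # /etc/frr/frr.conf -- sketch only, two openfabric instances on one node
    # fabric 1: Ceph full mesh (25G links), /32 on lo
    interface lo
     ip router openfabric 1
     openfabric passive
    !
    # fabric 2: cluster/corosync full mesh (10G links); a dummy interface is
    # assumed to carry the second /32 (e.g. 10.10.12.101/32)
    interface dummy0
     ip router openfabric 2
     openfabric passive
    !
    interface ens19
     ip router openfabric 1
    !
    interface ens20
     ip router openfabric 1
    !
    interface ens21
     ip router openfabric 2
    !
    interface ens22
     ip router openfabric 2
    !
    router openfabric 1
     net 49.0001.1111.1111.1111.00
    !
    router openfabric 2
     net 49.0002.1111.1111.1111.00
    !

Whether two fabricd instances behave well side by side on a PVE node is exactly what the thread leaves open, so treat this as a starting point rather than a confirmed answer.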