But I have to take a deep dive into the logs to see this... it should be visible in the dashboard. ...
Like in other backup programs... oops... did I say that out loud? :p
What I can't see (neither in PBS nor in the email reports) is detailed information on which VM has backed up how many gigs per day / week / month...
Is this planned for future versions?
Is it normal that there are huge time spans where no update seems to happen in the log file (last three lines):
2024-10-25T08:21:40+02:00: starting garbage collection on store remote_backupstorage_nas07
2024-10-25T08:21:40+02:00: Start GC phase1 (mark used chunks)
2024-10-25T08:23:12+02:00: marked...
I've added an NFS share as a sync target (so no direct backup). It's running well so far. Garbage collection is slow, of course.
Just mount the share via fstab and go with it...
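For illustration, such an fstab entry could look roughly like this; the NAS hostname, export path and mount point are made-up placeholders, not anything from the original post:
# /etc/fstab -- NFS share used as the PBS sync target (hostname and paths are assumptions)
nas07.example.lan:/export/pbs  /mnt/nas07-pbs  nfs  defaults,_netdev  0  0
# mount it once without rebooting
mount /mnt/nas07-pbs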
My backup plan syncs the backups on the local storage of the PBS server to a remote NFS share on a NAS.
If I set up a sync job for this, I think this scenario isn't envisaged by PBS, as I can only do it if I turn the remote storage (NAS) into a local
storage by mounting the NFS share on the PBS host.
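In other words, the workaround is to register the mounted share as a regular datastore on the PBS host and sync into that; a minimal sketch, where the datastore name and mount point are assumptions:
# create a datastore on the mounted NFS share (name and path are placeholders)
proxmox-backup-manager datastore create nas07-sync /mnt/nas07-pbs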
So...
Is it possible to add a checkbox to deactivate a scheduled sync job, like the one already available for prune jobs? It would make testing easier (or emergency tasks ;-) )
Thanks in advance...
That's in fact the status quo.
I have already passed the whole controller through to a VM in my current setup.
But after partitioning the tape library (into 2 partitions) I want to pass each partition to a different VM...
Thank you for the reply.
There is only one device reported (19:00.0), but as I have since seen, it's an eight-port controller (with two external connectors).
I want to attach a dual-partition tape library, connected to the server with two SAS cables, to two different VMs (no disks).
Hello Guys.
Is it possible to pass through the ports of a dual SAS HBA to two different VMs?
root@prox11:~# lspci -s 19:00.0 -v
19:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
Subsystem: Broadcom / LSI SAS9300-8e
Flags: bus...
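For context, passing the whole controller to one VM is done with a single hostpci entry on that VM; a minimal sketch, where the VMID 101 is an assumption. Since lspci reports only one PCI function for 19:00.0, whether its two external connectors can be split between two VMs is exactly the open question here:
# pass the entire SAS3008 controller through to VM 101 (VMID is a placeholder)
qm set 101 -hostpci0 0000:19:00.0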
I solved the problem by changing the order of the lines:
Not OK:
source /etc/network/interfaces.d/*
post-up /usr/bin/systemctl restart frr.service
OK:
post-up /usr/bin/systemctl restart frr.service
source /etc/network/interfaces.d/*
P.S. I didn't add the line "source ..."...
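A quick way to confirm that the reordering actually takes effect could look like this (a sketch; it assumes frr runs as a systemd service and that ifupdown2's ifreload is in use, as on a standard Proxmox node):
# re-apply the network config and check that frr came back up
ifreload -a
systemctl status frr --no-pager
# optionally inspect the last few frr log lines
journalctl -u frr -n 20 --no-pager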
When I fire up "ifreload -a" in the shell, I get the same error as mentioned above (nothing more).
But when I execute "/usr/bin/systemctl restart frr.service", everything seems to be OK.
Didn't you add the line in your config?
I reverted the "lo1" thing. That could not have been the problem.
As mentioned in the manual, you have to add the line
"post-up /usr/bin/systemctl restart frr.service"
to /etc/network/interfaces to reload the service after config changes in the GUI.
And this throws an error ("ifreload -a" is...
By the way:
Can someone tell me which traffic goes through which connection on a cluster?
Through which network does the traffic of the (built-in) backup / corosync / cluster (same as corosync?) / migration go out of the box?
Is there a useful network diagram of a Proxmox cluster with Ceph?