Hello, this is my post regarding this subject: https://forum.proxmox.com/threads/backup-speed-vs-restore-speed.106873/#post-466581
My general conclusion was:
- backup speed is very satisfactory: ~90% saturation of the 1G network, which is considered expected ... no matter whether backing up 1 VM or...
Thanks, sounds promising ... would a point-to-point port-forward be enough? If yes, which ports should be opened between those two PMGs, or is a site-to-site VPN necessary?
Thanks
BR
T
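For what it's worth, this is the port list I would start from for a port-forward-only setup, assuming a standard PMG cluster sync configuration. The address and the exact port list below are my assumptions, not confirmed in this thread, so verify them against the PMG documentation first:

```shell
# Sketch only: ports I would expect to need between the two PMG nodes.
#   TCP 22   - SSH tunnel used for PMG cluster synchronization
#   TCP 8006 - PMG API / web GUI, also used by cluster sync
#   TCP 25   - SMTP, if mail is relayed between the nodes
# Build the forward rules as text first, review, then apply
# (PMG_GW1 is a hypothetical internal address):
PMG_GW1=192.0.2.10
RULES=""
for p in 22 8006 25; do
    RULES="${RULES}iptables -A FORWARD -p tcp -d $PMG_GW1 --dport $p -j ACCEPT
"
done
printf '%s' "$RULES"   # review, then pipe to sh on the router to apply
```

Generating the rules as text before applying them makes it easy to sanity-check the port list from a remote session before touching a firewall you are connected through.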
Hello, I did a little test today: I made one VM, and this is its "df -h" output:
[root@s01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs...
Hi, I'm not 100% sure that my post belongs to this subject, but this would be my scenario:
I'm planning to use PMG as the incoming point for 10 clients of mine, and I'm planning to host it at one ISP (as a VM). So incoming mail for 10 domains will come to this pmg-gw1 and it will forward it to...
Hello to all,
after today's upgrade from 7.2-11 to 7.2.14, LXCs won't boot:
run_buffer: 321 Script exited with status 2
lxc_init: 847 Failed to run lxc.hook.pre-start for container "109"
__lxc_start: 2008 Failed to initialize container "109"
TASK ERROR: startup for container '109' failed...
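When a container fails in lxc.hook.pre-start like this, a foreground start with debug logging usually reveals the real cause. A sketch of how I would debug it; container ID 109 comes from the error above, the log path is just an example:

```shell
# Start container 109 in the foreground with verbose LXC logging:
lxc-start -n 109 -F -l DEBUG -o /tmp/lxc-109.log

# Inspect the log for the actual pre-start failure:
grep -i error /tmp/lxc-109.log

# pct can also collect a debug log for you:
pct start 109 --debug
```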
That solution solved my issue too, so thank you!
I'd like to add the following:
My home-lab setup consists of PVE & PBS on the same Proxmox host. PBS has 2 datastores: "pbs-local" as a local one (a local ZFS dataset) and "pbs-nfs" as an NFS share on a Synology NAS. So VMs from the local zfs-vm dataset are...
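For anyone reproducing a similar layout, this is roughly how the two datastores could be created. The paths and the NFS export are hypothetical; only the datastore names come from my setup:

```shell
# Local datastore on a ZFS dataset (path is an example):
proxmox-backup-manager datastore create pbs-local /zfs-pool/pbs-local

# NFS share from the Synology NAS, mounted on the PBS host first
# (export path is an example):
mount -t nfs synology:/volume1/pbs /mnt/pbs-nfs
proxmox-backup-manager datastore create pbs-nfs /mnt/pbs-nfs
```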
Hi to all,
today I did a 3-node cluster upgrade from 7.1.x to 7.2.3 (with community subscriptions) and the process broke on every node. After apt update / dist-upgrade (which obviously finished with errors) I rebooted the host, but then the Open vSwitch network went down and I had to switch back to a Linux bridge...
One more thing ... backup speed was not an issue at all: PBS was receiving data at 950 Mb/s when backing up 1 VM ... but restoring the same one ran at only 350 Mb/s :(
Regarding all of the above, we can say that PBS is "faster at writing than at reading" ... which is not that common ...
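To separate network effects from datastore I/O when chasing slow restores, the built-in benchmark is a good first step. A sketch; the repository string and the datastore path are placeholders, not from my setup:

```shell
# Measure TLS upload throughput and chunk compress/verify speed
# against the PBS host (repository string is a placeholder):
proxmox-backup-client benchmark --repository backup@pbs@pbs-host:pbs-local

# Restores are largely random reads of ~4M chunks on the datastore,
# so raw random-read speed matters too (path is a placeholder):
fio --name=randread --rw=randread --bs=4M --size=1G \
    --directory=/mnt/datastore/pbs-local --direct=1
```

If the benchmark numbers are high but restores stay slow, the bottleneck is more likely the datastore's random-read performance than the network.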
There is one more thing to point out (after further "combination" testing):
my cluster consists of two powerful hosts (pve1 & pve2), and the "little" 3rd quorum node (pve3) is a SUPERMICRO A2 C3558 Atom with 4 x 4TB SATA 7200 rpm WD Red Pro drives in zraid10.
This quorum hardware concurrently runs...
Now I tested with a hw-raid volume (LSI 3108, 1 volume -> ZFS raid0) and the result is the same as with 4 x 1T server SSDs in zraid10.
But we are always talking about one target: all these 4 VMs are being concurrently restored from this PBS to the same target ... So the target is capable of...
After/during the latest upgrade (with and without subscription) I had a problem with Open vSwitch interfaces:
the node was cut off from the cluster and I had to use IPMI to recover the network interfaces ... "ifup vmbr0" did the job ...
So be careful ...
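In case it helps anyone locked out the same way, these are the recovery steps I would try from the IPMI console. Only "ifup vmbr0" is confirmed from my case; the rest are standard ifupdown2 / OVS commands offered as suggestions:

```shell
# From the IPMI console, bring the OVS bridge back up:
ifup vmbr0

# Or reload everything defined in /etc/network/interfaces (ifupdown2):
ifreload -a

# If the bridge still won't come up, restart the OVS service too:
systemctl restart openvswitch-switch
systemctl restart networking
```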