Hello, this is my post regarding this subject: https://forum.proxmox.com/threads/backup-speed-vs-restore-speed.106873/#post-466581
My general conclusion was:
- backup speed is very satisfactory: roughly 90% saturation of the 1G network, which is as expected ... no matter whether backing up 1 VM or...
thanks, sounds promising ... would a point-to-point port-forward be enough? ... if yes, which ports should be opened between those two PMGs ... or is a site-to-site VPN necessary?
Thanks
BR
T
Hello, I did a little test today: I made one VM, and this is its "df -h" output:
[root@s01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs...
Hi, I'm not 100% sure that my post belongs to this subject, but this would be my scenario:
I'm planning to deploy PMG as the incoming mail point for 10 clients of mine, and I'm planning to host it at one ISP (as a VM). So incoming mail for 10 domains will come to this pmg-gw1, and it will forward it to...
Hello to all,
after today's upgrade from 7.2-11 to 7.2-14, LXCs won't boot
run_buffer: 321 Script exited with status 2
lxc_init: 847 Failed to run lxc.hook.pre-start for container "109"
__lxc_start: 2008 Failed to initialize container "109"
TASK ERROR: startup for container '109' failed...
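When the pre-start hook is what fails, the generic TASK ERROR hides the real cause; a sketch of getting more detail (container ID 109 taken from the error above):

```shell
# Start the container in the foreground with full debug logging,
# so the failing pre-start hook prints its actual error:
lxc-start -n 109 -F -l DEBUG -o /tmp/lxc-109.log

# Or ask the Proxmox wrapper itself for debug output:
pct start 109 --debug
```

The debug log usually names the exact hook script and the reason it exited with status 2.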
That solution solved my issue too, so thank you!
I'd like to add the following:
My home-lab setup consists of PVE & PBS on the same Proxmox host. PBS has 2 datastores: "pbs-local" as the local one (a local ZFS dataset) and "pbs-nfs" as an NFS share on a Synology NAS. So VMs from the local zfs-vm dataset are...
Hi to all,
today I did a 3-node-cluster upgrade from 7.1.x to 7.2-3 (with community subscriptions) and the process broke on every node. After apt update / dist-upgrade (which obviously finished with errors) I rebooted the host, but then the Open vSwitch network went down and I had to switch back to a Linux bridge...
one more thing ... backup speed was not an issue at all ... PBS was receiving data at 950 Mbit/s when backing up 1 VM ... but restoring the same one runs at only 350 Mbit/s :(
regarding all of the above, we can say that PBS is "faster at writing than at reading" ... which is not that common ...
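One way to separate client-side limits (TLS, compression, checksums) from datastore read speed is the bundled benchmark; the repository string below is a placeholder, not from the post:

```shell
# Measures TLS upload speed to the given PBS plus local SHA-256,
# compression and AES speeds (repository name is an assumption):
proxmox-backup-client benchmark --repository root@pam@192.168.1.10:datastore1
```

If those numbers come out well above 350 Mbit/s, the restore bottleneck is more likely the datastore's random-read pattern than the client side.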
There is one more thing to point out (after further "combination" testing):
my cluster consists of two powerful hosts (pve1 & pve2), and the "little" 3rd quorum node (pve3) is a Supermicro A2 (Atom C3558) with 4 x 4TB SATA 7200 rpm WD Red Pro drives in ZFS RAID10
This quorum hardware concurrently runs...
Now I tested with a HW-RAID volume (LSI 3108; 1 volume -> ZFS RAID0) and the result is the same as with 4 x 1T server SSDs in ZFS RAID10
But we are always talking about one target: all these 4 VMs are being concurrently restored from this PBS to the same target ... So the target is capable of...
After/during the latest upgrade (with and without subscription) I had a problem with Open vSwitch interfaces:
The node was cut off from the cluster and I had to use IPMI to recover the network interfaces ... ifup vmbr0 did the job ...
So be careful ....
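For anyone hitting the same state, a minimal recovery sketch over the IPMI console (bridge name taken from the post; the rest assumes a default PVE 7 install with ifupdown2):

```shell
# Check whether the OVS service survived the upgrade:
systemctl status openvswitch-switch

# Bring the bridge back up (this is what worked in the post):
ifup vmbr0

# Or reload every interface definition with ifupdown2:
ifreload -a
```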
Hello to all
one of the PBS-backed-up VMs is a Linux mail server that stores each mail as one file (Kerio Connect by GFI)
In this case, one of the users has approx. 160,000 mail files in the #msgs directory, and while browsing that directory, this kind of time-out message pops up
Is there any chance to...
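Part of why a directory that size is painful is that a plain listing stats and sorts every one of the 160,000 entries first. A sketch of inspecting such a directory without that overhead (the demo directory below is a stand-in; the real #msgs path is not given in the post):

```shell
# Demo stand-in for the huge #msgs directory (hypothetical; in the
# real case it holds ~160,000 one-file-per-mail entries):
msgs=$(mktemp -d)
for i in $(seq 1 250); do : > "$msgs/msg$i.eml"; done

# -f disables sorting (and implies -a), so entries stream out
# instead of all being stat()ed and sorted first:
ls -f -1 "$msgs" | head -n 3

# Count the files without building the whole listing in memory:
find "$msgs" -maxdepth 1 -type f | wc -l
```

The same trick helps inside a restored or mounted snapshot, though it does not change the web UI's own listing behaviour.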
Hello to all, I just want to share my recent good experience with PBS ...
I upgraded my PVE - PBS lab setup to 7.x / 2.x
One of my VMs is TrueNAS with virtual disk images in ZFS RAID (0-stripe)
File-restore works for Windows volumes and Linux volumes, but the possibility to read ZFS-RAID volumes was...
My PBS setup consists of:
4 x 4T SATA (5400 rpm) -> ZFS RAID10, PBS + datastore dataset /rpool/datastore1
2 x SATADOM SSD 64G -> RAIDZ1 special device
Backup speed gets up to 85-95% of 1G wirespeed and this is pretty satisfactory, but restore speed is a little questionable.
When restoring just...
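For reference, a pool layout like the one described could be created roughly like this; disk names, pool name and the small-block threshold are assumptions, not taken from the post:

```shell
# 4 spinning disks as striped mirrors (ZFS "RAID10"), plus the two
# SATADOMs as a mirrored special vdev for metadata:
zpool create rpool \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  special mirror /dev/sde /dev/sdf

zfs create rpool/datastore1
# Optionally send small blocks to the SSD special vdev as well:
zfs set special_small_blocks=4K rpool/datastore1
```

Restores from 5400 rpm disks tend to be bound by random reads of the chunk store, which the special vdev only partly offsets, since it holds metadata rather than the chunks themselves.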
JonathonFS, thank you for your hints ... You're right, I'll post a new (same) question to the PBS Install & Configure forum
BTW ... My setup is 'default' and does not have any bandwidth limits ... I've just tried vzdump backup and restore to (and from) the same PBS server (made nfs export in rpool/nfs...
No, there is no aggregation or bond ... just one NIC towards PBS. So the main question would be: is there any way to reach, say, 80% of this 1G wire bandwidth when restoring just one VM at a time? When restoring 2-3 VMs concurrently to the same PVE node, the wire link gets fully saturated at 1 Gbit/s...
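To put those numbers in perspective, a quick back-of-envelope on what the two rates mean for a single restore; the 100 GiB VM size is an assumption for illustration:

```shell
# Wall-clock time to move 100 GiB at 350 Mbit/s vs 950 Mbit/s:
size_bytes=$((100 * 1024 * 1024 * 1024))
secs_at_350=$((size_bytes * 8 / 350000000))
secs_at_950=$((size_bytes * 8 / 950000000))
echo "at 350 Mbit/s: ${secs_at_350}s, at 950 Mbit/s: ${secs_at_950}s"
```

so roughly 41 minutes versus 15 minutes for the same VM, which is why single-stream restore speed matters even when concurrent restores can fill the link.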