Well, before the update from PVE 7 to PVE 8, we actually ran all the backups in one job, so all 12 nodes started their backups at the same time, which never caused any load issues on our PBS. Now, we have the backups on our active nodes being started with an offset of 30 minutes to spread them out...
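Just to illustrate the idea, a staggered schedule ends up looking roughly like this in /etc/pve/vzdump.cron (node names, times and the "pbs" storage here are placeholders, not our actual setup):

0 22 * * *   root vzdump --all --node pve01 --storage pbs --mode snapshot --quiet 1
30 22 * * *  root vzdump --all --node pve02 --storage pbs --mode snapshot --quiet 1
0 23 * * *   root vzdump --all --node pve03 --storage pbs --mode snapshot --quiet 1

We simply let each node's job start 30 minutes after the previous one.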
We recently updated all of our PVE servers from 7 to 8 and updated PBS as well. We also expanded our PVE cluster of 12 nodes by another 14 nodes to perform a phase-out of our old PVE servers. We are experiencing the same issues randomly across our cluster, where guests will not be backed up due...
Well, I am no hacker, but I'd guess that e.g. broadcasts would make it to and from the bridge, which amounts to an information leak to the outside world. The bridge will expose that kind of traffic to the internet, which is never a good thing. The issue is that any traffic from the internet hits your...
Regarding the NIC setup of your OPNsense VM… from a security standpoint, it's always best to have dedicated (passthrough) ports for your guest. I would also never even think about having my WAN patched directly to a bridge, because that way all the WAN traffic hits your host directly. I don't...
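If you go the passthrough route, a minimal sketch looks like this (VMID 100 and the PCI address are assumptions; your host needs IOMMU enabled):

qm set 100 --hostpci0 0000:03:00.0

That hands the WAN port to the OPNsense guest directly, so the traffic never touches a host bridge.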
You could always create your own config file in
/etc/default/grub.d/custom.cfg
and simply put it there. Just remember to run update-grub afterwards, which will update the grub config. This way, you will be safe from any distro updates messing with the default config. Once you don't need this...
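A minimal sketch of such a drop-in (the kernel parameter here is just an example, not something you necessarily need):

# /etc/default/grub.d/custom.cfg
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT intel_iommu=on"

After saving it, update-grub picks it up and writes the combined result to /boot/grub/grub.cfg.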
I never operated with multiple controllers, just the VirtIO one. You will have to connect your volume through the correct bus/device (SATA/SCSI, …) so that Windows finds its boot volume. Once you've managed that, you can go ahead and install the PV drivers. Then you will add another...
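As a rough outline (VMID 100 and the local-lvm storage are assumptions, not taken from your setup), the usual approach is to add a small throwaway disk on the VirtIO SCSI controller so Windows loads the driver, and only then move the boot disk over:

qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi1 local-lvm:1    # temporary 1 GiB disk, just to make Windows install the VirtIO SCSI driver
# once the driver is active in the guest: shut down, detach the boot disk and re-attach it as scsi0

Treat that as a sketch of the order of operations rather than an exact recipe.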
Hi, thanks - I hadn't installed ifupdown2 yet, but I have done that now. However, the issue remains even with ifupdown2 installed. What really bugs me is that even a reboot won't apply this setting at all…
After I issued an ifdown bond1 / ifup bond1, the required config is...
Hi,
I need to configure a network bond with arp_interval and arp_ip_target instead of the usual miimon on my 6.4.x PVE. I have created this config in /etc/network/interfaces:
auto bond1
iface bond1 inet manual
bond-slaves enp5s0f0 enp5s01f
bond-mode active-backup...
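For context, this is how I'd expect a complete arp-monitored active-backup stanza to look in ifupdown syntax (the interface names and target IP here are placeholders, not my actual values):

auto bond1
iface bond1 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode active-backup
    bond-arp-interval 500
    bond-arp-ip-target 192.0.2.1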
Thanks - that was what I suspected, and after adding the Ceph PVE repo another full upgrade did the trick. The warning regarding the clients has gone away.
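In my case that boils down to something like this (assuming PVE 6.4 on Debian Buster, i.e. Ceph Nautilus; adjust the codename to your release):

# /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-nautilus buster main

followed by apt update && apt full-upgrade.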
As far as I can see, my PVE/Ceph cluster pulls the Ceph packages from a special source. Is it safe to also do that on my PVE/VM nodes? I'd assume so, but better safe than sorry.
I am running two clusters: one is PVE only for the benefit of having a Ceph cluster, so no VMs on that one, plus my actual VM cluster. I updated the Ceph one to the latest PVE/Ceph 6.4.9/14.2.20 and afterwards I updated my PVEs as well. In that process, I performed live migrations of all guests...
Thanks for chiming in, but in my case I am running the PBS backup store on an SSD-only Ceph storage, so read IOPS shouldn't be an issue. Before this Ceph storage became my actual PBS data store, it served as the working Ceph storage for my main PVE cluster and the performance was really great.
Okay, so… GC needs to read all chunks and it looks like that is exactly what it is doing. I checked a while back in the logs and found some other occurrences where GC took 4 to 5 days to complete. I also took a look at iostat and it seems that GC is doing this strictly sequentially. Maybe, if...
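For anyone who wants to watch this themselves, this is roughly what I looked at (the datastore name is a placeholder):

proxmox-backup-manager garbage-collection status backuppool
iostat -x 5    # watch the underlying disks while GC is running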
Hi,
I am running a PBS on one PVE/Ceph node where all OSDs are 3.4 TiB WD REDs. This backup pool has become rather full and I wonder if this is the reason that GC runs for days. There is almost no CPU or storage I/O load on the system, but quite a number of snapshots from my PVE cluster...
Yeah… this is strange… it looks like you've got everything in place for achieving better throughput when writing to your FreeNAS. I am kind of baffled… although it really looks like vzdump is the culprit. Have you tried backing up without compression?
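For a quick test, something along these lines should do (VMID and storage name are placeholders for your values):

vzdump 100 --compress 0 --storage freenas-nfs --mode snapshot

If that is noticeably faster, the bottleneck is the compression step rather than the network or the target.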
So, if it's not the network - which, by the looks of it, it isn't - the issue must be somewhere in the read path… Have you measured the throughput you get when reading a large file from the VM storage, piping it through gzip and piping that to /dev/null? That should give you the throughput you achieve...
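Something like this would give you a rough number (the path is a placeholder for any large, non-sparse file on your VM storage):

dd if=/path/to/large-vm-image.raw bs=1M status=progress | gzip -c > /dev/null

That roughly mimics what vzdump does with gzip compression: read plus compress, with the network taken out of the picture.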
Well, despite you stating that read speeds from your VM storage are unlimited - and checking that against sparse data is really no proof - I'd suggest first benchmarking the real read performance of your VM storage. Then, as already suggested, perform an iperf bench between your VM node and your NAS.
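For the network part, a plain iperf run between the node and the NAS is enough (the IP here is a placeholder):

iperf3 -s                  # on the NAS
iperf3 -c 192.168.1.50     # on the PVE node

If that shows full line speed, you can rule out the network and concentrate on the storage side.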
Well… it seems logical, but only if you perform a non-live migration. But once the guest has been shut down, it all boils down to a delta migration and a restart of the guest on the new host. However, a live migration is only possible on shared storage. You could estimate the time for such an...