I never operated with multiple controllers, just the VirtIO one. You will have to connect your volume through the correct bus/device (SATA/SCSI, …) so that Windows finds its boot volume. Once you've managed that, you can go ahead and install the PV drivers. Then you will add another...
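If you prefer the CLI over the GUI, switching the bus basically means re-attaching the same disk under a different controller entry; a rough sketch (the VM ID, storage and disk names below are just placeholders, adjust to your setup):
# detach the disk from its current controller (it then shows up as an "unused" disk)
qm set 100 --delete virtio0
# re-attach the very same volume as a SATA device so Windows can find its boot volume
qm set 100 --sata0 local-lvm:vm-100-disk-0
# make sure the VM actually boots from that device (boot order syntax as of PVE 6.4)
qm set 100 --boot order=sata0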
Hi, thanks - I hadn't installed ifupdown2 yet, but I have done that now. However, the issue remains even with ifupdown2 installed. What really bugs me is the fact that even a reboot won't apply this setting at all….
After I issued an ifdown bond1/ifup bond1, the required config is...
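For now my workaround is to bounce the bond by hand after boot; with ifupdown2 in place, a full reload should do the same thing - that's just what I'm trying, not a confirmed fix:
# re-apply the bond configuration without touching the other interfaces
ifdown bond1 && ifup bond1
# or, with ifupdown2, reload the whole configuration in one go
ifreload -a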
Hi,
I need to configure a network bond with arp_interval and arp_ip_target instead of the usual miimon on my 6.4.x PVE. I have created this config in /etc/network/interfaces:
auto bond1
iface bond1 inet manual
bond-slaves enp5s0f0 enp5s0f1
bond-mode active-backup...
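For completeness, the full stanza I'm aiming for looks roughly like this (the ARP target address is just a placeholder for one of our gateways); whether ifupdown2 honours the bond-arp-* keywords the same way classic ifupdown/ifenslave does is exactly what I'm not sure about:
auto bond1
iface bond1 inet manual
bond-slaves enp5s0f0 enp5s0f1
bond-mode active-backup
bond-arp-interval 1000
bond-arp-ip-target 192.0.2.1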
Thanks - that was what I suspected, and after adding the Ceph PVE repo another full upgrade did the trick. The warning regarding the clients has gone away.
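In case someone else stumbles over this: the repo line I ended up adding on the VM nodes was along these lines (assuming PVE 6.x with Ceph Nautilus on Buster - adjust to your release):
# /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-nautilus buster main
# then refresh and upgrade
apt update && apt full-upgrade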
As I can see, my PVE/Ceph cluster pulls the Ceph packages from a special source. Is it safe to also do that on my PVE/VM nodes? I'd assume so, but better safe than sorry.
I am running two clusters: one is PVE only for the benefit of having a Ceph cluster, so no VMs on that one, plus my actual VM cluster. I updated the Ceph one to the latest PVE/Ceph 6.4.9/14.2.20 and afterwards I updated my PVE nodes as well. In that process, I performed live migrations of all guests...
Thanks for chiming in, but in my case I am running the PBS backup store on an SSD-only Ceph storage, so read IOPS shouldn't be an issue. Before this Ceph storage became my actual PBS data store, it served as the working Ceph for my main PVE cluster and the performance was really great.
Okay, so… GC needs to read all chunks, and it looks like that is exactly what it is doing. I checked the logs a while back and found some other occurrences where GC took 4 to 5 days to complete. I also took a look at iostat and it seems that GC reads strictly sequentially. Maybe, if...
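For anyone who wants to reproduce what I looked at: I basically just watched the datastore disks while a GC run was active, roughly like this (the datastore name is a placeholder):
# watch per-device utilisation and request sizes while GC runs
iostat -x 5
# check the state/progress of the current garbage collection
proxmox-backup-manager garbage-collection status <datastore>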
Hi,
I am running a PBS on one PVE/Ceph node where all OSDs are 3.4 TiB WD REDs. This backup pool has become rather full and I wonder if this is the reason that GC runs for days. There is almost no CPU or storage I/O load on the system, but quite a number of snapshots from my PVE cluster...
Yeah… this is strange… it looks like you've got everything in place for achieving better throughput when writing to your FreeNAS. I am kind of baffled… although it really looks like vzdump is the culprit. Have you tried backing up without compression?
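Just to rule the compressor out, a single manual run without compression should be enough for a comparison; something along these lines (VM ID and storage name are placeholders for your setup):
# one-off backup of a single guest to the NFS-backed storage, no compression
vzdump 100 --storage freenas-nfs --compress 0 --mode snapshot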
So, if it's not the network - and it clearly isn't - the issue must be somewhere in the read pipe… Have you measured the throughput you get when reading a large file from the VM storage, piping it through gzip and piping that to /dev/null? That should give you the throughput you achieve...
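What I have in mind is a quick-and-dirty pipe like this (the path is a placeholder for a large file on your VM storage):
# read a big file, compress it, throw the result away - the reported rate is
# roughly what vzdump with gzip could achieve on the read/compress side
dd if=/path/to/large-testfile bs=1M status=progress | gzip -c > /dev/null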
Well, despite you stating that read speeds from your VM storage are unlimited - and checking that against sparse data really is no proof - I'd suggest to first benchmark the real read performance of your VM storage. Then, as already suggested, run an iperf benchmark between your VM node and your NAS.
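The iperf part is quick to do; assuming iperf3 is available on both ends (the hostname is a placeholder):
# on the NAS
iperf3 -s
# on the PVE node, run for ~30 seconds against the NAS
iperf3 -c nas.example.local -t 30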
Well… it seems logical, but only if you perform a non-live migration. Once the guest has been shut down, it all boils down to a delta migration and a restart of the guest on the new host. However, a live migration is only possible on shared storage. You could estimate the time for such an...
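Purely for illustration, the two variants look something like this on the CLI (VM ID and target node name are placeholders):
# offline migration: guest is shut down, disks are transferred, guest restarts on the target
qm migrate 100 pve-node2
# live migration (shared storage): guest keeps running while it moves
qm migrate 100 pve-node2 --online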
That's the template I also used, and it is displaying the actual stats for input reads (but not writes, as I just learned) and both network input/output. Those were not super important to me, so I didn't pay them too much attention, but I will take a look at the missing write I/Os…
Yeah, unfortunately it seems to be that way - bummer. So you either focus on how to speed up your WS 2019 guest setup, or you abandon the idea of running WS 2019 on KVM for the time being. Maybe you should start a new thread about the performance issues of WS 2019 on Proxmox - worth a shot.
Relax… ;) Cut @alexskysilk some slack… it's not unreasonable to suggest that. However, not having experienced those errors myself, I also did some searching and found a couple of posts that deal with this dreadful KMODE_EXCEPTION_NOT_HANDLED error. To get to know what actually causes this...
Well… the best advice I can give you is to try a fresh install in a different guest and see if it runs stable. If your guest crashes randomly, then it looks to me like there are some other issues with your host. We do run a couple of RDP hosts - admittedly not WS 2019 but 2016 - and they all run...
Hmm… I am still not sure that setting the guest's CPU type to host will help you. Regarding the BSODs… does this new server have some radically different CPU? Did you switch from Intel to AMD, perhaps…? If you're getting a BSOD, there should be an error message that you can try to look up and...
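If you do want to give the host CPU type a try anyway, it's a one-liner while the guest is powered off (the VM ID is a placeholder):
# expose the host CPU's feature set to the guest instead of the default kvm64
qm set 100 --cpu host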