Search results

  1. [SOLVED] Is there a benefit to switch a W2k16 Server from IDE to VirtIO

    I have never operated with multiple controllers, just the VirtIO one. You will have to connect your volume through the correct bus/device (SATA/SCSI, …) so that Windows finds its boot volume. Once you've managed that, you can go ahead and install the PV drivers. Then you will add another...
  2. [SOLVED] Module 'telegraf' has failed: str, bytes or bytearray expected, not NoneType

    Just updated my "old" Nautilus to Octopus and faced exactly the same issue.
  3. How to configure bonding active-backup without miimon

    Yay, thanks - that did exactly what I needed! I really appreciate your input on this one.
  4. How to configure bonding active-backup without miimon

    Hi, thanks - I hadn't installed ifupdown2 yet, but I have done that now. However, this issue remains even with ifupdown2 installed. What really bugged me is the fact that even a reboot won't apply this setting at all… After I issued an ifdown bond1/ifup bond1, the required config is...
  5. How to configure bonding active-backup without miimon

    Hi, I need to configure a network bond with arp_interval and arp_ip_target instead of the usual miimon on my 6.4.x PVE. I have created this config in /etc/network/interfaces: auto bond1 iface bond1 inet manual bond-slaves enp5s0f0 enp5s0f1 bond-mode active-backup...
  6. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Thanks - that was what I suspected, and after adding the Ceph PVE repo another full update did the trick. The warning regarding the clients has gone away.
  7. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    As far as I can see, my PVE/Ceph cluster pulls the Ceph packages from a special source. Is it safe to also do that on my PVE/VM nodes? I'd assume so, but better safe than sorry.
  8. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    I am running two clusters: one PVE-only cluster just for the benefit of having a Ceph cluster, so no VMs on that one, plus my actual VM cluster. I updated the Ceph one to the latest PVE/Ceph 6.4.9/14.2.20 and afterwards I updated my PVEs as well. In that process, I performed live-migrations of all guests...
  9. Slow garbage collection on PBS

    Thanks for chiming in, but in my case I am running the PBS backup store on an SSD-only Ceph storage, so read IOPS shouldn't be an issue. Before this Ceph storage became my actual PBS data store, it served as the working Ceph for my main PVE cluster, and the performance was really great.
  10. Slow garbage collection on PBS

    Okay, so… GC needs to read all chunks, and it looks like that is what it is doing. I looked back a while in the logs and found some other occurrences where GC took 4 to 5 days to complete. I also took a look at iostat, and it seems that GC is doing this strictly sequentially. Maybe, if...
  11. Slow garbage collection on PBS

    Hi, I am running a PBS on one PVE/Ceph node where all OSDs are 3.4 TiB WD REDs. This backup pool has become rather full and I wonder if this is the reason that GC runs for days. There is almost no CPU or storage I/O load on the system, but quite a number of snapshots from my PVE cluster...
  12. Backup speed limited to 1 Gbps?

    Yeah… this is strange… it looks like you've got everything in place for achieving better throughput when writing to your FreeNAS. I am kind of baffled… although it really looks like vzdump is the culprit. Have you tried backing up without compression?
  13. Backup speed limited to 1 Gbps?

    So, if it's not the network - which it clearly isn't - the issue must be somewhere in the read pipeline… Have you measured the throughput you get when reading a large file from the VM storage, piping it through gzip and piping that to /dev/null? That should give you the throughput you achieve...
  14. Backup speed limited to 1 Gbps?

    Well, despite your stating that read speeds from your VM storage are unlimited - and checking that against sparse data is really no proof - I'd suggest first benchmarking the real read performance of your VM storage. Then, as already suggested, run an iperf benchmark between your VM node and your NAS.
  15. [SOLVED] Replication doesn't speed up migration (6.2 community edition)

    Well… it seems logical, but only if you perform a non-live migration. Once the guest has been shut down, it all boils down to a delta migration and a restart of the guest on the new host. However, a live migration is only possible on shared storage. You could estimate the time for such an...
  16. PVE 6 + InfluxDB + Grafana

    That's the template I also used, and it is displaying the actual stats for input reads (not writes, as I just learned) and both network input/output. Those were not super important to me, so I didn't pay them too much attention, but I will take a look at the missing write I/Os…
  17. Proxmox + Windows Server 2019 host = BSOD

    Yeah, unfortunately it seems to be that way - bummer. So you either focus on how to speed up your WS 2019 guest setup or you abandon the idea of running WS2019 on KVM for the time being. Maybe you should start a new thread about the performance issues of WS2019 on Proxmox - worth a shot.
  18. Proxmox + Windows Server 2019 host = BSOD

    Relax… ;) Cut @alexskysilk some slack… it's not unreasonable to suggest that. However, not having experienced those errors myself, I also did some searching and found a couple of posts which deal with this dreadful KMODE_EXCEPTION_NOT_HANDLED error. To find out what actually causes this...
  19. Proxmox + Windows Server 2019 host = BSOD

    Well… the best advice I can give you is to try a new install in a different guest and see if it runs stably. If your guest crashes randomly, then it looks to me like there are some other issues with your host. We do run a couple of RDP hosts - admittedly not WS2019, but 2016 - and they all run...
  20. Proxmox + Windows Server 2019 host = BSOD

    Hmm… I am still not sure that setting the guest's CPU config to host will help you. Regarding the BSODs… does this new server have some radically different CPU? Did you switch from Intel to AMD, perhaps…? If you're getting a BSOD, there should be an error message that you can try to look up and...
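
Some of the results above describe concrete steps that are easier to follow as commands; the sketches below illustrate them, with every VM ID, storage name, interface name, path and address being a placeholder rather than a value taken from the threads.

Result 1 is about moving a Windows boot disk from IDE to VirtIO. A common sequence is to attach a small extra disk on the VirtIO bus so Windows installs the driver, then move the boot volume to that bus; a minimal sketch with the qm CLI, assuming VM ID 100, storage local-lvm and a boot volume named vm-100-disk-0:

    # attach a temporary 1 GiB VirtIO disk so the Windows guest loads the driver
    qm set 100 --virtio1 local-lvm:1

    # boot the guest, install the virtio-win storage driver, then shut it down

    # detach the temporary disk and the IDE boot disk, then reattach the
    # boot volume (placeholder name) on the VirtIO bus and boot from it
    qm set 100 --delete virtio1,ide0
    qm set 100 --virtio0 local-lvm:vm-100-disk-0 --bootdisk virtio0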
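
Result 5 quotes its /etc/network/interfaces stanza inline. Spelled out, and extended with the two ARP options the thread is about, it might look like the sketch below; the option names bond-arp-interval and bond-arp-ip-target, the 250 ms interval and the target IP are assumptions, not values taken from the thread:

    # /etc/network/interfaces - active-backup bond monitored via ARP instead of miimon
    auto bond1
    iface bond1 inet manual
        bond-slaves enp5s0f0 enp5s0f1
        bond-mode active-backup
        bond-arp-interval 250
        bond-arp-ip-target 192.0.2.1

With ifupdown2 installed, ifreload -a (or the ifdown bond1/ifup bond1 from result 4) should apply the change without a reboot.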
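
Results 6 and 7 mention pulling the Ceph packages from the Proxmox Ceph repository and running a full upgrade afterwards. A sketch for a PVE 6.x node on Debian Buster with Ceph Nautilus; adjust the release names to the cluster's actual versions:

    # add the Proxmox Ceph repository (Nautilus / Buster in this example)
    echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" \
        > /etc/apt/sources.list.d/ceph.list

    # pull the fixed Ceph client packages together with the regular PVE updates
    apt update
    apt full-upgrade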
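
Result 10 mentions watching iostat while garbage collection runs. One way to do that with the sysstat tools, showing extended per-device statistics in MB/s every five seconds:

    # watch device utilisation and read throughput while the GC task is running
    iostat -xm 5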
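
Result 12 suggests ruling out the compressor by backing up without compression. A sketch with the vzdump CLI; the guest ID 100, the storage name backup-nfs and the snapshot mode are placeholders:

    # back up guest 100 without compression to isolate compression overhead
    vzdump 100 --compress 0 --storage backup-nfs --mode snapshot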
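
Result 13 proposes reading a large file from the VM storage, piping it through gzip and discarding the output to see what the read-plus-compression pipeline can sustain. A sketch; the image path is a placeholder, and direct I/O keeps the page cache out of the measurement:

    # dd's status line shows the sustained rate of the whole read+gzip pipeline
    dd if=/var/lib/vz/images/100/vm-100-disk-0.raw bs=1M iflag=direct status=progress \
        | gzip -c > /dev/null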
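
Result 14 recommends an iperf benchmark between the VM node and the NAS to verify the raw network path. A sketch using iperf3 (the thread just says iperf); the NAS address is a placeholder:

    # on the NAS: start the server side
    iperf3 -s

    # on the PVE node: run a 30-second throughput test against the NAS
    iperf3 -c 192.0.2.10 -t 30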
