Recent content by budy

  1. [SOLVED] Is there a benefit to switch a W2k16 Server from IDE to VirtIO

    I have never worked with multiple controllers, just the VirtIO one. You will have to attach your volume on the correct bus/device (SATA/SCSI, …) so that Windows can find its boot volume. Once you've managed that, you can go ahead and install the PV drivers (a rough command-line sketch follows after this list). Then you will add another...
  2. Module 'telegraf' has failed: str, bytes or bytearray expected, not NoneType

    Just updated my "old" Nautilus to Octopus and faced exactly the same issue.
  3. How to configure bonding active-backup without miimon

    Yay, thanks - that did the trick! I really appreciate your input on this one.
  4. How to configure bonding active-backup without miimon

    Hi, thanks - I hadn't installed ifupdown2 yet, but I have done that now. However, the issue remains even with ifupdown2 installed. What really bugged me is the fact that even a reboot won't apply this setting at all… After I issued an ifdown bond1/ifup bond1, the required config is...
  5. How to configure bonding active-backup without miimon

    Hi, I need to configure a network bond with arp_interval and arp_ip_target instead of the usual miimon on my 6.4.x PVE. I have created this config in /etc/network/interfaces: auto bond1 iface bond1 inet manual bond-slaves enp5s0f0 enp5s0f1 bond-mode active-backup... (a fuller config sketch follows after this list)
  6. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Thanks - that was what I suspected, and after adding the Ceph PVE repo another full update did the trick. The warning regarding the clients has gone away.
  7. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    As far as I can see, my PVE/Ceph cluster pulls the Ceph packages from a special source. Is it safe to also do that on my PVE/VM nodes? I'd assume so, but better safe than sorry. (A sketch of such a repo entry follows after this list.)
  8. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    I am running two clusters: one PVE cluster solely for the benefit of having a Ceph cluster, so no VMs on that one, plus my actual VM cluster. I updated the Ceph one to the latest PVE/Ceph 6.4.9/14.2.20, and afterwards I updated my PVE nodes as well. In that process, I performed live migrations of all guests...
  9. Slow garbage collection on PBS

    Thanks for chiming in, but in my case I am running the PBS backup store on an SSD-only Ceph storage, so read IOPS shouldn't be an issue. Before this Ceph storage became my actual PBS data store, it served as the working Ceph for my main PVE cluster, and the performance was really great.
  10. Slow garbage collection on PBS

    Okay, so… GC needs to read all chunks, and it looks like that is what it is doing. I checked a while back in the logs and found some other occurrences where GC took 4 to 5 days to complete. I also took a look at iostat, and it seems that GC is doing this strictly sequentially. Maybe, if...
  11. Slow garbage collection on PBS

    Hi, I am running a PBS on one PVE/Ceph node where all OSDs are 3.4 TiB WD REDs. This backup pool has become rather full, and I wonder if this is the reason that GC runs for days. There is almost no CPU or storage I/O load on the system, but quite a number of snapshots from my PVE cluster...
  12. Backup speed limited to 1 Gbps?

    Yeah… this is strange… it looks like you've got everything in place for achieving better throughput when writing to your FreeNAS. I am kind of baffled… although it really looks like vzdump is the culprit. Have you tried backing up without compression? (A vzdump sketch follows after this list.)
  13. Backup speed limited to 1 Gbps?

    So, if it's not the network - which it clearly isn't - the issue must be somewhere in the read path… Have you measured the throughput you get when reading a large file from the VM storage, piping it through gzip and sending that to /dev/null? That should give you the throughput you achieve... (A command sketch for this follows after this list.)
  14. Backup speed limited to 1 Gbps?

    Well, even though you state that read speeds from your VM storage are unlimited - and checking that against sparse data really proves nothing - I'd suggest first benchmarking the real read performance of your VM storage. Then, as already suggested, run an iperf benchmark between your VM node and your NAS. (Sketches for both follow after this list.)
  15. [SOLVED] Replication doesn't speed up migration (6.2 community edition)

    Well… it seems logical, but only if you perform a non-live migration. But once the guest has been shut down, it all boils down to a delta migration and a restart of the guest on the new host. However, a live migration is only possible on shared storage. You could estimate the time for such an...
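
A rough sketch of the IDE-to-VirtIO switch discussed in item 1, done on the PVE command line. The VMID 101, the storage name local-lvm and the volume name vm-101-disk-0 are placeholders, and the exact boot-order syntax can differ between PVE versions:

    # 1) Add a small temporary disk on the VirtIO SCSI controller so Windows can load the driver for it
    qm set 101 --scsihw virtio-scsi-pci --scsi1 local-lvm:1
    # 2) Inside the guest, install the VirtIO/SCSI drivers, then shut the VM down
    # 3) Detach the boot disk from the IDE bus (it becomes an "unused" disk) and re-attach it as scsi0
    qm set 101 --delete ide0
    qm set 101 --scsi0 local-lvm:vm-101-disk-0
    # 4) Point the boot order at the new bus and start the VM again
    # (the temporary scsi1 disk can be detached again afterwards)
    qm set 101 --boot order=scsi0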
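
For the bond question in item 5, one possible /etc/network/interfaces stanza that uses ARP monitoring instead of miimon. The interface names, the 1000 ms interval and the target IP 192.168.1.1 are placeholders; writing the values via sysfs in post-up hooks is just one way to set them if your ifupdown flavour has no native bond-arp-* options:

    auto bond1
    iface bond1 inet manual
        bond-slaves enp5s0f0 enp5s0f1
        bond-mode active-backup
        bond-miimon 0
        # enable ARP monitoring: interval in ms, "+" adds a target IP
        post-up echo 1000 > /sys/class/net/bond1/bonding/arp_interval
        post-up echo +192.168.1.1 > /sys/class/net/bond1/bonding/arp_ip_target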
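
Regarding the "special source" in item 7: on a PVE 6.x node the Proxmox-provided Ceph repository is normally a single apt source entry along these lines (shown for Nautilus on Buster; use ceph-octopus after the upgrade). Compare this against what your PVE/Ceph nodes actually have before copying it:

    # /etc/apt/sources.list.d/ceph.list
    deb http://download.proxmox.com/debian/ceph-nautilus buster main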
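
The no-compression test suggested in item 12 can be run by hand with vzdump; the VMID 101, the target storage name and the backup mode are placeholders to adjust:

    # back up a single guest without compression to rule the compressor out as the bottleneck
    vzdump 101 --compress 0 --storage freenas-nfs --mode snapshot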
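
The read-pipe measurement proposed in item 13 could look like the following; the path should point to a large, non-sparse file on the VM storage:

    # read a large file, compress it, and throw the result away; dd reports the achieved throughput
    dd if=/mnt/pve/vmstore/big-test-file bs=1M status=progress | gzip -c > /dev/null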
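
For the two benchmarks in item 14, a possible sketch using fio and iperf3 (both need to be installed separately); the test file path, size and the NAS address 192.168.1.50 are placeholders:

    # sequential read performance of the VM storage, bypassing the page cache
    fio --name=readtest --filename=/mnt/pve/vmstore/fio-test-file --rw=read --bs=1M --size=8G --direct=1

    # network throughput: run "iperf3 -s" on the NAS, then from the PVE node:
    iperf3 -c 192.168.1.50 -t 30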
