Recent content by budy

  1. [SOLVED] All Backups Started Failing

    Well, before the update from PVE7 to PVE8, we actually ran all the backups in one job, so all 12 nodes started their backups at the same time, which never caused any load issues on our PBS. Now, we have the backups on our active nodes being started with an offset of 30 mins. to spread them out...
  2. [SOLVED] All Backups Started Failing

    We recently updated all of our PVE servers from 7 to 8 and updated PBS as well. We also expanded our PVE cluster of 12 nodes by another 14 nodes to perform a phase-out of our old PVE servers. We are experiencing the same issues randomly across our cluster, where guests will not be backed up due...
  3. New Proxmox Setup with OPNSense Advice needed

    Well, I am no hacker, but I'd guess that e.g. broadcasts would make it to and from the bridge, which poses an information leak to the outside world. The bridge will expose that kind of traffic to the internet, which is never a good thing. The issue is that any traffic from the internet hits your...
  4. New Proxmox Setup with OPNSense Advice needed

    Regarding the NIC setup of your OPNSense VM… from a security standpoint, it's always best to have dedicated (passthrough) ports to your guest. I would also never even think about having my WAN patched directly to a bridge, because that way all the WAN traffic hits your host directly. I don't...
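
    A minimal sketch of how such dedicated ports could be handed to the guest, assuming IOMMU/passthrough is already set up; the VM ID and PCI addresses below are placeholders, not taken from this post:

        # hypothetical VM ID and PCI addresses - adjust to your own hardware
        qm set 101 -hostpci0 03:00.0   # WAN port passed through to the OPNSense guest
        qm set 101 -hostpci1 03:00.1   # LAN port passed through to the OPNSense guest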
  5. No SAS2008 after upgrade

    You could always create your own config file in /etc/default/grub.d/custom.cfg and simply put it there. Just remember to run update-grub afterwards, which will update the grub config. This way, you will be safe from any distro updates messing with the default config. Once you don't need this...
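
    A small sketch of that approach; the kernel option shown is only a placeholder, not the actual fix discussed in this thread:

        # /etc/default/grub.d/custom.cfg - picked up by update-grub, survives package updates to /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT some.module.option=value"

        # then regenerate the grub config
        update-grub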
  6. [SOLVED] Is there a benefit to switch a W2k16 Server from IDE to VirtIO

    I never operated with multiple controllers, just the virtIO one. You will have to connect your volume through the correct bus/device (SATA/SCSI, …) so that Windows finds its boot volume. Once you've managed that, you can go ahead and install the PV drivers. Then you will add another...
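
    A rough sketch of that sequence with qm, assuming a hypothetical VM ID 100 and local-lvm storage:

        # switch the controller and attach a small helper disk so Windows detects the new bus
        qm set 100 -scsihw virtio-scsi-pci
        qm set 100 -scsi1 local-lvm:1
        # boot Windows, install the virtio/PV drivers, shut down,
        # then move the boot volume from IDE to the new bus and drop the helper disk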
  7. [SOLVED] Module 'telegraf' has failed: str, bytes or bytearray expected, not NoneType

    Just updated my "old" Nautilus to Octopus and faced exactly the same issue.
  8. How to configure bonding active-backup without miimon

    Yay, thanks - that did exactly the job! I really appreciate your input on this one.
  9. How to configure bonding active-backup without miimon

    Hi, thanks - I hadn't installed ifupdown2 as of yet, but I have done that now. However, the issue remains even with ifupdown2 installed. What really bugged me is the fact that even a reboot won't configure this setting at all… After I issued an ifdown bond1/ifup bond1, the required config is...
  10. How to configure bonding active-backup without miimon

    Hi, I need to configure a network bond with arp_interval and arp_ip_target instead of the usual miimon on my 6.4.x PVE. I have created this config in /etc/network/interfaces:

        auto bond1
        iface bond1 inet manual
            bond-slaves enp5s0f0 enp5s01f
            bond-mode active-backup...
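
    For reference, a complete sketch of such a stanza; whether the bond-arp-* option names are honoured depends on the ifupdown flavour in use, and the slave names and IP target here are examples only, not taken from this post:

        auto bond1
        iface bond1 inet manual
            bond-slaves enp5s0f0 enp5s0f1
            bond-mode active-backup
            bond-arp-interval 1000
            bond-arp-ip-target 192.0.2.1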
  11. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Thanks - that was what I suspected, and after adding the Ceph PVE repo another full update did the trick. The warning regarding the clients has gone away.
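
    For anyone following along, the repository in question would look roughly like this on a PVE 6.x / Ceph Octopus setup (release names are examples, adjust to your versions):

        # /etc/apt/sources.list.d/ceph.list
        deb http://download.proxmox.com/debian/ceph-octopus buster main

        # followed by the usual full update
        apt update && apt full-upgrade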
  12. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    As far as I can see, my PVE/Ceph cluster pulls the ceph packages from a special source. Is it safe to also do that on my PVE/VM nodes? I'd assume so, but better safe than sorry.
  13. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    I am running two clusters: one PVE cluster purely for the benefit of having a Ceph cluster, so no VMs on that one, plus my actual VM cluster. I updated the Ceph one to the latest PVE/Ceph 6.4.9/14.2.20 and afterwards updated my PVEs as well. In that process, I performed live-migrations of all guests...
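
    A minimal sketch of the live migrations mentioned here, with a hypothetical VM ID and target node:

        # move a running guest off the node before updating/rebooting it
        qm migrate 100 pve-node02 --online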
  14. Slow garbage collection on PBS

    Thanks for chiming in, but in my case I am running the PBS backup store on an SSD-only Ceph storage, so read IOPS shouldn't be an issue. Before this Ceph storage became my actual PBS data store, it served as the working Ceph for my main PVE cluster and the performance was really great.
  15. Slow garbage collection on PBS

    Okay, so… GC needs to read all chunks and it looks like that is what it is doing. I checked a while back in the logs and found some other occurrences where GC took 4 to 5 days to complete. I also took a look at iostat and it seems that GC is doing this strictly sequentially. Maybe, if...
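
    One way to observe that read pattern while a GC task runs, assuming the sysstat package is installed:

        # per-device utilisation and request sizes, refreshed every 5 seconds
        iostat -x 5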
