Search results

  1. Cluster nodes fencing when performing backup

    I've also checked other nodes that failed; no corosync issues. Most show some variation of the above kernel bug.
  2. Cluster nodes fencing when performing backup

    I was wondering about being able to dump the VM with greater size than local disks, so thanks for clarifying. We had plenty of corosync woes before, mainly when it went from multicast to unicast (version 3.0 IIRC). That's when we actually implemented a redundant ring for quorum. Our links are...
  3. Cluster nodes fencing when performing backup

    Yes we do; however, we also have another (redundant) ring over 10G iSCSI (see the corosync.conf sketch after this list): LINK ID 0 addr = 172.20.10.17 status: nodeid 1: connected nodeid 3: localhost nodeid 4: connected nodeid 5...
  4. Cluster nodes fencing when performing backup

    Hello, we have started testing PBS on our 5-node cluster, which had been running very stably up to this point. Although some jobs ran fine, we often experience that a node drops out of the cluster while performing a backup to a local /var/tmp/vzdumptmpXXXXX directory (see the vzdump.conf note after this list), without warning (usually we get a fence event mail when...
  5. Fake mail prevention (using local domain as sender)

    All our mail servers only allow senders from local domains if they use auth or if they are on a trusted relay list, which means anyone actually using local domain as a sender from unknown source (no auth, not known relay) is a spammer. We use only self-hosted mailing list servers (meaning...
  6. Fake mail prevention (using local domain as sender)

    Thanks. The only legitimate hosts that are able to send as local senders from outside are already in the trusted networks (trusted relays). Will this override the rule you suggested or will the rule override trusted relays? Not all mail is being sent through PMG so it's possible that some...
  7. Fake mail prevention (using local domain as sender)

    Hello, we've recently been hit (and this has been an issue in the past) with a wave of spam using a fake FROM: header that is the same as TO:, meaning the inbound mail to be relayed is seemingly from the same domain/user as the recipient. Is there a quick/dirty setting to prevent this (see the sender-restriction sketch after this list), or the...
  8. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Well, we jumped the gun, since there was that annoying "node hangs with 100% CPU on LXC reboot" issue in the 5.x versions, which was giving us serious trouble, especially now that we're piloting Proxmox via Ansible.
  9. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Just to pipe in... we had a severe fallout on Saturday, about one week after upgrading. The corosync messaging failed (crit: cpg_send_message failed etc.), after which fencing commenced and most nodes rebooted, but even after this quorum did not form again; it fragmented, each fragment...
  10. LXC storage migration

    Thanks! Anyway, it's just what I needed; not ideal since it's offline, but very workable within declared maintenance periods.
  11. LXC storage migration

    I'm testing it right now (move_volume; see the usage sketch after this list), but I can't believe I actually didn't see it... was it a recent addition, or am I just plain blind? It helps a lot, thanks for that!
  12. LXC storage migration

    Hello, so we have had a working Proxmox cluster for a couple of years and I've been wondering ... is there any plan to implement a storage migration option for LXC containers? It really doesn't have to be fully "live", but having to manually back up and restore machines is cumbersome and...
  13. LACP bonding stopped working after 5.2 upgrade

    I have changed the trunk from LACP to L2 XOR mode, which works. In fact, everything does except 802.3ad mode. I guess there's nothing to be done right now except keep testing as new kernel versions hit the repos?
  14. LACP bonding stopped working after 5.2 upgrade

    Yeah, the drivers are the same across the latest kernel versions. The NICs work (I can see traffic over the individual NICs, but not over the bond). ip link show: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd...
  15. LACP bonding stopped working after 5.2 upgrade

    Yes, this is the config that worked before the update (reformatted as a proper stanza after this list): iface bond0 inet manual slaves enp175s0f0 enp175s0f1 bond miimon 100 bond_mode 802.3ad
  16. LACP bonding stopped working after 5.2 upgrade

    Any news on this? Is this supposed to be fixed with a kernel update?
  17. LACP bonding stopped working after 5.2 upgrade

    This is the output (btw, it's exactly the same as output from 5.13 kernel): Jun 6 09:21:08 xxxx kernel: [ 1.731906] bnx2 0000:af:00.0 enp175s0f0: renamed from eth0 Jun 6 09:21:08 xxxx kernel: [ 1.765802] bnx2 0000:af:00.1 enp175s0f1: renamed from eth1 Jun 6 09:21:08 xxxx kernel: [...
  18. LACP bonding stopped working after 5.2 upgrade

    Nope, sorry, same as before. I've installed 4.15.17-3-pve from your link above, updated grub and rebooted, but it still won't work.
  19. LACP bonding stopped working after 5.2 upgrade

    Hello, we have an issue on one of our servers after the 5.2 upgrade. All of our servers have active LACP bonding interfaces. One of them won't work with LACP after the upgrade (the others do). It is in fact the only one using an old Broadcom dual-NIC adapter: af:00.0 Ethernet controller: Broadcom Limited...
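
Results 2 and 3 mention running a second, redundant corosync ring for quorum over the 10G iSCSI network. As a rough sketch only (node names, node IDs and the 10.x address below are placeholders; only the 172.20.10.17 link comes from the output quoted in result 3), such a ring is declared per node in /etc/pve/corosync.conf, and corosync-cfgtool -s then reports one LINK ID per ring, as in result 3:

    nodelist {
      node {
        name: pve1
        nodeid: 1
        # primary cluster network (placeholder address)
        ring0_addr: 10.0.0.17
        # redundant ring over the 10G iSCSI network (link quoted in result 3)
        ring1_addr: 172.20.10.17
      }
      # one node { } block per cluster member
    }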
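
Result 4 refers to backups being staged in a local /var/tmp/vzdumptmpXXXXX directory. That path is vzdump's temporary staging area, which can be pointed at a different local disk through the tmpdir option in /etc/vzdump.conf. A minimal sketch (the path below is a placeholder, not taken from the thread):

    # /etc/vzdump.conf
    # staging directory used for the vzdumptmpXXXXX area (placeholder path)
    tmpdir: /srv/vzdump-tmp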
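
Results 5 to 7 describe the policy of rejecting mail that uses a local domain as the sender unless the client authenticated or is on a trusted relay list. The thread itself is about PMG's rule system; purely as an illustration of that policy, a rough equivalent on a plain Postfix instance could look like this (the domain and map file names are placeholders):

    # main.cf (sketch)
    smtpd_sender_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        check_sender_access hash:/etc/postfix/local_sender_access

    # /etc/postfix/local_sender_access (run "postmap /etc/postfix/local_sender_access" after editing)
    example.com    REJECT local domain not allowed from untrusted sources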
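
Results 10 to 12 discuss moving an LXC container's volume to another storage with move_volume. A minimal usage sketch, assuming container 101 and a target storage called new-storage (both placeholders); the container has to be stopped first, which matches the offline caveat in result 10:

    pct stop 101
    pct move_volume 101 rootfs new-storage   # optionally add --delete to drop the source volume after the copy
    pct start 101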
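
Result 15 quotes the working bond configuration flattened onto one line. Laid out as it would appear in /etc/network/interfaces (ifupdown syntax; the auto line is an assumption, and the flattened "bond miimon" is presumably bond_miimon):

    auto bond0
    iface bond0 inet manual
        slaves enp175s0f0 enp175s0f1
        bond_miimon 100
        bond_mode 802.3ad

The underscore spellings are the older ifupdown form; bond-slaves, bond-miimon and bond-mode are the equivalent dash spellings.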
