Recent content by schoda

  1. Swap usage on Proxmox node

    I've been running with vm.swappiness=1 for over a year now. It's doing what I wanted it to do: not using much swap on my local disks. I have enough RAM, and all my VMs are located on an all-flash SAN in case they want to use swap inside the VM.
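
    (A minimal sketch of how such a setting is usually applied and made persistent; the file name under /etc/sysctl.d/ is a common convention, not taken from the post.)

        # apply immediately (resets on reboot)
        sysctl vm.swappiness=1
        cat /proc/sys/vm/swappiness

        # persist across reboots -- the file name is just a convention
        echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
        sysctl -p /etc/sysctl.d/99-swappiness.conf
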
  2. Proxmox cluster questions

    Hello, I have an 8-node Proxmox cluster spread over two datacenters, 4 nodes on each side. We are still running Proxmox 5.4. Currently all network communication (corosync and regular traffic) goes over one switch per datacenter. I know that is not optimal and I could add a second...
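
    (For context, the cluster and corosync ring state on such a setup can be checked per node; a small sketch using standard tooling, with no cluster-specific values assumed.)

        pvecm status              # quorum state, member count, expected votes
        corosync-cfgtool -s       # status of each configured corosync ring
        corosync-quorumtool -s    # quorum details as corosync sees them
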
  3. Proxmox 5.4 multipath LVM issues

    I'm already using WWIDs and aliases in multipath.conf, but that does not help at all when all LVM commands try to access the underlying block devices (/dev/sd*) for whatever reason.

        multipath {
            wwid "3624a9370b9f225dcede6459700011430"
            alias pm-cluster01-online
        }
        multipath {...
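
    (One common way to keep LVM away from the underlying /dev/sd* paths is a global_filter in /etc/lvm/lvm.conf that only admits the multipath maps; a hedged sketch, the patterns have to match the local alias naming.)

        # /etc/lvm/lvm.conf -- sketch only, adjust patterns to the local aliases
        devices {
            # accept the multipath maps, reject every other block device
            global_filter = [ "a|^/dev/mapper/pm-cluster01-.*|", "r|.*|" ]
        }

        # afterwards, verify that pvs/vgs/lvs only report /dev/mapper/* devices
        pvs -o pv_name,vg_name
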
  4. Proxmox 5.4 multipath LVM issues

    Hi, I've tried to figure out how to make a static mapping with udev but haven't had any luck so far. Do you have documentation or a how-to for that? Thanks in advance, Daniel
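
    (In the meantime, a hypothetical sketch of what a static udev mapping keyed on the WWID can look like; the rule file name and symlink name are made up, the WWID is the one from the multipath alias shown above.)

        # /etc/udev/rules.d/99-san-names.rules -- hypothetical example
        # multipath maps carry DM_UUID="mpath-<wwid>"
        KERNEL=="dm-*", ENV{DM_UUID}=="mpath-3624a9370b9f225dcede6459700011430", SYMLINK+="san/pm-cluster01-online"

        # reload and re-trigger udev after adding the rule
        udevadm control --reload-rules && udevadm trigger
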
  5. Proxmox 5.4 multipath LVM issues

    Hi, long story short: our storage vendor verified everything they could (multipath.conf, udev rules, best practices), but they only officially support RHEL and SLES. Every time they do an update they take down one controller on the storage side, update it, and bring it back online. And...
  6. [SOLVED] Problems with Proxmox 5.4 with FC multipath and lvm backend

    Just a heads-up: for Proxmox 5.4, the multipath.conf for RHEL 6.2+ from https://support.purestorage.com/Solutions/Linux/Reference/Linux_Recommended_Settings works. For a single Pure storage array:

        defaults {
            polling_interval 10
            find_multipaths yes
        }
        devices {
            device {
                vendor...
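
    (A quick sketch of how to verify that the running multipathd actually picked up those settings; standard multipath-tools commands, nothing Pure-specific.)

        multipath -ll                 # path groups and selector per LUN
        multipathd -k'show config'    # effective merged configuration
        multipathd -k'reconfigure'    # re-read multipath.conf without a reboot
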
  7. [SOLVED] Problems with Proxmox 5.4 with FC multipath and lvm backend

    Could anyone confirm that the following multipath.conf is supported by Proxmox 5.4 (and 6.1?)? For only one Pure Storage array:

        devices {
            device {
                vendor "PURE"
                product "FlashArray"
                path_grouping_policy "multibus"
                path_selector...
  8. [SOLVED] Problems with Proxmox 5.4 with FC multipath and lvm backend

    We are still trying to understand what really happened and how the wrong PV header got there, because /dev/mapper/pm-cluster01-online is 1.5T in size and the /dev/mapper/pm-cluster01-voice volume is 5T. Something went wrong with multipath during the update of the storage. I have an...
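
    (A short sketch of how duplicate PV headers typically show up on the command line; generic LVM tooling, not the actual commands from this incident.)

        pvs -o pv_name,pv_uuid,pv_size,vg_name   # duplicate UUIDs show up side by side
        pvscan                                   # prints "Found duplicate PV" warnings
        blkid /dev/mapper/pm-cluster01-online    # LVM2_member signature and UUID on the map itself
        dmsetup ls --tree                        # which dm device sits under which mapper name
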
  9. [SOLVED] Problems with Proxmox 5.4 with FC multipath and lvm backend

    Hi, thanks for the reply. We fixed this. Somehow during the upgrade one of the PVs received a wrong header/PVID, so we had a duplicate. Luckily it was only the header of the PV. This problem is fixed now. I've never seen this before and I hope we never see it again. Fixing PV headers with dd...
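
    (For reference, the stock LVM route for rewriting a damaged PV header from the metadata backups, as opposed to raw dd; <vg>, the UUID and the device are placeholders.)

        # sketch only -- take the PV UUID from the backup file before touching anything
        vgcfgbackup                                   # make sure current metadata is backed up
        pvcreate --uuid "<old-pv-uuid>" \
                 --restorefile /etc/lvm/backup/<vg> /dev/mapper/pm-cluster01-online
        vgcfgrestore <vg>                             # put the VG metadata back on top
        vgchange -ay <vg>
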
  10. [SOLVED] Problems with Proxmox 5.4 with FC multipath and lvm backend

    Hi, yesterday the manufacturer of our SAN did a firmware upgrade which, according to them, should have been interruption-free. Sadly that was not the case. A lot of the VMs running on Proxmox ended up with read-only filesystems, which we could mostly fix by running fsck on the filesystem. We are...
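
    (The usual shape of that cleanup inside an affected guest, sketched with placeholder devices and mount points.)

        dmesg | grep -iE 'read-only|i/o error'   # confirm why the filesystem went read-only
        umount /data                             # placeholder mount point
        fsck -y /dev/vdb1                        # placeholder device; never fsck a mounted filesystem
        mount /data
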
  11. tried to add a new server to our cluster - it failed

    We plan to upgrade, and yes, I've seen that newer versions use unicast instead of multicast. However, I can't upgrade at the moment because I first need a server to migrate VMs to. The cluster has 8 nodes now and we plan to do a RAM/network upgrade before going to Proxmox 6.x. Migrating over a single...
  12. tried to add a new server to our cluster - it failed

    We found the issue. Cisco sucks balls. We got new Cisco Nexus switches and we had to configure the VLAN like that to get multicast working: Where X.X.X.X is a free IP address in the subnet of your Proxmox hosts.
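
    (The actual switch configuration is not quoted in this excerpt; a hypothetical NX-OS sketch of the usual fix, an IGMP snooping querier on the cluster VLAN. The VLAN id is made up, and X.X.X.X stands for a free IP in the Proxmox subnet as described above.)

        ! hypothetical NX-OS example -- VLAN 100 is a placeholder
        vlan configuration 100
          ip igmp snooping querier X.X.X.X
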
  13. tried to add a new server to our cluster - it failed

    Alright, then it is a problem with the LACP interface. They are already on the same network, and I can also ssh from one Proxmox host to another.
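
    (A sketch of the multicast test the Proxmox documentation suggests for this kind of join failure; host names are placeholders and the command has to run on all nodes at the same time.)

        omping -c 10000 -i 0.001 -F -q pm-01 pm-02 pm-08
        # longer run to catch IGMP snooping timeouts (about 10 minutes)
        omping -c 600 -i 1 -q pm-01 pm-02 pm-08
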
  14. tried to add a new server to our cluster - it failed

    I've reinstalled and run into the same issue. I'm deleting the LACP interface and will try without it.
  15. tried to add a new server to our cluster - it failed

    Both services have been up and running for 16 hours (since the reboot). pvecm status still looks the same. I did a "pvecm delnode pm-08" on my cluster and removed it. I tried to re-add it as it is now, but got: "trying to acquire lock... can't lock file '/var/lock/pvecm.lock' - got timeout". I'll...
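
    (For completeness, the usual shape of the remove/re-add cycle; the node name is the one from the post, the address is a placeholder.)

        # on a remaining cluster member: drop the stale node and check quorum
        pvecm delnode pm-08
        pvecm status

        # on the freshly reinstalled pm-08: join via the address of an existing member
        pvecm add <ip-of-existing-node>
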