Search results

  1. stefws

    Disable LVM/VGs on iSCSI PVs during dist-upgrades

    Not sure how to turn on more verbose/debug output in those scripts, sh -x? The phase that takes a long time is when it finds and lists each entry to go into the grub menu: first a pause, then it lists the image for kernel A, another pause, then the next image for kernel B... So whatever it's doing to find...
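
    One possible way to get that trace (a sketch, assuming the stock Debian layout in which update-grub wraps grub-mkconfig, itself a plain shell script): run it under sh -x and note which /etc/grub.d/ helper is executing when the long pauses occur.

        # Run as root; the trace goes to stderr, the generated config to stdout,
        # and nothing is installed into /boot.
        sh -x /usr/sbin/grub-mkconfig > /tmp/grub.cfg.test 2> /tmp/grub-mkconfig.trace
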
  2. stefws

    Disable LVM/VGs on iSCSI PVs during dist-upgrades

    We've got dual-pathed iSCSI PV devices. Always wondered how PVE handles shared LVM; I initially thought it would use CLVM or HA-LVM with shared VGs, but it seems not to, and instead uses something more like normal VGs, then [de]activating LVs on the HN node that runs the VM, right? Our storage.cfg looks like this: To me...
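
    The post's actual storage.cfg is cut off in this snippet. For context, a typical /etc/pve/storage.cfg layout for a shared VG on an iSCSI LUN looks roughly like the sketch below; the storage IDs, portal, target and VG name are placeholders, not the poster's values.

        iscsi: san0
            portal 192.0.2.10
            target iqn.2001-04.com.example:storage.lun0
            content none

        lvm: vm-lvm
            vgname vg_san0
            shared 1
            content images
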
  3. stefws

    Disable LVM/VGs on iSCSI PVs during dist-upgrades

    I admit I'm not familiar with the inner scripts triggered behind grub updating. Also, I'd rather not risk trashing a node in order to debug what's taking too long ;) What do others using HA & shared LVM to hold VM LVs do to avoid the SW watchdog firing, and/or are others seeing long times...
  4. stefws

    Disable LVM/VGs on iSCSI PVs during dist-upgrades

    It's whenever it's running 'updating grub'... slowly finding new boot entries, like the newly installed kernel, the previous kernel, etc. It just takes too long (+60 sec, up to minutes), but if we export the VGs and log out of iSCSI, it's all swift (less than 60 sec) and finishes before an NMI gets fired.
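
    A sketch of the workaround described here (the VG name is a placeholder; only safe while no VM on this node still uses LVs from that VG):

        vgchange -an vg_san0        # deactivate all LVs in the shared VG
        vgexport vg_san0            # mark the VG as exported
        iscsiadm -m node -U all     # log out of all iSCSI sessions
        apt-get dist-upgrade        # update-grub now only scans local disks
        iscsiadm -m node -L all     # log back in
        vgimport vg_san0
        vgchange -ay vg_san0
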
  5. stefws

    Disable LVM/VGs on iSCSI PVs during dist-upgrades

    Yes, it is set to true, and there is also no execute bit on /etc/grub.d/30_os-prober
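
    The two settings referred to here are the standard Debian ways of keeping os-prober out of update-grub; roughly:

        # in /etc/default/grub:
        GRUB_DISABLE_OS_PROBER=true
        # and in addition:
        chmod -x /etc/grub.d/30_os-prober   # grub-mkconfig skips non-executable scripts
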
  6. stefws

    Disable LVM/VGs on iSCSI PVs during dist-upgrades

    Whenever we need to upgrade the pve-kernel in our PVE 4.4 HA cluster, we find the grub update to be very slow (it seems to be looking for other boot images on all known devices). In fact, it's so slow that the HA SW watchdog sometimes fires an NMI; depending on at what stage this happens, it sometimes...
  7. stefws

    Patching PVE 4.3 on one node made whole cluster reboot

    Turned out the other HN nodes all got a SW watchdog NMI, as openvswitch patching somehow also breaks the OVS network on the other nodes for too long (+60 sec); one of our dual corosync rings runs via the OVS network, and at the same time we used active rrp_mode (corosync.conf: rrp_mode: active)...
  8. stefws

    live migration between 4.3.71 + 4.3-72

    Reverting to secure migration worked, so I commented this out in /etc/pve/datacenter.cfg: #migration_unsecure: 1 (or set it to '0'). After all nodes got to 4.3-73 w/qemu-server 4.0-96, unsecure migrations seem to work again :)
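
    For reference, the two states of /etc/pve/datacenter.cfg this describes. While the cluster ran mixed versions, the line is commented out, which falls back to secure (SSH-tunnelled) migration:

        #migration_unsecure: 1

    and once all nodes have reached 4.3-73 w/qemu-server 4.0-96 it can be re-enabled:

        migration_unsecure: 1
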
  9. stefws

    Debian minor issue?

    ;) nope, sorry, I believe it's a local typo from way back... just forget about this
  10. stefws

    Debian minor issue?

    latest enterprise patches result in: Due to: Previous version has:
  11. stefws

    Patching PVE 4.3 on one node made whole cluster reboot

    What more logs, beyond those given initially, would you prefer to see? We've set up two corosync rings to try to avoid losing quorum when one network fails: ring 0 through OVS on bonded 10 Gb/s and ring 1 through bonded 1 Gb/s.
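
    For context, a corosync.conf totem fragment for a dual-ring setup like the one described might look roughly like this; the bind addresses are placeholders, not the actual config, and rrp_mode: active is what the related thread says was in use at the time.

        totem {
          version: 2
          rrp_mode: active
          # ring 0: OVS on the bonded 10 Gb/s network
          interface {
            ringnumber: 0
            bindnetaddr: 192.0.2.0
          }
          # ring 1: the bonded 1 Gb/s network
          interface {
            ringnumber: 1
            bindnetaddr: 198.51.100.0
          }
        }
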
  12. stefws

    Patching PVE 4.3 on one node made whole cluster reboot

    Any clues as to why all our other hypervisor nodes rebooted at the same time (a rather critical thing to happen :() while we were patching one node (which got a watchdog NMI and thus rebooted during unpacking of pve-cluster, probably due to some networking issue caused by openvswitch 2.6...)?
  13. stefws

    live migration between 4.3.71 + 4.3-72

    Any other suggestions to get out of this mess where live migration doesn't work? Currently I can migrate off a node w/qemu-server 4.0-100 to nodes w/qemu-server 4.0-92 (though not the reverse), but also not from 4.0-100 to node n7 w/the latest qemu-server 4.0-96 and pve-cluster 4.0-47.
  14. stefws

    live migration between 4.3.71 + 4.3-72

    Did you also patch the pve-cluster package as t.lamprecht suggested or just qemu-server?
  15. stefws

    live migration between 4.3.71 + 4.3-72

    Same result w/qemu-server 4.0-100 on old node...
  16. stefws

    live migration between 4.3.71 + 4.3-72

    Using migration_unsecure: 1 in datacenter.cfg. Tried to patch just qemu-server from the enterprise repo on an 'old' node, n2, but migration to a 'new' node, n7, still fails with: Use of uninitialized value $migration_type in string eq at /usr/share/perl5/PVE/QemuServer.pm line 4478. What would be...
  17. stefws

    Patching PVE 4.3 on one node made whole cluster reboot

    It fails to migrate a VM onto the patched HN node:
  18. stefws

    Really weird proxmox issue

    2. I believe this is possible; we have overlapping HA groups. We separate VMs from SW service clusters into non-overlapping groups each: [QUOTE] group: AnyOne comment For VMs which could be run on any node nodes n3,n6,n1,n5,n2,n7,n4 restricted group: HN12 comment...
  19. stefws

    Patching PVE 4.3 on one node made whole cluster reboot

    Wanted to roll out last week's changes to PVE 4.3, so we migrated all VMs off the first node and ran the patch through apt-get upgrade. The SW watchdog then fired an NMI during patching of the pve-cluster package and the node rebooted; it came up fine and we finished it with: dpkg --configure -a and another apt-get...
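
    A sketch of the recovery described, assuming the truncated second command is another apt-get upgrade (run on the rebooted node):

        dpkg --configure -a     # finish configuring the packages left half-installed
        apt-get upgrade         # re-run the remaining upgrades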
