Under Memory at https://HOST:8006/pve-docs/chapter-qm.html#qm_memory it's written:
'When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB of RAM available to the host'
But I tend to find that when a hypervisor host gets to around 60% memory usage it starts to send...
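For what it's worth, this is roughly how we keep an eye on host memory pressure (plain procps tools; the ~60% mark is just our own observation):
root@n7:~# free -m     # check what's actually left after buffers/cache
root@n7:~# vmstat 5    # sustained non-zero si/so columns means the host is swapping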
It also turns out to be an issue if we reboot other physical servers that are only attached to the Nexus. Our net admins have briefly seen MAC addresses go 'missing' from various ports on the Nexus when this happens. They do not know why though :/ The Nexus switches are using MSTP and our Open vSwitches are using...
Ever get around to narrowing down the grub-probe calls?
Could this be the culprit if vgs is called multiple times:
root@n7:~# time vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  pve      1   3   0 wz--n- 136.57g 16.00g
  vgA      5  53   0 wz--n-   3.64t  1.07t
  vgAbck   1   1   0 wz--n-   1.82t...
It's hard to determine the exact root cause; any hints on how to pinpoint it?
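One thing I might try, assuming standard Debian paths: trace everything update-grub execs and count the grub-probe and LVM calls afterwards:
root@n7:~# strace -f -e trace=execve -o /tmp/update-grub.trace update-grub
root@n7:~# grep -cE 'grub-probe|vgs' /tmp/update-grub.trace    # rough count of probe/LVM invocations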
STP, I assume, as it's the default. Also, https://pve.proxmox.com/wiki/Open_vSwitch says about RSTP:
'WARNING: The stock PVE 4.4 kernel panics, must use a 4.5 or higher kernel for stability.'
and we're currently on pve-kernel...
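To double-check what the bridge is actually running (vmbr1 as above; the rstp_enable column assumes a reasonably recent OVS):
root@n7:~# ovs-vsctl get Bridge vmbr1 stp_enable
root@n7:~# ovs-vsctl get Bridge vmbr1 rstp_enable
root@n7:~# uname -r    # per the wiki warning, RSTP wants a 4.5+ kernel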
Haven't installed it, and it makes sense that we can't find it if PVE doesn't install it by default. We don't need to install os-prober either, as we of course don't dual-boot our PVE nodes :)
Still wondering what makes grub configuration slow when attached to our iSCSI devices compared to when not...
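One way to see whether it's grub-probe itself stalling on the iSCSI paths rather than os-prober (the device argument is just an example, pve-root being the stock PVE root LV):
root@n7:~# time grub-probe --target=device /
root@n7:~# time grub-probe --device /dev/mapper/pve-root --target=fs_uuid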
Also wondering why every PVE node seems to be flapping one of our two corosync rings across the bonded 2x10Gbs NICs + Cisco Nexus 5672UP switches, like below. We also have another ring (ring 1) across another virtual Juniper switch chassis, bonded over 2x1Gbs NICs, which shows no signs of flapping at all:
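A handy way to watch the rings while it happens (stock corosync tooling, nothing PVE-specific):
root@n7:~# corosync-cfgtool -s              # per-ring status for the local node
root@n7:~# watch -n1 'corosync-cfgtool -s'  # watch ring 0 flip between active and FAULTY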
We have all VM networks virtualized by VLAN tagging and connected through a single OVS switch, vmbr1. This switch is connected to a pair of bonded 2x10Gbs NICs cabled to a virtual chassis comprised of two Cisco Nexus 5672UP. Sometimes during a reboot of a PVE 4.4 hypervisor node (probably during...
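For reference, roughly what that looks like in /etc/network/interfaces, following the PVE Open vSwitch wiki style (NIC names and bond options are illustrative, not our exact config):
allow-vmbr1 bond1
iface bond1 inet manual
    ovs_bonds eth2 eth3
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond1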
Right, thanks. So far 3 out of 7 nodes are patched to corosync 2.4.2-2 with fencing turned off, and no NMIs yet.
Why does it seem like purging an old kernel goes through grub configuration twice?
Especially annoying when grub configuration is slow.
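A quick way to see which hooks fire update-grub during a kernel purge (assuming Debian's standard hook layout):
root@n7:~# grep -rl update-grub /etc/kernel/
root@n7:~# cat /etc/kernel/postrm.d/zz-update-grub    # the removal hook; grub's own maintainer scripts can run it a second time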
How do I disable fencing just during the next corosync patch, and is it always a good idea?
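My understanding of the usual approach on PVE 4.x (a sketch, double-check your HA state first): the HA services are what hold the watchdog, so stopping them disarms it for the duration of the patch.
root@n7:~# systemctl stop pve-ha-lrm    # on every node first; releases the local watchdog
root@n7:~# systemctl stop pve-ha-crm
root@n7:~# # (apply the corosync update here)
root@n7:~# systemctl start pve-ha-crm
root@n7:~# systemctl start pve-ha-lrm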
Previous discussion of this issue here in the forum suggested removing the Debian package os-prober, only I can't find such a package...
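For reference, this is how I looked for it (standard dpkg/apt tooling; both come back empty here):
root@n7:~# dpkg -l os-prober
root@n7:~# apt-cache policy os-prober    # 'Installed: (none)' if it was never pulled in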
Hm, we're now on corosync 2.4.2-1, so do we still have this potential issue?
Would this count as what you describe, where corosync is stopped (@14:23:11) but not started again before the node is fenced off by the watchdog (@14:24:04) and reboots (@14:28:00)?
If so, what can we do about it while waiting for 2.4.2-2...
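For anyone wanting to line up the same sequence from their own logs (times taken from the post above; watchdog-mux is the PVE watchdog service):
root@n7:~# journalctl -u corosync --since "14:22:00" --until "14:25:00"
root@n7:~# journalctl -u watchdog-mux --since "14:22:00" --until "14:29:00"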
Running a 7-node 4.4 cluster with VM storage on LVs in volume groups whose PVs come from a shared iSCSI SAN.
Seems either our iSCSI devices or the number of VM LVs has caused slow OS probing during grub updating, with the risk that the SW watchdog sometimes fires an NMI during grub configuration as it...
Usually 2-3 kernel images, but normally below 5.
Note that the iSCSI LUNs are connected through a bonded pair of NICs plugged into an Open vSwitch (vmbr1); dunno if this matters, maybe only when patching the openvswitch package.
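One knob that can keep LVM scans fast with many VM LVs is filtering the guests' block devices out of the scan. A sketch for /etc/lvm/lvm.conf (patterns are illustrative, match your own VG naming; note grub-probe does its own probing regardless):
devices {
    # reject the VM LVs themselves so pvscan/vgs don't look inside guest disks
    global_filter = [ "r|^/dev/vgA/.*|", "r|^/dev/vgB/.*|", "r|^/dev/vg[AB]bck/.*|", "a|.*|" ]
}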
Got a 4.4 production cluster attached to a multipathed iSCSI SAN on an HP MSA1040. We divided the MSA into two disk groups, A & B, then created 5+1 iSCSI LUNs per MSA disk group and mapped those to PVs in four volume groups on each hypervisor node, like this:
vgXbck LUNs are mapped to the NFS server...
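Per node the plumbing looks roughly like this (the multipath aliases are examples, not our real WWIDs):
root@n7:~# multipath -ll                      # verify both paths per LUN are up
root@n7:~# pvcreate /dev/mapper/mpatha        # each multipathed LUN becomes a PV
root@n7:~# vgcreate vgA /dev/mapper/mpatha /dev/mapper/mpathb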
Hm, well, the reason we always do upgrade first before dist-upgrade is that we've experienced several times that a node got an NMI fired, as the SW watchdog was sometimes held for longer than 60 sec by some package patching taking too long, like grub probing our iSCSI LUNs/LVs on...
Had the same issue, I believe, as I also just did an apt-get upgrade. I seem to remember it was a Perl compilation error in pve-daemon due to a missing new libhttp*perl* package, which didn't get added before I did apt-get dist-upgrade...
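For what it's worth, the sequence that avoids that half-upgraded state (plain upgrade never installs new packages, dist-upgrade does):
root@n7:~# apt-get update
root@n7:~# apt-get dist-upgrade    # pulls in newly added dependencies, e.g. the libhttp*perl* package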