Unfortunately it didn't help.
I caught the problem on some charts, maybe somebody can help me.
Also, maybe qcow2 + virtio is a bad combination?
And maybe write-back cache shouldn't be used? I have it set in all my setups... ?
Anyway, there was actually no load at all on the VM!
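If write-back does turn out to be the culprit, I understand the cache mode can be changed per disk without recreating it. A rough, untested sketch (VM ID 100 and the virtio0 disk on storage "local" are just examples, not my real names):

qm set 100 --virtio0 local:100/vm-100-disk-1.qcow2,cache=none

The VM would need a full stop/start afterwards so the new cache mode is actually used.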
Host Proxmox kernel: Linux node1...
The host has 32 cores (in 2 sockets)
and the guest is set to 1 socket and 10 cores.
"Enable NUMA" is not checked in the guest (so it's disabled),
and the default LSI SCSI controller is also set.
So you think this could be the issue and I should set the guest to 2 sockets * 5 cores?
And I should also enable NUMA, maybe this...
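If that is the way to go, I guess the CLI equivalent of those GUI changes would be something like this (VM ID 100 is an example, untested):

qm set 100 --sockets 2 --cores 5 --numa 1    # mirror the host topology and enable NUMA for the guest

with a full stop/start of the VM afterwards so the new topology is picked up.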
Below, host and guest info.
Is it a kernel bug (host kernel or guest kernel?)?
And do you know for a fact that this was fixed in 4.2.6-1?
Host info:
proxmox-ve-2.6.32: 3.4-150 (running kernel: 4.2.3-2-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-3.10.0-13-pve...
Kernel 4.x doesn't solve the problem.
As far as I can see the problem is less frequent, but it still happened.
So there's no good solution for now?
Only disable virtio and use IDE? :(
@spirit did you install pve-kernel-4.2 on Proxmox 3.4, and there are no problems with it?
Can I do this at night in production and expect it to go smoothly? :)
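Just so I understand the procedure: I assume the kernel swap would boil down to something like this (the exact package name is my guess based on the usual pve-kernel naming, please correct me if it is different):

apt-get update
apt-get install pve-kernel-4.2.6-1-pve
reboot    # boot into the new kernel; grub is normally updated by the package itself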
Hi guys. I saw similar topics but still no solution.
I have one server in an OVH data center and sometimes, totally at random, a VM hangs up, while the other 15 VMs have no problem. The Proxmox host is on Intel SSDs in RAID 1.
What does help is setting the disk driver to IDE: no hangups, but this is very slow and making...
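For reference, this is roughly how I switch an existing disk from virtio to IDE on the host (VM ID 100 and the config lines are examples; done with the VM stopped):

# /etc/pve/qemu-server/100.conf: rename the disk entry and the boot disk, e.g.
#   virtio0: local:100/vm-100-disk-1.qcow2,cache=writeback  ->  ide0: local:100/vm-100-disk-1.qcow2,cache=writeback
sed -i -e 's/^virtio0:/ide0:/' -e 's/^bootdisk: virtio0/bootdisk: ide0/' /etc/pve/qemu-server/100.conf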
I think you can use 2 powerful nodes and a third one only for quorum, and then set up DRBD on two of them, or Ceph on two and the third as a Ceph monitor (don't know if that will work).
Anyway, that could work I suppose, but I would rather use a NAS as a qdisk, as it will be more stable than a cheap "little PC" :P
Actually that was a great feature.
With Proxmox VE 3 I am building very nice setups:
- 2x hardware Dell servers (2x $5000)
- 1x cheap machine ($500) as backup space and for quorum (instead of a third node)
Each server has RAID 10 using 4 SSD drives,
the SSD space is mirrored between both servers via DRBD (rough resource sketch below),
live...
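A rough idea of the DRBD resource behind that mirroring (hostnames, devices and addresses are just placeholders, not my real values):

# /etc/drbd.d/r0.res
resource r0 {
    protocol C;                  # synchronous replication, needed for live migration
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sda4;     # partition on top of the SSD RAID 10
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sda4;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}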
Hi,
I have a strange problem in my 2 locations:
2 nodes + quorum on a NAS,
GlusterFS on both nodes as shared storage.
When I do a live migration, or HA kicks in because a node is going down, I see that GlusterFS ends up in split-brain.
Anybody know why?
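This is how I check for the split-brain entries, in case it helps (the volume name "datastore" is just an example):

gluster volume heal datastore info split-brain   # files currently in split-brain
gluster volume heal datastore info               # everything still pending heal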
root@node1:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136...
Hi.
About ballooning.
The guest has min 2GB and max 15GB of RAM.
Under high usage it grows the RAM up to the 15GB maximum, and that is working OK.
But now I want to shrink this balloon back down; how can I do this?
I am doing:
echo 1 > /proc/sys/vm/drop_caches
echo 2 > /proc/sys/vm/drop_caches
echo 3 >...
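From the host side, I believe the same shrink can be requested through the QEMU monitor (VM ID 100 and the 2048 MB target are examples; I have not verified this myself):

qm monitor 100
# at the qm> prompt:
balloon 2048     # ask the balloon driver to deflate the guest back towards 2048 MB
info balloon     # check the actual size reported by the guest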
Little update: when I turn off HA for the VM, I can start it normally.
So the problem is only with HA. At least I can start the VM when qdiskd has failed :)
But this should still work even without the quorum disk (in case of failure).
Does anyone know how to solve this?
So how can I start node1 + node2 when the qdisk is failing?
It is only a problem at service startup, because the cluster can't start when the qdiskd service can't start.
There must be a way to get this working. After all, I have all the quorum votes needed, so quorum itself is not the issue...
No, that's not the case.
There are 2 servers, so 2 votes available,
and one quorum disk = 1 vote.
So when the quorum disk is offline, why can't the servers run their services?
cman can't start due to the qdiskd process failing to start.
It should be possible to skip qdiskd and start cman normally.
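The only workaround I can think of (not verified on 3.x, and it assumes cman itself comes up) is to lower the expected votes by hand once both nodes are online:

pvecm expected 2    # tell the cluster that 2 votes are enough while the qdisk is down
pvecm status        # verify that quorum is reported again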
I am testing HA with two nodes + a NAS iSCSI LUN as the quorum disk.
Anyway, when I am turning on the nodes without the quorum disk, my cluster can't start VMs because cman has not started.
How can I fix it so that cman will start even if qdiskd is offline when the nodes are starting?
Of course when the two nodes are booting and the NAS is...
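For reference, the relevant part of my cluster.conf looks roughly like this (label, votes and timings are examples taken from the usual two-node + qdisk setups, not necessarily my exact values):

<!-- /etc/pve/cluster.conf (excerpt) -->
<cman keyfile="/var/lib/pve-cluster/corosync.authkey" expected_votes="3"/>
<quorumd votes="1" allow_kill="0" interval="1" tko="10" label="proxmox_qdisk"/>

So with 2 node votes plus 1 qdisk vote, expected_votes is 3 and 2 votes should be enough for quorum.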
I think that I fixed the problem :)
Both bond0 and bond1 were connected to the same switch (the same VLAN).
When arpinging, the host answered from both bond0 and bond1 with different MAC addresses, so that could cause connection problems sometimes.
I disabled this behaviour in Linux and now I...
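For anyone hitting the same thing, the usual knobs for that ARP behaviour are roughly these (a sketch only; I cannot promise these are the exact settings referenced above, since the post is cut off):

sysctl -w net.ipv4.conf.all.arp_ignore=1      # reply to ARP only on the interface that owns the IP
sysctl -w net.ipv4.conf.all.arp_announce=2    # use the best matching source address in ARP announcements
# put the same keys in /etc/sysctl.conf so they survive a reboot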
Hi,
I have a strange problem with two nodes: sometimes I see the error below, and /etc/pve is blocked due to a communication problem.
corosync [TOTEM ] Retransmit List: 3b8 3b9 3ba 3bb 3bc 3bd 3be 3bf
Also, after 6 hours I saw that arping was acting strange.
Tell me, is this normal or do I have something...
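The kind of check where I see the strange behaviour (interface and target IP are examples for my network):

arping -I bond0 -c 3 10.0.0.2    # should get replies from exactly one MAC address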