True: in pve-no-subscription you have pve-kernel-4.4.98-3-pve, which is 4.4.98-102 (the patched one).
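For anyone searching later: with the pve-no-subscription repo enabled, pulling that kernel should be as simple as this (package name taken from above):

    apt-get update
    apt-get install pve-kernel-4.4.98-3-pve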
I have exactly the same issue on an HP ProLiant server! Warning :eek::eek:
Thank you!! Problem solved: I enabled the multicast group with that IP on the affected VLAN on the switch.
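For reference, a quick way to verify that multicast works between cluster nodes is omping, run simultaneously on every node (host names are placeholders for your own):

    # send 10 probes, 1 second apart, between all listed nodes
    omping -c 10 -i 1 node1 node2 node3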
I have a cluster of two Proxmox nodes; I had to remove one of them from the cluster. Then I formatted it, re-added it to the cluster, and I have some...
Tom, did you manage to solve it?
I have the same problem with a SAS controller...
Same problem here; instead of iSCSI I have an LSI Logic SAS controller (SAS1064ET).
Obviously the disks are fine, the RAID is OK... and the latest Proxmox...
So I did this:
And add the mount to your fstab:
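Roughly like this — a minimal sketch, assuming the default "pve" volume group and ext4 (the size and LV name are just examples, adjust to your setup):

    # carve out a logical volume and format it
    lvcreate -L 100G -n vz pve
    mkfs.ext4 /dev/pve/vz

    # /etc/fstab entry so it mounts at boot
    /dev/pve/vz  /var/lib/vz  ext4  defaults  0  2

    mount /var/lib/vz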
I still have Px 3.4 on my HP nodes... :(
Yes, there are plenty of advantages :)
But, at install time, is there any way to choose whether you want the "new LVM layout" or the "old /var/lib/vz...
Thanks for your answer.
I was thinking of I/O congestion too... but I have other servers (some older) and I've never seen I/O...
Yes, /var/lib/vz is present, but inside the root partition. I usually set maxroot to 10GB... so now /var/lib/vz is 10GB shared with the system...
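For the record, the installer's LVM sizing knobs are hdsize, swapsize, maxroot, minfree and maxvz. On recent ISOs they're in the Options dialog of the target-disk screen; on older ones they could be passed at the boot prompt, something like this (syntax from memory, so double-check the wiki):

    linux ext4 maxroot=10 swapsize=8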
Something strange is happening related to storage...
A VM also gets that kind of panic! Related to jbd2/vda2-8 (storage?).
Yes, that's it.
PS: Reboot the entire machine and then check with "brctl show" that vmbr0 is bridged to eth2.
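If everything is right, the output should look roughly like this (the bridge id is a placeholder):

    # brctl show
    bridge name     bridge id               STP enabled     interfaces
    vmbr0           8000.0cc47a0153f2       no              eth2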
After deploying a new server with the latest version (4.2), I've noticed that /var/lib/vz doesn't exist anymore.
Instead, it uses LVM as raw storage...
You can rename every "eth0" to "eth2" and restart networking.
Remember to change "bridge_ports eth0" to "bridge_ports eth2" too.
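So the vmbr0 stanza in /etc/network/interfaces ends up roughly like this (addresses are placeholders for your own):

    auto vmbr0
    iface vmbr0 inet static
            address 172.26.0.10
            netmask 255.255.255.0
            gateway 172.26.0.1
            bridge_ports eth2
            bridge_stp off
            bridge_fd 0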
I'm deploying a new server (Intel S2600WT2R) with Proxmox 4.2. It has a hardware (MegaRAID) RAID1 of 2.8TB.
After doing intensive I/O...
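Before blaming Proxmox I'd rule out the controller and disks first; with a MegaRAID card something like this should do (binary path and drive numbering vary per setup):

    # logical drive status
    megacli -LDInfo -Lall -aALL
    # SMART data for a disk sitting behind the controller
    smartctl -a -d megaraid,0 /dev/sda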
In Proxmox 3 I see similar behavior in one VM with a SATA hard disk configured.
I usually configure VirtIO, but this VM was P2V'ed and I...
I have to add that Snort is detecting "BAD-TRAFFIC same SRC/DST" alerts caused by the "eth1" devices on all Proxmox nodes.
The strange thing is...
I have a 7-node cluster; all nodes have eth0 mapped to the 172.26.x internal network (vmbr0), and all have a vmbr1 bridge which involves eth1...
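A quick way to actually see what Snort is flagging is to capture on eth1 itself and look for frames whose source and destination match (-e prints link-level headers, -n skips DNS, -c limits the sample):

    tcpdump -eni eth1 -c 50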
Thank you @debi@an!!!
I've finally downgraded all nodes to Proxmox 3.4... The cluster works flawlessly, and the HP ProLiants are without complaints too... :)...