Note: PCI passthrough is an experimental feature in Proxmox VE.
If passthrough makes your host unstable, you can instead create a new bridge on the host, add the physical NIC to this bridge, and attach only the virtual NIC of the firewall to that bridge, so only the firewall NIC has access...
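The bridge-only alternative could look roughly like this in /etc/network/interfaces (a sketch; the names vmbr1 and enp3s0 are placeholders for your bridge and physical NIC):

```
# /etc/network/interfaces (fragment) -- interface names are examples
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
```

Then set the firewall VM's network device to use vmbr1 and leave all other VMs on your existing bridge.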
I still see this as a problem of your corosync.conf not being in sync between your nodes.
You have a configuration mismatch between your nodes and corosync cannot build a quorum (hence the need to mount the cluster file system read-only).
Is the deleted node still trying to join the corosync...
ok can you please post the output of
journalctl -u corosync | tail -20
pvecm status
on one of your nodes
I don't understand why it is saying it's receiving a config file version 4
do you have some "hidden" nodes hanging on the network ?
I don't think this is related to the network bridges or even virtualization
for a timeout like this I would check if some DNS / reverse DNS is not working
you should start HAProxy in debug mode to see if it is really the request processing which takes 30s
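A quick way to rule out slow DNS is to time a forward and a reverse lookup from the host. The names below are placeholders (localhost / 127.0.0.1); substitute the backend hostname and a client IP from your logs. A resolver that waits for a timeout (often 5-30 s) before giving up would explain the delay:

```shell
# Time name resolution as seen by libc (placeholder names shown).
time getent hosts localhost      # forward lookup of a hostname
time getent hosts 127.0.0.1      # passing an IP makes getent do a reverse lookup
```

If either command takes seconds instead of milliseconds, check /etc/resolv.conf and your DNS server before blaming HAProxy.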
can you post the /etc/corosync/corosync.conf of the node having the error message, and
the /etc/corosync/corosync.conf of one of the remaining two nodes ?
what is actually your use case here? Ceph will do the high-availability disk management for your disks, so you shouldn't need to add a multipath device; pass the real hard drive to the pveceph createosd command
"[CMAP ] Received config version (4) is different than my config version (5)! Exiting"
this means the node has a more recent corosync.conf than the rest of the cluster
compare the content of corosync.conf in your nodes
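To illustrate what the mismatch looks like: the config_version totem option must be identical cluster-wide. The snippet below fabricates two local copies just to show diff flagging the difference; on a real cluster you would fetch /etc/corosync/corosync.conf from another node (e.g. via scp) and diff it against your local one:

```shell
# Illustration only: two fabricated config copies with mismatched versions.
cat > /tmp/corosync.node1.conf <<'EOF'
totem {
  config_version: 5
}
EOF
cat > /tmp/corosync.node2.conf <<'EOF'
totem {
  config_version: 4
}
EOF
# diff exits non-zero when files differ, hence the || true
diff /tmp/corosync.node1.conf /tmp/corosync.node2.conf || true
```

Whatever the difference is, fix it so every node carries the same file with the same (highest) config_version.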
is pveproxy the only process in D state ?
try the following command to check that
ps faxl | awk '$10~"D" {print $0}'
sorry the command for checking the kernel log is
dmesg | grep "blocked for more than 120 seconds"
is the process still in D state (that's the important state)
as soon as this process is in D state please send:
dmesg -T | grep "blocked for more than 120 seconds"
first 10 lines of vmstat --unit M 2 output
I would like to know: are you using mechanical hard drives or SSDs?
and if the storage is OK, check if you're not actively swapping at the moment.
when executing the command
vmstat 2
the columns si/so (swap in / swap out ) should be around 0 most of the time
NB: swap usage as seen with free/top is totally fine ! the problem is *actively* swapping, when the values si/so...
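If you want a single number instead of eyeballing the columns, you can average si/so (fields 7 and 8 of vmstat output) with awk. Sample output is embedded below so the snippet is self-contained; on a live system feed it with `vmstat 2 10` instead:

```shell
# Average the si/so columns of vmstat output (fields 7 and 8).
# Sample data shown here; live usage: vmstat 2 10 | awk '...'
awk 'NR>2 { si+=$7; so+=$8; n++ }
     END { printf "avg si=%.1f so=%.1f\n", si/n, so/n }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 204800  81920  10240 512000    0    0    12     8  150  300  5  2 92  1  0
 0  0 204800  81920  10240 512000    0    0     0     4  120  250  3  1 95  1  0
EOF
```

Averages well above zero over a sustained window mean the box is actively swapping and short on RAM.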
@Moritz Weichert
do you get the same situation as in http://blog.wittchen.biz.pl/ubuntu-system-boot-problem/
then do vgchange -ay , then exit
if this helps, then add a rootdelay parameter to your command line as explained in
http://blog.wittchen.biz.pl/ubuntu-system-boot-problem/ ( attempt 1)...
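Making the workaround permanent could look like this in /etc/default/grub (a sketch; the delay value of 10 seconds is an example, tune it to your hardware):

```
# /etc/default/grub (fragment) -- rootdelay value is an example
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
```

Run update-grub afterwards so the change is written into the generated grub.cfg.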
PVE does not have a built-in export functionality, but you could create an OVF associated with a disk image using
https://github.com/EmmanuelKasper/import2vbox
then if you create a tarball containing the generated OVF and the vmdk, you have your OVA archive
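The packing step is plain tar; by convention the OVF descriptor goes first in the archive. The file names below are placeholders standing in for the import2vbox output:

```shell
# Dummy files stand in for the generated OVF descriptor and vmdk image.
touch myvm.ovf myvm-disk1.vmdk
# Pack them into an OVA, descriptor first, then list the archive contents.
tar -cf myvm.ova myvm.ovf myvm-disk1.vmdk
tar -tf myvm.ova
```

The resulting myvm.ova can then be imported by VirtualBox or any other OVA-aware hypervisor.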
@fireon:
you can import raw, qcow2, vmdk
in case you have an OVA export, you have to unpack the OVA (it's a tar archive) to get the disk image. Usually this will be in vmdk format although the standard does not mandate it.
Then use the qm importdisk command above with the disk image.
from the ip link output
eth3 has no carrier, probably meaning the cable is not connected
check if the network leds on the switch and on your network card are active
the key point is, are you able to export to OVA/OVF format ?
if you can do an OVA/OVF export from xen, then just extract the vmdk disk image from the export and run
qm importdisk 999 mydisk.vmdk my_storage
see...
Probably Windows noticed the virtualized hardware changed, so you need to reactivate (Windows is *very* picky about hardware changes). IIRC it is only a matter of calling an automated voice system to reactivate. Or you can try to configure the VM with exactly the same settings as it had before...
maybe the update did not complete successfully leaving some packages in a broken state ?
is it possible to ping the machine or login via ssh ?
I would advise the following:
* at the grub prompt, right after the bios try to boot a different kernel
* if that does not help, boot the system...