For a 2-node setup you must follow this guide: http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster#Configuring_Fencing
Have you followed that?
Hello
Since it works on the first node, there must be something wrong with node2, for example a hardware problem?
Have you used the normal backup/restore procedure to restore the VM on node2?
What kind of hardware does node2 have (CPU, RAM, storage)?
Do you have other VMs on node2 working correctly?
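By "normal backup/restore procedure" I mean something like the following on the CLI (the VM ID and the dump filename here are just placeholders):
vzdump 102 --compress lzo
qmrestore /var/lib/vz/dump/vzdump-qemu-102-<timestamp>.vma.lzo 102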
What...
You mean inside the VM, right?
Your disk is probably an LVM member and cannot be mounted directly.
You should run:
pvscan
vgscan
lvscan
What does:
lvdisplay
show?
You should mount that LV. For example:
mount /dev/mapper/vg0-lv0 /mnt
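If lvdisplay shows the LV as "NOT available", you may need to activate the volume group first, assuming it is named vg0:
vgchange -ay vg0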
Sorry, I meant /etc/pve/qemu-server/102.conf.
That should be the correct path, and it should already have content.
Try to pass through the whole drive, not only the partition. Also make sure that /dev/sde is not occupied by another process.
Example:
sata1: /dev/sde
or
ide1: /dev/sde
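You can also do it from the CLI instead of editing the config by hand, assuming for example VM ID 102:
qm set 102 -sata1 /dev/sde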
If you want to pass them through to the VM, edit the VM config /etc/pve/qemu-server/<vmid>.conf and assign the device directly.
For example:
ide2: /dev/sdb2
If you want to copy them into the VM's virtual disk, you should clone and restore them inside the VM (with Clonezilla or a similar tool)...
I think you will need IPMI on both of them for fencing to work.
If you just want to test and experiment, you can use "manual fencing" as an alternative.
Take a look at this post to get an idea of how you should configure your cluster.conf...
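Just to give a rough idea, a minimal manual fencing setup in cluster.conf looks something like this (the node name is a placeholder):
<fencedevices>
  <fencedevice agent="fence_manual" name="human"/>
</fencedevices>
and in each <clusternode> section:
<fence>
  <method name="single">
    <device name="human" nodename="node1"/>
  </method>
</fence>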
Thank you for the feedback.
I ended up using zvol + DRBD for replication. It seems more stable than Gluster. Each VM resides on a separate zvol + DRBD resource. If I need to roll back, I bring down the specific DRBD resource first, do a zfs rollback of the zvol, and finally bring the DRBD resource back up. The only problem is that...
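For reference, the rollback sequence looks roughly like this (the resource and dataset names are just examples):
drbdadm down r0
zfs rollback tank/vm-100-disk-1@snap1
drbdadm up r0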
Looks like PVE cannot locate your qcow2 disks and CTs.
Maybe a problem with the underlying storage?
Did you try to reboot? Run fsck only on unmounted volumes (e.g. from a live CD), otherwise you may corrupt the data.
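For example, from a live CD where the volume is not mounted (the device name is just a placeholder):
fsck -f /dev/sda1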
Remove the temp SCSI disk that you created for driver installation, edit the VM config, and change the boot disk from "scsi1:" to "scsi0:". Then change the boot order under VM -> Options (boot order: device1=scsi0).
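If you prefer editing the config file directly, the boot disk should correspond to a line something like this (change scsi1 to scsi0):
bootdisk: scsi0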
Yes, you can, by using a quorum disk. Check https://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster#Create_the_Quorum_Disk
Why don't you just put Proxmox on all of them plus Gluster, or better, Ceph? If their hardware is capable, of course.
Yes, it still applies on 3.x
Yes, performance will be degraded during the backup; that is normal. But you can stop it if you want. Anyway, try to run it at the end of the day.
You can try adding an external HDD as an additional backup device to the node where the VMs are running. This will be much better, I think (for your...
If it was there before, then it is surely not the problem.
Can you try checking the "no backup" box on this particular (unused) disk and then initiating a manual backup?
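If I remember correctly, that checkbox ends up as a backup=no option on the disk line in the VM config, something like this (storage and volume names are placeholders):
virtio1: local:102/vm-102-disk-2.qcow2,backup=no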
No, it should show you all available kernel headers.
For example, for me it shows:
apt-cache search pve-headers
pve-headers-3.10.0-4-pve - The Proxmox PVE Kernel Headers
pve-headers-2.6.32-30-pve - The Proxmox PVE Kernel Headers
pve-headers-3.10.0-3-pve - The Proxmox PVE Kernel Headers...
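Then you can install the headers matching your running kernel with:
apt-get install pve-headers-$(uname -r)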
Hello
Looks like there is an issue with the vm-100-disk-4.raw file which is preventing the backup from completing.
Does it exist on your local storage?
Is it included in your backup job?
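You can quickly check with something like this (the path assumes the default "local" storage):
ls -lh /var/lib/vz/images/100/vm-100-disk-4.raw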