can't boot after migrating from master cluster

xeniux

Member
Oct 6, 2010
hi...

Does anyone know why my QEMU hard disk isn't detected after I migrated the VM from the master cluster node?
I got this problem after migrating it to my cluster node, and now the guest can't boot into the system.
Can I just change the disk type from qcow2 to another format (in /etc/qemu-server/ID.conf) so that the boot menu (F12) detects my disk?

I really need some advice from you guys.

thanx ;)
 
Post the output of 'pveversion -v' from both nodes.
 
node1
ProxmoxPluit:~# pveversion -v
pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.6-19
pve-kernel-2.6.32-4-pve: 2.6.32-19
pve-kernel-2.6.32-1-pve: 2.6.32-4
pve-kernel-2.6.32-3-pve: 2.6.32-14
qemu-server: 1.1-18
pve-firmware: 1.0-8
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1
ksm-control-daemon: 1.0-4

node2
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.6-25
pve-kernel-2.6.32-4-pve: 2.6.32-25
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-2
ksm-control-daemon: 1.0-4
 
make sure you run the same version within one cluster (only the kernel can be different)
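For completeness, a minimal sketch of how both nodes could be brought to the same package level (run on each node; the exact upgrade command may differ depending on your repository setup):
Code:
apt-get update
apt-get dist-upgrade     # pull the current pve packages
pveversion -v            # verify both nodes now report the same versions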
 
I have updated both nodes to the same version,
but the VM on the slave node gets stuck and freezes when I try to run it.
Any thoughts?

thanx for the reply

xeniux
 
ProxmoxPluit:~# pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.6-25
pve-kernel-2.6.32-4-pve: 2.6.32-25
pve-kernel-2.6.32-1-pve: 2.6.32-4
pve-kernel-2.6.32-3-pve: 2.6.32-14
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-2
ksm-control-daemon: 1.0-4


USysAidx:~# pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.6-25
pve-kernel-2.6.32-4-pve: 2.6.32-25
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-2
ksm-control-daemon: 1.0-4
 
Have you tried running the guest vm from the command line on the slave node?

"qm start 123"

Maybe the guest vm is locked.
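If it is, a quick check and unlock from the node's shell might look like this (just a sketch; 123 is a placeholder VMID):
Code:
qm list          # see the VM's status on this node
qm unlock 123    # clear a stale lock left over from a failed migration or backup
qm start 123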

c:)
 
Yes, I had tried that, but the QEMU guest keeps restarting over and over and never reaches my Ubuntu Lucid Lynx login screen. Could the different motherboard and processor on the new node affect this?

thanx for your thoughts ;)
 
Hi,
perhaps your image (.qcow2) is damaged?
You can try converting it with qemu-img to a raw disk and then change the filename in the config to point to the raw file.
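Before converting, it may also be worth checking whether the qcow2 file is really damaged, for example (a sketch; 117 is just an example VMID, adjust the path to yours):
Code:
cd /var/lib/vz/images/117/
qemu-img info vm-117-disk-1.qcow2      # shows format and virtual size
qemu-img check vm-117-disk-1.qcow2     # reports qcow2 corruption, if any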

Udo
 
Where do I run those commands? In the QEMU monitor or in a bash terminal?
Can you give me an example?
thanks udo :cool:
 
Hi,
an example:
Code:
ssh root@proxmoxhost
cd /var/lib/vz/images/117/
ls -l
-rw-r--r-- 1 root root 262144  6. Nov 12:22 vm-117-disk-1.qcow2

qemu-img convert -O raw vm-117-disk-1.qcow2 vm-117-disk-1.raw
ls -l
-rw-r--r-- 1 root root     262144  6. Nov 12:22 vm-117-disk-1.qcow2
-rw-r--r-- 1 root root 4294967296  6. Nov 12:24 vm-117-disk-1.raw

vi /etc/qemu-server/117.conf

# change drivename from .qcow2 to .raw
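# e.g. a disk line like this (hypothetical, your bus/storage names may differ):
#   ide0: local:117/vm-117-disk-1.qcow2
# would become:
#   ide0: local:117/vm-117-disk-1.raw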

qm start 117

Udo