Hello All,
I have a problem I've spent the last two days trying to sort out. I have two identical servers in different locations. At one location the KVMs run fine, no problems there.
At the new location I cannot get any KVM to run, whether restored from a backup (taken at the working location) or created from scratch. Creating a new KVM goes fine until I reboot or power-cycle it; in every case the Proxmox web GUI shows 'internal-error' a few seconds after the guest starts. I've also tried disabling 'KVM hardware virtualization' — with that deselected the KVM will start, but Windows then reports at boot that the volume is damaged, or I see only a black screen.
So whether I restore from a backup or create a new KVM from scratch, on start I always see 'internal-error', and I can reproduce it every time.
Here is my pveversion output:
proxmox-ve-2.6.32: 3.4-150 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-3 (running version: 3.4-3/2fc72fee)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-37-pve: 2.6.32-150
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.4-3
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-32
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
The server has 48 GB RAM, a quad-core Xeon 5520, and an LSI hardware RAID 5 controller.
If anyone has any ideas about this I would appreciate it. I've already worked on it for two days without any good results.
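For reference, here are the sanity checks I know of for an 'internal-error' on VM start — it usually points at KVM acceleration being unavailable (e.g. VT-x disabled in the BIOS of the new server). This is just a sketch assuming a standard Linux /proc and the usual kvm kernel modules; the Xeon 5520 should show the 'vmx' flag when VT-x is enabled:

```shell
#!/bin/sh
# Diagnostic sketch: when a KVM guest dies with 'internal-error' but runs
# with hardware virtualization disabled, the usual suspect is missing
# VT-x/AMD-V support on the host.

# 1. Does the CPU advertise hardware virtualization? (count of capable cores)
flags=$(grep -E -c '(vmx|svm)' /proc/cpuinfo)
echo "virtualization-capable cores: $flags"

# 2. Are the kvm kernel modules loaded?
lsmod | grep -E '^kvm' || echo "kvm modules not loaded"

# 3. Is the /dev/kvm device node present?
[ -c /dev/kvm ] && echo "/dev/kvm exists" || echo "/dev/kvm missing - check VT-x in BIOS"
```

If the flag count is 0 or /dev/kvm is missing on the new server but present on the working one, the fix is in the BIOS (enable Intel VT-x), not in Proxmox.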