Hello.
I have been using a regular desktop PC with an AMD Phenom II CPU as a Proxmox server for a couple of years. It has been working great and I have been extremely happy with it. Recently I got an HP ProLiant ML370 G5 server (2 x Xeon E5430 CPUs) and started using that with Proxmox instead, since it is a real server. The problem is that none of the backed-up Windows guests boot when restored to the "new" ProLiant server. They can be made bootable again with fixmbr, but what is different about the new system? Is it the hardware, or has a newer kernel or some other package broken the Windows MBRs?
Booting a Windows VM with "KVM hardware virtualization" unchecked does start booting OK, but it BSODs with an unknown CPU type error, which seems to be a known problem.
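To be precise, unchecking "KVM hardware virtualization" in the GUI should be the same as doing this on the command line (VMID 101 is just an example):

# disable KVM hardware virtualization for the guest, example VMID 101
qm set 101 -kvm 0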
Also, with the old server I used to be able to ddrescue a physical hard drive to an image and then dd that image onto a VM's disk, but that doesn't work anymore. The VM hangs at "booting from hard disk", just like a restored Windows VM.
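The workflow that used to work was roughly this (device names and paths are just examples from memory):

# image the physical Windows disk with GNU ddrescue
ddrescue /dev/sdb /mnt/backup/win-disk.img /mnt/backup/win-disk.log
# write the image onto the VM's disk, here a raw LVM volume for example VMID 101
dd if=/mnt/backup/win-disk.img of=/dev/pve/vm-101-disk-1 bs=1M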
The old server was originally installed as version 1.9 and has been upgraded gradually to 3.1.
The problematic new ProLiant server got a fresh 3.1 install and is also fully updated.
Can I downgrade any packages to make the new server handle Windows MBRs and dd images like the old one did, or is this a hardware issue?
I also installed the same Proxmox 3.1 on a Core 2 Duo desktop just for testing and tried restoring a backup there, with the same result: the Windows MBR doesn't work.
--- pveversion -v from the "old" server that works great ---
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
--- pveversion -v from the "new" server that cannot handle MBR and dd images ---
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
Would downgrading the kernel help, or is it something else?
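If the kernel is the main suspect, my plan was to try something like this on the new server (assuming the older kernel package that the working server runs is still available in the pve repository):

# install the old server's kernel alongside the current one
apt-get install pve-kernel-2.6.32-23-pve
# reboot, pick 2.6.32-23-pve in the GRUB menu, then retry restoring a Windows backup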
And I also have to thank you for this great software!