Migrating physical Windows 2003 servers using SelfImage, fails to boot

abadger

Sep 16, 2012
I have three Windows 2003 servers running on HP DL360, Dell PowerEdge 1950 and Dell PowerEdge 2950 hardware.

To migrate these to a KVM virtual machine I first tried VMware Converter, without success (BSOD on boot).

So I have tried using SelfImage as follows:

1. Installed and ran mergeide.reg.
2. Installed SelfImage and made an image of the whole disk. (Note: the machines have a PERC RAID card in them.) On the KVM host, the target image was created and served over NBD with:

qemu-img create -f qcow2 disk.qcow2 200G
qemu-nbd -t -p 2000 disk.qcow2

3. Attached the disk.qcow2 as an IDE device to a new VM.

4. It fails to boot....
Booting from Hard Disk...
Error loading operating system
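For reference, step 3 (attaching the qcow2 as an IDE disk) corresponds to a libvirt disk stanza roughly like the following. This is only a sketch using the image path that appears later in the thread; the rest of the domain XML is omitted:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/disk.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>
```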

I'm not sure whether I need to load some disk drivers onto these systems. I note that if I attach these disks to a Linux VM, fdisk -l reports no valid partition table on them.
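One low-level check that can be done without booting anything: a BIOS-bootable disk must end sector 0 with the 0x55AA signature, and "Error loading operating system" / "no valid partition table" often means the MBR did not survive the copy. A sketch using a scratch file (the file names here are made up; the same dd/od check works against a raw image, or against /dev/nbd0 after qemu-nbd -c /dev/nbd0 disk.qcow2):

```shell
# Demo: build a scratch "disk", stamp the MBR boot signature (0x55 0xAA,
# octal \125\252) at bytes 510-511, then read it back the same way you
# would check a migrated raw image.
img=demo-disk.raw
dd if=/dev/zero of="$img" bs=512 count=2048 2>/dev/null   # 1 MiB blank disk
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null
sig=$(dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
echo "boot signature: $sig"    # 55aa means the BIOS will attempt to boot it
```

If the signature is missing or the partition entries are zeroed on your image, the copy dropped sector 0 and no amount of driver work inside Windows will help.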

Any help with P2V (physical to virtual) for Windows 2003 servers is much appreciated.

Dave



 
I have the same problem. I'm trying to migrate Windows Server 2003 R2 using SelfImage and I get the same "Error loading operating system" error.
I've tried all of the methods from the Proxmox wiki with no luck. I'm trying to migrate from an HP ProLiant ML350 G5 using hardware RAID 5.
 
Thanks for the feedback; very keen to resolve this, so happy to try any suggestions.

I chose "\Device\Harddisk0 (entire disk)" from the following SelfImage listing on the Windows 2003 server:

\Device\Floppy
\Device\Harddisk0 (entire disk)
\Device\Harddisk0\Partition1
\Device\Harddisk0\Partition2 (C:\)
\Device\Harddisk0\Partition3 (D:\)
\Device\Harddisk1 (entire disk)

Listing the partitions individually in SelfImage, I can see their respective sizes as:
Partition 1 is 39.19M ... not sure what this is.
Partition 2 is 40.25G ... the OS on C:\
Partition 3 is 98.8G ... the D:\ drive

But I did image the entire disk with SelfImage, and this gave me a 100G qcow2 image:

# du -s -h /var/lib/libvirt/images/disk.qcow2
100G /var/lib/libvirt/images/disk.qcow2

I'm surprised this is only 100G and not 139G; I will rerun SelfImage to check this again. The disk.qcow2 was created as a 200G qcow2 disk, and running file on it reports that size.
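For what it's worth, the 100G-vs-200G gap is expected with qcow2/sparse files: du reports the blocks actually allocated, while 200G is the apparent (virtual) size the guest sees. A quick illustration with plain coreutils (the scratch file name is made up):

```shell
# A sparse file: apparent size 200M, almost no blocks actually allocated.
truncate -s 200M sparse-demo.img
apparent=$(stat -c %s sparse-demo.img)       # bytes the file claims to be
used_kib=$(du -k sparse-demo.img | cut -f1)  # KiB actually on disk
echo "apparent: $apparent bytes, used: $used_kib KiB"
```

qemu-img info disk.qcow2 reports both numbers for a qcow2 image ("virtual size" versus "disk size"), which is an easier way to sanity-check the image than du alone.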

Does it matter if SelfImage reports a size in the 'Skipped' column?

I believe the problem is related to support for the SCSI disk controller under KVM....

This is a Dell 2950 server running Windows 2003 Standard Edition; the hardware lists the disk controller as: DELL PERC 5/i SCSI Disk Device.

The machine has 4 x 146G drives in it, the first two configured as a mirrored RAID; they hold the OS (C:\) and the D:\ drive. The second two disks are not currently being used.
 
Thanks for the feedback....

My system is a Dell 2950 with 4x 146G drives, only the first two being used in a mirrored raid configuration. The OS is Windows 2003 standard edition, and the hardware raid controller reports DELL PERC 5/i Disk Device.

I believe the issue lies with the support of this SCSI raid controller in KVM.....

Here are the SelfImage details:

\Device\Floppy
\Device\Harddisk0 (Entire disk) 136.124G
\Device\Harddisk0\Partition1 39.19M (not sure what this is...some Vendor info ?)
\Device\Harddisk0\Partition2 C:\ 40G
\Device\Harddisk0\Partition3 D:\ 89G
\Device\Harddisk1

I chose \Device\Harddisk0 (Entire disk).

One strange thing: the image it created was only 100G in size... I will recheck this and do the image again. The qemu-img image was 200G, so it had enough space for the entire disk.

Happy to try anything to make this work, or provide any further details to help debug this.

Thanks Dave
 
I am sorry, but I am having the same problem here.
I chose to image the entire hard disk, but when booting I get:
Booting from hard disk...
Error loading operating system (in German: "Fehler beim Laden des Betriebssystems")

I love this selfimage approach, and would like to get my copy running...
Any idea?

Thanks
Sascha
 
I retried SelfImage again on the same Dell 2950 server running Windows 2003 Standard Edition, Service Pack 2. The system has a Dell PERC 5/i RAID controller mirroring 2 x 146G drives; a further 2 x 146G drives are installed but not in use. This time I get the BSOD when I boot from the copied entire-disk image... so I do momentarily see the Windows screen. The BSOD appears and disappears in a flash. Is there any way to make it stay on the display for longer so I can get the error codes from it? It looks like stop 0x000000078 at the end, but it's too quick to read properly.

Again, willing to try any suggestions to make this work. I still feel the issue lies with the RAID SCSI controllers and their support in KVM?

Thanks Dave
 
Hi, for what it is worth, I recently (in the last week or so) migrated a number of systems, VMware Linux and Win2003 guests, to Proxmox VE via Clonezilla. The process is more or less identical for physical hosts, and I had good success with the procedure. I might recommend you do an 'easy' machine first to validate the process. The slow (time-consuming) step is moving the blocks from the source to the target host, so a machine with a relatively small amount of data on the disk (10 gig or less) makes a nice easy test case to practice on.

I did migrate an SBS2003 host with a 500gig disk / 300gig of data - the migration went fine, just slow to copy the data over gig ether.

The process I used is documented in my wiki, at the URL: http://sandbox.fortechitsolutions.ca/pmwiki.php/Assorted-Reference/Vmware-to-kvm-proxmox-migration

I hope this helps,


Tim Chipman
Fortech I.T. Solutions
http://FortechITSolutions.ca
 
Yes, we tried Clonezilla before, but unfortunately it didn't copy any data.
Clonezilla created the partition with the right parameters, but then just finished without copying anything else.
I guess it could have something to do with the source RAID configuration, but I am not sure.
So as an alternative we wanted to try SelfImage.
SelfImage worked like a charm, and is of course very handy, as it can be used from within a running system.
The transferred data is complete and looks good, but the boot process just stops after the SeaBIOS message...

I am stuck... what can the problem be?

Thanx
Sascha
 
Tim,

Did any of the machines you migrated have RAID cards in them? What was the hardware config for the Windows machines you migrated?

Happy to try the Clonezilla route - but this means having to go to the data centre, taking down each server and booting it from the Clonezilla CD-ROM, right?

Thanks Dave
 
Hi Sascha,

I guess it will depend on your environment. As I said, I used Clonezilla to move a number of systems and it worked flawlessly. Clonezilla has a number of modes and you need to choose the correct one. For ease of use, I found this arrangement worked well:

- work with 'images' not 'drives' (ie, backup data into an image, rather than clone direct to another disk on the same host)
- work with 'disks' not 'partitions' to capture all partitions / data on the disk (including MBR, etc) in one fell swoop (rather than capturing specific partitions).

Your description sounds like you might have multiple 'disks' present in your source system, which complicates things slightly but doesn't make it impossible. If Clonezilla didn't copy data, then most likely you didn't ask it to. Clonezilla will do what you ask; the trick is learning how to use it properly for your situation.

For example, if you have:

- Single large raid array of 1Tb composed of 5 disks
- this raid is sliced into 2 "virtual disks" of capacity 200gb (OS) and 800gb (data)
- your guest OS is windows, has C: drive for OS volume and D: drive for data volume

with Clonezilla, you would first capture the 200gig OS disk as an image; then boot a ProxVE KVM VM instance from the Clonezilla live CD and restore that image to recreate the OS volume. Then you would image the 800gig data disk, and restore it to a second virtual HDD of appropriate size (800gb or more) on your ProxVE KVM VM instance. Once each drive had been transferred, you would be able to boot the system (in theory) and it would work normally.

Note that I doubt the RAID configuration is a factor in your situation; RAID is below the OS's level of concern - i.e. Windows does not care whether it is running on a single HDD, a RAID1 array or a RAID60 array. All Windows knows is that it is running on a "disk" (RAID redundancy is out of its control / not its concern). What is visible to Windows is how many disks are presented to it (multiple drives, partitions, etc.).


Tim
 
Hi Dave,

As I said in my post, the machines I migrated last week with Clonezilla were not 'real physical hosts'. They were VMware-based VMs running on ESXi 3.5, I believe. But the process is the same: a VMware VM (or a ProxVE KVM VM, or a XenServer Xen-based VM) is a standalone system in its own right, and all would be migrated via Clonezilla in the same way.

You are correct, you need to boot the 'source' system from clonezilla to migrate it. This implies either
(a) remote KVM console control of the server being migrated, configured and working, OR
(b) physical access to the server, console, optical drive which boots the server (or USB bootable device - memory stick - appropriately prepared and tested)

So it is an 'offline' migration, in the sense that you reboot your Windows host into Linux/Clonezilla.
Then capture the data using Clonezilla and send the 'image of hard drive' to a nearby SSH/NFS/SMB server or storage pool.
Then boot your target system (ProxVE KVM VM instance) using the Clonezilla live CD (ISO).
Then restore from the SSH/NFS/SMB server which holds the 'image of hard drive' for the system being migrated.
Then once the Clonezilla restore is done, you can reboot the ProxVE KVM instance and, poof, it will spin up.

If you want a non-offline approach, I know of a few other options I've used:

- Citrix XenServer distributes a free tool that facilitates migration of a server to a Xen VM image file.
- I've used this tool in the past to build a VM image of a host, then copy the image file over to ProxVE, then use qemu-img convert to flip it to a more desirable format; ProxVE was able to boot it.

The main constraint I had with this sort of conversion is that you need a second storage device available to hold the image data file.

For example (simplest scenario), if you have a Win2003 physical server with a single C: drive of 100gig,
then you need either (a) a second hard drive - even a crummy USB disk is adequate - or (b) an SMB storage target with tolerable performance (gig ether, ideally not a cruddy NAS device) that is accessible from the 2003 server being migrated.

Then you run the XenServer migration tool, tell it to store the ~100gig output system image file on your 'non-C-drive storage device', and let it grind. It takes a while - longer if your storage is slow (USB / cheap NAS / etc. rather than internal SATA/SAS/direct-attached/RAID) - but it does work.

The VMware P2V (free) migration tool also works, but it is better avoided because it installs VMware Tools in the migrated VM, which you then have to uninstall - i.e. a two-step migration, extra work/drama.

There are commercial tools which also cook image files, but these are not free, of course.

Tim
 
The server I'm trying to migrate uses an HP E200i RAID controller in a RAID 5 configuration.
Many sites seem to blame the move from SCSI or RAID to IDE, with the boot sector ending up not where it should be.

I'll find time to try Clonezilla again. The server stores our users' home directories, so I can't do it during business hours.
 
Hi,

As I said, the RAID controller can't really be the issue. Your OS doesn't care; it is just a disk as far as it is concerned.

I assume(?) you ran mergeide.reg on the system before doing this? If you skip this step, then Windows likely can't boot from the image in the KVM environment. Running mergeide.reg forces Windows to keep IDE drive support, which is needed to boot 'the first time'. After that you can go through the process of forcing the install of the paravirtual IO drivers, change the HDD over to paravirt, and get better performance for the VM disk. But for the first boot you must have IDE support, or the migration is dead in the water.

Tim
 
These steps have always worked for me.

The steps you take depend on whether the system you are migrating to Proxmox is Windows or Linux.

I have migrated Windows systems using these steps, and I haven't had an issue yet.

1. Download and run mergeide.reg.
2. Download and install VMware vCenter Converter Standalone. Once that is done, follow the steps below and your system will boot in Proxmox.
3. Select source type: powered-on machine, and specify the powered-on machine.
In the next section, select destination type: VMware virtual machine/workstation; this is just to convert the file so Proxmox can read it. For the VMware product type I have used VMware Workstation 8.0.x and VMware Player 4.0.x; I haven't tried the others, as both of these have worked for me. Then name your system and select the destination.
In the "Data to copy" section, open the advanced properties: select type/cluster for your system disk, and for your main disk select pre-allocate; leave everything else at the default.
Then in Devices, go to "Other" and select IDE for your disk controller. The memory and CPUs really don't matter, as Proxmox will change what the system sees depending on how your VMs are configured in the Proxmox interface. All other options you can leave at the default. Wait for your system to finish converting, then move the image to Proxmox and convert it to raw or qcow2 if you want, or leave it as vmdk. Add the disk to your Proxmox VM conf and boot the system. Once you log in, remember to remove the ghost hardware by following these steps:

To work around this behavior and display the devices when you click Show hidden devices:
Click Start, point to All Programs, point to Accessories, and then click Command Prompt.
At the command prompt, type the following command, and then press ENTER:
set devmgr_show_nonpresent_devices=1
Type the following command at the command prompt, and then press ENTER:
start devmgmt.msc
Troubleshoot the devices and drivers in Device Manager.

NOTE: You must click Show hidden devices on the View menu in Device Manager before you can see devices that are not connected to the computer.
When you finish troubleshooting, close Device Manager.
Type exit at the command prompt.

Note that when you close the command prompt window, Windows clears the devmgr_show_nonpresent_devices=1 variable that you set in step 2, preventing ghosted devices from being displayed when you click Show hidden devices.


Your system is converted; run updates and you should be good to go.

With Linux, all I did was boot the system to be converted in a live environment and dd the drive to a raw image, then used that image to boot. This works for me for most Linux OSes. FreeBSD gave me some issues; I believe it's a bug, but I haven't had time to look into the details. If someone else has succeeded, please let me know.
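Dragoon's dd step can be sketched like this; the file names below are made-up stand-ins for the real disk device (e.g. /dev/sda in a live environment) and the target raw image:

```shell
# Stand-ins for the real device and target (hypothetical names).
src=source-disk.img
dst=machine.raw
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null   # fake 4 MiB "disk"
# conv=sync,noerror pads/skips unreadable blocks instead of aborting,
# which matters when imaging ageing physical disks.
dd if="$src" of="$dst" bs=1M conv=sync,noerror 2>/dev/null
cmp -s "$src" "$dst" && echo "images match"
```

Because dd copies sector 0 along with everything else, the MBR and partition table land in the raw image intact, which is why this route sidesteps the boot-sector problems discussed above.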

Last edited by Dragoon; 07-10-2012 at 09:41 AM.
 
I have not yet tried the Clonezilla route, but inspired by Dragoon's full response I retried using VMware vCenter Converter Standalone (5.0.0 build-470525).

Still get the same error when trying to boot from the raw vmdk disk...

Booting from Hard Disk...
Error loading operating system

I selected VMware Player 4.0.x as the product (had previously used VMware Workstation 8.0.x).
Pre-allocated the disk (it has two partitions, C: and G:, on it).
Set IDE as the controller rather than auto-select...

And still no joy. Frustrating.

I will try converting the above image to qcow2 format... see if that helps with booting.

Interesting that most people write that the RAID controller (PERC in my case) makes no difference. Is there a way I can test my disk image, other than attaching it to a new virtual machine? I suppose I could attach it to a Linux VM and see if it mounts... will check this.

Will also try Clonezilla when I can get to the data centre, but would much prefer a method that uses either VMware Converter or SelfImage.

Thanks again to all who have responded on this thread... I'm still working on getting this physical machine conversion to work.
Dave


 
I retried the VMware Converter and selected the options suggested by Dragoon, but it still fails to boot from the created raw disk image.

Booting from hard disk...
Error loading operating system

Here are the details of the disk created by VMware Converter:

root@m1:/home/dave# file /var/lib/libvirt/images/*
/var/lib/libvirt/images/vwin-flat.vmdk: x86 boot sector, Microsoft Windows XP MBR, Serial 0xdcffdcff; partition 1: ID=0x7, active, starthead 32, startsector 2048, 142254080 sectors; partition 2: ID=0x7, starthead 8, startsector 142256128, 144474112 sectors, code offset 0xc0
/var/lib/libvirt/images/vwin.vmdk: ASCII text
/var/lib/libvirt/images/vwin.vmx: ASCII text, with CRLF line terminators
root@m1:/home/dave#


Has anyone else who was previously having these P2V issues succeeded? I still have yet to try Clonezilla, as I have to go to the data centre for that, but it's very frustrating that SelfImage and VMware Converter don't appear to work on any of my three physical servers.

Dave
 
