Ancient CentOS 5 VM won't boot after transfer...

Nov 21, 2023
So I'm in the process of consolidating two ESXi hosts (a Dell R710 running ESXi 6.5 and an R430 running ESXi 7) down to a single Proxmox host on an R740.

I'm moving my least used machines first in order to nail the process down.

I'm using the ESXi web interface to export the machine out and it does export without error.

The import command I use is:

qm importovf 400 <machine>.ovf <storage name>
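In full, the sequence looks something like this (the file names and the target storage name `local-lvm` are just placeholders for my setup; substitute your own):

```shell
# If the export produced a single .ova archive, unpack it first
# (an OVA is just a tar containing the .ovf, .vmdk and manifest)
tar -xvf machine.ova

# Import the OVF; this creates VM 400 and copies the disks
# onto the "local-lvm" storage
qm importovf 400 machine.ovf local-lvm

# Review the resulting configuration before booting
qm config 400
```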

The import worked without error, however when I start the machine it dies...badly:


Any suggestions as to how I can get this to properly function?

Thanks all!



.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "7"
vmci0.present = "TRUE"
powerType.powerOn = "hard"
displayName = "old-Sage"
extendedConfigFile = "Sage.vmxf"
floppy0.present = "TRUE"
numvcpus = "8"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
memsize = "8192"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Sage.vmdk"
scsi0:0.deviceType = "scsi-hardDisk"
ide1:0.present = "TRUE"
ide1:0.clientDevice = "FALSE"
ide1:0.deviceType = "atapi-cdrom"
ide1:0.startConnected = "FALSE"
floppy0.startConnected = "FALSE"
floppy0.fileName = ""
floppy0.clientDevice = "TRUE"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "VM Network"
ethernet0.addressType = "generated"
guestOS = "centos-64"
uuid.location = "56 4d 2e 3e 91 c0 8c fa-93 15 e3 43 de bf 8a 05"
uuid.bios = "56 4d 89 30 3f a5 56 16-01 3a ce c3 2e 9a c6 09"
vc.uuid = "52 74 c0 47 94 29 3c f1-f7 d8 6c d2 1e 09 66 f4"
ethernet0.generatedAddress = "00:0c:29:9a:c6:09"
= "781895177"
cleanShutdown = "TRUE"
sched.cpu.affinity = "all"
sched.mem.shares = "normal"
sched.mem.affinity = "all"
ide1:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba0:C0:T0:L0"
tools.syncTime = "FALSE"
sched.scsi0:0.throughputCap = "off"
tools.upgrade.policy = "manual"
sched.cpu.units = "mhz"
sched.scsi0:0.shares = "normal"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.min = "0"
sched.mem.minSize = "0"
scsi0:1.deviceType = "scsi-hardDisk"
scsi0:1.fileName = "Sage_1.vmdk"
sched.scsi0:1.shares = "normal"
sched.scsi0:1.throughputCap = "off"
scsi0:1.present = "TRUE"
sched.cpu.latencySensitivity = "normal"
tools.guest.desktop.autolock = "FALSE"
pciBridge0.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
nvram = "Sage.nvram"
virtualHW.productCompatibility = "hosted"
replay.supported = "FALSE"
sched.swap.derivedName = "/vmfs/volumes/61368a86-79d35fac-3934-a0369fe33ff4/Old Sage/Sage-9356b2a2.vswp"
debugStub.linuxOffsets = "0x0,0xffffffff,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0"
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "16"
ethernet0.pciSlotNumber = "32"
vmci0.pciSlotNumber = "33"
vmotion.checkpointFBSize = "4194304"
ethernet0.generatedAddressOffset = "0"
hostCPUID.0 = "00000014756e65476c65746e49656e69"
hostCPUID.1 = "000406f10010080077fefbffbfebfbff"
hostCPUID.80000001 = "0000000000000000000001212c100800"
guestCPUID.0 = "00000014756e65476c65746e49656e69"
guestCPUID.1 = "000406f10004080082d822031f8bfbff"
guestCPUID.80000001 = "00000000000000000000010128100800"
userCPUID.0 = "00000014756e65476c65746e49656e69"
userCPUID.1 = "000406f10004080082d822031f8bfbff"
userCPUID.80000001 = "00000000000000000000010128100800"
evcCompatibilityMode = "TRUE"
tools.remindInstall = "TRUE"
numa.autosize.cookie = "80001"
numa.autosize.vcpu.maxPerVirtualNode = "8"
monitor.phys_bits_used = "40"
softPowerOff = "TRUE"
toolsInstallManager.lastInstallError = "21004"
toolsInstallManager.updateCounter = "1"
migrate.hostLog = "./Sage-9356b2a2.hlog"
scsi0:0.redo = ""
scsi0:1.redo = ""
checkpoint.vmState = ""
cpuid.coresPerSocket = "4"
annotation = "This machine was exported to vmhost3 (Proxmox) on 11/19/23|0A"

I should note that the export from vSphere gave me these files:

You filtered out the parts that would be important. Please post the ENTIRE vmid.conf.

While we're at it, what's the storage type for superchungus? (It may be useful to post /etc/pve/storage.cfg.)
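For reference, a stock /etc/pve/storage.cfg looks something like this (the storage names and paths below are just the Proxmox defaults, not your actual config):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```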
The problem is that you don't have a SCSI controller in the system. This configuration should not be working at all.

The first order of business would be to go to the gui, select the VM and go to the hardware tab.

IF you see a SCSI controller listed, post it here. It SHOULD be either the default LSI 53C895A or VMware PVSCSI.
IF you do not, select each of your hard disks and click "Detach." Then you can reattach them as SATA or IDE and that should get you going; just make sure to change the boot order afterwards so your selected host bus is in there.
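If you'd rather do the detach/reattach from the shell, `qm set` does the same thing (VM ID 400 and the `local-lvm:vm-400-disk-*` volume IDs below are examples; run `qm config 400` to see your real ones):

```shell
# See which disk volumes the VM currently has attached
qm config 400

# Detach the SCSI disks (they show up again as "unusedN" entries)
qm set 400 --delete scsi0
qm set 400 --delete scsi1

# Reattach the same volumes on the IDE bus instead
qm set 400 --ide0 local-lvm:vm-400-disk-0
qm set 400 --ide1 local-lvm:vm-400-disk-1

# Make the new bus the boot device
qm set 400 --boot order=ide0
```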

What is the VM doing? If at all possible, you really should just move its function to a supported distro; it will be more secure, and probably much faster too.
Here's the hardware config from the gui:

The VM runs some ancient software that won't run on later versions of Linux, so updating isn't possible unfortunately. It's only used occasionally so the security aspect isn't really an issue.

Thanks for the help, it's appreciated.

Yeah, that would work. Remember to change the boot order in the VM options.

There is another thing I vaguely remember from moving VMs off VMware, which had to do with caching: try setting the cache to write-back and see if that helps.
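Assuming the disk ended up on IDE, cache mode is a per-disk option (again, the volume ID here is an example; check `qm config 400` for yours):

```shell
# Re-specify the disk with its cache mode set to write-back
qm set 400 --ide0 local-lvm:vm-400-disk-0,cache=writeback
```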
That fixed it. Moving the disks from scsiN to ideN in the configuration file was the key. Now on to the next problematic machine. :)
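For anyone finding this thread later, the edit in /etc/pve/qemu-server/<vmid>.conf amounts to renaming the disk keys and the boot entry, roughly like this (volume names and sizes below are illustrative, not from my actual config):

```
# before
boot: order=scsi0
scsi0: local-lvm:vm-400-disk-0,size=32G
scsi1: local-lvm:vm-400-disk-1,size=100G

# after
boot: order=ide0
ide0: local-lvm:vm-400-disk-0,size=32G
ide1: local-lvm:vm-400-disk-1,size=100G
```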

thank you!


