CentOS 7.3 cannot boot after migrate from VMware

tommykur

New Member
Mar 4, 2025
I have a new VM migrated from VMware (CentOS 7.3). After migrating, the VM won't boot and the error /dev/mapper/centos-root does not exist appears.
This VM has 6 hard drives with a total of 800 GB.
Please help - what should I do?

1747125408954.png

thank you
 
Hello tommykur! First of all, for the sake of completeness, I would like to point to some very useful documentation pages on migrations from other hypervisors to Proxmox VE:
These are great sources of information, so make sure to read them ;)

Now, to your specific problem, could you please post the following:
  1. The storage configuration of the server - that is, the contents of /etc/pve/storage.cfg
  2. The VM configuration - that is, the output of qm config <VMID> --current
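For reference, both can be run from a root shell on the node - something along these lines (the VMID 100 below is only a placeholder, substitute your VM's actual ID):

```shell
# Print the node's storage configuration
cat /etc/pve/storage.cfg

# Print the current configuration of the VM
# (100 is a placeholder VMID - replace it with yours)
qm config 100 --current
```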
 
Hi, thank you for the reply.

I migrated using Veeam.

  1. The storage configuration of the server - that is, the contents of /etc/pve/storage.cfg
I didn't find that configuration.

1747131916212.png

this is the configuration of VM
1747131998530.png

2. The VM configuration - that is, the output of qm config <VMID> --current

1747132216380.png

thank you
 
You are using tab completion, which is not going to show the contents of the storage.cfg file.

Use instead:
Code:
 cat /etc/pve/storage.cfg
thank you,

this is the storage config:
Code:
 root@prox02:~# cat /etc/pve/storage.cfg 
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: zssd01
        pool zssd01
        content rootdir,images
        mountpoint /zssd01
        nodes prox01

zfspool: zsas01
        pool zsas01
        content images,rootdir
        mountpoint /zsas01
        nodes prox01

zfspool: zssd02
        pool zssd02
        content images,rootdir
        mountpoint /zssd02
        nodes prox02

zfspool: zsas02
        pool zsas02
        content images,rootdir
        mountpoint /zsas02
        nodes prox02

esxi: vm-huawei
        server 10.xx.xx.xx
        username root
        skip-cert-verification 1

root@prox02:~#
 
I don't use CentOS & I definitely don't know what VM configs you had originally, but here are some things to consider:

  • Are you sure which disk (out of the 6) is the ACTUAL boot drive? You've chosen sata0 in your boot options.
  • Are you sure that the original VM used SATA & not SCSI as the bus? You've chosen sata.
  • There is a possibility that the original VM may even have used IDE as the bus - for up to 4 drives.
  • This Unix/Linux StackExchange link may be relevant.
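For what it's worth, the /dev/mapper/centos-root does not exist error usually means the initramfs inside the guest cannot see the disk holding the root LVM volume - which fits the bus-mismatch theory above. A rough sketch of what could be tried (the VMID 100 and the volume name are placeholders, adjust them to your setup):

```shell
# On the Proxmox host: detach the boot disk from SATA and re-attach
# it on another bus (the disk image itself is NOT deleted, it only
# shows up as "unused" until re-added):
qm set 100 --delete sata0
qm set 100 --ide0 zssd02:vm-100-disk-0     # placeholder volume name
qm set 100 --boot order=ide0

# If no bus works, boot the VM from a CentOS 7 install ISO in rescue
# mode and rebuild the initramfs so it includes the needed drivers:
vgchange -ay                               # activate LVM volume groups
mount /dev/mapper/centos-root /mnt
mount /dev/sda1 /mnt/boot                  # adjust to the real /boot partition
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
KVER=$(ls /lib/modules)                    # installed kernel, NOT the rescue kernel
dracut -f /boot/initramfs-$KVER.img $KVER
```

Note that uname -r inside the rescue system reports the rescue kernel's version, which is why the installed kernel version is read from /lib/modules instead.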