Hello,
We are in the process of migrating from VMware 6.5 to Proxmox VE 6.1.2 and we are having a migration problem with one of the VMs.
We have two servers in a cluster and we get the same problem on both of them. The output of ha-manager status and the service statuses are also identical on both servers.
Does anyone have an idea what we are doing wrong with this one? We have migrated a dozen VMs so far without problems.
Command: qm importovf 110 vmname.ovf storage_name
Output:
qemu-img: Could not open '/root/ovf/vmname/disk-0.vmdk': Invalid footer
could not parse qemu-img info command output for '/root/ovf/vmname/disk-0.vmdk'
Use of uninitialized value $virtual_size in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer/OVF.pm line 221.
error parsing /root/ovf/bgweb2/disk-0.vmdk, size seems to be at /usr/share/perl5/PVE/QemuServer/OVF.pm line 221.
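For what it's worth, the message seems to come from qemu-img itself rather than from qm importovf, so the exported disk can probably be checked directly. A rough sketch of what could be run to confirm that (same paths as in the output above; the .mf manifest name is a guess based on the usual OVF export layout):

qemu-img info /root/ovf/vmname/disk-0.vmdk
ls -lh /root/ovf/vmname/
# if the export shipped with a manifest, the checksum can be compared against it
sha256sum /root/ovf/vmname/disk-0.vmdk
cat /root/ovf/vmname/vmname.mf

If the file size or checksum does not match the export, the vmdk was most likely truncated or corrupted while being copied off the ESXi datastore.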
ha-manager status:
quorum OK
master server2 (active, Fri Feb 21 09:59:27 2020)
lrm server1 (active, Fri Feb 21 09:59:21 2020)
lrm server2 (active, Fri Feb 21 09:59:24 2020)
service vm:101 (server1, started)
service vm:102 (server2, started)
systemctl status pve-ha-lrm.service pve-ha-crm.service:
● pve-ha-lrm.service - PVE Local HA Resource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-02-21 09:35:36 CET; 25min ago
Process: 88964 ExecStart=/usr/sbin/pve-ha-lrm start (code=exited, status=0/SUCCESS)
Main PID: 88965 (pve-ha-lrm)
Tasks: 1 (limit: 13516)
Memory: 90.6M
CGroup: /system.slice/pve-ha-lrm.service
└─88965 pve-ha-lrm
Feb 21 09:35:35 bgproxmox1 systemd[1]: Starting PVE Local HA Resource Manager Daemon...
Feb 21 09:35:36 bgproxmox1 pve-ha-lrm[88965]: starting server
Feb 21 09:35:36 bgproxmox1 pve-ha-lrm[88965]: status change startup => wait_for_agent_lock
Feb 21 09:35:36 bgproxmox1 systemd[1]: Started PVE Local HA Resource Manager Daemon.
Feb 21 09:35:41 bgproxmox1 pve-ha-lrm[88965]: successfully acquired lock 'ha_agent_server1_lock'
Feb 21 09:35:41 bgproxmox1 pve-ha-lrm[88965]: watchdog active
Feb 21 09:35:41 bgproxmox1 pve-ha-lrm[88965]: status change wait_for_agent_lock => active
● pve-ha-crm.service - PVE Cluster Resource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-02-21 09:35:35 CET; 25min ago
Process: 88960 ExecStart=/usr/sbin/pve-ha-crm start (code=exited, status=0/SUCCESS)
Main PID: 88963 (pve-ha-crm)
Tasks: 1 (limit: 13516)
Memory: 84.3M
CGroup: /system.slice/pve-ha-crm.service
└─88963 pve-ha-crm
Feb 21 09:35:35 bgproxmox1 systemd[1]: Starting PVE Cluster Resource Manager Daemon...
Feb 21 09:35:35 bgproxmox1 pve-ha-crm[88963]: starting server
Feb 21 09:35:35 bgproxmox1 pve-ha-crm[88963]: status change startup => wait_for_quorum
Feb 21 09:35:35 bgproxmox1 systemd[1]: Started PVE Cluster Resource Manager Daemon.
Feb 21 09:35:40 bgproxmox1 pve-ha-crm[88963]: status change wait_for_quorum => slave
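In case it helps with diagnosis, a manual per-disk equivalent of the import would look roughly like this (VM ID 110 as above; local-lvm stands in for our storage name, and the VM config would already have to exist):

# convert the exported disk first; this fails with the same 'Invalid footer' error if the vmdk is damaged
qemu-img convert -f vmdk -O qcow2 /root/ovf/vmname/disk-0.vmdk /tmp/disk-0.qcow2
# attach the converted disk to the VM as an unused disk
qm importdisk 110 /tmp/disk-0.qcow2 local-lvm

Any pointers would be appreciated.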