"vCLS-4c4c4544-0042-5310-8030-b1c04f5a3233": {
"config": {
"datastore": "",
"path": "/var/run/crx/infra/vCLS-4c4c4544-0042-5310-8030-b1c04f5a3233/vCLS-4c4c4544-0042-5310-8030-b1c04f5a3233.vmx",
"checksum": "b2cee0014fab6e951cd89d30c1e8a9bf8718f6e3f5d9903a7cada39fa8571fa6"
},
"disks": [],
"power": "poweredOn"
},
--- a/listvms.py
+++ b/listvms.py
@@ -265,6 +265,12 @@
 with connect_to_esxi_host(connection_args) as connection:
     data = {}
     for vm in list_vms(connection):
+        # Skip VMs with an empty datastore name (e.g. vCLS CRX VMs).
+        datastore_name, relative_vmx_path = parse_file_path(
+            vm.config.files.vmPathName
+        )
+        if not datastore_name:
+            continue
         try:
             fetch_and_update_vm_data(vm, data)
         except Exception as err:
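For context, parse_file_path splits an ESXi vmPathName such as "[datastore1] dir/vm.vmx" into a datastore name and a relative path. A minimal sketch of such a helper (the real one in listvms.py may differ) shows why the CRX paths above yield an empty datastore name:

import re

def parse_file_path(vm_path_name):
    # Regular VM paths look like "[datastore1] dir/vm.vmx".
    # vCLS CRX VMs report a plain filesystem path such as
    # "/var/run/crx/infra/.../vCLS-....vmx", which has no
    # "[datastore]" prefix, so the datastore name comes back empty.
    match = re.match(r"^\[([^\]]+)\] ?(.*)$", vm_path_name)
    if match:
        return match.group(1), match.group(2)
    return "", vm_path_name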
Hello,

FAQ
[...]
Q: Why can't I find updates for those packages?
A: As of this writing, these packages are available in the pvetest and the pve-no-subscription repositories.
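As a sketch, the no-subscription repository can be enabled with a standard APT source entry; adjust the suite name to match your Debian release:

deb http://download.proxmox.com/debian/pve trixie pve-no-subscription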
I tried to use Veeam to restore; that was also limited to 1 Gbit, although backups can reach up to 7 Gbit.

This limitation is because of ESXi. Nothing PVE can do about it.
You can use other means of transferring the data, e.g. shared storage (NFS) or Clonezilla.
See https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE
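As one possible setup, a shared NFS storage can be added on the PVE side with pvesm; the storage name, server address, and export path below are placeholders:

pvesm add nfs migration-nfs --server 192.168.1.50 --export /export/vmdata --content images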
We recently tried live migration and it was quite weird.
The guest we migrated (from ESXi 8.0.3 to Proxmox 9) was a simple Linux VM running some Docker containers.
While the target Proxmox server reported a constant data ingress of around 800 Mbit/s from the ESXi node, the progress of the VM migration itself was very slow, far slower than the ingress rate would suggest.
Sometimes, when the VM was not accessing the disk, you could see progress of approximately 100 MByte/s, which matches the ingress (800 Mbit/s ÷ 8 = 100 MByte/s).
But when the VM began accessing the disk (mostly reads!), the progress dropped far below 100 MByte/s while the ingress stayed around 800 Mbit/s.
Even after waiting twice as long as the entire VM disk should have taken at that ingress rate, the overall migration was only at around 25%.
I also realized that you should NOT stop the VM during a live migration (even when you realize it doesn't make sense, because the VM is SO SLOW that it's almost equal to being down), because the migration will then fail.
Overall, I'd recommend staying away from live migration in its current state.