Ok, so I create the new VM with a temporary disk (let's say 1 GB), then I remove the whole storage configuration from the VM, and after that I run "qm importdisk" specifying the VM ID?
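A rough sketch of that workflow, assuming VM ID 9001, a raw image at /root/disk.raw and a storage called "local-lvm" (all of these are placeholders):

# qm create 9001 --name migrated-vm --memory 2048 --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:1
# qm set 9001 --delete scsi0
# qm importdisk 9001 /root/disk.raw local-lvm
# qm set 9001 --scsi0 local-lvm:vm-9001-disk-0

importdisk attaches nothing by itself: the image shows up as an unused disk, and the last command attaches it (the exact volume name can differ, "qm config 9001" shows it). The detached 1 GB placeholder volume still has to be deleted separately.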
I've installed the QEMU guest agent in a guest VM (apt-get install qemu-guest-agent) and then, on the host:
# qm set 100 -agent 1
update VM 100: -agent 1
but
# qm agent 100 ping
No Qemu Guest Agent
Any help?
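For what it's worth, the usual checklist here seems to be (assuming VM 100 and a systemd-based guest): the agent service must be running inside the guest, and the VM needs a full stop/start from the host, since the virtio-serial device for the agent is only added on a fresh start (a reboot from inside the guest is not enough).

Inside the guest:
# systemctl enable --now qemu-guest-agent

On the host:
# qm stop 100 && qm start 100
# qm agent 100 ping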
I'm able to export a XenServer VM, creating a huge XVA file.
I only have PV guests; I can add GRUB and a Linux kernel, then export the VM as an XVA, but my biggest concern is the total time:
several hours are needed to export the VM as an XVA (due to an absurd rate limit imposed by...
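For context, the export itself is just the standard CLI call, roughly (the UUID and target path are placeholders):

# xe vm-export vm=<vm-uuid> filename=/mnt/backup/myvm.xva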
But if I don't need any legacy MBR/BIOS boot fallback and boot only via UEFI, isn't it possible to use the standard ZFS partitioning?
ZFS automatically creates partition 1 (data) and partition 9 when it's given the whole disk. It seems possible to boot directly from ZFS without any BIOS boot partition.
I've seen here: https://pve.proxmox.com/wiki/ZFS_on_Linux#_bootloader that PVE adds a partition for GRUB in the unallocated ZFS space.
Is there any advantage to this?
Why, on a brand new machine, do I see ZFS configured on partition 2 (sda2) while, at the same time, sda1 and sda9 are still used by ZFS...
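To make the layouts easier to compare, this is roughly what I'm looking at (sda and the default pool name rpool are assumptions for a standard PVE install):

# lsblk -o NAME,SIZE,TYPE,PARTTYPE /dev/sda
# sgdisk -p /dev/sda
# zpool status -P rpool

zpool status -P prints the full partition paths, so it shows directly whether the pool sits on sda2 or on sda1/sda9.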
I'm using Debian.
Will these modules break anything in a PV guest under Xen?
Can I add them after the migration, or will the migrated machine fail to boot without these modules?
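Just to make the question concrete, the guest-side prep I have in mind for a Debian PV guest looks roughly like this (the xvda device name, and the assumption that "these modules" means the virtio ones, are both guesses; the stock Debian kernel may already pull virtio into the initramfs on its own):

# apt-get install linux-image-amd64 grub-pc
# grub-install /dev/xvda
# update-grub
# printf 'virtio_pci\nvirtio_blk\nvirtio_scsi\nvirtio_net\n' >> /etc/initramfs-tools/modules
# update-initramfs -u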
This: https://docs.broadcom.com/docs/SAS3IRCU_P15.zip is working properly with a SAS3008 HBA (no RAID).
Obviously, you can't use any RAID-related feature (like STATUS, CONSTCHK, ...), but LOCATE seems to work properly.
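As a concrete example, toggling the locate LED on a drive looks like this (the controller index 0 and the 2:5 enclosure:slot pair are placeholders; LIST and DISPLAY show the real values):

# sas3ircu LIST
# sas3ircu 0 DISPLAY
# sas3ircu 0 LOCATE 2:5 ON
# sas3ircu 0 LOCATE 2:5 OFF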
This has already been discussed tons of times, but all the threads are very old.
I have to convert about 150 XenServer VMs (all PV) to Proxmox. Given the number of VMs to migrate, I need to automate as much as possible.
I know that I have to install a kernel and GRUB in each VM before...
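To give an idea of what I'd like to end up with on the Proxmox side, a very rough loop (it assumes the disks have already been exported/converted to raw files named <vmid>.raw and that a storage called local-lvm exists; none of that is settled yet, and the exact volume name after import may differ):

for f in /mnt/migration/*.raw; do
    vmid=$(basename "$f" .raw)
    qm create "$vmid" --name "xen-$vmid" --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
    qm importdisk "$vmid" "$f" local-lvm
    qm set "$vmid" --scsi0 "local-lvm:vm-$vmid-disk-0" --boot order=scsi0
done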