Search results

  1. Cannot create more snapshots without deleting some old ones

    OK, I've discovered where the problem is. By default, the PV metadata size is 1020k, but there are two copies, so 510k = 522240 bytes. pvs -o mda_size /dev/sdb1 shows PMdaSize 1020.00k, and vgs -o mda_free raid0 shows VMdaFree 2.00k :rolleyes: I cannot increase the size after the PV is created, so... delete it and create it with a larger... (see the pvcreate sketch after these results)
  2. Cannot create more snapshots without deleting some old ones

    Hi, I'm trying to create a new snapshot of a VM but I get this error: VG raid0 69189 metadata on /dev/sdb1 (521555 bytes) exceeds maximum metadata size (521472 bytes) snapshot create failed: starting cleanup. It only works if I delete an old snapshot, and even then I can create only one. I've reviewed my...
  3. New UEFI qemu vm error: Guest has not initialized the display (yet), after update from 6.3 - 6.4 - 7.x

    Hello, I've upgraded from 6.3 to 6.4 and then to 7.x, and now any new UEFI VM gives the above error. Any other VM created before the upgrade works perfectly. I've discovered that the issue is related to the EFI disk: new VMs add the extra parameter efitype=4m, while older VMs don't have this... (see the efidisk sketch after these results)
  4. Failed to migrate disk (Device /dev/dm-xx not initialized in udev database even after waiting 10000

    I had the same issue even after updating from PVE 6.2-15 to PVE 6.3-6. It happens both when cloning a VM and with running VMs that have lvm-thin volumes on multipath FC storage. udevadm trigger didn't work for me (see the udevadm sketch after these results). Any ideas? Thks.
  5. using DS4700 storage with proxmox howto

    Hi, have you managed to solve the problem? I'm also trying to use a DS4700 with Proxmox. Thks.
  6. offline migration fails if vm disk names start with base-

    OK, I managed to avoid the errors in the second part, but what can I do to avoid the first error? I know I can avoid it by cloning the template, migrating, converting back to a template, and doing it again to migrate to node 1 after the migration to v6. Another way would be a backup/restore of...
  7. offline migration fails if vm disk names start with base-

    Hi, I've set up a cluster with 2 nodes to migrate to v6.0. I'm migrating all my QEMU and LXC guests offline from node 1 to node 2, but I'm having trouble with some templates that previously had snapshots. 2019-10-30 13:06:54 starting migration of VM 102 to node 'proxmox-hp' (192.168.8.15) 2019-10-30...
  8. [error] clone failed: command '/sbin/lvs --separator : --noheadings ...... got timeout

    Hi, I've been deleting almost 40 VMs, so now I don't have that many VMs. Here is the output: PV VG Fmt Attr PSize PFree /dev/raid0/base-135-disk-1 --- 0 0 /dev/raid0/base-138-disk-0...
  9. [error] clone failed: command '/sbin/lvs --separator : --noheadings ...... got timeout

    Hi, the LVM volume is not a shared volume. Today I had another error cloning a QEMU VM, but perhaps the problem was trying to make two clones of different snapshots of the same VM at the same time. Days ago, the errors occurred while cloning only one VM at a time. journalctl: Jul 03 13:36:11 proxmox...
  10. [error] clone failed: command '/sbin/lvs --separator : --noheadings ...... got timeout

    Hi, same issue here since upgrading from PVE 5.2 to PVE 5.4. Any news? Thks.
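
For the metadata-size problem in results 1 and 2, here is a minimal shell sketch of recreating the PV with a larger metadata area, assuming the /dev/sdb1 device and raid0 VG named in the posts above and an example size of 8 MiB; LVM cannot grow the metadata area of an existing PV, so the VG has to be emptied and rebuilt first.

    # Check the current metadata area size and free space
    pvs -o +mda_size,mda_free /dev/sdb1
    vgs -o +mda_free raid0

    # The metadata area cannot be resized in place. WARNING: the next two
    # commands destroy the VG; back up or migrate every LV off it first.
    vgremove raid0
    pvremove /dev/sdb1

    # Recreate the PV with a larger metadata area (8 MiB is an assumed example)
    pvcreate --metadatasize 8m /dev/sdb1
    vgcreate raid0 /dev/sdb1

A larger metadata area leaves room for the many snapshot and clone records a busy Proxmox VG accumulates, which is what the 521472-byte limit in result 2 was running into.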
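
For the UEFI display issue in result 3, a small sketch of where the efitype parameter mentioned in the post appears; the VM ID 100 and the storage name local-lvm are assumptions, and this only illustrates the setting, not a confirmed fix.

    # Show the VM's current EFI disk entry (VM ID 100 is a placeholder)
    qm config 100 | grep efidisk
    # Re-add the EFI disk with the 4m type that new PVE 7 VMs get by default;
    # 'local-lvm' is an assumed storage name
    qm set 100 --efidisk0 local-lvm:1,efitype=4m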
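
For the udev error in result 4, a sketch of the udev commands one might try before retrying the migration; /dev/dm-3 is a placeholder for the dm-xx device named in the error message.

    # Re-trigger udev events and wait for the event queue to drain
    udevadm trigger
    udevadm settle --timeout=30
    # Check whether udev now has a database entry for the device
    udevadm info /dev/dm-3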
