Search results

  1. "storage migration failed" moving a local disk to lvmthin

    Did this while the migration was at 7% (still running right now). Didn't produce any output. Should it? Edit: Still getting the same error: WARNING: Device /dev/dm-6 not initialized in udev database even after waiting 10000000 microseconds. Logical volume "vm-108-disk-0" successfully...
  2. "storage migration failed" moving a local disk to lvmthin

    I have a VM that is using a local qcow2 disk (virtual 465 GB, physical 55 GB) that is stored on a local directory (100 GB). The VM has recently been migrated from another node to the new, second node in a cluster. It's running fine; only the disk needs to be moved to an lvmthin storage (7 TB)...
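    The move described in this thread can also be started from the CLI instead of the GUI's "Move disk" button. A minimal sketch, assuming VM ID 108 and disk slot scsi0 (both appear in later posts in this thread) and a target storage named `lvmthin`; the storage name on another cluster will likely differ:

    ```shell
    # Move the disk image from the directory storage to the LVM-thin storage.
    # LVM-thin only supports raw, so the qcow2 image is converted during the move.
    qm move_disk 108 scsi0 lvmthin

    # After verifying the VM boots from the new volume, the old qcow2 source
    # file can be removed in the same step by passing --delete:
    # qm move_disk 108 scsi0 lvmthin --delete 1
    ```

    Since the source image is thin (55 GB used of 465 GB virtual), the target pool only needs to hold the written data plus growth, not the full virtual size.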
  3. New node in a cluster can't use all of its storage

    Will open another thread for the last problem as the original thread topic has been resolved with comment #7
  4. New node in a cluster can't use all of its storage

    Migration was fine, but I get an error when I try to "Move disk" to switch from local storage to lvmthin (using GUI on first node; VM is on second node): Virtual Environment 6.1-8 Search Virtual Machine 108 (atl) on node 'atl-vm03' Server View () scsi0 lvmthin Raw disk image (raw) create full...
  5. New node in a cluster can't use all of its storage

    Would I start this migration on a specific node?
  6. New node in a cluster can't use all of its storage

    No, I thought this wouldn't make a difference. Apparently it does in this case. It's added now. I want to migrate a VM from node 1 (where it's using a local storage (type "directory")) to the new node. Will I be able to configure it to use the lvm-thin storage then? Or how would I continue?
  7. New node in a cluster can't use all of its storage

    Via "Storage > Add > LVM-Thin"? When I click on the dropdown for "Thin Pool", it's scanning, but doesn't find anything.

        atl-vm03:~# cat /etc/pve/storage.cfg
        dir: Backup
            path /var/lib/vz/dump
            content backup
            maxfiles 2
        dir: local
            path /var/lib/vz
            content...
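    An empty "Thin Pool" dropdown usually means no LVM thin pool exists on the node yet; the storage.cfg excerpt above defines only `dir` storages. A minimal sketch of creating a pool and the kind of entry "Storage > Add > LVM-Thin" then writes; the volume group `vg0`, pool name `data`, and storage ID `lvmthin` are placeholders, not taken from the thread:

    ```
    # On the node, create a thin pool inside an existing volume group:
    #   lvcreate -L 7T --thinpool data vg0
    #
    # Resulting /etc/pve/storage.cfg entry (cluster-wide file), restricted
    # to the one node that actually has the pool:
    lvmthin: lvmthin
        thinpool data
        vgname vg0
        content images,rootdir
        nodes atl-vm03
    ```

    The `nodes` line matters in a mixed cluster like this one: without it, the other node would show the storage as unavailable since it has no such pool.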
  8. New node in a cluster can't use all of its storage

    Yes, the new node has already been added to the cluster. The state I described in my initial post is after clustering.
  9. New node in a cluster can't use all of its storage

    Hello, we have been running Proxmox for some years on our first server and have now just started to use a second server. The cluster is set up and running, but I'm confused about how it works with storage now. PVE 6.1-8 Server 1: 1 TB disk space "local" storage: 800 GB Server 2: 8 TB disk...
  10. Proxmox VE 6.0 beta released!

    Had a very similar problem also with a Dell C6100 that later on in the stack trace showed "Code: Bad RIP value.". This was the only thread I found, so for anyone looking this might be helpful in the future: In the BIOS, set the BMC to use a dedicated MAC instead of sharing it. Afterwards...
  11. live migration change qcow2 disk into raw

    Ah, found my mistake: The new host has been freshly installed, but apparently needed the "pve-no-subscription" repository added and an upgrade to get the most recent version. Haven't tested, but this should be fine then.
  12. live migration change qcow2 disk into raw

    My server has 6.1-6 but still the disk got converted from qcow2 to raw when I live migrated via the GUI yesterday.
  13. live migration change qcow2 disk into raw

    Experiencing the same problem. The bug report states that a patch has been applied on 2020-01-21. When will a new version be released?