jackjackson

Hello, I'm getting a little bit confused about live migration.
Do I need Ceph or any kind of special settings to be able to migrate one VM from node1 to node2 while the VM disk is on local storage?
Currently I'm getting this error:

2023-10-12 23:28:16 ERROR: migration aborted (duration 00:00:00): storage 'storage4' is not available on node 'vmn2'
TASK ERROR: migration aborted.
 
No, you don't need Ceph. When migrating from the UI with local disks, the system expects you to have a storage with a matching name on the target side.
It sounds like your disks are on "storage4", so node2 needs to have a storage object named "storage4" as well.
You have a few more options when using the CLI: see "man qm" and search for "qm migrate".
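For illustration, a rough sketch of what that looks like from the CLI (the mount path below is an assumption; only the storage name "storage4" comes from your error):

# list the storages this node can actually activate; run it on both nodes
pvesm status

# storage definitions live in the cluster-wide /etc/pve/storage.cfg, so a single
# "dir" entry named storage4 (without a --nodes restriction) is offered on both
# nodes -- the path just has to exist on each of them (the path here is an assumption)
pvesm add dir storage4 --path /mnt/pve/storage4 --content images,rootdir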


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi,
you can select a target storage when doing an online migration (both UI and CLI; for offline migration the option is currently CLI-only) to make it work even if the same storage is not available on the target.
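A minimal CLI sketch of that for a VM (the VM ID 100 and the storage name "other-storage" are placeholders, the node name vmn2 comes from the error above; check "man qm" for the full option list):

# live-migrate VM 100 to node vmn2 and copy its local disks onto a different
# storage on the target ("other-storage" must exist on vmn2)
qm migrate 100 vmn2 --online --with-local-disks --targetstorage other-storage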
 
Not really working: I can see the CT is migrated, but I'm still getting this error: TASK ERROR: storage 'tarolo4' is not available on node 'vmn2'. Any ideas?
Also, a single VM lasted 3 hours and nothing migrated from A to B on a 1 Gb link with a 300 GB disk. Should I have left it to let it do its own thing? (I cancelled it.)
 
So should I edit the storage name then? Currently I have these on both nodes (see the attached screenshots); I can re-create node2 from scratch. Size doesn't matter, right? Thankfully I did not start to populate it or put mission-critical stuff on it, unlike node1. Another question: can I mix ZFS with an LVM/directory pool, or do both of them have to share the same storage type? Node1 doesn't have any ZFS; it's backed by HW RAID. Node2, on the other hand, has its core on ZFS plus a 4 TB disk created as a new pool "storagepool"; the remaining two storages on node2 are LVM/directory types.
 

Attachments

  • node1.png (5.8 KB)
  • node2.png (4.9 KB)
"Not really working: I can see the CT is migrated, but I'm still getting this error: TASK ERROR: storage 'tarolo4' is not available on node 'vmn2'. Any ideas?"
For containers, the target storage option is only available via the CLI (see the sketch at the end of this post).
"Also, a single VM lasted 3 hours and nothing migrated from A to B on a 1 Gb link with a 300 GB disk. Should I have left it to let it do its own thing? (I cancelled it.)"
Can you share the full migration log? Is the disk maybe LVM-thin with discard turned on? Then the target will be zeroed first, which will take a while (reference).
"So should I edit the storage name then? Currently I have these on both nodes; I can re-create node2 from scratch. Size doesn't matter, right? Thankfully I did not start to populate it or put mission-critical stuff on it, unlike node1."
It makes sense to have only a single storage name/configuration for similar storages that are present on multiple nodes. Ideally, the size would be comparable, but it's not a hard requirement. The important thing is that the things you want to migrate fit.
"Another question: can I mix ZFS with an LVM/directory pool, or do both of them have to share the same storage type? Node1 doesn't have any ZFS; it's backed by HW RAID. Node2, on the other hand, has its core on ZFS plus a 4 TB disk created as a new pool 'storagepool'; the remaining two storages on node2 are LVM/directory types."
No, you can't mix different types using the same storage name/configuration. You'll need to use the target storage option then, but for offline/LXC restart migration, not all combinations will work either.
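A minimal sketch of the container case, assuming the CT ID 127 and node name from the log below and the "storagepool" storage mentioned in the thread; the exact spelling of the target-storage option can differ between versions, so check "man pct" first:

# restart-migrate CT 127 to node vmn2 and recreate its volumes on "storagepool"
# (option name assumed; verify it in "man pct" under "pct migrate")
pct migrate 127 vmn2 --restart --target-storage storagepool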
 
Tried with the same names and with "shared" checked on the storage too. It did not give the error I mentioned above, but the disk itself was not found on the other node (tarolo4).

I could post the task log if I had one; currently this is what I get. The last time I tried, it lasted 3 hours and the disk was still nowhere to be found:


2023-10-16 11:05:45 starting migration of CT 127 to node 'vmn2' (10.0.1.6)
2023-10-16 11:05:45 found local volume 'tarolo4:127/vm-127-disk-0.raw' (in current VM config)
2023-10-16 11:05:48 Formatting '/mnt/pve/tarolo4/images/127/vm-127-disk-0.raw', fmt=raw size=21474836480 preallocation=off

Also, it looks like it's going into an infinite loop (screenshot attached).


Also, here is the other attempt, which did move the VM instantly, but the disk itself somehow stayed on node1's storage4:
TASK ERROR: volume 'tarolo4:127/vm-127-disk-0.raw' does not exist
I'll post my revamped storage layout; I'm just confused about how this should work and what requirements it needs to function properly.
 
Dont set "shared" attribute for your storage, your storage is local, not shared. The data needs to be fully copied with all underlying requirements (preformat etc), as opposed to "shared" storage where data is available on both servers simultaneously.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Dont set "shared" attribute for your storage, your storage is local, not shared. The data needs to be fully copied with all underlying requirements (preformat etc), as opposed to "shared" storage where data is available on both servers simultaneously.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Okay, so: same name, and do not check "shared" on the storage; then it will work without messing up the drives. But I'm curious why it takes so much time to migrate a 300 GB container over a 1 Gb link. I will do an iperf test; 300 GB on 1 Gb should at most take 10-15 minutes, not 2-3 hours. Thank you for the assistance.
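For the bandwidth check, a minimal iperf3 sketch between the two nodes (the address 10.0.1.6 is taken from the migration log above; as a reference point, a full 1 Gbit/s link moves roughly 112 MB/s, so a 300 GB copy takes about 45 minutes even before any formatting or zeroing overhead):

# on the target node (vmn2): start the server side
iperf3 -s

# on the source node: test against the address used for migration
iperf3 -c 10.0.1.6 -t 30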
 
