Migration - storage is not available on target-node

keytrickz

New Member
Mar 2, 2022
Hello,

we are using Proxmox with several different hard drives.
Proxmox itself runs locally from ZFS storage; there are no problems here.
We have added additional NVMe drives as extra storage; these are also recognized by Proxmox and are fully functional.

However, there is now a problem in the cluster: the storage name must be unique cluster-wide (it must not occur twice), and that is exactly where it fails.
VMs can't be migrated to other nodes, because the storage name on the target node doesn't match the one on the source system when migrating to the "other" storage
(storage1 → storage2 = storage1 not found).

The aim is to migrate VMs from one node to another, but the storage naming gets in the way.
How can we solve this?

Proxmox version:
7.1-12
Error message at migration start:
"storage 'host01-2tb-nvme' is not available on node 'host02' (500)"

Moving a VM's disk to the local Proxmox storage first, migrating the VM to another node, and then moving the disk back to the additional storage is not a solution for us.

Thank you for your support.
 
ZFS is used as the storage type.
Here is the content of /etc/pve/storage.cfg:

Code:
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

rbd: rbd
        content rootdir,images
        krbd 0
        pool rbd

zfspool: host01-2tb-nvme
        pool host01-2tb-nvme
        content rootdir,images
        mountpoint /host01-2tb-nvme
        nodes host01

zfspool: host02-2tb-nvme
        pool host02-2tb-nvme
        content images,rootdir
        mountpoint /host02-2tb-nvme
        nodes host02
 
You should only have one storage "2tb-nvme", with the same pool name and not limited to just a single node, just like you only have a single local-zfs (with different data on each node, obviously, since it is a local storage).
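
A merged entry could look like this (a sketch; it assumes the underlying zpool is created with the identical name 2tb-nvme on every node, instead of host01-2tb-nvme/host02-2tb-nvme):

Code:
zfspool: 2tb-nvme
        pool 2tb-nvme
        content images,rootdir
        mountpoint /2tb-nvme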
 
@fabian - this doesn't work; if we try to create a storage with the same name, the following error message appears:
storage ID 'host01-2tb-nvme' already defined (500)
 
You only need to create the storage once; only the underlying zpool needs to be created on each node. If you create the zpool over the GUI, simply check the 'add as storage' checkbox once and not each time you create the pool (or leave it unchecked every time, and then add the storage as a last step under Datacenter -> Storage -> Add).
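
On the CLI this could look roughly like the following (a sketch; the device path /dev/nvme0n1 is an assumption, adjust it to your hardware):

Code:
# on every node: create a zpool with the identical name
zpool create 2tb-nvme /dev/nvme0n1

# once, on any one node: add it as a cluster-wide storage entry
pvesm add zfspool 2tb-nvme --pool 2tb-nvme --content images,rootdir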
 
I have a ZFS vpool on pve1 (2TB ZFS), then I set up another machine (pve2 w/512GB LVM) and clustered them together.
All I wanted to do was move/migrate a VM from pve1 to run on pve2, but it complained that there is no vpool on pve2.

When I go under Datacenter -> Storage -> vpool, I can select vpool and select pve2 under Nodes.

This makes vpool show up on pve2,

1. but what does that mean? There is clearly no 2TB ZFS on pve2
2. what happens if the network disconnects?
 
Local storages which are only available on a subset of the nodes need to be limited accordingly with the "nodes" option in storage.cfg.
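
For the vpool example above, such a restricted entry might look like this (a sketch using the names from this thread):

Code:
zfspool: vpool
        pool vpool
        content images,rootdir
        nodes pve1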
 
Hi,
I have a ZFS vpool on pve1 (2TB ZFS) [...] 1. but what does that mean? There is clearly no 2TB ZFS on pve2 2. what happens if the network disconnects?
The storage configuration should reflect reality. If you tell Proxmox VE that a storage exists on a certain node when in reality it doesn't, you will just get errors when trying to activate the storage. When joining a cluster, a node inherits the cluster's storage configuration (it is shared across all nodes), so you can re-add the LVM storage and restrict it to the second node. When live-migrating a VM (or offline-migrating via the CLI) you can specify a target storage.
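
Re-adding pve2's LVM-thin storage with such a restriction could look like this (a sketch; local-lvm as the storage ID and pve/data as volume group and thin pool are the stock defaults and are assumptions here):

Code:
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content rootdir,images --nodes pve2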
 
I can't quite connect your replies to my two questions.

Why is "nodes" written in quotes? Do you mean it is like a node but not a node? Where and what is `storage.cfg`? I'm telling PVE the truth, but networks can go down; what happens then?
 
First of all, I haven't read the whole thread and am just going to jump in briefly on the last answers.

Under Datacenter -> Storage you should find your storage "vpool". If you edit it, you have the option of restricting it to certain nodes under "Nodes". The quotation marks simply mark the field or option and have nothing to do with the question of what a node is or is not.
 
Thanks. So, my `vpool` basically becomes a networked storage if I select additional nodes?
 
So, my `vpool` basically becomes a networked storage if I select additional nodes?
No, local storage does not become network storage. You are just saying that every node has this storage. But if it isn't actually set up on a node, that leads to problems.
 
Can you kindly elaborate on "set up"? Do you mean I have to get the same 2TB and set it up as `vpool` in PVE2?
 
If you connect a USB hard drive to a node and add it as storage with the name "USB-DRIVE" on that one node, then you can use the storage on exactly that node. You can also remove the restriction, and then every node will see this storage, but it will only work where the USB hard drive is actually plugged in. If you now move the hard drive, it will work there and no longer work elsewhere. If you give every node a USB hard drive and set it up identically on each node, then it will work everywhere.

However, the restriction will never suddenly turn the USB hard drive into network storage. It is simply a restriction on which node can see which storage. Whether the node can actually reach the storage doesn't matter; PVE will try to mount it and expects that you have set everything up correctly in the background.
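
In storage.cfg such an entry might look like this (a sketch; the path, content types, and node name are hypothetical):

Code:
dir: USB-DRIVE
        path /mnt/usb-drive
        content backup,iso
        nodes pve1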
 
Ok, thanks, it is getting clearer, but:

You said that if I remove the restriction, every node will "see" this storage. So "see" is not "reach", aka "read/write"? What does "see" mean? And why does a node need to "see" the storage on another node? So that I, as a node, know how to set myself up in the event that a VM is passed to me?
 
There is a file called storage.cfg, and it lives in a special place: /etc/pve/storage.cfg. This special place is a replicated filesystem; through cluster magic it is replicated to all nodes in the cluster, and each node sees the exact same content.

In storage.cfg you define storage objects. You can configure such a storage object (a text stanza) from any node and all nodes will see the change almost instantaneously. By default there is local-lvm, which points to the pve-data LVM. By default each node has that LVM slice, so this storage works everywhere, even though the content of the LVM is different on each node.
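
That default entry typically looks like this on a stock install (exact values can differ on your system):

Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images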

If you add a new disk to nodeA and create a new LVM slice called NEW, you need to create a new text entry in storage.cfg for PVE to access it. If you don't "restrict" the nodes in that text stanza, then all nodes will think that this storage must exist and will try to activate and use it. If you restrict it, then upon reading the configuration nodeB will find the restriction and skip this storage object.
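
Such a restricted stanza might look like this (a sketch; lvmthin is an assumption here, the exact type depends on how the LVM was created):

Code:
lvmthin: NEW
        thinpool NEW
        vgname NEW
        content rootdir,images
        nodes nodeA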

When you have network storage that is accessible from all nodes, again a simple text definition made on nodeA will be seen by nodeB and nodeC. All nodes can access this storage over the network, and so they are all happy.

For a migration in the GUI to work, a storage with the same name must exist on both the source and the target node.
However, from the CLI (see man qm) migration has an option to specify a target storage:
Code:
qm migrate <vmid> <target> [OPTIONS]

       Migrate virtual machine. Creates a new migration task.

       <vmid>: <integer> (1 - N)
           The (unique) ID of the VM.

       <target>: <string>
           Target node.

       --bwlimit <integer> (0 - N) (default = migrate limit from datacenter or storage config)
           Override I/O bandwidth limit (in KiB/s).

       --force <boolean>
           Allow to migrate VMs which use local devices. Only root may use this option.

       --migration_network <string>
           CIDR of the (sub) network that is used for migration.

       --migration_type <insecure | secure>
           Migration traffic is encrypted using an SSH tunnel by default. On secure, completely private networks this can be disabled to increase performance.

       --online <boolean>
           Use online/live migration if VM is running. Ignored if VM is stopped.

       --targetstorage <string>
           Mapping from source to target storages. Providing only a single storage ID maps all source storages to that storage. Providing the special value 1 will map each source storage to itself.

       --with-local-disks <boolean>
           Enable live storage migration for local disk


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
