Proxmox cluster, can't migrate or create VM on new node

proxproxprox

New Member
Jan 22, 2025
Hey all,

We have an existing Proxmox cluster of two Dell micro servers, both running 8.3. The cluster has been reliable and worked well. Each server has two drives: one for the OS using the LVM defaults, and a second drive for VMs, configured as below:

Code:
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                          8:0    0   1.8T  0 disk
├─VMs-VMs_tmeta            252:0    0  15.9G  0 lvm 
│ └─VMs-VMs-tpool          252:8    0   1.8T  0 lvm 
│   ├─VMs-VMs              252:9    0   1.8T  1 lvm 
│   ├─VMs-vm--106--disk--0 252:10   0   100G  0 lvm 
│   ├─VMs-vm--104--disk--0 252:11   0   100G  0 lvm 
│   ├─VMs-vm--107--disk--0 252:12   0     4M  0 lvm 
│   ├─VMs-vm--107--disk--1 252:13   0   100G  0 lvm 
│   └─VMs-vm--107--disk--2 252:14   0     4M  0 lvm 
└─VMs-VMs_tdata            252:1    0   1.8T  0 lvm 
  └─VMs-VMs-tpool          252:8    0   1.8T  0 lvm 
    ├─VMs-VMs              252:9    0   1.8T  1 lvm 
    ├─VMs-vm--106--disk--0 252:10   0   100G  0 lvm 
    ├─VMs-vm--104--disk--0 252:11   0   100G  0 lvm 
    ├─VMs-vm--107--disk--0 252:12   0     4M  0 lvm 
    ├─VMs-vm--107--disk--1 252:13   0   100G  0 lvm 
    └─VMs-vm--107--disk--2 252:14   0     4M  0 lvm 
nvme0n1                    259:0    0 465.8G  0 disk
├─nvme0n1p1                259:1    0  1007K  0 part
├─nvme0n1p2                259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                259:3    0 464.8G  0 part
  ├─pve-swap               252:2    0     8G  0 lvm  [SWAP]
  ├─pve-root               252:3    0    96G  0 lvm  /
  ├─pve-data_tmeta         252:4    0   3.4G  0 lvm 
  │ └─pve-data-tpool       252:6    0 337.9G  0 lvm 
  │   └─pve-data           252:7    0 337.9G  1 lvm 
  └─pve-data_tdata         252:5    0 337.9G  0 lvm 
    └─pve-data-tpool       252:6    0 337.9G  0 lvm 
      └─pve-data           252:7    0 337.9G  1 lvm

I've just added an AMD "NUC" to the cluster, with Proxmox VE installed using all the defaults (LVM etc.).
It's current hardware with a single 1TB NVMe drive - I didn't see a need for two drives in this system.

However, both after installing (before joining the cluster) and after joining, I can't create VMs on the new node or migrate existing VMs to it, as it says there's nowhere to store VMs.

Disk layout on the new node (default):
Code:
root@pve:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1            259:0    0 931.5G  0 disk 
├─nvme0n1p1        259:1    0  1007K  0 part 
├─nvme0n1p2        259:2    0     1G  0 part /boot/efi
└─nvme0n1p3        259:3    0 930.5G  0 part 
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta 252:2    0   8.1G  0 lvm  
  │ └─pve-data     252:4    0 794.3G  0 lvm  
  └─pve-data_tdata 252:3    0 794.3G  0 lvm  
    └─pve-data     252:4    0 794.3G  0 lvm

The exact error is:
TASK ERROR: unable to create VM 108 - no such logical volume VMs/VMs
I can't set up a separate VM drive like on the other nodes, since this node only has one drive, but as I understand it the 'data' thin pool on the existing LVM should be able to store VMs.
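
A quick way to confirm this from the shell (standard LVM commands, nothing Proxmox-specific):

Code:
# standard LVM tools; on the new node only the default 'pve' VG
# shows up -- there is no 'VMs' VG for the cluster storage to use
vgs
lvs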

I'm lost and confused - this seems like a pretty basic thing to not work out of the box. How can I get this working?

Thanks!
 
Hi @proxproxprox, welcome to the forum.

To assist you better, you’ll need to provide additional details. Specifically:

  • The contents of your /etc/pve/storage.cfg file.
  • The output of pvesm status from each node in your cluster.
Based on your description, it's possible that your storage configuration is no longer symmetrical, since the new node has fewer drives.

It should definitely be possible to get everything running smoothly again, but you may need to make adjustments to account for the dissimilar configuration.

One potential solution could be to create a new storage pool scoped specifically to the new node, and to scope the existing pool to the two older nodes - a sketch of this is below.
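
A minimal sketch of that approach, assuming placeholder storage and node names (pvesm set and pvesm add are the CLI equivalents of editing /etc/pve/storage.cfg):

Code:
# restrict the existing pool to the nodes that actually have its VG
# (storage and node names are placeholders -- substitute your own)
pvesm set VMs --nodes node1,node2

# add a node-local lvmthin pool on the new node, backed by whatever
# thin pool exists there (e.g. the installer's default pve/data)
pvesm add lvmthin vms-new --vgname pve --thinpool data \
    --content images,rootdir --nodes node3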


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for the reply :)

Yep, that was what I'd worked out based on other reading I'd done. I played around with LVM to try to set up a new storage pool on the new node, but it would only let me do so if there was an additional, new drive.

Node 1:
Code:
root@proxmox-01:~# pvesm status
Name             Type     Status           Total            Used       Available        %
VMs           lvmthin     active      1919823872       179503532      1740320339    9.35%
local             dir     active        98497780        28825536        64622696   29.27%
local-lvm     lvmthin     active       354275328               0       354275328    0.00%

Node 2:
Code:
root@proxmox02:~# pvesm status
Name             Type     Status           Total            Used       Available        %
VMs           lvmthin     active      1919827968        63546305      1856281662    3.31%
local             dir     active        98497780        13737284        79710948   13.95%
local-lvm     lvmthin     active       354275328               0       354275328    0.00%

Node 3 (new node). Note the error appearing here - this seems to be the issue.
Code:
root@pve:~# pvesm status
no such logical volume VMs/VMs
Name             Type     Status           Total            Used       Available        %
VMs           lvmthin   inactive               0               0               0    0.00%
local             dir     active        98497780         2910868        90537364    2.96%
local-lvm     lvmthin   disabled               0               0               0      N/A

storage.cfg:
Code:
root@proxmox02:~# cat /etc/pve/storage.cfg 
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes proxmox02,proxmox-01

lvmthin: VMs
        thinpool VMs
        vgname VMs
        content rootdir,images
        nodes proxmox02,proxmox-01,pve

So, based on this, I guess the question is: how do I split the existing NVMe on the new host to add the VMs partition?
 
So, based on this, I guess the question is: how do I split the existing NVMe on the new host to add the VMs partition?
I would recommend reading through the results of this Google search: "proxmox change default lvm".
You will need to combine the "remove" and "create" steps.

At a high level: remove the existing "data" thin pool (it lives inside the default "pve" VG), shrink that VG so a second partition can be created, create a "VMs" VG with its own thin pool on the new partition, and then point the corresponding storage pools at them. A rough sketch is below.
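
A rough sketch of those steps, assuming the new node has no guests yet. Every command here is destructive, the sizes and device names are placeholders, and the partition shrink is the fiddly part, so treat this as an outline rather than a recipe:

Code:
# 1. remove the default 'data' thin pool to free space in the 'pve' VG
lvremove pve/data

# 2. shrink the PV so part of the disk can be re-partitioned
#    (size is a placeholder -- pick your own split)
pvresize --setphysicalvolumesize 300G /dev/nvme0n1p3
# then shrink nvme0n1p3 with parted/fdisk and create a new
# partition (e.g. nvme0n1p4) in the freed space

# 3. recreate a smaller 'data' thin pool so local-lvm can be used here
lvcreate -l 90%FREE --thinpool data pve

# 4. create the 'VMs' VG and thin pool that storage.cfg expects
vgcreate VMs /dev/nvme0n1p4
lvcreate -l 95%FREE --thinpool VMs VMs   # leave headroom for pool metadata

# 5. scope the storages to include the new node
pvesm set local-lvm --nodes proxmox-01,proxmox02,pve
# 'VMs' already lists all three nodes in storage.cfg, so it should
# go active once the VG and thin pool exist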

Good luck

PS: if you are unsure about the safety of any particular step, you can always install PVE as a VM on the existing nodes and experiment there first.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 