Cluster-stuck

Volter

Hi, hoping someone can help me. I have been running a single Proxmox node as a homelab for a while, and recently I bought a second mini PC to use as a second node. The new device only has one NVMe drive internally.

I set up and installed the new device, created the cluster on my existing node, then added the new device to it. Once it was added, the local-lvm storage disappeared; after reading through some posts I re-added it to storage.cfg and got it back. Now, even though it is listed and shows the correct volume, it does not appear as an option when creating any CT or VM images, despite being configured for that content.

What do I need to do to get this back? Any help appreciated! Thanks!
 
Hi, I think you can do something similar to the first two screenshots in the post by @Johannes S in https://forum.proxmox.com/threads/physical-windows-server-2022-to-vm-on-proxmox-9.174241/post-809988

Just select your storage ID, then Edit > Content, and add the Disk image and Container selections (I think).

Does it help?
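
If you prefer doing it from the shell, the CLI equivalent would be something like this (assuming the storage ID is local-lvm; adjust it to whatever yours is called):

Bash:
# enable VM disk images and container root disks on the storage
pvesm set local-lvm --content images,rootdir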
Thanks for the reply. It is already configured for Disk image and Container, so it should be ready to accept these, and it's the same settings as I have on the other node. I can't exactly compare it to the other node, as I removed the local-lvm storage there when it was first set up (a couple of years ago) and all images are on separate disks, but on this machine I will need to use it.
 
You need to differentiate between block and file storage: usually the LVM setup splits the disk into two parts, one for block storage (container and VM disk images) and one for file storage (a filesystem) for ISO images, templates, imports, backups etc. The split is fixed, and resizing it after install is possible but cumbersome and more involved.
ZFS is more flexible in that regard, since you don't have a fixed part reserved for one or the other.

But even now, if you still have enough free space on your OS disk or another file storage available (e.g. a network share on a NAS), you could just add any directory as directory storage and you should be able to use it.

See also https://pve.proxmox.com/wiki/Storage for the different storage types and their differences.
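
For example, creating a folder on the root filesystem and registering it as directory storage could look roughly like this (the path and storage ID here are just placeholders, pick whatever fits your setup):

Bash:
# create a directory on the root filesystem and add it as directory storage
mkdir -p /mnt/extra-storage
pvesm add dir extra-storage --path /mnt/extra-storage --content iso,vztmpl,backup,images,rootdir

Note that VM disks on a directory storage end up as files (e.g. qcow2 or raw) rather than LVM volumes.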
 
Thanks, OK, so on face value I have both available, as that was the default config after installation. My 1 TB NVMe was split into 100 GB for the local filesystem (boot, ISOs, templates etc.), and the remaining 8xx GB was set up as the local-lvm block storage, with the default thin pool name "data" and volume group "pve". I can see it under node > Disks > LVM-Thin, and it was always visible there. The problem started when the node joined the cluster: the local-lvm block storage disappeared from the storage list under that node, although it was still visible under node > Disks > LVM-Thin. After looking through other posts I saw that I could re-add it in storage.cfg, which I did, and it is now listed again as normal under the storage list. When I click on it, it shows a summary with the expected available capacity, and the content is set to Disk image and Container.
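
For reference, a default lvmthin definition in /etc/pve/storage.cfg looks something like the following (names matching the lsblk output below); this is what I tried to recreate, though I can't guarantee my entry matches it line for line:

Bash:
# default thin-pool entry in /etc/pve/storage.cfg
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images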

The issue is that when I go to create a CT or VM on this node, this storage is not listed as a target, so effectively there is no storage available on this node for creating CTs or VMs. I can't work out what I need to do or what I have done wrong; it seems I missed something when re-adding it.

lsblk shows:

Bash:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1            259:0    0 931.5G  0 disk
|-nvme0n1p1        259:1    0  1007K  0 part
|-nvme0n1p2        259:2    0     1G  0 part /boot/efi
`-nvme0n1p3        259:3    0   930G  0 part
  |-pve-swap       252:0    0     8G  0 lvm  [SWAP]
  |-pve-root       252:1    0    96G  0 lvm  /
  |-pve-data_tmeta 252:2    0   8.1G  0 lvm 
  | `-pve-data     252:4    0 793.8G  0 lvm 
  `-pve-data_tdata 252:3    0 793.8G  0 lvm 
    `-pve-data     252:4    0 793.8G  0 lvm

but df -h shows:

Bash:
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.7G     0  7.7G   0% /dev
tmpfs                 1.6G  1.6M  1.6G   1% /run
/dev/mapper/pve-root   94G  4.6G   85G   6% /
tmpfs                 7.8G   66M  7.7G   1% /dev/shm
efivarfs              256K   89K  163K  36% /sys/firmware/efi/efivars
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
tmpfs                 7.8G     0  7.8G   0% /tmp
/dev/nvme0n1p2       1022M  8.8M 1014M   1% /boot/efi
/dev/fuse             128M   44K  128M   1% /etc/pve
tmpfs                 1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs                 1.6G  4.0K  1.6G   1% /run/user/0

The above suggests it's not mounted??
 
The above suggests it's not mounted??
Right. And this is expected; it is OK for it NOT to be seen as mounted at the hypervisor level.
Only filesystems get mounted, while LVM(-thin) is more like raw disk space from which you carve out fragments FOR filesystems.
But for your issue I suspect some cluster-related reason which I don't know :).
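
One thing that might be worth checking from the CLI on the new node (assuming the storage ID is local-lvm; adjust if yours differs) is whether the thin pool is visible at the LVM level and whether PVE considers the storage active and offering the right content types on that node:

Bash:
# list the logical volumes in the pve volume group (the thin pool "data" should appear here)
lvs pve
# show the status of the storage as PVE sees it on this node
pvesm status --storage local-lvm
# list only the storages offered as targets for container root disks
pvesm status --content rootdir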
 