Storage missing in XFS install

yyyy · Member · Nov 28, 2023
Hi all,

For some reason, after installing Proxmox on my 1.5TB drive with XFS formatting, Proxmox is now showing that only 95GB is available. What gives?!

Furthermore, it is showing a "local-zfs" storage even though I never selected ZFS during installation. The status of this local-zfs is "unknown", and the "local" drive is only 95GB, with the rest of the 1.4TB seemingly having disappeared.

HD Space in the Summary section of this node confirms that 1.4TB is missing, as it shows only 95GB available!

Please help!
 

Attachment: gsedgfdzfg.PNG (473.1 KB)
Some more info might help, like the output of: lsblk, zpool list, pvesm status, cat /etc/pve/storage.cfg. It also looks like you are running a cluster? Maybe there is another node that uses ZFS and you didn't deselect "local-zfs" for the new node at Datacenter -> Storage, so it is applied to the new node too?
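
For reference, those are all standard tools, run as root on the affected node:

# gather disk layout, ZFS pools, storage status and storage config
lsblk
zpool list
pvesm status
cat /etc/pve/storage.cfg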
 
Hi, here is the output:

root@dws-zve-4:~# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0  1.4T  0 disk
├─sda1               8:1    0 1007K  0 part
├─sda2               8:2    0    1G  0 part
└─sda3               8:3    0  1.4T  0 part
  ├─pve-swap       252:0    0  7.7G  0 lvm  [SWAP]
  ├─pve-root       252:1    0   96G  0 lvm  /
  ├─pve-data_tmeta 252:2    0 12.8G  0 lvm
  │ └─pve-data     252:4    0  1.2T  0 lvm
  └─pve-data_tdata 252:3    0  1.2T  0 lvm
    └─pve-data     252:4    0  1.2T  0 lvm
sdb                  8:16   1    0B  0 disk
sr0                 11:0    1 1024M  0 rom
root@dws-zve-4:~# zpool list
no pools available
root@dws-zve-4:~# pvesm status
zfs error: cannot open 'rpool': no such pool
zfs error: cannot open 'rpool': no such pool
could not activate storage 'local-zfs', zfs error: cannot import 'rpool': no such pool available
Name       Type      Status    Total      Used     Available  %
local      dir       active    100597760  3404172  97193588   3.38%
local-zfs  zfspool   inactive  0          0        0          0.00%
root@dws-zve-4:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl
        shared 1

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1
 
Oh, I just noticed the edit. How would I go about deselecting ZFS for the new node? I checked Datacenter -> Storage and clicked local-zfs, but there is no option to deselect the other nodes. Thanks in advance!
 
Datacenter -> Storage -> local-zfs -> Edit -> Nodes: deselect the new node
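
If the Nodes field doesn't show up in the GUI, the same restriction can be set from the CLI; a sketch, assuming the ZFS node is called node1 (replace with your actual node name):

# limit the local-zfs storage to the node(s) that actually have the rpool
pvesm set local-zfs --nodes node1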

Did you change anything after the installation? It's strange that you got a "local-zfs". Your disk is formatted with LVM, so there should be a "local-lvm" instead.
You could add that with pvesm add lvmthin local-lvm --content rootdir,images --thinpool data --vgname pve.
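
In full, a sketch based on the lsblk output above (VG "pve" with thin pool "data" is the default installer layout):

# register the existing LVM thin pool as a storage called local-lvm, then verify
pvesm add lvmthin local-lvm --content rootdir,images --thinpool data --vgname pve
pvesm status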
 
This is great, thank you. The only problem was that I couldn't find the "Nodes" field when clicking Edit for local-zfs in Datacenter -> Storage; it does not exist. I ended up disabling local-zfs altogether, as I no longer use ZFS in favor of Ceph. Instead I am trying to merge local and local-lvm together, since I want to use all the disk space for Ceph (then again, this probably isn't how Ceph would be set up, since the local drive has the Proxmox installation on it, so I'm presuming local-lvm would be used for Ceph?). How would I do that? Thanks again.
 
Do you have 3 or more nodes with 10+ Gbit NICs? You also want multiple dedicated disks per node for Ceph.
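
If those requirements are met, a minimal sketch for bringing dedicated, empty disks into Ceph (the network subnet and device name are example values; adjust per node):

# install the Ceph packages, initialize once, then create an OSD per empty disk
pveceph install
pveceph init --network 10.10.10.0/24   # dedicated Ceph cluster network, example subnet
pveceph mon create
pveceph osd create /dev/sdb            # device name is an example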