[SOLVED] I don't have much hair left! Formatting drives!

itsnota2ma

New Member
Feb 5, 2025
I am testing Proxmox to replace my current environment. I had it all set up and working, and then I broke it, so I reinstalled. Now I am trying to get the local drives that I originally configured as ZFS pools back online. I can see them in the OS, and I have tried 50 different flavors of formatting and configuration, but I am not able to see them in Proxmox. I cannot find official documentation on how they should be formatted, and I have followed several threads from this forum without success. What am I missing??
 
"seeing" storage in pve requires you to add them to the pve storage engine (pvesm) this is accessible both in the gui under datacenter-storage and using pvesm in cli.

if you'd like more direct help, post the output of
lsblk if your device is lvm
df if its a mounted filesystem
zfs list if its a zfs filesystem

the content of /etc/pve/storage.cfg
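
For reference, once a pool exists on the node, attaching it to PVE from the CLI looks roughly like this ("tank" and "tank-vmstore" are placeholder names, use your own):

Code:
# what ZFS itself can see on the node
zpool list
zfs list

# register an existing pool with the PVE storage layer
# "tank" is the pool, "tank-vmstore" is the storage ID you pick
pvesm add zfspool tank-vmstore --pool tank --content images,rootdir

# confirm PVE now lists the new storage
pvesm status

Once added, the entry also shows up in /etc/pve/storage.cfg and under Datacenter > Storage in the GUI.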
 
Here is the output of lsblk:

Code:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0   1.8T  0 disk
sdb                  8:16   0   1.8T  0 disk
└─sdb1               8:17   0   1.8T  0 part
sdc                  8:32   0   1.8T  0 disk
└─sdc1               8:33   0   1.8T  0 part
sdd                  8:48   0   1.8T  0 disk
└─sdd1               8:49   0   1.8T  0 part
sde                  8:64   0 223.6G  0 disk
├─sde1               8:65   0  1007K  0 part
├─sde2               8:66   0     1G  0 part
└─sde3               8:67   0 222.6G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0  65.6G  0 lvm  /
  ├─pve-data_tmeta 252:2    0   1.3G  0 lvm 
  │ └─pve-data     252:4    0 130.3G  0 lvm 
  └─pve-data_tdata 252:3    0 130.3G  0 lvm 
    └─pve-data     252:4    0 130.3G  0 lvm
 
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

I think I see my issue already - /dev/sda through /dev/sdd do not have an LVM volume on them??
 
Was that how you configured them? Let me ask this a different way: what are you expecting to see?
I guess I would expect to see the device and a partition. After that, I am unsure. I have not manually partitioned drives in Linux before, so I do not know what Proxmox requires. I see the boot device has LVM partitions, and those are visible in Proxmox, so I suspect the other drives need to be formatted/partitioned similarly?
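
A quick, read-only way to see what is already sitting on those partitions (device names taken from the lsblk output above; neither command changes anything):

Code:
# show filesystem / RAID / ZFS signatures per disk and partition
lsblk -f

# report any signatures wipefs recognizes, without wiping them
wipefs --no-act /dev/sdb1 /dev/sdc1 /dev/sdd1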
 
By your reply I'm assuming you didn't set up your disks yet, although 3 of your disks already have a partition on them (if those aren't of consequence, ignore them for now).

You have some choices to make. To start, have a look here:
https://pve.proxmox.com/wiki/Storage

As you can see, there are a bunch of options, all with pros and cons. If you were asking me how I would set up this storage, I'd set up a ZFS striped mirror set; see https://pve.proxmox.com/wiki/Storage:_ZFS for more detail. I will give you one admonition: use device IDs and not the drive "letters" when creating your pools (see the listing below), but that aside it's really straightforward.
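
For those device IDs, something like this lists the stable names (the grep just hides the per-partition entries; the actual ID strings will differ on your hardware):

Code:
# stable names that survive reboots and controller reordering
ls -l /dev/disk/by-id/ | grep -v part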
 
I know how to create the ZFS pools in the GUI once the drives are available - I did it once before. Where I am stuck is: what state do the drives need to be in for them to be available for ZFS pool creation?
 
No.

Here is the quick and dirty:
1. Obtain the drives' SCSI IDs (or WWNs, your choice). These are listed in /dev/disk/by-id.
2. zpool create -o ashift=12 poolname mirror ata-diskname-for-sda ata-diskname-for-sdb mirror ata-diskname-for-sdc ata-diskname-for-sdd

ashift=12 is for 4Kn drives; if your drives are 512B native, use ashift=9.
If you get a message that any of the disks are in use, you can use the -f flag to force creation anyway, but this sometimes fails because the disk IS actually in use. If that's the case, you'll need to wipe the disk signature using dd and reboot before retrying (a worked sketch follows).

If the disks ARE actually in use... what are you using them for? Might want to make sure it's nothing you want to lose before proceeding.
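
A minimal sketch of that flow, assuming placeholder WWN-style IDs and a placeholder pool name ("tank"); substitute the real entries from /dev/disk/by-id on your box, and note the wipe commands are destructive:

Code:
# 1. find the stable IDs for the four 1.8T drives
ls -l /dev/disk/by-id/ | grep -v part

# 2. two mirrored pairs striped together (IDs below are placeholders)
zpool create -o ashift=12 tank \
    mirror wwn-0x5000c500aaaa0001 wwn-0x5000c500aaaa0002 \
    mirror wwn-0x5000c500aaaa0003 wwn-0x5000c500aaaa0004

# if creation complains about existing signatures and -f doesn't help,
# clear them (DESTROYS anything on the disk) and reboot before retrying:
wipefs -a /dev/sdb
# or the dd route over the first few MB:
dd if=/dev/zero of=/dev/sdb bs=1M count=16

# sanity check
zpool status tank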
 
This used to be my Datto device before I moved to Veeam, so the drives are not in use. Like I said, I had the drives set up in 2 ZFS pools once already.
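
If there is any chance the old pools from before the reinstall still hold something you want, a bare zpool import (no pool name) only scans the attached disks and prints any importable pools it finds, so it is a safe check before wiping and recreating:

Code:
# scan attached disks for importable pools; prints them without importing
zpool import

# if an old pool shows up and you actually want it back (hypothetical name):
# zpool import -f oldpool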