Updated - edit at the bottom
Hey, I'm pretty new to Proxmox (and ZFS). I've been using it for about a month or two and really like it, so I just wanted to say thank you right away. However, there are a couple of things I don't fully understand; I think they will become clear once I explain what I'm trying to do and why I'm hesitant to do it.
Essentially, in the most basic way I can describe it: I want to repartition the local NVMe drive that Proxmox is installed on so that it has another partition that will be used as a ZFS log device.
I have one 500 GB NVMe drive, which holds the Proxmox, BIOS, and swap partitions. Along with that I have five 4 TB drives in a ZFS pool for storage. The output of lsblk is as follows:
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
├─sda1 8:1 0 3.6T 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 3.6T 0 disk
├─sdb1 8:17 0 3.6T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 3.6T 0 disk
├─sdc1 8:33 0 3.6T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 3.6T 0 disk
├─sdd1 8:49 0 3.6T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 3.6T 0 disk
├─sde1 8:65 0 3.6T 0 part
└─sde9 8:73 0 8M 0 part
zd16 230:16 0 32G 0 disk
zd32 230:32 0 32G 0 disk
nvme0n1 259:0 0 465.8G 0 disk
├─nvme0n1p1 259:1 0 1007K 0 part
├─nvme0n1p2 259:2 0 1G 0 part /boot/efi
└─nvme0n1p3 259:3 0 464.8G 0 part
├─pve-swap 252:0 0 8G 0 lvm [SWAP]
├─pve-root 252:1 0 96G 0 lvm /
├─pve-data_tmeta 252:2 0 3.4G 0 lvm
│ └─pve-data-tpool 252:4 0 337.9G 0 lvm
│ ├─pve-data 252:5 0 337.9G 1 lvm
│ └─pve-vm--101--disk--0 252:6 0 32G 0 lvm
└─pve-data_tdata 252:3 0 337.9G 0 lvm
└─pve-data-tpool 252:4 0 337.9G 0 lvm
├─pve-data 252:5 0 337.9G 1 lvm
└─pve-vm--101--disk--0 252:6 0 32G 0 lvm
I was going to boot into a live Linux environment or GParted, decrease the size of partition 3 on nvme0n1 by 100-200 GB, and then create two new partitions from that space: one, as mentioned, to hold the ZFS intent log, and the other either a new partition for future use (so I don't need to do this again) or space left unallocated so I can extend another partition into it if the need arises.
Now, what keeps me from doing this is that I could swear I remember reading during the initial setup/install that the partition layout I define is final and can't be changed later, but I could be wrong. The other reason I'm hesitant is that when I go to Datacenter -> Node (prxmx) -> Disks -> LVM, I see that 97% is assigned to LVs. I'm not entirely sure what that means exactly, but when I see it and think about shrinking a partition, I think: well, I can't shrink something that is 97% full. But I know this isn't the case, because when I go to Datacenter -> Node (prxmx) -> local, I see it is only 9% full. So my question is: what is the difference between what those two views are displaying? Also, what is the difference between local, local-lvm, and LVM-Thin?
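In case it helps, I believe these storages are defined in /etc/pve/storage.cfg. I haven't changed anything from the installer defaults, so I assume mine looks something like the stock layout below (I'm writing this from what I understand the defaults to be, so the exact lines may differ), with local being a plain directory on the root filesystem and local-lvm being the LVM-thin pool:
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images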
Here is a picture of my disks and what I am talking about:
XXX there was a picture here that I removed; I can add it back if necessary XXX
Instead, here is the output of pvs, lvs, and vgs:
Code:
root@prxmx:~# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1p3 pve lvm2 a-- <464.76g 16.00g
root@prxmx:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 4 0 wz--n- <464.76g 16.00g
root@prxmx:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 337.86g 1.43 0.54
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-101-disk-0 pve Vwi-aotz-- 32.00g data 15.12
So I guess it boils down to: am I OK just repartitioning nvme0n1p3, i.e. partition 3 of disk nvme0n1? Or is there something I need to do beforehand, or is this not possible at all without a fresh reinstall?
EDIT:
I've been doing some research and I think I now have a better understanding of Proxmox storage. However, it looks like, because I have an LVM-thin storage device, I can't do this without first removing that storage and then creating a new one, since LVM-thin storage is not resizable? I have all of my VM and container data on the ZFS pool except for one, my most crucial VM, so unfortunately it looks like I am going to need to recreate it, right?
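If I do have to recreate the storage, my plan would be to back that VM up first and restore it afterwards rather than rebuilding it by hand. A rough sketch, assuming the VM is ID 101 as shown in the lvs output above, and assuming I have some backup-capable storage configured (I'm calling it backups here; the name and path are placeholders):
Code:
# back up VM 101 (stop mode for a consistent image) to a storage named "backups"
vzdump 101 --mode stop --storage backups
# ...remove and recreate the local LVM-thin storage...
# restore the VM from the resulting archive (the actual filename will differ)
qmrestore /path/to/vzdump-qemu-101-<timestamp>.vma.zst 101 --storage local-lvm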
EDIT2:
Would the following commands work? I got the base from a tutorial on extending LVM storage and altered it to my needs; I just wanted to confirm I'm not going to screw things up here.
While I'm doing this, I figured it would be a good idea to decrease my root logical volume down to 30 GB, as I currently have it set to 96 GB and there's no way I would ever use that much space; it's better allocated elsewhere.
So, for the first section I would either
1) use a GParted live environment for this first section
or
2) boot into a Linux live environment (probably Arch, because btw I use Arch) and do the following:
Code:
fdisk -l
# edit partitions with fdisk (change the device name as needed)
fdisk /dev/nvme0n1
# p: print the partition table; d, 3: delete the LVM partition; n, 3: recreate it with the same
# number, accepting the default first sector and setting the last sector to +300G; answer N so
# the existing LVM signature is NOT removed; t, 3, 30: set the partition type to Linux LVM
p - d - 3 - n - 3 - enter - +300G - N - t - 3 - 30
# now create 2 partitions: one for the ZFS log (partition 4) and an extra one for future use
# (partition 5); the disk is GPT, so the MBR hex codes (bf, 83) don't apply here - press L at
# the type prompt and pick "Solaris /usr & Apple ZFS" and "Linux filesystem" from the list
n - 4/enter - enter - +50G - t - 4 - <type from list> - n - 5/enter - enter - t - 5 - <type from list>
# write everything to disk
w
# confirm
fdisk -l
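Still in the live environment, before rebooting, I would probably also confirm that LVM can still see and activate everything on the shrunken partition. A quick sanity check, assuming the live ISO ships the LVM tools:
Code:
# rescan for physical volumes and volume groups
pvscan
vgscan
# activate the pve volume group and list its logical volumes
vgchange -ay pve
lvs
# deactivate the volume group again before rebooting
vgchange -an pve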
Then I would boot Proxmox and run these commands:
Code:
# resize the physical volume down to the new 300G partition size
# (note: the PV is the partition nvme0n1p3, not the whole disk)
pvresize --setphysicalvolumesize 300G /dev/nvme0n1p3
# decrease pve-root from 96G to 30G; -r shrinks the ext4 filesystem together with the LV
# (ext4 can't be shrunk while mounted, so this step may also need to happen from the
# live environment with pve-root unmounted, rather than from the running system)
lvresize -r -L 30G /dev/pve/root
# list logical volumes, confirming root is now 30G
lvdisplay
# extend the thin pool into all of the remaining free space (lvextend only grows, it can't shrink)
lvextend -l +100%FREE pve/data
# list logical volumes again, confirming the new sizes
lvdisplay
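Assuming all of that works out, the last step would be attaching the new partition to the pool as a log device. A rough sketch, with tank standing in for my actual pool name and nvme0n1p4 being the new 50G partition created above:
Code:
# double-check the pool name and current layout
zpool status
# attach the new NVMe partition as a separate intent log (SLOG) device
zpool add tank log /dev/nvme0n1p4
# verify that a "logs" vdev now shows up in the pool layout
zpool status tank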
After running all those commands, I would expect the output of pvs, lvs, and vgs to go from this:
Code:
root@prxmx:~# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1p3 pve lvm2 a-- <464.76g 16.00g
root@prxmx:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 4 0 wz--n- <464.76g 16.00g
root@prxmx:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 337.86g 1.43 0.54
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-101-disk-0 pve Vwi-aotz-- 32.00g data 15.12
to this (well, hopefully something close to this):
Code:
root@prxmx:~# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1p3 pve lvm2 a-- <300g NOT_SURE_ABOUT_THIS
root@prxmx:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 4 0 wz--n- <300g NOT_SURE_ABOUT_THIS
root@prxmx:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 237.86g 1.43 0.54
root pve -wi-ao---- 30.00g
swap pve -wi-ao---- 8.00g
vm-101-disk-0 pve Vwi-aotz-- 32.00g data 15.12
Does this seem like a good idea? Is there something I'm missing? Will Proxmox freak out once it boots and realizes the volumes/drives aren't the sizes it expects?
Final EDIT:
In the worst-case scenario where I have to reinstall Proxmox to do this, what is the best approach? I don't have anything on the local storage besides Proxmox itself and one VM's data, which I can probably recreate if absolutely necessary, though I would rather avoid that. My bigger concern is the ZFS pool: if I reinstall, is there anything I need to save/back up in order to recreate the ZFS pool, or can I simply plug the drives in and have them found and set up automatically? I've only ever used mdadm for RAID before, and if I remember correctly, recreating the RAID wasn't an issue; at worst it was one or two commands and recreating /etc/fstab or something along those lines, and that's what I'm hoping for here, if a reinstall really is the only option.
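From what I've read so far, the pool configuration lives on the member disks themselves, so after a reinstall it should just be a matter of importing the pool. A minimal sketch of what I think that would look like, assuming the pool is named tank (substitute the real pool name from zpool status):
Code:
# cleanly export the pool before the reinstall (optional, but avoids the "pool was in use" warning later)
zpool export tank
# ...reinstall Proxmox...
# list the pools ZFS can see on the attached disks
zpool import
# import the pool by name (add -f only if it complains the pool was not exported)
zpool import tank
# then add it back as storage via Datacenter -> Storage -> Add -> ZFS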
Also, I'm open to any other suggestions on what I should do instead. The main reason I want to do this is that I need a PostgreSQL DB, and as I was reading about setting it up I came to the conclusion that I should probably use a log drive. I don't remember the specifics anymore, but that was the conclusion; it's of course not absolutely necessary, but I feel like it's the "right" way to do things, and that is why I'm in this position.
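For context, my understanding is that a separate log device only helps synchronous writes, which is why a database like PostgreSQL was the trigger for this. Once the log partition is attached, I would sanity-check that it is actually being used with something like the following (tank is again a stand-in for my pool name):
Code:
# confirm sync writes are enabled for the dataset holding the database (standard or always)
zfs get sync tank
# watch per-device activity; the log device should show writes while the DB is under load
zpool iostat -v tank 5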