Briefly: needing to upgrade our two Proxmox 3.4 hosts to a newer release, I followed the recommendation to back up the VMs and save the /etc directory somewhere, then pop in the install disk to do a clean install of 4.4. I did this on our backup host, and after a couple of hiccups and tweaks it is now running v4.4 and providing NFS access to our main host. But this morning I realized that the new installation is using only 100GB of the 1TB LVM storage I previously set up. (Note: storage is all on one drive.) fdisk -l tells me that I've got a 931GB LVM partition available (see below), and my old setup notes suggested that I probably just needed to tell the filesystem to use it all.
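(For reference, these are the quick checks I know of to size this up, with "pve" being the volume group name the installer creates by default:)
-------------------
df -h /        # how much space the root filesystem actually has
vgs pve        # volume group totals: size and free space
lvs pve        # the individual logical volumes and their sizes
-------------------
And here's that fdisk -l output: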
---------------------------
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E7E8DD0C-2AAB-42AB-9CA4-A5C3495014AA
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 528383 524288 256M EFI System
/dev/sda3 528384 1953525134 1952996751 931.3G Linux LVM
Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-swap: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
---------------------------
My notes say to use "resize2fs /dev/mapper/pve-data", but that gives me this error:
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/pve-data
Couldn't find valid filesystem superblock.
Searching online resources, I learned that the whole LVM scheme has changed SIGNIFICANTLY.
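If I understand what I've read, pve-data is no longer an ext filesystem at all but an LVM-thin pool, which would explain the missing superblock. Something like this should confirm it (if I've got it right, a thin pool shows an attribute string starting with "t"):
-------------------
lvs -a -o name,lv_attr,size pve    # list LVs with attributes; a thin pool shows 'twi-...'
-------------------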
Just to see what LVM "thinks" I have, I checked:
-------------------
lvm> vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.26 GiB
PE Size 4.00 MiB
Total PE 238402
Alloc PE / Size 234359 / 915.46 GiB
Free PE / Size 4043 / 15.79 GiB
VG UUID KneCxs-rI8a-R901-nAbb-Buxv-qLR0-f3bSy4
-------------------------------
Looks OK! I'm thinking I just need the magic words to extend the filesystem. Searching the wiki, and reading about the new LVM-thin scheme, I see this command for "resizing the metadata pool"...
--------
lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>
---------
...but so far several attempts at a valid command have failed, probably because I haven't got the syntax right:
----------------
lvm> lvresize --poolmetadatasize +800G pve
Path required for Logical Volume "pve".
Please provide a volume group name
-----------------
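Rereading that error, my guess is the pool has to be named as VG/LV rather than just the VG, so presumably something in this shape (size purely illustrative, and note this would grow only the pool's metadata area, not the data itself):
-------------------
lvresize --poolmetadatasize +1G pve/data    # grow the thin pool's *metadata* area
-------------------
I haven't convinced myself that's even the right knob, though.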
Checking the help for lvresize, I see it has a "-r --resizefs" option that looks promising for what I'm attempting, but so far, no joy:
--------------------
lvm> lvresize -r
Please specify either size or extents but not both.
lvm> lvresize -r +800G
Please specify either size or extents but not both.
-----------------
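Reading the man page again, I think -r additionally needs an explicit size via -L plus an LV path, so a guess at a valid form would be (size illustrative):
-------------------
lvresize -r -L +10G pve/root    # grow the root LV and resize its ext4 filesystem in one step
-------------------
Though if the thin-pool picture above is right, -r wouldn't apply to pve/data at all, since the pool itself carries no filesystem to resize.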
So at this point I wanted to stop for a moment and check in here to see if I'm on the right track AT ALL, or off in a blind alley, and to get directed to previous threads that perhaps already covered this (I haven't found them yet). Is the lvresize --poolmetadatasize option what I need to get right? Or lvresize --resizefs? Or is there a shell command outside of lvm that correctly handles this now? Thanks.
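In case it helps anyone spot where I've gone wrong, here's my current best guess at the full incantation, offered strictly as a sanity check:
-------------------
# grow the thin pool's data area into whatever free extents remain in the VG
# (about 15.8 GiB free per the vgdisplay above)
lvextend -l +100%FREE pve/data
# no resize2fs afterwards, if I understand correctly: the pool has no filesystem
# of its own, and guest filesystems grow per-VM when their virtual disks are resized
-------------------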
(It will be stickier with the main host I still need to upgrade, because on that 3.4 machine LVM spans more than one physical drive, and I'm unsure whether that volume group will persist after I throw in the 4.4 install disk and do a clean install... But the question of the moment is getting my filesystem extended on this simpler system.)
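When I get to that box, I suppose the first sanity check after the clean install would be whether the old volume group is even visible; my reading is that the standard drill is something like this (assuming the installer hasn't overwritten those drives):
-------------------
pvs              # list physical volumes; the old VG's drives should show up here
vgscan           # rescan for volume groups left over from the previous install
vgchange -ay     # activate any volume group that turns up
-------------------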