Extending file system after v. 3.4 to v. 4.4 upgrade?

unleeshop

Member
Jul 21, 2009
Briefly: needing to upgrade our two Proxmox 3.4 hosts to a newer release, I followed the recommendation to back up the VMs and save the /etc directory somewhere, then pop in the install disk and do a clean install of 4.4. I did this on our backup host, and after a couple of hiccups and tweaks it is now running v. 4.4 and giving NFS access to our main host. But this morning I realized that the new installation is only using about 100 GB out of the 1 TB of LVM storage I previously set up. (Note: storage is all on one drive.) fdisk -l tells me that I've got a 931 GiB LVM partition available (see below), and my old setup notes suggest that I probably just need to tell the filesystem to use it all.
---------------------------
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E7E8DD0C-2AAB-42AB-9CA4-A5C3495014AA

Device         Start        End    Sectors   Size Type
/dev/sda1       2048       4095       2048     1M BIOS boot
/dev/sda2       4096     528383     524288   256M EFI System
/dev/sda3     528384 1953525134 1952996751 931.3G Linux LVM

Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-swap: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

---------------------------
My notes say to use "resize2fs /dev/mapper/pve-data", but that gives me this error:

resize2fs: Bad magic number in super-block while trying to open /dev/mapper/pve-data
Couldn't find valid filesystem superblock.
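(One thing I haven't tried yet: if I'm reading the lvs man page right, listing the logical volumes with their attribute flags should show what pve-data actually is now, which might explain the missing superblock. Something like this, I assume:)
-------------------
lvm> lvs -a -o lv_name,lv_attr,lv_size,pool_lv pve
-------------------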


Searching the online resources, I learned that the whole LVM scheme has changed SIGNIFICANTLY.
Just to see what LVM "thinks" I have, I checked:
-------------------
lvm> vgdisplay
--- Volume group ---
VG Name               pve
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  7
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                3
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               931.26 GiB
PE Size               4.00 MiB
Total PE              238402
Alloc PE / Size       234359 / 915.46 GiB
Free PE / Size        4043 / 15.79 GiB
VG UUID               KneCxs-rI8a-R901-nAbb-Buxv-qLR0-f3bSy4

-------------------------------
Looks OK! I'm thinking I just need the magic words to extend the filesystem. Searching the wiki and reading about the new LVM-thin scheme, I see this command for "resizing the metadata pool":
--------
lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>
---------
...but so far several attempts at a valid command have failed, probably because I haven't got the syntax right:
----------------
lvm> lvresize --poolmetadatasize +800G pve
Path required for Logical Volume "pve".
Please provide a volume group name

-----------------
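Rereading that error, my guess is that the command wants the full <VG>/<LVThin_pool> path rather than just the VG name. Assuming the thin pool is the LV called "data" (I haven't actually confirmed the name), I suppose the syntax would look more like this:
----------------
lvm> lvresize --poolmetadatasize +1G pve/data
----------------
(Though I also suspect --poolmetadatasize only grows the pool's metadata LV, not the pool's data itself, so that's probably not the command I really want anyway.)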
Checking the help for lvresize, I see it has a "-r --resizefs" option that looks promising for what I'm attempting, but so far, no joy:
--------------------
lvm> lvresize -r
Please specify either size or extents but not both.
lvm> lvresize -r +800G
Please specify either size or extents but not both.

-----------------
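If I'm reading the usage message right, -r also wants an explicit size (-L) or extents (-l) plus the LV path, so purely as an example of the syntax (not necessarily something I should actually run), I'd guess:
--------------------
lvm> lvresize -r -L +15G pve/root
--------------------
(I picked +15G only because vgdisplay shows roughly 15.79 GiB of free PEs - nowhere near the ~800 GB I'm actually trying to recover.)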
So at this point I wanted to stop for a moment and check in here to see whether I'm on the right track AT ALL or off in a blind alley, and to get pointed to previous threads that may already have covered this (I haven't found them yet). Is the lvresize --poolmetadatasize thing what I need to get right? Or the lvresize --resizefs? Or is there a shell command outside of lvm that correctly handles this now? Thanks.

(It will be stickier with the main host I need to upgrade, because on that 3.4 machine LVM spans more than one physical drive, and I'm unsure whether that volume will exist/persist after I throw in the 4.4 install disk and do a clean install ... But the question of the moment is getting my filesystem extended on this simpler system.)
 
I may be on the wrong track. The web GUI on the server I just upgraded to 4.4 shows, under "Storage", that the drive I'm seeing from the command line is a 95 GB "local" drive, but the GUI also shows an active, enabled "local-lvm" drive of 812 GB of type "LVM-Thin" - that's the storage I don't yet seem to be able to access. Do I just not have the extra space mounted? My old 3.4 fstab was:
------------------
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=1b2267f2-6388-4301-8cbf-3e1b1f695388 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

-----------------

And the new, clean-install fstab is ...
----------------
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

-------------------------

...and I don't see that 800 GB data drive "/dev/pve/data" mounted, so maybe that's the direction I need to go. I'm not sure whether the clean 4.4 install would have reformatted that as ext4 or whether it might still be the original ext3 file system; I guess I'll find out.
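Before I go mounting anything, I figure I can at least check from the shell whether /dev/pve/data still exists and still carries a filesystem signature; presumably something like this would tell me (if blkid prints nothing, I'll take it that there's no ext3/ext4 left there to mount and the space now lives in the thin pool):
----------------
lvs -o lv_name,lv_attr,lv_size pve
blkid /dev/pve/data
----------------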
 
