How to resize shared FC LUN

Discussion in 'Proxmox VE: Installation and configuration' started by Spiros Papageorgiou, Feb 11, 2019.

Tags:
  1. Spiros Papageorgiou

    Joined:
    Aug 1, 2017
    Messages:
    57
    Likes Received:
    0
    Hi all,

    I have a Proxmox cluster with a shared FC LUN (managed by LVM). The LUN is full and I would like to resize it. I have already resized it on the storage side, but how do I do it in Proxmox?

    Keep in mind that I'm running multipathd for redundancy and that I have a partition on the LUN. My lsblk output looks like this:
    Code:
    sdr                                  65:16   0  5.9T  0 disk
    ├─sdr1                               65:17   0    2T  0 part
    └─mpath0                            253:0    0  5.9T  0 mpath
      └─mpath0-part1                    253:1    0    2T  0 part
        ├─volgrp1--3par-test            253:4    0   20G  0 lvm
        ├─volgrp1--3par-vm--103--disk--1 253:5   0   32G  0 lvm
        ├─volgrp1--3par-vm--100--disk--1 253:6   0   32G  0 lvm
        ......

    volgrp1--3par is the VG I want to extend.

    Thanx,
    Sp
     
    #1 Spiros Papageorgiou, Feb 11, 2019
    Last edited: Feb 11, 2019
  2. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,834
    Likes Received:
    158
    Hi,
    you must use pvresize to tell LVM about the resized physical volume.
    If you have a partition table on this disk, you have to expand the partition first (but then normally a reboot is required, because the kernel holds the old (active) partition info in RAM), or create a second partition and add it to the volume group.
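    A rough sketch of both options, assuming the multipath device is /dev/mapper/mpath0, the grown partition is mpath0-part1 and the volume group is really called volgrp1-3par (the double dashes in lsblk are device-mapper escaping); adjust the names to your setup:

    Code:
    # Option A: grow the existing partition, then the PV on top of it
    parted /dev/mapper/mpath0 resizepart 1 100%
    # reboot (or re-read the partition table) so the kernel sees the new size
    pvresize /dev/mapper/mpath0-part1

    # Option B: create a second partition in the new space and add it to the VG
    parted /dev/mapper/mpath0 mkpart primary 2TiB 100%   # start where partition 1 ends
    kpartx -a /dev/mapper/mpath0     # create the device node; the name/suffix may differ
    pvcreate /dev/mapper/mpath0-part2
    vgextend volgrp1-3par /dev/mapper/mpath0-part2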

    Look with
    Code:
    pvs
    vgs
    
    Udo
     
  3. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,696
    Likes Received:
    331
    You normally need to:
    - rescan all disks involved in the multipathed device
    - rescan the multipath map itself, so that your mpath device shows the new size in multipath -ll
    - then continue as @udo said with pvresize (see the sketch below)
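    A rough sketch of that rescan sequence, assuming the paths to the LUN are sdr and sdX (check multipath -ll for the real path names) and the map is called mpath0:

    Code:
    # rescan every SCSI path that belongs to the LUN
    echo 1 > /sys/block/sdr/device/rescan
    echo 1 > /sys/block/sdX/device/rescan    # repeat for each further path

    # let multipathd pick up the new size (older versions: multipathd -k"resize map mpath0")
    multipathd resize map mpath0
    multipath -ll

    # then grow the partition and run pvresize as described above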
     