mariuscotan

Member
Jan 27, 2014
Hello,

my server uses a RAID1 array with 2 x 2 TB HDDs.
Originally, Proxmox was installed on 2 x 320 GB HDDs in RAID1, but one HDD crashed, so I replaced both with 2 x 2 TB drives (I cloned the data with dd).

The system has been up and running for more than a year, but now I have a free-space problem because everything is still sized for the original 320 GB disks.

I have managed to grow the RAID array to its maximum size, but unfortunately I am not able to extend/resize the VG.
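For anyone following along, the array growth was done roughly like this (a sketch from memory; it assumes the array is /dev/md1 and the underlying partitions already span the new, larger disks):

```shell
# grow the RAID1 array to use all available space on its members
mdadm --grow /dev/md1 --size=max
# watch the resync progress and wait for it to finish before touching LVM
cat /proc/mdstat
```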

I should mention that I have no KVM console for this server. Is there a safe way to resize everything online without losing any data?

Thank you,
Marius


Code:
 pveversion
pve-manager/5.2-9/4b30e8f9 (running kernel: 4.15.18-5-pve)


RAID arrays:

Code:
mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Dec 13 14:07:07 2012
     Raid Level : raid1
     Array Size : 523252 (510.99 MiB 535.81 MB)
  Used Dev Size : 523252 (510.99 MiB 535.81 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun Nov  4 00:57:07 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : radio-mh:0  (local to host radio-mh)
           UUID : 212115dc:2589d759:09072e26:51c01684
         Events : 518

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       3       8        1        1      active sync   /dev/sda1
root@radio-mh:~# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Dec 13 14:08:29 2012
     Raid Level : raid1
     Array Size : 1952989272 (1862.52 GiB 1999.86 GB)
  Used Dev Size : 1952989272 (1862.52 GiB 1999.86 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Nov 12 21:08:24 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : radio-mh:1  (local to host radio-mh)
           UUID : 6b54e7f7:9362e803:789b1e27:740ab668
         Events : 1891579

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync   /dev/sdb2
       3       8        2        1      active sync   /dev/sda2
Code:
pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md1   pve1 lvm2 a--  297.59g    0



 vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  pve1   1   3   0 wz--n- 297.59g    0


lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve1 -wi-ao---- 208.09g
  root pve1 -wi-ao----  74.50g
  swap pve1 -wi-ao----  15.00g



df -h
Filesystem             Size  Used Avail Use% Mounted on
udev                   7.7G     0  7.7G   0% /dev
tmpfs                  1.6G  116M  1.5G   8% /run
/dev/mapper/pve1-root   74G   59G   11G  85% /
tmpfs                  7.8G   37M  7.7G   1% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/md0               487M  398M   64M  87% /boot
/dev/mapper/pve1-data  205G  194G  762M 100% /var/lib/vz
/dev/sdc1              932G  502G  431G  54% /mnt/backup
/dev/fuse               30M   20K   30M   1% /etc/pve
tmpfs                  1.6G     0  1.6G   0% /run/user/0
 
If you want to use RAID, we recommend ZFS; mdadm is not supported (even if it may work).

However, the safe way to extend a volume group is to add a new partition to it rather than extending the existing one. In your case it is probably necessary (though I have never tried it with mdadm) to first shrink /dev/md1 back to its original size and then create a /dev/md2, which can then be used to extend the VG:

Code:
vgextend pve1 /dev/md2
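The whole sequence might look roughly like this (an untested sketch; the shrink size and the partition names /dev/sda3 and /dev/sdb3 are placeholders/assumptions, and the underlying sda2/sdb2 partitions would also have to be shrunk first):

```shell
# 1. shrink md1 back to its original size (value is a placeholder, in KiB)
mdadm --grow /dev/md1 --size=<original size in KiB>
# 2. create new partitions in the freed space (names assumed)
fdisk /dev/sda        # add /dev/sda3
fdisk /dev/sdb        # add /dev/sdb3
# 3. build a second RAID1 array on them
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
# 4. make it a PV and extend the VG
pvcreate /dev/md2
vgextend pve1 /dev/md2
```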
 
Just create another partition on the now-bigger md1: fdisk first, then pvcreate, then add it to the VG with vgextend. Once your VG is bigger, you can also extend your logical volumes with lvextend.
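As a rough sketch of that sequence (untested; the new partition name /dev/md1p2 is an assumption):

```shell
fdisk /dev/md1           # create a new partition in the grown space, e.g. /dev/md1p2
partprobe /dev/md1       # have the kernel re-read the partition table
pvcreate /dev/md1p2      # initialize it as an LVM physical volume
vgextend pve1 /dev/md1p2 # add it to the volume group
# then lvextend and resize2fs as usual
```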
 
@Richard @mailinglists :

Thank you for the replies. Unfortunately it was not possible to make the changes without unmounting the root file system, so I used a pivot_root. I found the solution here:
https://unix.stackexchange.com/ques...rink-root-filesystem-without-booting-a-livecd



After that, I had to run:

Code:
# check the filesystem before touching anything
e2fsck -f /dev/pve1/data
resize2fs /dev/pve1/data

# grow the PV to fill the resized RAID array
pvresize /dev/md1

# grow the LV by the freed extents, then the filesystem
lvextend -l +400121 /dev/pve1/data
resize2fs /dev/pve1/data

# final consistency check
e2fsck -f /dev/pve1/data
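For reference, instead of working out the exact extent count (+400121) by hand, lvextend can also take the free space as a percentage; a sketch of the equivalent steps (same device names as above):

```shell
pvresize /dev/md1                     # grow the PV to the new array size
lvextend -l +100%FREE /dev/pve1/data  # hand all free extents to the LV
resize2fs /dev/pve1/data              # grow the ext filesystem to match
```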


My mdstat and VG/PV/LV details now look like this:

Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb2[2] sda2[3]
      1952989272 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[2] sda1[3]
      523252 blocks super 1.2 [2/2] [UU]

 
 
  vgdisplay
  --- Volume group ---
  VG Name               pve1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1209
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476803
  Alloc PE / Size       476803 / 1.82 TiB
  Free  PE / Size       0 / 0
  VG UUID               6rRw2H-qdf7-GZwr-vJtW-ajnV-HYqz-OwS52Q

 
  pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               pve1
  PV Size               1.82 TiB / not usable 3.09 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476803
  Free PE               0
  Allocated PE          476803
  PV UUID               7I6GjO-Ic1s-n0oq-dcM1-qbQ3-cT6B-DZeUSJ
 
 
  lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve1/swap
  LV Name                swap
  VG Name                pve1
  LV UUID                tGfNB4-gIws-dhxu-1S8j-eimc-fpqa-0CJwca
  LV Write Access        read/write
  LV Creation host, time radio-mh, 2012-12-13 12:30:07 +0000
  LV Status              available
  # open                 2
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/pve1/root
  LV Name                root
  VG Name                pve1
  LV UUID                CdPaVw-3LNi-0Fqz-x82B-OD8R-euFW-510fgr
  LV Write Access        read/write
  LV Creation host, time radio-mh, 2012-12-13 12:30:31 +0000
  LV Status              available
  # open                 0
  LV Size                74.50 GiB
  Current LE             19072
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve1/data
  LV Name                data
  VG Name                pve1
  LV UUID                19L1iD-FoWw-S7Zn-0e3q-w0fY-XoII-ltjNjJ
  LV Write Access        read/write
  LV Creation host, time radio-mh, 2012-12-13 12:31:00 +0000
  LV Status              available
  # open                 0
  LV Size                1.73 TiB
  Current LE             453891
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2


Maybe there is an easier way, but this is what worked for me.



@Richard
You mentioned ZFS... is it possible to convert my setup from mdadm RAID to ZFS without data loss?
Does it make sense? I have no hardware RAID controller; this is software RAID1 only.


Thank you again,
Marius
 
