Expanding virtual disk without downtime - is it possible?

offerlam

Dec 30, 2012
Hi all,

The only way I have been able to do this is with downtime, where I boot from an Ubuntu CD and use GParted.

But it would be REALLY sweet if I could do this without any downtime..

This is the disk layout of one of my VMs:

Code:
dingit@Owncloud01:~$ df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/Owncloud01--vg-root   85G   38G   43G  47% /
udev                             2.0G  8.0K  2.0G   1% /dev
tmpfs                            396M  280K  396M   1% /run
none                             5.0M     0  5.0M   0% /run/lock
none                             2.0G     0  2.0G   0% /run/shm
/dev/vda1                        228M  182M   35M  85% /boot
dingit@Owncloud01:~$

So if I wanted to expand my disk I would first increase its size in Proxmox, but that does not expand the partition.

I have been reading up on it in the Proxmox documentation, but I can't get the command syntax right, if it is possible at all and I'm not just misreading the docs..

THANKS

Casper
 
You could try:
Code:
qm set <vmid> -hotplug 1
But I would suggest that you use unpartitioned space instead of an LVM volume inside the VM (e.g. /dev/vdc, without any partition, that maps to a host LVM volume).
 
If you succeeded in extending the virtual disk, then you are off to a good start.
But first you have to:
- extend the physical drive seen by your VM, if it didn't happen automatically (for SCSI disks, echo "1" > /sys/bus/scsi/devices/0\:0\:1\:0/rescan forces a rescan of SCSI ID 0:0:1... I don't know if something similar is needed for paravirtualized drives) -- check whether dmesg|tail reports something like "vda: detected capacity change from xxxxx to yyyy"
- resize the LVM partition (how depends on the partition table you're using), or just add another partition in the empty space, create a PV on it, and add that PV to the existing VG
- resize the LV
- resize the filesystem in the LV -- XFS must be resized online, and ext3/ext4 can also be grown while mounted with resize2fs; shrinking, on the other hand, needs an offline (unmounted = downtime) resize. That offline case is one of the reasons I use XFS on big filesystems... another is the sub-second check after reboots.
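For an Ubuntu guest laid out like the one in the first post (LVM root on ext4), the steps above could look like this no-downtime sketch. The disk (/dev/vda), VG ("Owncloud01-vg") and LV ("root") names are read off the df output; the new partition name /dev/vda3 is an assumption:

```shell
# Sketch of the "add another PV" route, assuming virtio disk /dev/vda,
# VG "Owncloud01-vg" and an ext4 root LV as in the df output above.

# 1. Check that the guest noticed the bigger disk:
dmesg | tail                # look for "vda: detected capacity change"

# 2. Create a new partition in the freed space with fdisk or parted
#    (here assumed to come up as /dev/vda3), then make it a PV and
#    grow the VG with it:
pvcreate /dev/vda3
vgextend Owncloud01-vg /dev/vda3

# 3. Grow the LV and the filesystem; resize2fs grows a mounted ext4
#    online, so nothing has to be unmounted:
lvextend -l +100%FREE /dev/Owncloud01-vg/root
resize2fs /dev/Owncloud01-vg/root
```

The nice property of this route is that every command runs against a mounted, in-use filesystem.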

Resizing an LVM partition in a GPT drive requires deleting and re-creating the partition (as long as you keep the same starting sector and the new partition is bigger than the old one, you shouldn't have problems, but parted requires that the partition be unmounted, so if you really need zero downtime you'll have to add another PV instead).

Here's what I did on one of my VMs:
Code:
root@str957-share:~# umount /srv/ArchivioStorico/
root@str957-share:~# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) u s
(parted) print                                                            
Error: The backup GPT table is not at the end of the disk, as it should be.  This might mean that another
operating system believes the disk is smaller.  Fix, by moving the backup to the end (and removing the
old backup)?
Fix/Ignore/Cancel? f                                                      
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of
the space (an extra 8589934592 blocks) or continue with the current setting? 
Fix/Ignore? f                                                             
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sdb: 15032385536s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name             Flags
 1      2048s  6442448895s  6442446848s  xfs          ArchivioStorico

(parted) rm 1                                                             
(parted) mkpart ArchivioStorico 2048 100%
(parted) p                                                                
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sdb: 15032385536s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End           Size          File system  Name             Flags
 1      2048s  15032383487s  15032381440s  xfs          ArchivioStorico

(parted) q                                                                
Information: You may need to update /etc/fstab.                           

root@str957-share:~# mount /srv/ArchivioStorico/
root@str957-share:~# xfs_growfs /srv/ArchivioStorico/
meta-data=/dev/sdb1              isize=256    agcount=7, agsize=134217600 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=805305856, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=262143, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 805305856 to 1879047680
root@str957-share:~# df
Filesystem           1K-blocks      Used     Avail Use% Mounted on
/dev/sdc1             15378832    789776  13807848   6% /
tmpfs                  1030548         0   1030548   0% /lib/init/rw
udev                   1025696       140   1025556   1% /dev
tmpfs                   841740         0    841740   0% /dev/shm
/dev/sdc4              2843300     69792   2629076   3% /tmp
/dev/sdc3             13841760   1093880  12044752   9% /var
/dev/sda1            5366620160 3712345160 1654275000  70% /srv/shared
/dev/sdb1            7515142148 3159558612 4355583536  43% /srv/ArchivioStorico
As you can see there's been a short downtime at the start, when I resized the partition -- but a reboot takes way longer... Your users will hardly notice.
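As a sanity check on the transcript above, the numbers parted printed (512-byte sectors) line up exactly; nothing here is Proxmox-specific, it is just the arithmetic spelled out:

```shell
# Sanity-check the numbers in the parted transcript (512-byte sectors):
disk=15032385536      # total disk size parted reported, in sectors
extra=8589934592      # "extra blocks" parted offered to reclaim
tib=$((1024 * 1024 * 1024 * 1024))
echo "disk now: $((disk * 512 / tib)) TiB"            # the 7 TB target
echo "growth:   $((extra * 512 / tib)) TiB"           # 4 TiB added
echo "before:   $(((disk - extra) * 512 / tib)) TiB"  # the original 3 TB
# and the partition itself: old size + extra sectors = new size, exactly
old=6442446848
new=15032381440
[ $((old + extra)) -eq "$new" ] && echo "partition math checks out"
```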
 
Hi you two,

thanks for answering!

I think Ndk73 is on to something..

a few questions though...

As you can see I'm using LVM... is it possible to extend an LVM partition without downtime? From my reading I was led to believe this should be possible?

As for the filesystem I'm a little confused... you say XFS doesn't take any downtime, but in your code example you say you had a little downtime, and it shows you are using XFS? Am I missing something?

In your example you start by unmounting /srv/ArchivioStorico/ - what is that?
Are you doing it all from the machine's own terminal? By that I mean you don't boot into a live CD to run parted from? I'm puzzled about how I should do it, since all my partitions are on one disk. But perhaps it is possible?

I haven't chosen any particular filesystem, but I think Ubuntu uses ext4 as standard? If so, is there any way to convert ext4 to XFS?

Thanks

Casper
 
By default Proxmox has hotplug turned off. To enable hotplug, simply put this in vmid.conf: hotplug: 1

Just to note, hotplug is not resizing. Resizing a disk works out of the box with just a few clicks in the GUI (of course, inside the VM you need to do some adaptation, depending on the OS used).
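For the record, the GUI resize also has a CLI equivalent on the Proxmox host; the vmid and disk name below are placeholders, use whatever your VM's config actually calls the disk:

```shell
# Grow a VM disk from the Proxmox host; "virtio0" is an assumed disk
# name, and the leading "+" means "add this much to the current size".
qm resize <vmid> virtio0 +5G
```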
 
Look at what I'm doing while the fs is unmounted: I delete and recreate the partition (with exactly the same starting sector! Miss this and say adios to all your data). That can't be done with a mounted fs; parted simply won't allow it. If you host your fs on the raw device, you can probably resize it while online, but I have not tested that, and it's usually considered bad practice.

/srv/ArchivioStorico is the filesystem I needed to resize (from 3TB to 7TB). It's backed by an LV on a Dell MD3200 (which I probably wouldn't buy again... but that's another story).
The machine where I mount /srv/ArchivioStorico exports other shares (backed by another LV), which this way could remain online. I did it all from an SSH shell.

The IMVHO downside of extN is its need for periodic checks: on multi-TB volumes, it can take days to come back online after a simple reboot, more if the fs was "dirty". Sure, you can disable 'em (root knows what he's doing), but I think they're there for very good reasons!

The only way I know to convert from ext4 to XFS is a complete backup & restore, and you'll need more or less twice the used space (depending on how much you can compress the source data).
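A backup & restore of that kind, assuming you can attach a scratch disk big enough for the used data, could be as simple as this sketch. The mount point /srv/data and the disk /dev/vdc are made-up names, not from this thread:

```shell
# One-shot ext4 -> XFS migration via a second disk; /srv/data and
# /dev/vdc are placeholder names for the old filesystem and new disk.
mkfs.xfs /dev/vdc
mkdir -p /mnt/new
mount /dev/vdc /mnt/new
rsync -aHAX /srv/data/ /mnt/new/   # -a perms/times, -H hardlinks, -A ACLs, -X xattrs
umount /mnt/new /srv/data
# finally point the /etc/fstab entry for /srv/data at /dev/vdc and remount
```

rsync has the advantage over a plain cp that you can re-run it to pick up only what changed, which shortens the final cut-over window.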
 
Just to note, hotplug is not resizing. Resizing a disk works out of the box with just a few clicks in the GUI (of course, inside the VM you need to do some adaptation, depending on the OS used).

What would those steps be?

I would guess "pvresize /dev/sd*" would do the job after using the GUI to extend by e.g. 5GB, but I was wrong. Could you elaborate?
 
Proxmox handles the resizing of the volume, but this will not be noticed from inside by the VM's OS. There you have to expand the partition using whatever tool is available for the OS in question. After doing that, you will likely have to expand the filesystem as well, using whatever tool is available for your OS's filesystem.
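In the common case where the PV lives on a partition, this is also why a bare pvresize appears to do nothing: the partition has to grow first. A sketch of the in-guest sequence, with made-up device and VG names:

```shell
# After enlarging the disk in the GUI: grow the partition, then the PV,
# then the LV, then the filesystem. growpart comes from the cloud-utils
# package; deleting/recreating the partition with fdisk works too.
growpart /dev/sda 2                   # grow partition 2 to fill the disk
pvresize /dev/sda2                    # let the PV claim the new space
lvextend -l +100%FREE /dev/myvg/root  # hand all free extents to the LV
resize2fs /dev/myvg/root              # grow ext4 online, no unmount
```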
 
