increase vm disk size

mndave668 (Guest)
Hello,
I severely underestimated one of my VMs' hard disk size. Is there a way to increase it from 100 GB to 500 GB without recreating the VM?

Thanks
 
Hi,
yes - it depends on your disk format/storage. If the storage is LVM storage, you can simply expand the logical volume.
qcow2 files can be expanded with "qemu-img resize filename +size".
If you use raw, you can also expand the disk (append space to the raw file), but be careful not to overwrite your data. It may be easier to create a new, bigger disk and copy the contents from the small disk to the big one with dd.
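A rough sketch of the host-side commands for growing the disk by 400 GB (the volume group "pve", the LV name "vm-101-disk-1" and the image path are assumptions - check your actual storage layout first):

Code:
# LVM storage: grow the logical volume by 400 GB
lvextend -L +400G /dev/pve/vm-101-disk-1

# qcow2 image: grow the virtual disk by 400 GB
qemu-img resize /var/lib/vz/images/101/vm-101-disk-1.qcow2 +400G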

In any case, you must afterwards resize the partition/filesystem inside the VM - you can use a live distro like PartitionMagic for that.
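For a Linux guest with a single ext3 root partition this could roughly look as follows (device names are assumptions, and the partition itself must first be grown, e.g. with fdisk or a GParted live CD):

Code:
# with the filesystem unmounted (e.g. from a live CD), after growing the partition:
e2fsck -f /dev/vda1
resize2fs /dev/vda1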

Udo
 
I had an OpenVZ server created with the WordPress template.
When I had this problem before, I was amazed that I could simply resize the disk using the Proxmox WebIF.

Now I have the same problem and ran out of space (on another VM),
but I'm not able to do that again....

Why isn't it possible now?
 
What is the problem (exactly)?
The hard disk is full on my OpenVZ system.
In the WebIF I set it to 140 GB, but the actual size never followed the value set in the WebIF.
I did this before with another VM (also based on the same template, "wordpress").
I believe it didn't even need a restart of the system.
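If the WebIF change does not take effect, the same resize can normally also be done from the host CLI - a sketch, assuming the container ID is 101:

Code:
# set the soft and hard disk space limit to 140 GB and save it in the container config
vzctl set 101 --diskspace 140G:140G --save
# verify from inside the container
vzctl exec 101 df -h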


Both machines have a 150 GB hard disk and a 1 TB one.
They are also running DRBD.



OpenVZ machine
Code:
root@zabbix:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/simfs             28G   28G     0 100% /
tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
tmpfs                 3.9G     0  3.9G   0% /dev/shm
overflow              1.0M     0  1.0M   0% /tmp

Its Host:
Code:
proxmox-2:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   35G  1.3G   32G   4% /
tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
udev                   10M  744K  9.3M   8% /dev
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/mapper/pve-data   93G   93G     0 100% /var/lib/vz
/dev/sda1             504M   44M  435M  10% /boot

proxmox-2:~# blkid
/dev/sda1: UUID="13987169-1556-4900-91c2-f9278713c5b8" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda2: UUID="jotiQ0-f4Bo-yKNd-M0LZ-eSGv-7SBl-vjS2Hg" TYPE="lvm2pv"
/dev/dm-0: TYPE="swap" UUID="52c04db1-a5c4-4ef5-8a96-655b4dda0f14"
/dev/dm-1: UUID="2ef9246f-8a3e-42a4-b114-936b663b14cc" TYPE="ext3"
/dev/dm-2: UUID="d75d2126-9670-4c11-a26f-e3186caf55c9" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb1: UUID="upE4dZ-ccwF-9GXc-ob9b-eaFY-FCzc-N244TH" TYPE="lvm2pv"
/dev/drbd0: UUID="upE4dZ-ccwF-9GXc-ob9b-eaFY-FCzc-N244TH" TYPE="lvm2pv"
 
Well, the filesystem on the host is simply full!

Code:
/dev/mapper/pve-data   93G   93G     0 100% /var/lib/vz
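To see which containers and images are actually eating that space, something like this on the host helps (the paths are the Proxmox defaults - adjust if your storage layout differs):

Code:
du -sh /var/lib/vz/private/*   # OpenVZ container data
du -sh /var/lib/vz/images/*    # KVM disk images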
 
omg...

Of course it is....
I started to use Proxmox for full virtualization and then learned about OpenVZ.
But these OpenVZ machines go on the normal boot disk and not on the DRBD volume, which I never realized because I was too busy with the machines themselves..

This also means I have no redundancy for the OpenVZ machines :-(

Thanks, I will now quickly delete an obsolete VM on that host and get it going again....:D
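For reference, removing an obsolete guest from the CLI to free the space could look like this (the IDs 102/103 are placeholders - double-check before destroying anything):

Code:
# KVM guest
qm stop 102 && qm destroy 102
# OpenVZ container
vzctl stop 103 && vzctl destroy 103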
 
