Help reducing the capacity of a virtual machine's hard disk

jolom

New Member
Mar 21, 2017
I need to reduce the hard disk capacity that was assigned to a virtual machine. It currently has a capacity of 600 gigabytes:
lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  600G  0 disk
├─sda1   8:1    0  118G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0    4G  0 part [SWAP]
sr0     11:0    1  175M  0 rom

After some analysis I decided to reduce the capacity to 125G, and for that I used the GParted tool. My problem is that I cannot find a way to bring the disk's real capacity down to 125G, so that it does not occupy so much space.
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       117G   60G   51G  55% /
udev            8.9G  4.0K  8.9G   1% /dev
tmpfs           1.8G  300K  1.8G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            8.9G     0  8.9G   0% /run/shm
Among the procedures I found, one works directly on the Proxmox server, using the resize2fs command. With the virtual machine powered off, I ran resize2fs against the disk image and got this error:

~# resize2fs "/var/lib/vz/images/100/vm-100-disk-1.raw" 125G
resize2fs 1.43.4 (31-Jan-2017)
resize2fs: Bad magic number in super-block while trying to open /var/lib/vz/images/100/vm-100-disk-1.raw
Could not find valid filesystem superblock.

I need to know how to proceed to effectively reduce the VM's hard disk.
 

alexskysilk

Active Member
Oct 16, 2015
Chatsworth, CA
www.skysilk.com
resize2fs: Bad magic number in super-block while trying to open /var/lib/vz/images/100/vm-100-disk-1.raw
Could not find valid filesystem superblock.
The raw device is the disk, not a partition. Boot the VM from a live CD with parted and you'll be able to resize it then. Bear in mind that you'd still not be able to shrink the raw device itself, so it may be more fruitful to simply back up the disk and restore it as thin provisioned.
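To illustrate the "Bad magic number" error above: resize2fs expects to find an ext filesystem superblock at the start of what it opens, but a raw VM disk starts with a partition table, and the filesystem lives inside sda1 at an offset. A minimal sketch with throwaway image files (assuming e2fsprogs and sfdisk are installed; nothing here touches the real vm-100-disk-1.raw):

```shell
# Work on throwaway images in a scratch directory.
mkdir -p /tmp/superblock-demo && cd /tmp/superblock-demo

# Case 1: a bare filesystem image -- the superblock sits at byte 0,
# so resize2fs can shrink it (after the mandatory clean fsck).
truncate -s 100M fs.img
mkfs.ext4 -q -F fs.img
e2fsck -f -p fs.img
resize2fs fs.img 50M                 # succeeds

# Case 2: a whole-disk image like the raw VM disk -- byte 0 holds the
# MBR partition table, not an ext4 superblock, so resize2fs bails out.
truncate -s 100M disk.img
printf 'label: dos\n,,L\n' | sfdisk -q disk.img
resize2fs disk.img 50M 2>&1 || true  # "Bad magic number in super-block"
```

This is why the command has to run against the filesystem (from inside the VM or a live CD), never against the whole raw image.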
 

jolom

New Member
Mar 21, 2017
First I tried modifying the .conf file belonging to the VM:
nano /etc/pve/qemu-server/100.conf
In the hard disk configuration I changed the 500 down to 130, like this:
ide0: local:100/vm-100-disk-0.raw,size=130G

Then I made a backup and restored it from the web UI on another server.
Backups:
INFO: starting new backup job: vzdump 100 --storage NAS --compress lzo --remove 0 --node prox11 --mode stop
INFO: Starting Backup of VM 100 (qemu)
INFO: status = stopped
INFO: update VM 100: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: squid
INFO: include disk 'ide0' 'local:100/vm-100-disk-0.raw' 130G
INFO: creating archive '/mnt/pve/NAS/dump/vzdump-qemu-100-2019_02_11-10_47_19.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '2368982e-12c4-44f1-9798-93f69ad9fdbf'
INFO: status: 0% (1125122048/536870912000), sparse 0% (182923264), duration 3, read/write 375/314 MB/s
INFO: status: 1% (5887885312/536870912000), sparse 0% (2005385216), duration 26, read/write 207/127 MB/s
INFO: status: 2% (10905321472/536870912000), sparse 0% (4322287616), duration 60, read/write 147/79 MB/s

And when I restore it on another server it still shows 500GB:
restore vma archive: lzop -d -c /mnt/pve/NAS-SERVER/dump/vzdump-qemu-100-2019_02_11-10_04_22.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp1015822.fifo - /var/tmp/vzdumptmp1015822
CFG: size: 324 name: qemu-server.conf
DEV: dev_id = 1 size: 536870912000 devname: drive-ide0
CTIME: Mon Feb 11 10:04:23 2019
Using default stripesize 64.00 KiB.
For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Logical volume "vm-150-disk-0" created.
WARNING: Sum of all thin volume sizes (500.00 GiB) exceeds the size of the thin pool pv/data and the size of the whole volume group (278.86 GiB)!
new volume ID is 'local-lvm:vm-150-disk-0'
map 'drive-ide0' to '/dev/pve/vm-150-disk-0' (write zeros = 0)
progress 1% (read 5368709120 bytes, duration 23 sec)
progress 2% (read 10737418240 bytes, duration 39 sec)
 

alexskysilk

Active Member
Oct 16, 2015
First I tried modifying the .conf file belonging to the VM:
nano /etc/pve/qemu-server/100.conf
In the hard disk configuration I changed the 500 down to 130, like this:
ide0: local:100/vm-100-disk-0.raw,size=130G
-- snip

And when I restore it on another server it still shows 500GB:
restore vma archive: lzop -d -c /mnt/pve/NAS-SERVER/dump/vzdump-qemu-100-2019_02_11-10_04_22.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp1015822.fifo - /var/tmp/vzdumptmp1015822
CFG: size: 324 name: qemu-server.conf
DEV: dev_id = 1 size: 536870912000 devname: drive-ide0
CTIME: Mon Feb 11 10:04:23 2019
Using default stripesize 64.00 KiB.
For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Logical volume "vm-150-disk-0" created.
WARNING: Sum of all thin volume sizes (500.00 GiB) exceeds the size of the thin pool pv/data and the size of the whole volume group (278.86 GiB)!
new volume ID is 'local-lvm:vm-150-disk-0'
map 'drive-ide0' to '/dev/pve/vm-150-disk-0' (write zeros = 0)
progress 1% (read 5368709120 bytes, duration 23 sec)
progress 2% (read 10737418240 bytes, duration 39 sec)
That is the correct behavior. It doesn't really matter what you put in the config; the disk image you have is provisioned for 500GB. You're being warned that the disk is provisioned for more space than the available storage has. This is OK so long as you don't use the unallocated area of your disk.
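To make the thin-provisioning behavior concrete: the restored volume still advertises 500 GiB to the guest, but the storage only consumes space for blocks that have actually been written. A sparse file shows the same idea without needing LVM (all names and sizes here are throwaway stand-ins for the thin LV):

```shell
# A sparse file as a stand-in for the thin logical volume.
mkdir -p /tmp/thin-demo && cd /tmp/thin-demo
truncate -s 1G thin.img                    # advertised (virtual) size: 1 GiB

stat -c %s thin.img                        # 1073741824 -- what the guest would see
du -B1 thin.img                            # ~0 -- what the storage actually uses

# Writing data allocates real space only for what is written.
dd if=/dev/zero of=thin.img bs=1M count=10 conv=notrunc status=none
du -B1 thin.img                            # now roughly 10 MiB allocated
```

This is why the 500 GiB logical size can exceed the 278.86 GiB volume group without immediate harm, and why it becomes a problem only if the guest fills the unallocated area.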

Long term, you may want to attach a new disk to the system and then use a Partition Magic live CD or equivalent to clone your original disk to the new, smaller disk.
 

jolom

New Member
Mar 21, 2017
That is the correct behavior. It doesn't really matter what you put in the config; the disk image you have is provisioned for 500GB. You're being warned that the disk is provisioned for more space than the available storage has. This is OK so long as you don't use the unallocated area of your disk.

Long term, you may want to attach a new disk to the system and then use a Partition Magic live CD or equivalent to clone your original disk to the new, smaller disk.
I tried cloning the 500 disk, copying only the occupied space onto a smaller disk. Everything went OK, except that at first the virtual machine told me it could not boot because the UUIDs did not match; after several attempts it finally started, but then threw another series of problems at me.
So then, how can I clone the occupied space to another disk of less capacity without dying in the attempt?
 

ness1602

Member
Oct 28, 2014
This is what I would do: add another disk, boot any Linux, recreate the partitions, and cp -a from / and all the other partitions.
Then rebuild the initramfs and start the machine.
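The reason cp -a works for this is that archive mode preserves permissions, ownership (when run as root), timestamps, and symlinks, which is what keeps the copied root filesystem bootable. A tiny sketch on throwaway directories (the real procedure of mounting the new disk, copying /, chrooting, and running update-initramfs and grub-install needs root and is not shown here):

```shell
# Demonstrate that cp -a preserves file modes and symlinks.
mkdir -p /tmp/cpa-demo/src/etc && cd /tmp/cpa-demo
echo 'data' > src/etc/conf
chmod 640 src/etc/conf
ln -s etc/conf src/link              # a relative symlink inside the tree

cp -a src dst                        # archive copy of the whole tree

stat -c %a dst/etc/conf              # 640 -- mode preserved
readlink dst/link                    # etc/conf -- symlink copied, not followed
```

A plain cp -r would follow or mangle some of these attributes, which is exactly what breaks a copied system disk.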
 

alexskysilk

Active Member
Oct 16, 2015
I tried cloning the 500 disk, copying only the occupied space onto a smaller disk. Everything went OK, except that at first the virtual machine told me it could not boot because the UUIDs did not match; after several attempts it finally started, but then threw another series of problems at me.
So then, how can I clone the occupied space to another disk of less capacity without dying in the attempt?
I don't know what your OS is, but you can always try Boot-Repair (https://help.ubuntu.com/community/Boot-Repair).
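For context on the UUID mismatch mentioned above: a freshly created filesystem gets a new UUID, while /etc/fstab and the bootloader on the cloned disk still reference the old one. Boot-Repair automates this; the manual equivalent is reading the new UUID with blkid and updating fstab (and then update-grub on the real system). A sketch on a throwaway image, where the "old" UUID in the sample fstab is made up for the demo:

```shell
# Read a new filesystem's UUID and point a stand-in fstab at it.
mkdir -p /tmp/uuid-demo && cd /tmp/uuid-demo
truncate -s 32M new.img
mkfs.ext4 -q -F new.img

new_uuid=$(blkid -s UUID -o value new.img)

# A stand-in /etc/fstab still referencing the old filesystem's (made-up) UUID.
printf 'UUID=00000000-0000-0000-0000-000000000000 / ext4 errors=remount-ro 0 1\n' > fstab
sed -i "s/UUID=00000000-0000-0000-0000-000000000000/UUID=$new_uuid/" fstab
cat fstab                            # root entry now references the new UUID
```

On the real cloned system you would edit /etc/fstab itself and follow up with update-grub so the boot entries match as well.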
 
