Move VM to second hard disk [noob alert]

hfw

New Member
Jan 18, 2014
Hi there,

we have a Proxmox 3.1 server with 7 KVM VMs. We added a second hard disk and, via the GUI, moved 2 of the VMs to it. Both got moved over fine, but their size is now the full size of the disk space allocated to the VMs. I was under the impression that the disks created for a new KVM grow incrementally?

What am I missing, please? Any guidance for a noob is appreciated.

Regards,
-HF
 
Hi,
I don't really understand the problem. Sorry, my English may not be up to yours.
If you mean that the allocated space on "local" doesn't change after moving the VM disks, perhaps you didn't enable the "delete source" flag?

To understand the problem, please also post the following additional info:
Code:
cat /etc/pve/storage.cfg
cat /etc/pve/qemu-server/VMID.conf
pvs
vgs
lvs
fdisk -l
Udo
 
Hi Udo,

thanks for the reply and directions. Let me quickly explain in more detail what I meant before I flood this thread with config files. I've got a feeling it's just me not understanding some basic Proxmox concepts.

- We have 1 server with 1 hard disk (HD1) sized 250 GB
- Proxmox 3.1 was installed on HD1 with all install defaults
- We uploaded some ISOs to be used to install new KVMs
- We then created 7 KVM VMs with different disk file sizes (30, 50, 90 GB)
- In total, the assigned disk file sizes of the individual KVMs well exceeded the real capacity of HD1
- So we ASSUMED the disk files for each KVM must be incremental (thin-provisioned)

then

- we added a second hard disk (HD2), also with 250 GB capacity (also local)
- we moved 2 KVMs (via the GUI and with 'delete source' ticked) from HD1 to HD2
- HD2 was then already filled up with 180 GB out of the 250 GB

So the main question is: how can HD1 hold so many KVMs whilst HD2 can only hold 2 of them?
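
For reference, something like this is roughly how I'd check the sizes (just a sketch; the paths assume the default 'local' storage layout):
Code:
# apparent (allocated) size vs. real on-disk usage of the qcow2 disk files
ls -lh /var/lib/vz/images/*/*.qcow2
du -h /var/lib/vz/images/*/*.qcow2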

Do I make any sense?

TIA

-HFW
 
Hi,
it depends on the format of the VM disk. If you use qcow2 as the format (the storage must have a filesystem), only the actual content uses space.
If you copy this disk to an LVM storage, the destination is in raw format and the logical volume has the same size as the virtual disk (the logical volume is an image of a real hard disk).

If you use raw format on a filesystem, the file is a sparse file (so it also doesn't take up the whole size in the beginning).
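
You can see the difference between the virtual size and the space really used with something like this (only a sketch; replace <VMID> and adjust the path to your storage):
Code:
# compare virtual size with the space really used on disk (example path on the default 'local' storage)
qemu-img info /var/lib/vz/images/<VMID>/vm-<VMID>-disk-1.qcow2   # shows "virtual size" and "disk size"
du -h /var/lib/vz/images/<VMID>/vm-<VMID>-disk-1.qcow2           # real blocks used on the filesystem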

Udo
 
Hi Udo,

the VM disks are all qcow2, except for 1. So we moved 4 x KVM qcow2 disks from HD1 to HD2. HD2 ended up full with only 4 KVMs, whereas HD1 could hold all of them. Moving them back from HD2 to HD1 is not possible, for it would exceed the capacity of HD1.

Thanks,
-HF
 
Hi,
to find the issue, please post the output of the commands posted before (plus mount and df -h):
Code:
cat /etc/pve/storage.cfg
cat /etc/pve/qemu-server/VMID.conf
pvs
vgs
lvs
fdisk -l
mount
df -h
Udo
 
Thanks for the assistance. Please see below. (xxxx hashed out private data)

root@test:~# cat /etc/pve/storage.cfg
dir: backup
path /backup
shared
content backup
maxfiles 5


dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0


dir: pve-disk2-backups
path /pve-disk2/backups
shared
content backup
maxfiles 5


dir: pve-disk2-data
path /pve-disk2/data
content images,iso,vztmpl,rootdir
maxfiles 5


root@test:~# cat /etc/pve/qemu-server/511.conf
#FQDN%3A xxxxxxx
#LAN%3A 10.0.4.2
#WAN%3A xxxxxxx
#Patched%3A YES
bootdisk: virtio0
cores: 2
cpu: host
cpuunits: 10000
ide2: none,media=cdrom
memory: 4096
name: xxxx
net0: e1000=56:49:B3:85:D2:E0,bridge=vmbr0
net1: rtl8139=BE:BF:28:6A:7A:52
ostype: l26
sockets: 1
tablet: 0
vga: qxl
virtio0: pve-disk2-data:511/vm-511-disk-1.qcow2,format=qcow2,size=90G



root@test:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 pve lvm2 a-- 232.38g 16.00g




root@test:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 232.38g 16.00g



root@test:~# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
data pve -wi-ao--- 154.39g
root pve -wi-ao--- 58.00g
swap pve -wi-ao--- 4.00g


root@test:~# fdisk -l


Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001c001


Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1048575 523264 83 Linux
/dev/sda2 1048576 488396799 243674112 8e Linux LVM


WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.




Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Device Boot Start End Blocks Id System
/dev/sdb1 1 488397167 244198583+ ee GPT


Disk /dev/mapper/pve-root: 62.3 GB, 62277025792 bytes
255 heads, 63 sectors/track, 7571 cylinders, total 121634816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/pve-root doesn't contain a valid partition table


Disk /dev/mapper/pve-swap: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/pve-swap doesn't contain a valid partition table


Disk /dev/mapper/pve-data: 165.8 GB, 165771476992 bytes
255 heads, 63 sectors/track, 20153 cylinders, total 323772416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/pve-data doesn't contain a valid partition table


root@test:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=2042095,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1635652k,mode=755)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3271300k)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
/dev/sda1 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
none on /sys/kernel/config type configfs (rw,relatime)
beancounter on /proc/vz/beancounter type cgroup (rw,relatime,blkio,name=beancounter)
container on /proc/vz/container type cgroup (rw,relatime,freezer,devices,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,relatime,cpuacct,cpu,cpuset,name=fairsched)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
/dev/sdb1 on /pve-disk2 type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
 
Hi,
you have defined /pve-disk2/ twice - for backup and for images.
I guess the space used for backups fills the disk?

Look with
Code:
du -hs /pve-disk2/*
BTW, it's enough to define the storage once, like
Code:
dir: pve-disk2
path /pve-disk2
You can select which content types the storage is usable for - like backup/images/templates...
PVE creates subdirectories for each type, like /pve-disk2/dump, /pve-disk2/images and so on.

BTW 2: You defined the storage on the second disk as shared, but this isn't shared storage. Shared storage means that the same storage is accessible on more than one node (NFS mounts, iSCSI, FC RAIDs, Ceph, DRBD...).
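
Putting both points together, a single definition could look like this (only a sketch based on your paths; it drops the 'shared' flag, keep maxfiles if you want the backup retention):
Code:
dir: pve-disk2
path /pve-disk2
content images,iso,vztmpl,rootdir,backup
maxfiles 5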

Udo
 
Hi Udo,

the backup dir on pve-disk2 holds no data.

So we just created a test KVM with a 10 GB qcow2 disk on HD1. After installing the OS, the real size of the file on disk is only 2.4 GB. We then moved the test KVM to HD2, and after the move the real size of the file is 10 GB.

So it expands.....
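
Would something along these lines (with the VM shut down first) shrink the file back to its real content size, or is that the wrong approach? Just a sketch, <VMID> is a placeholder:
Code:
# rewrite the qcow2 image so unused space is dropped again (run only while the VM is stopped)
cd /pve-disk2/data/images/<VMID>
qemu-img convert -O qcow2 vm-<VMID>-disk-1.qcow2 vm-<VMID>-disk-1-compact.qcow2
mv vm-<VMID>-disk-1-compact.qcow2 vm-<VMID>-disk-1.qcow2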
 
