I/O and bandwidth limit

Would you please explain what this command does?
I really need to get the I/O rate for each VM, and as I have about 10 VMs on my server, do you think it's a good idea to make a partition for each one?
Hi,
"iostat -dm 5 device" shows the io-rate of the device averaged over 5 seconds - or every 5 seconds show iostat the ios:
Code:
 iostat -dm 5 sdb
Linux 2.6.32-1-pve (proxmox1)     27.04.2010     _x86_64_

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb               7,55         0,27         0,20    2109711    1582674

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb             362,60         0,27        25,02          1        125

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb             898,60         0,62        63,09          3        315

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb             906,00         0,62        63,36          3        316
If you need to see the differences between the VMs, it's perhaps best to use an LVM storage. Each disk can then be selected separately as dm-xx.
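To see which dm-xx belongs to which VM disk, you can look at the device-mapper minor numbers (a small sketch; the volume name is just an example):
Code:
# dmsetup prints "name (major, minor)"; minor 3 means /dev/dm-3
dmsetup ls
# e.g.:  pve-vm--102--disk--1   (251, 3)
iostat -dm 5 /dev/dm-3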
What about monitoring inside the VM (e.g. with Icinga/Nagios)?

Udo
 
What about monitoring inside the VM (e.g. with Icinga/Nagios)?
All of my VMs are KVM-based Windows guests.

I've installed Proxmox from its installation disk, so how can I make a partition for each VM when I create them, and is it possible to resize partitions?
 
Hi,
if you use LVM storage, the disks you define on top of it are logical volumes, which behave like single partitions... for testing you can add a single hard disk to your server, make a volume group on it (e.g. pvcreate /dev/sdb1; vgcreate -s 4M kvmvg /dev/sdb1; vgscan) and add the storage in the web GUI.
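Spelled out, assuming the new disk is /dev/sdb with a single partition sdb1 (create the partition with fdisk first):
Code:
# make the partition an LVM physical volume
pvcreate /dev/sdb1
# create a volume group named kvmvg with 4 MB extents
vgcreate -s 4M kvmvg /dev/sdb1
# rescan so the new volume group shows up
vgscan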
If all runs fine you can shrink your existing (RAID?) disk and use a separate partition for the KVM LVM (e.g. sda3), but you must boot a live CD distro, save everything, remove the existing LVM, repartition the disk, create the PVE LVM and the new KVM LVM, recreate the old LVs and restore the data. It's no magic, but you should have a little experience with it.

Udo
 
Can I split /dev/mapper/pve-data into a few partitions before creating new VMs and then use Proxmox to create each VM on one of them, so it will be possible to get the I/O rate per VM?
 
Hi,
no, not so easy. You can shrink pve-data and use the web GUI to create an LVM storage on the pve VG, so the freed space can also be used for KVM logical volumes (I have not tested this). But this is not recommended!! And you can get into trouble if you leave too little free space in the VG (backups fail).

Udo
 
My needs are not ordinary, so I know meeting them isn't easy.
I will make backups on another server, so don't worry about backups. Please help me configure such a scenario.
Regards
 
Hi,
here is a short example of how to use the pve VG also for logical volumes of KVM VMs.
Note: this is not the recommended way and is for testing only (but I think it works).

First, look at how much data you use at /var/lib/vz:
Code:
df -k /var/lib/vz
Perhaps you need an external disk to save the whole content of /var/lib/vz. In my example it fits on /.
Stop all VMs.
Save everything from /dev/pve/data:
Code:
cd /var/lib/vz/
tar cvzf /root/data.tar.gz .
# leave the mount point first, otherwise umount fails with "device is busy"
cd /
umount /var/lib/vz
Look how big the LVs and the volume group are:
Code:
vgdisplay
lvdisplay
Remove the data LV! If your backup isn't valid, you lose all disks of the VMs!
Code:
lvremove /dev/pve/data
and create a new, smaller one (so you have enough space left for the KVM disks):
Code:
lvcreate -L 150G -n data /dev/pve
mkfs.ext3 /dev/pve/data
mount /var/lib/vz
Restore the content:
Code:
cd /var/lib/vz
tar xvzf /root/data.tar.gz
Look at the free space with vgdisplay (remember - you need at least 4 GB free space for backups).
Add an LVM group in the storage menu and select pve (name e.g. pve-vg).
Now you can create a KVM VM and select pve-vg as the storage for its disk.
Look with lvdisplay and you will see your disk as a logical volume.
Now you can use iostat on this disk (e.g. dm-3).

Udo
 
Network bandwidth you might be able to restrict using standard Linux packet-shaping tools. If you're running KVM images, each image will have its own network device, such as vmtab123i1; just restrict the available bandwidth on the appropriate interface...
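For example, with tc from iproute2 you could put a simple token bucket filter on the VM's interface (the interface name and the 10 Mbit rate are just examples; test before relying on it):
Code:
# limit traffic going from the host to the VM to 10 Mbit/s
tc qdisc add dev vmtab123i1 root tbf rate 10mbit burst 10kb latency 70ms
# show the active qdisc
tc qdisc show dev vmtab123i1
# remove the limit again
tc qdisc del dev vmtab123i1 root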
 
Udo, is there any restriction on the number of partitions on a device? I might need about 10 partitions.
Network bandwidth you might be able to restrict using standard Linux packet-shaping tools. If you're running KVM images, each image will have its own network device, such as vmtab123i1; just restrict the available bandwidth on the appropriate interface...
Would you please explain a little bit more? I'm using just KVM.
 
Hi,
in my explanation you don't use partitions; you use logical volumes on an LVM storage. There are no (practical) limits. A logical volume is a little bit like a container file on a filesystem, but the volume group doesn't have a filesystem... see the LVM2 howto!

Udo
 
Thanks,
I've made an LVM group and introduced it to Proxmox. Now each VM has its own record in iostat, but I have two more questions:
1- Proxmox is creating a file for each VM in /dev/vm-data (which I've configured it to do), but I can't get the size of the file using the ls -s command; it's always zero. How can I see the actual virtual disk file and maybe copy it for backup purposes?
2- With iostat -dm 2 I can see the actual I/O rate per second, but how can I get statistics for the last hour or the last 24 hours?
 
Hi,
Simply use the lvdisplay command and you see the size (in this case 8 GB):
Code:
# lvdisplay /dev/pve/vm-102-disk-1
  --- Logical volume ---
  LV Name                /dev/pve/vm-102-disk-1
  VG Name                pve
  LV UUID                n3HwVU-IlpH-ZC0S-SRJY-Y91K-E9cF-XhOHYI
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                8,00 GB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:3
For backups, use the backup function from Proxmox - it works very well.
You can also save the logical volume with dd, but then the VM should not be running. Or you make a snapshot of the LV and save the data (this is what the Proxmox backup does for you).
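A rough sketch of the snapshot way (the snapshot size and the paths are only examples; the snapshot must be big enough to hold all writes that happen during the backup):
Code:
# freeze the current state of the running VM's disk
lvcreate -s -L 2G -n vm-102-snap /dev/pve/vm-102-disk-1
# copy the frozen state to a backup file
dd if=/dev/pve/vm-102-snap of=/backup/vm-102-disk-1.raw bs=1024k
# drop the snapshot again
lvremove -f /dev/pve/vm-102-snap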
2- With iostat -dm 2 I can see the actual I/O rate per second, but how can I get statistics for the last hour or the last 24 hours?
If you run "iostat -m /dev/dm-3" every hour and grep the MB_read and MB_wrtn values, you can graph the access from the difference between the values... it's a little bit of scripting, for example:
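A minimal sketch (the device name and logfile are examples; run it hourly from cron and take the difference between consecutive lines):
Code:
#!/bin/sh
# append a timestamped line with the cumulative MB_read/MB_wrtn counters
echo "$(date '+%F %T') $(iostat -m /dev/dm-3 | grep -w dm-3)" >> /var/log/dm-3-io.log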
This kind of graph can be done well with Icinga - I wrote a wiki entry in the Proxmox wiki about this (monitoring). You can easily adapt it to also monitor the disk I/O.

Udo
 
This is the output of iostat -m /dev/dm-5 on my server:
Code:
Linux 2.6.18-2-pve     04/30/10     _x86_64_

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.66    0.00    2.01    3.51    0.00   93.82

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
dm-5             62.96         0.23         0.01       1348         82

Does it mean the average read rate for the last hour was 0.23 MB/s and the total I/O transfer was 0.23*60*60 MB?
 
Hi,
no - it means that since the disk became active, 1348 MB have been read from the disk and 82 MB written to it. If you execute iostat again after one hour you get perhaps 1400 and 90, which means that in the last hour 52 MB (1400-1348) were read from the disk and 8 MB (90-82) were written to it. You can also use -k to get kilobytes, or no flag to get 512-byte blocks.

Udo
 
Simply use the lvdisplay command and you see the size (in this case 8 GB)
I've built an image to start the VMs from, so there is no need to install Windows from CD-ROM for each VM, but this way I can't access the .raw file of each VM.
 
Hi,
Where is the problem? Copy your template onto the LV and all is fine.
Example:
Code:
cd /where/the/template/is
dd if=template-win-disk-1.raw of=/dev/test-vg/vm-121-disk-1 bs=1024k
See also "man dd".

Udo
 
Thanks,
It works, but this way I will always have large raw files. For example, maybe the VM only uses 4 GB of files, but because the virtual HDD is 32 GB, the backup files will always be 32 GB.
Here is just a sample:
Code:
dd if=/dev/vms-data/vm-116-disk-1 of=/template-win-disk.raw bs=1024k
15360+0 records in
15360+0 records out
16106127360 bytes (16 GB) copied, 225.731 s, 71.4 MB/s
 
Hi,
that's right - with LVM you must always use raw. qcow2 is only usable with local or NFS storage (I think so, but I never tried it). On the other hand, you get better performance with raw, and you can compress your template (with bzip2).
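For example (filenames as in the dd example above; -k keeps the original):
Code:
# compress the template - unused space is mostly zeros and shrinks well
bzip2 -k template-win-disk-1.raw
# later, restore it directly onto a logical volume
bzcat template-win-disk-1.raw.bz2 | dd of=/dev/test-vg/vm-121-disk-1 bs=1024k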

Udo
 
It works, but this way I will always have large raw files. For example, maybe the VM only uses 4 GB of files, but because the virtual HDD is 32 GB, the backup files will always be 32 GB.

Maybe you can use 'cp' with the option '--sparse' instead.
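Something like this (untested here; the target path is just an example) - cp reads the contents of the block device and turns runs of zeros into holes in the output file:
Code:
cp --sparse=always /dev/vms-data/vm-116-disk-1 /backup/vm-116-disk-1.raw
# ls -s shows the really allocated size, ls -l the apparent 16 GB
ls -ls /backup/vm-116-disk-1.raw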
 
