ZFS and I/O delay measurements in Proxmox 4.1.33

The Samsung is normally irrelevant to my problem, since the Proxmox system is installed on it and of course none of my VMs run from it, so no I/O delay there. I always separate the system disk from data storage or VM hosting.

I will try to find some documentation about the fio-based test, because I'm not familiar with this terminology.
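
(For reference, a typical fio-based random-write test runs against a file on the pool; the mountpoint /zpoolada, file name and sizes below are only examples and assume the pool is mounted there:)

Code:
fio --name=zfs-randwrite --filename=/zpoolada/fio-testfile --size=2G \
    --rw=randwrite --bs=4k --ioengine=psync --numjobs=1 --runtime=60 \
    --time_based --group_reporting

The same command with --rw=randread measures the read side; remember to delete /zpoolada/fio-testfile afterwards.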

So which of my pveperf tests was the right one for my RAID 10, then?

And what test do you recommend running during an iostat measurement?
 
So which of my pveperf tests was the right one for my RAID 10, then?

That was not good either; I'd expect at least twice that amount.

And what test do you recommend running during an iostat measurement?

You can use iostat whenever it feels slow. Any read/write operation should yield similar results for all your disks; if it does not, then something is fishy.
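
A minimal sketch of such a check (the mountpoint /zpoolada and the test file name are assumptions): watch the per-disk statistics while generating some write load, then compare the lines for the individual disks.

Code:
# terminal 1: per-device statistics every 5 seconds
iostat -x 5

# terminal 2: generate write load on the pool, then clean up
dd if=/dev/zero of=/zpoolada/ddtest bs=1M count=4096 conv=fsync
rm /zpoolada/ddtest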

I don't know if it was already mentioned, but normally ZFS uses up to half of the RAM, which means that you should also assign at most the other half of your machine's RAM to KVM/LXC.
 
So this thing that I posted earlier wasn't good either:
Code:
root@ns001007:~# pveperf /dev/zpoolada/
CPU BOGOMIPS:      63980.16
REGEX/SECOND:      3302348
HD SIZE:           0.01 GB (udev)
FSYNCS/SECOND:     161446.17
DNS EXT:           43.48 ms
DNS INT:           26.18 ms
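
(Side note: the "HD SIZE: 0.01 GB (udev)" line shows that this run measured the tiny udev filesystem mounted on /dev, not the pool. pveperf expects a mounted path, so something like the following would test the pool itself, assuming it is mounted at /zpoolada:)

Code:
pveperf /zpoolada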

In a couple of hours I will post my iostat output with just 3 VMs launched and the system mostly idle; my weekly backup is running right now. My first problem is the I/O delay when the system shouldn't be busy doing anything: with 1% CPU usage and 5 or 10% I/O delay, that shouldn't happen.

You said half the RAM? Of the total amount of the machine? From what I read, ZFS uses at most 7/8 of the RAM, because the old Solaris system needed 1/8 of 8 GB of RAM to work properly, and it's dynamically managed. So should I consider that I only have 16 GB of RAM left for my VMs and LXC containers, is that what you meant?
 
ZFS on Linux uses at most half of the RAM as ARC; this is the compiled-in default. It does not matter what Solaris or FreeBSD do, because their implementations are different.

You can limit the amount of RAM used for ZFS if you like. More RAM for ZFS means faster operation.
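
For example, capping the ARC at 8 GiB (the value is only an illustration) is normally done through a module option:

Code:
# /etc/modprobe.d/zfs.conf -- zfs_arc_max is given in bytes (8 GiB here)
options zfs zfs_arc_max=8589934592

After changing it, run update-initramfs -u and reboot so the limit applies from boot; the same value can also be written to /sys/module/zfs/parameters/zfs_arc_max to change it on the fly.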

I cannot say what is causing your high I/O-wait times, but it has to be the VMs. Are they all on zvols? You can optimize by running them with discard and virtio, if that is not already the case.
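
A sketch of those two optimizations with the qm CLI (the VM ID, storage name and disk name are placeholders; the disk first has to be detached from IDE, and a Windows guest needs the VirtIO drivers installed before it can boot from a SCSI/VirtIO disk):

Code:
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 local-zfs:vm-100-disk-1,discard=on
qm set 100 --bootdisk scsi0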
 
ZFS on Linux uses at most half of the RAM as ARC; this is the compiled-in default. It does not matter what Solaris or FreeBSD do, because their implementations are different.

You can limit the amount of RAM used for ZFS if you like. More RAM for ZFS means faster operation.

I cannot say what is causing your high I/O-wait times, but it has to be the VMs. Are they all on zvols? You can optimize by running them with discard and virtio, if that is not already the case.

Mmmh, okay, a different implementation; now I understand better.
Well, I don't know how to identify whether it's a zvol. I've restored them from backup and they appeared as disks themselves, so raw partitions? They are not files like qcow2, I mean. But maybe I need to do something special for that?
Most of them are on IDE and the discard option is not checked.
So you confirm that nearly constant I/O is not normal?

Here is the log where I launch Windows 10 without the optimizations you asked about, but with some programs launched plus the various programs that run at startup. Tell me if you find something in my mess, but it doesn't seem that my disks are running at different speeds, does it? I guess the different speeds between the two mirrors are normal?
Are the different zd devices the different zvols for each VM?
Sorry about the zip, but it was 1.5 MB.
 

Attachments

This is really not much traffic and the disks have similar performance in all files. Thanks for sharing.

One big thing is IDE, but that cannot be the problem, because the very same VMs were not slow. So this is not really the problem you have at the moment.

The zd devices are the ZFS block devices, so you do use zvols (ZFS volumes). Here is an example from my laptop:

Code:
$ zfs list -t all -o name,type -r zpool/proxmox
NAME                                           TYPE
zpool/proxmox                                  filesystem
zpool/proxmox/base-2000-disk-2                 volume
zpool/proxmox/base-2000-disk-2@__base__        snapshot
zpool/proxmox/subvol-100-disk-1                filesystem
zpool/proxmox/subvol-1003-disk-1               filesystem
zpool/proxmox/subvol-1003-disk-1@zfs-export    snapshot
zpool/proxmox/subvol-1004-disk-1               filesystem
zpool/proxmox/subvol-1004-disk-1@zfs-export    snapshot
zpool/proxmox/subvol-1005-disk-1               filesystem
zpool/proxmox/subvol-1005-disk-1@zfs-export    snapshot
zpool/proxmox/vm-1000-disk-1                   volume
zpool/proxmox/vm-1001-disk-1                   volume
zpool/proxmox/vm-1001-disk-1@updates-20160412  snapshot
zpool/proxmox/vm-2000-disk-1                   volume
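
The mapping between the zdX devices that show up in iostat and the zvol names can be read from the symlinks under /dev/zvol (the pool/dataset path below is from the example above; adjust it to your pool):

Code:
$ ls -l /dev/zvol/zpool/proxmox/

Each entry there (e.g. vm-1000-disk-1) is a symlink to the corresponding zd device, which lets you tie the iostat lines back to individual VM disks.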
 
I really don't know what to do. I've tried restoring one VM and launching just that one, but nothing changed. Should I redo the ZFS pool? Or maybe reinstall the whole system?
 
Hi all. I have some problems with ZFS I/O. What does the community recommend?

Attachments: high_io.JPG, disks.JPG, zpool.JPG

I have a Proxmox Virtual Environment 5.0-23 installation on a Dell T130 without a real RAID controller; the disks are in ATA AHCI mode. Disk configuration: the hard drives are in a mirror, configured when the installer started, and the SSDs are in a stripe, configured after the install.
 
What is your problem with your I/O? If I read your images correctly, you installed PVE on the hard disks, not the SSDs, and benchmarked those disks. 100 IOPS is in the expected range for two SATA 7.2k rpm disks in a mirrored setup.

Why have you created a striped (RAID-0) pool on the SSDs?
 
Yes. I installed Proxmox on the standard disks because my configuration needs large virtual hard disks, and the system is installed on the same disks; that's why I chose the mirrored ZFS RAID. The SSDs will be used for the system disks of some virtual machines. And my problem is the high I/O delay value in the summary tab while a virtual machine is running.
 
The SSDs will be used for the system disks of some virtual machines.

In a RAID-0 setup? Why would you run a RAID-1 and a RAID-0, both with crucial data? Please use RAID-1 for the SSDs.
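
Recreating the SSD pool as a mirror would look roughly like this (the pool name and device paths are placeholders; destroying the pool wipes everything on it, so move the data away first):

Code:
zpool destroy ssdpool
zpool create -o ashift=12 ssdpool mirror /dev/disk/by-id/<first-ssd> /dev/disk/by-id/<second-ssd>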

And my problem is the high I/O delay value in the summary tab while a virtual machine is running.

I'd have a look at the VMs causing this. On the PVE side, you only see that you wait a lot. Having only two disks is also very bad for disk intensive workloads. As I said already, the numbers you provided are in the expected range for such a setup.

I'd check the single-drive I/O response times to see if that is where the problem lies:

Code:
iostat -x 5
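
When comparing the disks in that output, the per-request latency columns (await, or r_await/w_await depending on the sysstat version) and %util are the interesting ones; one disk showing much higher values than its mirror partner usually points at the culprit. The output can also be limited to specific devices, e.g.:

Code:
iostat -x 5 sda sdb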
 
The fact is that virtual machines on the SSD will often be reserved and where the DB is planned, and on the usual disks on which Proxmox is located, the files will be stored and will not be OK. So this behavior is considered normal? Why is so much RAM consumed? The virtual machine uses 2 GB, plus whatever the host uses; where is the remaining 30+ GB?
 
You still have not answered any of my questions, e.g. about the RAID and the iostat output.

The fact is that virtual machines on the SSD will often be reserved and where the DB is planned, and on the usual disks on which Proxmox is located, the files will be stored and will not be OK.

What files, and what do you mean by "reserved"? It's not clear to me where you want to store your database files; on the SATA disks? I hope not.

So this behavior is considered normal?

That a two-disk mirror of cheap SATA disks is very slow (in the 100-200 IOPS range)? Yes, this is normal. Without any caching or a BBU-backed write-back RAID controller, you won't improve there.
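
As a rough back-of-the-envelope check: a 7200 rpm disk spends about 8-9 ms seeking plus ~4 ms average rotational latency per random access, i.e. roughly 12 ms per I/O or about 80 IOPS per spindle. A two-disk mirror can roughly double that for reads but stays at single-disk speed for writes, which is exactly the 100-200 IOPS range mentioned above.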

Why is so much RAM consumed? The virtual machine uses 2 GB, plus whatever the host uses; where is the remaining 30+ GB?

There is a lot of information on the forum about that, for example this recent thread:

https://forum.proxmox.com/threads/memory-usage-more-than-summary-of-vms.36054/
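
For a quick look at how much of that memory is actually the ZFS ARC, the current and maximum ARC sizes (in bytes) can be read from the kernel's arcstats; a minimal sketch:

Code:
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

arc_summary prints a friendlier report, if it is installed.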
 
