Hi All,
I have a Proxmox system that has been installed onto a RAID 10 array running on an LSI MegaRAID controller.
Now... it seems that there are some issues: the system will hang when the drives are being driven hard. Not a crash as such, but the system seems to get into a state where it is waiting for an I/O response... that never comes.
I recall it being mentioned somewhere that the Proxmox kernel is Red Hat based? Digging around the net has shown that this issue also occurs on Red Hat kernels generally.
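For anyone wanting to check if they're hitting the same thing: when it stalls, tasks typically pile up in uninterruptible sleep (state D) and the kernel logs "blocked for more than 120 seconds" warnings. A quick way to look (just a sketch, assuming you can still get a shell while it's stalled):
Code:
# list tasks stuck in uninterruptible I/O wait (state D)
ps -eo state,pid,cmd | awk '$1 == "D"'
# check the kernel log for hung-task warnings
dmesg | grep -i "blocked for more than"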
So....
A little digging and I found this article:
http://docs.redhat.com/docs/en-US/R.../6/html/Performance_Tuning_Guide/ch06s04.html
This is very good reading!
The long and short of it is that, at this point, I have done some configuration that seems to help on my systems.
I simply run the following:
Code:
# raid controller tuning
# use the CFQ scheduler so the iosched tunables below exist
echo cfq > /sys/block/sda/queue/scheduler
# allow more requests to queue up at the block layer
echo 512 > /sys/block/sda/queue/nr_requests
# let CFQ idle longer on each queue before moving on (default 8)
echo 16 > /sys/block/sda/queue/iosched/slice_idle
# dispatch more requests to the device per round
echo 64 > /sys/block/sda/queue/iosched/quantum
echo 1 > /sys/block/sda/queue/iosched/group_idle
# writeback limits - note dirty_bytes and dirty_ratio are mutually
# exclusive in the kernel, so the dirty_ratio line below overrides dirty_bytes
echo 500000000 > /proc/sys/vm/dirty_bytes
echo 30 > /proc/sys/vm/dirty_background_ratio
echo 60 > /proc/sys/vm/dirty_ratio
# bump readahead to 16MB (32768 x 512-byte sectors) on the array and the LVM volumes
/sbin/blockdev --setra 32768 /dev/sda
/sbin/blockdev --setra 32768 /dev/mapper/pve-data
/sbin/blockdev --setra 32768 /dev/mapper/pve-root
All of the above goes in /etc/rc.local so it gets reapplied on every boot.
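You can read the values back afterwards to make sure they actually took - nothing fancy, just checking the same files:
Code:
# confirm the scheduler and queue settings stuck
cat /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/nr_requests
cat /sys/block/sda/queue/iosched/slice_idle
# dirty_bytes reads back as 0 here because dirty_ratio was set last
grep . /proc/sys/vm/dirty_bytes /proc/sys/vm/dirty_ratio
# readahead, in 512-byte sectors
/sbin/blockdev --getra /dev/sda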
The place to start playing is really these values:
Code:
echo 16 > /sys/block/sda/queue/iosched/slice_idle   # default is 8
echo 64 > /sys/block/sda/queue/iosched/quantum
echo 1 > /sys/block/sda/queue/iosched/group_idle
Changing the slice_idle setting to 16 seems to have really helped when under load.
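If you have more than one device behind the controller, a small loop saves repeating the echoes (sdb below is just a placeholder - adjust the list to whatever your system actually has):
Code:
# apply the same CFQ tuning to each device - edit the list to suit
for dev in sda sdb; do
    echo cfq > /sys/block/$dev/queue/scheduler
    echo 16 > /sys/block/$dev/queue/iosched/slice_idle
    echo 64 > /sys/block/$dev/queue/iosched/quantum
    echo 1 > /sys/block/$dev/queue/iosched/group_idle
done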
No doubt this is specific to the MegaRAID controllers I have... but you never know. May help all round!
Rob