advice needed on IO Load

We are using an Adaptec RAID card (3105) in an 8-core Xeon server,
with 8 drives connected in RAID 50 (Seagate 7200 RPM SATA).

When doing a backup we are seeing IO load of over 70% at times.

On another system I tried to configure things differently:
each drive set up in its own array - simple drive-on-drive mirrors,
four arrays of 1 TB drives. On this server, if we symlink
/var/lib/vz/private/#ofvps to another array, MySQL does not work
under that array.

Any suggestions here? These loads are killing us.
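For reference, this is how I watch iowait while the backup runs - a minimal sketch using /proc/stat (Linux only), so the numbers come straight from the kernel rather than a monitoring tool:

```shell
#!/bin/bash
# Sample the cumulative CPU counters from /proc/stat twice, one second
# apart, and report the iowait percentage for that window. A high value
# during the backup confirms the box is stalled on disk, not CPU.
read -r _ u1 n1 s1 id1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 id2 w2 _ < /proc/stat
total=$(( (u2 + n2 + s2 + id2 + w2) - (u1 + n1 + s1 + id1 + w1) ))
[ "$total" -gt 0 ] || total=1   # guard against a zero-length window
pct=$(( 100 * (w2 - w1) / total ))
echo "iowait: ${pct}%"
```

Running this in a loop during the backup window shows whether the 70% load is really disk wait.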
 

I (and also the Adaptec website) do not know an Adaptec 3105 - do you have more details on this card?

And how do you make the backups - settings? backup destination? backup logs?

Unless you give full information it will be hard to analyze - the only thing I can see from the limited information is that you are using slow disks.
 
io issues continued

On a regular server these disks seem to work just fine,
and we don't have that much running on the VPSes - so I'm not sure why it would matter.

card details:

Cache Memory
128MB of DDR2 memory
RAID Levels
RAID 0, 1, 1E, 5, 5EE, 6, 10, 50, 60, JBOD
Key RAID Features
• Supports 8 direct-attached or up to 128 SAS or SATA disk drives using SAS expander
• RAID Levels 0, 1, 10, 5, 50, JBOD
• Advanced Data Protection Suite
• RAID Levels 1E (Striped Mirror), 5EE (Hot Space), 6 and 60 (Dual Drive Failure Protection)
• Copyback Hot Spare
• Snapshot Backup (optional)
• Dynamic caching algorithm
• Online Capacity Expansion
• RAID Level Migration
• Optimized Disk Utilization
• Quick Initialization
• Native Command Queuing (NCQ)
• Hot spares – global, dedicated, and pooled
• Background initialization
• Automatic/manual rebuild of hot spares
• SAF-TE enclosure management support
• Configurable stripe size
• S.M.A.R.T. support
• Up to 512TB array sizes
• Multiple arrays per disk drive
• Bad stripe table
• Dynamic sector repair
• Staggered drive spin-up
• Bootable array support
• Hot-plug drive support
• Redundant path failover
It's a 3805 - sorry about that.

I'm not sure where the backup logs would be located - any ideas?
 

OK, the 3805 is well known. Can you post the results of pveperf? Make sure that your system is NOT under load when you run it.

Backup logs: how do you make backups, and with which tool? If you use vzdump, you will get an email report.
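If you want to look on disk as well, a hedged sketch - the dump path below is the usual default, but it is an assumption, so adjust it if your backup storage is configured elsewhere:

```shell
#!/bin/bash
# vzdump normally leaves a per-task .log file next to each dump archive.
# DUMPDIR is the assumed default location; change it to match your setup.
DUMPDIR=/var/lib/vz/dump
if [ -d "$DUMPDIR" ]; then
    ls -lt "$DUMPDIR"/*.log 2>/dev/null | head -n 5   # newest logs first
else
    echo "dump directory $DUMPDIR not found"
fi
```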
 
pveperf results

baptism:~# pveperf
CPU BOGOMIPS: 42005.15
REGEX/SECOND: 375928
HD SIZE: 94.49 GB (/dev/pve/root)
BUFFERED READS: 68.68 MB/sec
AVERAGE SEEK TIME: 28.86 ms
FSYNCS/SECOND: 57.89
DNS EXT: 31.27 ms
DNS INT: 1.02 ms (vine.......org)


I am using the built-in backup system.

That seek time - ouch, that's kinda high I think.

next system showing:

proxmox:~# uptime
16:39:30 up 9 days, 8:30, 1 user, load average: 1.00, 0.69, 0.55
proxmox:~# pveperf
CPU BOGOMIPS: 32004.88
REGEX/SECOND: 417167
HD SIZE: 94.49 GB (/dev/pve/root)
BUFFERED READS: 188.82 MB/sec
AVERAGE SEEK TIME: 12.88 ms
FSYNCS/SECOND: 132.87
DNS EXT: 290.29 ms
DNS INT: 78.62 ms (vine...........com)


That one is set up as a mirror - two drives on the same controller, in another system.

:-( Do you think the drive speed is the biggest issue?
I hate to see SAS being needed, since they are little drives.

Trying to get the best bang for the buck here.
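One quick way to sanity-check raw sync-write speed outside pveperf is a dd run with O_DSYNC - a crude sketch in the spirit of pveperf's fsync benchmark (the file name is arbitrary; run it on the array you want to test):

```shell
#!/bin/bash
# Write 200 x 4 KiB blocks, each synced to disk before the next starts
# (oflag=dsync, GNU dd). The throughput line at the end is dominated by
# the array's commit latency, which is what a write cache would hide.
result=$(dd if=/dev/zero of=./dsync-test.tmp bs=4k count=200 oflag=dsync 2>&1 | tail -n 1)
rm -f ./dsync-test.tmp
echo "$result"
```

On a controller with write cache enabled this should be orders of magnitude faster than with the cache off.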
 

Based on both pveperf results I assume that you are NOT using the RAID controller cache, due to the VERY bad fsyncs/sec - it should not be lower than 1000.

For comparison (Adaptec 3805, battery backup unit, cache enabled on the controller, disabled on the disks, RAID 10 with 4 x 500 GB WD SATA disks):

CPU BOGOMIPS: 17027.42
REGEX/SECOND: 410122
HD SIZE: 94.49 GB (/dev/pve/root)
BUFFERED READS: 175.71 MB/sec
AVERAGE SEEK TIME: 11.08 ms
FSYNCS/SECOND: 1115.26
DNS EXT: 29.56 ms
DNS INT: 1.10 ms

You can run pveperf several times to see whether the results are similar.

I suggest you install ASM (the Adaptec RAID management software) and check the cache settings. You need the battery module to protect the RAID cache, but for testing it is OK to enable the cache without the battery.

And please add your email address to the backup job so you get the log via email and can see what's going on.
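The cache settings can also be inspected from the shell with arcconf, the Adaptec CLI that ships alongside ASM - a hedged sketch, assuming a single controller numbered 1 (adjust if yours differs):

```shell
#!/bin/bash
# Query the Adaptec controller, if the CLI is installed:
#   GETCONFIG <ctrl> LD  - per-logical-drive info, incl. read/write cache
#   GETCONFIG <ctrl> AD  - adapter info, incl. battery/BBU status
have_arcconf=$(command -v arcconf || echo none)
if [ "$have_arcconf" != "none" ]; then
    arcconf GETCONFIG 1 LD
    arcconf GETCONFIG 1 AD
else
    echo "arcconf not found - install the Adaptec management tools first"
fi
```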
 

Attachments

  • raid10-cache-setting-adaptec-raid-manager-ASM.png (28.5 KB)
