BackupPC and chkrootkit scans drag down VM and hypervisor performance

timeJunky

New Member
Dec 21, 2010
I try to back up my VMs individually. Unfortunately, the virtual machines suffer in performance during the backup run. At the moment, only one rsync connection at a time is configured per VM.

Besides that, I also run into trouble within the VMs whenever I start a scan with chkrootkit.

Any idea what to reconfigure?
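
For reference, one mitigation I am considering (a sketch only; the bandwidth cap, priorities, and paths are my assumptions, not my current BackupPC config) is to run the per-VM rsync with a lower I/O priority and a bandwidth limit:

Code:
# Hypothetical throttled rsync call for one VM's backup:
# - ionice -c3 puts rsync into the "idle" I/O scheduling class
# - nice -n19 lowers its CPU priority
# - --bwlimit caps the transfer rate in KB/s
ionice -c3 nice -n19 rsync -a --bwlimit=10000 root@vm1:/srv/data/ /backup/vm1/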

----------------

8 x Intel(R) Xeon(R) CPU L5630 @ 2.13GHz
24GB RAM
1.7TB ... partitioned automatically
Version (package/version/build): pve-manager/1.7/5323
Kernel Version: Linux 2.6.32-4-pve #1 SMP Thu Oct 21 09:35:29 CEST 2010
 
What RAID/hard drive setup do you have? And post the result of 'pveperf'. Run this little benchmark tool when the server is idle.

Are you talking about OpenVZ or KVM?
 
Hi tom,

Thanks. I have a RAID 10 system.


RAID controller: LSI Logic / Symbios Logic LSI MegaSAS 9260 (rev 03)

Everything that I mounted is shown below.

Code:
/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
/var/lib/vz/raid on /raid type none (rw,bind)

Executing fdisk -l returns this information:


Code:
Disk /dev/sda: 1999.3 GB, 1999307276288 bytes
255 heads, 63 sectors/track, 243068 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot Start    End     Blocks      Id System
/dev/sda1   *      1     66     524288      83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2         66 243068 1951919390     8e Linux LVM

Disk /dev/dm-0: 103.1 GB, 103079215104 bytes
255 heads, 63 sectors/track, 12532 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 1879.6 GB, 1879585062912 bytes
255 heads, 63 sectors/track, 228513 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn't contain a valid partition table

The pveperf results:

Code:
CPU BOGOMIPS:      34134.13
REGEX/SECOND:      819721
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    177.08 MB/sec
AVERAGE SEEK TIME: 9.97 ms
FSYNCS/SECOND:     49.93
DNS EXT:           50.11 ms

In Proxmox I configured all virtual machines as KVM guests, and the virtual machines were still running while pveperf executed.
I noticed that while BackupPC runs, the FSYNCS/SECOND rate drops very low.


Regards,

TimeJunky
 
Hmm, I expected the FSYNCS/SECOND value to be higher even while BackupPC runs. Am I wrong?
 
Code:
pm:/home/xyz# pveperf
CPU BOGOMIPS:      34134.13
REGEX/SECOND:      823830
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    2.48 MB/sec
AVERAGE SEEK TIME: 65.42 ms
FSYNCS/SECOND:     0.65
DNS EXT:           64.50 ms
While running:
Code:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3171 root      20   0 5668m 1.1g 1432 S    6  4.8 134:38.27 kvm
 3153 root      20   0 1626m 1.3g 1480 S    4  5.7 120:01.45 kvm
 3201 root      20   0  863m 335m 1436 R    4  1.4  96:15.08 kvm
 3068 root      20   0 1625m 519m 1416 S    4  2.2 101:28.57 kvm
 3138 root      20   0  864m 552m 1440 S    4  2.3 122:24.75 kvm
 3083 root      20   0  474m 169m 1372 R    4  0.7  76:57.04 kvm
 3089 root      20   0  863m 300m 1416 S    4  1.2  90:33.30 kvm
23527 backuppc  20   0  229m 177m 1408 D    3  0.7   2:28.50 BackupPC_dump
23519 backuppc  20   0 46900 7128 2184 S    1  0.0   0:08.93 ssh
23513 backuppc  20   0 56096  10m 2228 D    1  0.0   0:08.43 BackupPC_nightl
 2763 root      20   0  302m 5152 1456 S    0  0.0   2:49.86 fail2ban-server
 3047 root      20   0 5861m 1.1g 1416 D    0  4.8 143:19.16 kvm
 3186 root      20   0  482m 269m 1416 D    0  1.1  82:22.53 kvm
23512 backuppc  20   0 56096  10m 2236 D    0  0.0   0:08.29 BackupPC_nightl
23597 root      20   0 19064 1416 1004 R    0  0.0   0:00.11 top
    1 root      20   0  8352  676  632 S    0  0.0   0:01.99 init
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
    3 root      RT   0     0    0    0 S    0  0.0   0:00.08 migration/0
    4 root      20   0     0    0    0 S    0  0.0   0:03.82 ksoftirqd/0
Second snapshot:

Code:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3047 root      20   0 5861m 1.1g 1416 S    7  4.8 143:23.61 kvm
23527 backuppc  20   0  229m 177m 1408 D    5  0.7   2:33.30 BackupPC_dump
 3171 root      20   0 5668m 1.1g 1432 R    5  4.8 134:43.72 kvm
 3201 root      20   0  863m 335m 1436 S    5  1.4  96:18.24 kvm
 3068 root      20   0 1625m 519m 1416 S    4  2.2 101:31.37 kvm
 3105 root      20   0 1626m 589m 1452 S    4  2.4  93:53.77 kvm
 3138 root      20   0  864m 552m 1440 S    4  2.3 122:28.10 kvm
 3089 root      20   0  863m 300m 1416 S    4  1.2  90:35.85 kvm
 3153 root      20   0 1626m 1.3g 1480 S    4  5.7 120:04.37 kvm
 3186 root      20   0  482m 269m 1416 S    4  1.1  82:25.21 kvm
 3083 root      20   0  474m 169m 1372 S    4  0.7  76:59.45 kvm
23519 backuppc  20   0 46900 7128 2184 S    2  0.0   0:10.86 ssh
23512 backuppc  20   0 56096  10m 2236 D    0  0.0   0:08.62 BackupPC_nightl
23513 backuppc  20   0 56096  10m 2228 D    0  0.0   0:08.76 BackupPC_nightl
23515 backuppc  20   0  150m  99m 2484 S    0  0.4   1:28.97 BackupPC_dump
23597 root      20   0 19064 1416 1004 R    0  0.0   0:00.27 top
    1 root      20   0  8352  676  632 S    0  0.0   0:01.99 init
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
    3 root      RT   0     0    0    0 S    0  0.0   0:00.08 migration/0
    4 root      20   0     0    0    0 S    0  0.0   0:03.82 ksoftirqd/0
    5 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
    6 root      RT   0     0    0    0 S    0  0.0   0:00.02 migration/1
    7 root      20   0     0    0    0 S    0  0.0   0:00.59 ksoftirqd/1
Further system details:

Linux this-domain.com 2.6.32-4-pve #1 SMP Thu Oct 21 09:35:29 CEST 2010 x86_64 GNU/Linux
Debian lenny

Code:
pm:/home/xyz# cat /etc/mtab
/dev/mapper/pve-root / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw 0 0
/dev/sda1 /boot ext3 rw 0 0
/var/lib/vz/raid /raid none rw,bind 0 0

Code:
pm:/home/xyz# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   95G  9.2G   81G  11% /
tmpfs                  12G     0   12G   0% /lib/init/rw
udev                   10M  140K  9.9M   2% /dev
tmpfs                  12G  4.0K   12G   1% /dev/shm
/dev/mapper/pve-data  1.7T  110G  1.6T   7% /var/lib/vz
/dev/sda1             504M   60M  419M  13% /boot
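
To see whether the array itself is saturated while the BackupPC processes sit in state D, I could also watch per-device utilization with iostat (a sketch on my side; the sysstat package is assumed, it is not installed here yet):

Code:
# Install sysstat on Debian lenny, then report extended per-device
# statistics every 2 seconds; %util close to 100 means the underlying
# array is the bottleneck during the backup.
apt-get install sysstat
iostat -dx 2 /dev/sda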
 
Why do you have such a slow fsyncs/sec rate? It seems you did not enable the write cache (write-back) on the RAID controller?
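
On an LSI MegaRAID controller like yours, you can inspect and change the cache policy with LSI's MegaCli tool (a sketch; the binary location and whether a BBU is present are assumptions on my side):

Code:
# Show the current cache policy of all logical drives on all adapters
MegaCli -LDGetProp -Cache -LAll -aAll
# Switch all logical drives to write-back (only safe with a healthy BBU
# or a UPS, since cached writes are lost on power failure otherwise)
MegaCli -LDSetProp WB -LAll -aAll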
 
OK, tom. We got it! Thank you again.

One of the RAID hard disks was defective. After replacing it we get the following result:

Code:
 pveperf
CPU BOGOMIPS:      34134.29
REGEX/SECOND:      810532
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    224.47 MB/sec
AVERAGE SEEK TIME: 9.71 ms
FSYNCS/SECOND:     458.93
DNS EXT:           43.07 ms
=> About 10x faster :)


We found the monitoring software for the corresponding RAID controller on the manufacturer's site:
http://www.lsi.com/storage_home/pro...id_sas/6gb_s_value_line/sas9260-4i/index.html
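
For future reference, the physical drive states can also be checked from the command line with the same MegaCli tool (assuming it is installed from that page), which would have revealed the failed disk earlier:

Code:
# List all physical drives and their firmware state; a failed disk shows
# up as something other than "Online, Spun Up"
MegaCli -PDList -aAll | grep -i 'firmware state'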

*** Problem SOLVED ***



Regards,
TimeJunky
 