Nothing works anymore

ext4

What does fsync mean? How can I increase it? The machine does not have a HW RAID, and I have always had bad experiences with SW RAID.

Code:
root@kvm:/home/le# mount
/dev/mapper/vg0-root on / type ext4 (rw)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext2 (rw)
/dev/mapper/vg0-backup on /backup type ext4 (rw)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)
 
Ask Google for the details about fsync.
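
In short: fsync() forces a write all the way down to the physical disk before it returns, and pveperf's FSYNCS/SECOND value is roughly how many such synchronous writes your storage can complete per second. A rough way to check it outside of pveperf (just a sketch; the test file path is only an example):

Code:
# each 4 KB block is forced to disk (dsync) before the next write starts,
# so e.g. 1000 blocks in 4 seconds works out to about 250 fsyncs/second
dd if=/dev/zero of=/root/fsync-test bs=4k count=1000 oflag=dsync
rm /root/fsync-test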

Looks like you ignored two recommendations regarding your storage stack:

1. use a HW RAID
2. use ext3

Fix both.
 
ext4 is quite slow in some combinations (slow fsyncs), and ext3 is also the most stable option for OpenVZ.
 
Updated to RAID 5 and ext3 now.

This is my pveperf:
Code:
root@kvm:/home/le# pveperf 
CPU BOGOMIPS:      52794.01
REGEX/SECOND:      1545265
HD SIZE:           49.61 GB (/dev/sda2)
BUFFERED READS:    274.44 MB/sec
AVERAGE SEEK TIME: 6.74 ms
FSYNCS/SECOND:     232.19
DNS EXT:           57.00 ms
DNS INT:           12.28 ms

You said it should be at least 1000 fsyncs/sec?
What is wrong here?
 
I am not very familiar with administering RAID controllers.

It seems that there is no battery status and no write-back cache:

Code:
root@kvm:/home/le# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -aAll
                                     
Adapter 0: Get BBU Status Failed.
Adapter 0: Get BBU ChargeTime Failed.
Adapter 0: Get BBU Capacity Info Failed.
Adapter 0: Get BBU Status Info Failed.
Adapter 0: Get BBU Properties Failed.

Exit Code: 0x00
root@kvm:/home/le# /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
                                     

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3
Size                : 5.457 TB
Parity Size         : 2.728 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 3
Span Depth          : 1
Default Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Is VD Cached: No



Exit Code: 0x00

Is this a faulty configuration? (It is the default my hoster gave me.)
 
HW RAID is expensive! :P

Speed costs money, how fast do you want to go? :D

You could run different VMs on different disks if a RAID card is out of budget.
That would be better than running all the VMs on one disk.
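
For example, a second disk could be formatted and mounted as an extra storage directory for some of the VMs. A minimal sketch (device name and mount point are just examples; the directory still has to be added as storage in Proxmox afterwards):

Code:
# format and mount a second disk for additional VM storage
mkfs.ext3 /dev/sdb1
mkdir -p /var/lib/vz2
mount /dev/sdb1 /var/lib/vz2
echo "/dev/sdb1 /var/lib/vz2 ext3 defaults 0 2" >> /etc/fstab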

A RAID card has other benefits besides spreading the load and surviving disk failures.
My Areca 1880i cards have 4 GB of RAM on them; coupled with the optional battery, I can write 2 GB/sec for 2 seconds.
So when some VM decides to write a bunch of data, it goes to the battery-backed RAM cache quickly rather than holding up IO to the other VMs for an extended period of time.

Code:
# pveperf
CPU BOGOMIPS:      79996.36
REGEX/SECOND:      1259012
HD SIZE:           27.31 GB (/dev/mapper/pve-root)
BUFFERED READS:    1122.68 MB/sec
AVERAGE SEEK TIME: 6.33 ms
FSYNCS/SECOND:     4728.64
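
By the way, your MegaCli output above already shows why your fsync rate is low: the logical drive runs in WriteThrough mode with "No Write Cache if Bad BBU", and the BBU queries fail. If a battery gets installed (or your hoster confirms one is there), switching the logical drive to write-back should help a lot. A sketch of what I would try (I believe these MegaCli options exist, but double-check against your controller's documentation before changing anything):

Code:
# show the current cache policy of all logical drives
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LAll -aAll
# switch to write-back (only safe with a working BBU)
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -LAll -aAll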

The Proxmox team does not support software RAID, but I have used it for over a decade on machines where IO performance is not as important.
It can be a good option for experienced users who are on a tight budget.

The problem with software RAID is that when things go wrong, you are on your own to figure it out.
There is no RAID card manufacturer to give you support and secret codes that bring your array back to life; just you, man pages and your keyboard.
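
If you do go the software RAID route, it pays to learn the basic mdadm commands before anything breaks. A minimal sketch (device names are just examples):

Code:
# create a simple two-disk RAID 1 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# check the state of the array
cat /proc/mdstat
mdadm --detail /dev/md0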
 
Wow, nice configuration! :D

Actually I had no performance problems with my VMs, but if you read the whole thread, I had other problems. ;)

Now I am on HW RAID 5 and hope they are solved. :)


Edit: Just thought of 3 SSDs in a RAID array! :D This should be fast enough, hehe.
 
:(((

Now I have ext3 AND HW RAID, and I am getting the same errors. This morning the host made its backups; some hours later it was not reachable at all.
Looking in the log told me the same as mentioned above?!? What is going wrong here??
 
Hi, I managed to solve my problem.
Going back to the original Proxmox Debian (5.0 Lenny) solved it. Possibly there are problems with the newer LVM tools.
You can see similar issues in some Proxmox 2.0 beta threads.
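
For anyone else hitting this: vzdump's snapshot backups do an lvcreate -s behind the scenes, so creating and removing a snapshot by hand exercises the same LVM code path. A sketch (volume names taken from my setup above; adjust as needed):

Code:
# create a small test snapshot like vzdump does during a snapshot backup
lvcreate --size 1G --snapshot --name vzsnap-test /dev/vg0/root
# remove it again
lvremove -f /dev/vg0/vzsnap-test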

So are they confirmed? Is someone working on it? Will this be fixed in stable 2.0?

They are not going to support Lenny anymore. Are all Proxmox users still using Lenny? I think this could be a security risk.
Any solutions for this?