Is it possible to install Proxmox 2.0 with XFS?

alain

Renowned Member
May 17, 2009
France/Paris
Hi all,

I just installed Proxmox 2.0 beta 3 on a new Dell PE R510 server with 12x 1 TB drives and a Perc H700 controller (a rather good RAID controller), configured in RAID 10. As I have a very big array (6 TB), I chose to format using ext4 by typing 'linux ext4' at boot.

I am seeing very low drive I/O performance.

Code:
# pveperf
CPU BOGOMIPS:      36265.53
REGEX/SECOND:      780324
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    533.84 MB/sec
AVERAGE SEEK TIME: 6.43 ms
FSYNCS/SECOND:     310.32
DNS EXT:           71.99 ms
DNS INT:           14.11 ms

I think that FSYNCS/SECOND (~300) should be at least between 2000 and 3000. I know there have already been benchmarks showing that ext4 in some cases performs worse than ext3.
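For anyone curious what that FSYNCS/SECOND number roughly measures, here is a minimal sketch of the idea: repeatedly write a small block and fsync it, counting completed syncs per second. This is my own approximation (the helper name `fsyncs_per_second` is mine), not pveperf's actual code.

```python
import os
import tempfile
import time

def fsyncs_per_second(path=".", seconds=1.0):
    """Rough analogue of pveperf's FSYNCS/SECOND: write a small
    block and fsync it in a loop, counting completed fsyncs."""
    count = 0
    with tempfile.NamedTemporaryFile(dir=path) as f:
        deadline = time.time() + seconds
        while time.time() < deadline:
            f.write(b"x" * 512)
            f.flush()
            os.fsync(f.fileno())
            count += 1
    return count / seconds

print(round(fsyncs_per_second()))
```

Run it against a directory on the array you want to measure; the result depends heavily on the filesystem, barrier settings, and the controller's write cache.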

I would like to know whether it is possible to format with XFS by typing 'linux xfs' at boot. As this is a fresh install, reinstalling is not a problem. Would it give better performance?

Code:
# pveversion -v
pve-manager: 2.0-12 (pve-manager/2.0/784729f4)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 2.0-53
pve-kernel-2.6.32-6-pve: 2.6.32-53
lvm2: 2.02.86-1pve2
clvm: 2.02.86-1pve2
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-1
libqb: 0.6.0-1
redhat-cluster-pve: 3.1.7-1
pve-cluster: 1.0-12
qemu-server: 2.0-10
pve-firmware: 1.0-13
libpve-common-perl: 1.0-8
libpve-access-control: 1.0-2
libpve-storage-perl: 2.0-8
vncterm: 1.0-2
vzctl: 3.0.29-3pve3
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.1-1

Alain
 
I suggest you just create a small RAID volume for the default installation, e.g. 100 GB, install Proxmox VE 2.0 on it, and use the defaults (ext3).

Now you can run several filesystem tests on the rest. You can also use LVM block devices directly; you do not need a filesystem at all.

Beta3 is very flexible now, also supporting several storage directories for containers. ext3 is still the preferred filesystem for the container partition.
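The suggestion above could be sketched like this: carve a test logical volume out of free space, format it with the filesystem under test, and point pveperf at the mount point. The volume group name `pve`, the size, and the mount path are assumptions; this must run as root on the actual host with free extents in the VG.

```shell
# Create a test LV in the 'pve' volume group (name and size assumed)
lvcreate -L 100G -n fstest pve

# Format it with the filesystem to benchmark, e.g. XFS
mkfs.xfs /dev/pve/fstest

# Mount it and benchmark the mount point
mkdir -p /mnt/fstest
mount /dev/pve/fstest /mnt/fstest
pveperf /mnt/fstest

# Clean up when done
umount /mnt/fstest
lvremove -f /dev/pve/fstest
```

Repeating this with different filesystems (ext3, ext4, XFS) on the same LV gives a like-for-like comparison on the same spindles.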
 
Hi Tom,

Thanks for your answer. I see that there are indeed several options. In fact, the smallest RAID volume I can build is 1 TB (the size of one HD). And yes, I could afterwards add other RAID volumes, format them as XFS or another filesystem, extend the LVM volume group, create LVM logical volumes on raw space, and so on.

The point is that with such large volumes, ext3 is dangerous. I think it could take several days to check the entire filesystem, and you can't wait that long when you have important VMs to restart... But ext4 shows low performance, even on RAID 10... Another point I see now: I guess GRUB would not boot from an XFS root partition.

But in this case it is not so important. This machine is not meant to become a server for VMs but a backup server, hence the need for so much space. I only want to test the Proxmox 2.0 cluster, and the migration from 1.9 to 2.0 once it becomes available, before re-installing it as a backup server.

Thanks,
Alain
 
Smaller than 1 TB? With our Adaptec controllers I can create volumes of any size; that looks like a limitation of your RAID controller.

A fsck of a 1 TB ext3 volume takes some time and depends on several factors. In my environments it has never taken more than an hour.

So far all performance tests are better with ext3, so we have it as the default. With 2.0 we support installation based on Squeeze, so you can use any filesystem supported by the Debian installer. If you want to test XFS, go this way: http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Squeeze
 
I know it's not very important, but ext3/4, even with 'delayed allocation', takes many minutes just to create the filesystem, while XFS takes seconds regardless of size. Me, I've been using XFS on Linux since the 2.4 kernel and it has never let me down. And with a little tuning it has always been faster (for me) than ext3 (YMMV of course...).
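A quick way to get a feel for the mkfs-time difference without touching real disks is to time filesystem creation on sparse image files. Sizes and paths here are illustrative; it assumes e2fsprogs is installed, and xfsprogs for the XFS half.

```shell
# Time filesystem creation on 1 GB sparse image files (no root needed)
truncate -s 1G /tmp/ext4.img /tmp/xfs.img

time mkfs.ext4 -F -q /tmp/ext4.img

# Only try XFS if xfsprogs is installed
if command -v mkfs.xfs >/dev/null 2>&1; then
    time mkfs.xfs -f -q /tmp/xfs.img
fi

rm -f /tmp/ext4.img /tmp/xfs.img
```

On a 1 GB image both finish quickly; the minutes-versus-seconds gap the poster describes shows up on multi-terabyte devices, where ext3/4 has far more on-disk metadata to initialize up front.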
 
each filesystem has pros and cons, that's why they exist (ok, mainly because of their pros). and you should use the one fits into your needs.
 
Hello!

I have an R510 too, with 2 x 300 GB disks and 10 x 1 TB disks. I installed PVE on the 300 GB disks in RAID 1. On top of this, I built a VM running the "NAS system" on Debian 6. The remaining disks, available as a second RAID array, are formatted with LVM and mounted in the VM as a secondary disk. This way you have a "virtualized" NAS.

You could perhaps do the same with your 12 disks using two arrays: one small for PVE and a larger one for the data.

You could also add 2 disks inside the R510 to be used with PVE and use your 12 external disks for the data.
 
To the OP: Are you sure that you are running pveperf against your array ?

Specify your array as an argument to pveperf

# pveperf /dev/md0
 
Hi,

As I have only one raid array (all in raid 10), and it is hardware raid, the answer is obviously yes.
Code:
# pveperf /dev/md0
CPU BOGOMIPS:      36265.53
REGEX/SECOND:      811415
df: `/dev/md0': No such file or directory
df: no file systems processed
DNS EXT:           58.15 ms
DNS INT:           19.91 ms

Out of curiosity, I tried different things, for example:
Code:
# pveperf /dev/pve
CPU BOGOMIPS:      36265.53
REGEX/SECOND:      764307
HD SIZE:           5.85 GB (udev)
FSYNCS/SECOND:     34452.09
DNS EXT:           60.18 ms
DNS INT:           13.82 ms

So against the volume group. Much better, but unrealistic: /dev/pve is not a mounted filesystem, so pveperf ended up measuring the udev tmpfs that holds /dev (note the 'HD SIZE: 5.85 GB (udev)' line), and those fsyncs never touched the disks.

Alain
 
