[SOLVED] Which filesystem to use with a Proxmox SoftRAID server

Diogo Jesus

INTRODUCTION

In our company we have a server hosted by OVH (SoYouStart). The server is advertised as "SoftRaid" with 2x3 TB disks, and I don't know which RAID level it is.

So far we have used the ZFS filesystem on the Proxmox server, which was the most secure option we could find at the time. Now, after 5 months of usage, we are seeing some problems.


PROBLEM


  • Disk Space
  • Performance

Disk Space

In our server we have 6 TB of raw disk space. The problem is that with ZFS only 2.73 TiB is available.
After 5 months we have 1.1 TB used (mail server, website, etc.).

Performance

We also noticed that the server feels rather laggy lately. I don't know whether it is due to ZFS or to a hardware problem (I am checking the hardware tomorrow night).


LOOKING FOR


Now the question is: which filesystem would you advise? Should I keep ZFS? Even if ZFS is good, for big servers the halved capacity is a big problem (6 TB down to 2.73 TiB).

We do our backups to a separate NFS disk, and my idea is to run backups every 5 hours (overwriting the oldest one) to prevent data loss in case of a disk failure; a rough cron sketch is below. With ZFS mirroring we don't have that problem. Should we move to a Proxmox cluster and keep ZFS, even though it lacks performance and will cost us 3x more, just to gain disk space?
Proxmox is running containers for different services (mail, website, API, etc.).
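For reference, a rotating 5-hour backup like that could be driven by vzdump from cron. This is only a sketch under my assumptions: "Backup" is the NFS storage name from pvesm status, and keeping the five newest archives per guest is the retention I would pick.

Code:
# /etc/cron.d/vzdump-rotate (hypothetical file) -- back up all guests every 5 hours
# --maxfiles 5 keeps the five newest archives per guest and removes the oldest
0 */5 * * * root vzdump --all --mode snapshot --compress lzo --storage Backup --maxfiles 5 --quiet 1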
 
Hi,
In our server we have 6 TB of raw disk space. The problem is that with ZFS only 2.73 TiB is available.
Have you taken snapshots?
Without snapshots, ZFS needs only about 3% extra space for metadata, and you get that 3% back through compression.
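If you want to see how much space snapshots are holding, something like this should show it (just a sketch; rpool is the default pool name on a Proxmox ZFS install):

Code:
# list all snapshots and the space they pin
zfs list -t snapshot -o name,used,referenced
# per-dataset breakdown: live data vs. space kept alive only by snapshots
zfs list -r -o name,used,usedbydataset,usedbysnapshots rpool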

We also noticed that the server feels rather laggy lately. I don't know whether it is due to ZFS or to a hardware problem (I am checking the hardware tomorrow night)
I would dig a bit deeper and do some tuning, because ZFS is quite fast if it is set up correctly.

1. Limit the ARC with a minimum and a maximum.
2. Set swappiness to 1
https://pve.proxmox.com/wiki/ZFS_on_Linux#_limit_zfs_memory_usage

More information can be found here:
https://github.com/zfsonlinux/zfs/wiki/Admin-Documentation
https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
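As a concrete sketch of points 1 and 2 (the 4 GiB / 8 GiB ARC bounds are only placeholders, size them to your RAM):

Code:
# /etc/modprobe.d/zfs.conf -- ARC bounds in bytes (placeholder: 4 GiB min, 8 GiB max)
options zfs zfs_arc_min=4294967296 zfs_arc_max=8589934592
# with root on ZFS, refresh the initramfs afterwards: update-initramfs -u

# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
vm.swappiness = 1
# apply without reboot: sysctl -w vm.swappiness=1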
 
Hello, thank you for the reply.
Have you taken snapshots?
Without snapshots, ZFS needs only about 3% extra space for metadata, and you get that 3% back through compression.
We do backups (snapshots) daily and store them on an NFS storage. We also have a few CTs under snapshots, which we remove once the tests are done.

I would dig a bit deeper and do some tuning, because ZFS is quite fast if it is set up correctly.
It might also be a hardware failure; I will be checking that in a few hours (waiting for the night, since we are in production at this time). From what I have seen, ZFS is quite good. Also, like I said, we are using an OVH server, and they provide Proxmox with the ZFS filesystem under a beta program, so it might be that as well. I'm just trying to go through all possibilities to understand what is making our containers so laggy. I also realized that our swap is at 99-100% on the main server.
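For reference, the quick numbers I plan to collect before the hardware check (a sketch; the arcstats path is the standard ZFS-on-Linux one):

Code:
free -h                                    # RAM headroom and how full swap really is
cat /proc/sys/vm/swappiness                # current swappiness (Debian default is 60)
grep -E '^(size|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats   # current ARC size and its limits
zpool iostat -v rpool 5 3                  # per-disk I/O load, three 5-second samples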

I would dig a bit deeper and do some tuning, because ZFS is quite fast if it is set up correctly.

1. Limit the ARC with a minimum and a maximum.
2. Set swappiness to 1
https://pve.proxmox.com/wiki/ZFS_on_Linux#_limit_zfs_memory_usage

More information can be found here:
https://github.com/zfsonlinux/zfs/wiki/Admin-Documentation
https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
I'll take a look at this while running the hardware check.

Thank you,
Diogo Jesus


P.S:

Also, our company Proxmox is showing a completely wrong disk size on the Datacenter summary. We have 2x3 TB of disk, but for some reason that view shows 1.24 TiB of 13.35 TiB. Any idea? I tried to do some research but no luck so far. Thank you.
 
Can you send me the output of these commands from the company Proxmox?

Code:
lsblk -i
zpool status
zpool list
zfs list
pvesm status
 
Hello, here are the outputs

lsblk -i

Code:
root@learn:~# lsblk -i
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  2.7T  0 disk
|-sda1   8:1    0 1007K  0 part
|-sda2   8:2    0  2.7T  0 part
`-sda9   8:9    0    8M  0 part
sdb      8:16   0  2.7T  0 disk
|-sdb1   8:17   0 1007K  0 part
|-sdb2   8:18   0  2.7T  0 part
`-sdb9   8:25   0    8M  0 part
zd0    230:0    0    4G  0 disk [SWAP]

zpool status
Code:
root@learn:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 10h52m with 0 errors on Sun Feb 11 11:16:42 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors

zpool list
Code:
root@learn:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  2.72T   273G  2.45T         -    13%     9%  1.00x  ONLINE  -

zfs list
Code:
root@learn:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                          275G  2.37T    96K  /rpool
rpool/ROOT                     270G  2.37T   136K  /rpool/ROOT
rpool/ROOT/pve-1               197G  2.37T   197G  /
rpool/ROOT/subvol-100-disk-1  2.41G  5.70G  2.30G  /rpool/ROOT/subvol-100-disk-1
rpool/ROOT/subvol-101-disk-1  11.4G  96.6G  11.4G  /rpool/ROOT/subvol-101-disk-1
rpool/ROOT/subvol-102-disk-2  1.37G  6.63G  1.37G  /rpool/ROOT/subvol-102-disk-2
rpool/ROOT/subvol-103-disk-1  15.3G  84.7G  15.3G  /rpool/ROOT/subvol-103-disk-1
rpool/ROOT/subvol-104-disk-1  4.41G  3.66G  4.34G  /rpool/ROOT/subvol-104-disk-1
rpool/ROOT/subvol-105-disk-1  1.09G  30.9G  1.09G  /rpool/ROOT/subvol-105-disk-1
rpool/ROOT/subvol-106-disk-1  5.72G  26.4G  5.63G  /rpool/ROOT/subvol-106-disk-1
rpool/ROOT/subvol-107-disk-1  4.40G  27.8G  4.25G  /rpool/ROOT/subvol-107-disk-1
rpool/ROOT/subvol-108-disk-1   712M  7.30G   712M  /rpool/ROOT/subvol-108-disk-1
rpool/ROOT/subvol-109-disk-1   877M  7.14G   877M  /rpool/ROOT/subvol-109-disk-1
rpool/ROOT/subvol-110-disk-1  13.7G  86.3G  13.7G  /rpool/ROOT/subvol-110-disk-1
rpool/ROOT/subvol-111-disk-1  2.47G  30.1G  1.87G  /rpool/ROOT/subvol-111-disk-1
rpool/ROOT/subvol-300-disk-1  5.77G  44.3G  5.71G  /rpool/ROOT/subvol-300-disk-1
rpool/ROOT/subvol-900-disk-1  3.56G  96.4G  3.56G  /rpool/ROOT/subvol-900-disk-1
rpool/data                      96K  2.37T    96K  /rpool/data
rpool/swap                    4.25G  2.37T  3.05G  -

pvesm status
Code:
root@learn:~# pvesm status
Name             Type     Status           Total            Used       Available        %
Backup            nfs     active       524288000       220033536       304254464   41.97%
Container     zfspool     active      2823399340       283288408      2540110932   10.03%
Monthly           dir     active      2746628096       206517248      2540110848    7.52%
Weekly            dir     active      2746628096       206517248      2540110848    7.52%
local             dir     active      2746628096       206517248      2540110848    7.52%
snapshot          dir     active      2746628096       206517248      2540110848    7.52%

Thank you for the help
 
Everything is correct. The reason you see "1.24 TiB of 13.35 TiB" is that the Datacenter summary counts every storage; it does not know that they all sit on the same underlying storage. If you add up the Total and Used columns from the pvesm status output above, you get exactly those numbers.

You can configure which storages are counted: there is a gear icon on the top right side, between your login name and the Documentation button.
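To make that concrete, summing the columns from the pvesm status output above reproduces the numbers in the summary:

Code:
# all six storages describe the same mirrored pool, and the summary just adds them up (KiB):
# Total: 524288000 + 2823399340 + 4*2746628096 = 14334199724 KiB ~= 13.35 TiB
# Used:  220033536 + 283288408  + 4*206517248  =  1329390936 KiB ~=  1.24 TiB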
 
Everything is correct. The reason you see "1.24 TiB of 13.35 TiB" is that the Datacenter summary counts every storage; it does not know that they all sit on the same underlying storage.

You can configure which storages are counted: there is a gear icon on the top right side, between your login name and the Documentation button.
Thanks. I also added vm.swappiness = 1 and removed all swap in the containers, and now swap usage went back to 0. Is it a good option to remove swap from all containers? So far I haven't seen any downside, but it has only been a few hours since I brought the server back up.
 
In over 90% of cases it is not good to actually use swap, but swapping is still better than killing a process.
The kernel must kill a process if it runs out of memory in order to guarantee stability.

So you should keep vm.swappiness = 1 to give the kernel the possibility to swap in case it runs out of memory.
 
In over 90% of cases it is not good to actually use swap, but swapping is still better than killing a process.
The kernel must kill a process if it runs out of memory in order to guarantee stability.

So you should keep vm.swappiness = 1 to give the kernel the possibility to swap in case it runs out of memory.
Right, vm.swappiness = 1 is set on the host, and since I always allocate more RAM than a CT actually needs, I removed the swap on the CTs. I also had 3 servers running with old snapshots that I had forgotten to delete; I removed them, hoping the CTs will be faster now.
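For anyone finding this later, a sketch of the commands involved (CT 100 and the snapshot name are only placeholders based on the zfs list output above):

Code:
pct set 100 --swap 0            # drop a container's swap allowance
pct config 100 | grep swap      # verify the change
zfs list -t snapshot            # find leftover snapshots
zfs destroy rpool/ROOT/subvol-100-disk-1@old-test   # delete one (snapshot name is made up)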

Thank you,
Diogo Jesus
 
