CentOS vs Debian - DAB or some orchestration?

pipapo

Member
Aug 6, 2016
Hello,

Before manually deploying all my containers, I would like to know if there is something that would make my life easier in the long run. Something along the lines of "if only I had known before..."

With Debian I sooner or later run into situations where I need to start using version pinning and deviate from the base repos, and I am not happy with the whole systemd situation. With CentOS, systemd just runs smoothly. Is CentOS really as slow as the Phoronix benchmark suggests?
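
For example, I end up writing pins like the following just to hold one package from backports while everything else stays on the base repo (the package and release names here are only an illustration):

Code:
cat > /etc/apt/preferences.d/nginx-backports <<'EOF'
Package: nginx
Pin: release a=jessie-backports
Pin-Priority: 900
EOF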

What would you do if you were to recreate most of your containers? Use some external orchestration/config management with CentOS (or even Alpine), write custom DAB templates, or just do things by hand (duplicate ZFS snapshots and copy configs)?
 
At the moment I use DAB to create my own specialized Debian templates and run them on ZFS with deduplication enabled, so there is no need to clone and maintain copies of the templates. For non-Debian (mostly Oracle Linux) I use my own templates, which I update when needed. The problem with RHEL-based systems is that you will not have 100% feature coverage, because you are not running a RHEL-based kernel. I have not noticed any slowness of RHEL-based systems, even compared to real hardware servers with 24+ cores and 128+ GB of RAM running them.
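
For reference, the DAB workflow itself is short; roughly like this, once dab.conf describes the template (the package selection below is only an example):

Code:
mkdir my-template && cd my-template   # directory holding your dab.conf
dab init                              # fetch the package indices for the configured suite
dab bootstrap                         # bootstrap the minimal Debian system
dab install openssh-server postfix    # add your specialized package selection
dab finalize                          # pack the finished container template
dab clean                             # remove build files, keep the template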

For configuration I use my own packages, which I wrote years ago for real hardware and older virtualization solutions.
 
Thank you LnxBil. I was not aware that I would be missing some features of the RHEL-based kernels. I will stick with Debian.

As for orchestration, I have decided to start using Ansible (just the base tool, no GUI etc.). It has minimal requirements on my hosts, VMs and services, and seems easily adaptable to my needs.
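
To give an idea of how small the footprint is, a first smoke test needs nothing more than a static inventory file (the host names below are placeholders):

Code:
printf '[containers]\nct1.example.com\nct2.example.com\n' > hosts.ini
ansible containers -i hosts.ini -m ping                                 # verify SSH and Python on the targets
ansible containers -i hosts.ini -b -m apt -a "name=vim state=present"   # ad-hoc package install as root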

Deduplication was one of the reasons I looked at ZFS in the first place, but right now I am not sure whether I will be using it. Many articles are rather sceptical about the space/RAM tradeoff of ZFS dedup. My benchmark on sample user data showed a dedup ratio of 10-15%, possibly rising slightly.
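
If anyone wants to estimate this on their own pool: zdb can simulate deduplication without actually enabling it (it reads every block, so expect it to take a while):

Code:
zdb -S zpool    # prints a simulated DDT histogram; the summary at the end includes the estimated dedup ratio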
 
Yes, deduplication strongly depends on the data. What you can do is use deduplication on only one filesystem and all its children; e.g. in Proxmox I use a dedicated filesystem zpool/container-dedup only for my container root filesystems. If you have a lot of them, you will see a very high deduplication rate (if you use only Debian, for example), and all "naturally unique" files go to another filesystem without deduplication. This is a good tradeoff: use it for the highly deduplicable stuff and not for everything else. You will not have many blocks with only one reference, so you will not waste ARC memory and entries in the DDT. Yes, it is a bit more work, but it pays off.
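
A minimal sketch of that layout (dataset names follow the example above; container filesystems created below the dedup dataset inherit the property):

Code:
zfs create -o dedup=on zpool/container-dedup           # dedup only for this subtree, not pool-wide
zfs create zpool/data                                  # "naturally unique" data stays dedup-free
zfs get dedup zpool zpool/container-dedup zpool/data   # verify: "on" only for the container tree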
 
Here is a real-life example of 6 LXC machines based on Jessie:

Code:
root@i7-proxmox:~# zfs list -t all -o type,name,used,refer,lused,lrefer,compressratio,mountpoint;echo; zpool list;echo; zpool status -D zpool;echo; ./zfs_dedup_memory_usage.rb
TYPE        NAME                                                   USED  REFER  LUSED  LREFER  RATIO  MOUNTPOINT
filesystem  zpool                                                 2,49G    96K  3,81G     40K  1.73x  /zpool
filesystem  zpool/proxmox                                         2,45G   104K  3,80G   40,5K  1.73x  /zpool/proxmox
filesystem  zpool/proxmox/subvol-1002-disk-1                       878M   877M  1,29G   1,29G  1.66x  /zpool/proxmox/subvol-1002-disk-1
snapshot    zpool/proxmox/subvol-1002-disk-1@2016-08-13_01-07-19   104K   877M      0   1,29G  1.66x  -
filesystem  zpool/proxmox/subvol-1004-disk-1                       231M   231M   371M    371M  1.76x  /zpool/proxmox/subvol-1004-disk-1
snapshot    zpool/proxmox/subvol-1004-disk-1@2016-08-13_01-07-19    96K   231M      0    371M  1.76x  -
filesystem  zpool/proxmox/subvol-1005-disk-1                       259M   259M   422M    422M  1.80x  /zpool/proxmox/subvol-1005-disk-1
snapshot    zpool/proxmox/subvol-1005-disk-1@2016-08-13_01-07-19   104K   259M      0    422M  1.80x  -
filesystem  zpool/proxmox/subvol-1006-disk-1                       346M   291M   593M    475M  1.87x  /zpool/proxmox/subvol-1006-disk-1
snapshot    zpool/proxmox/subvol-1006-disk-1@pre-owncloud         55,0M   290M      0    474M  1.81x  -
snapshot    zpool/proxmox/subvol-1006-disk-1@2016-08-13_01-07-19   104K   291M      0    475M  1.81x  -
filesystem  zpool/proxmox/subvol-1007-disk-1                       582M   581M   815M    815M  1.68x  /zpool/proxmox/subvol-1007-disk-1
snapshot    zpool/proxmox/subvol-1007-disk-1@2016-08-13_01-07-19   120K   581M      0    815M  1.68x  -
filesystem  zpool/proxmox/subvol-1008-disk-1                       217M   217M   361M    361M  1.86x  /zpool/proxmox/subvol-1008-disk-1
snapshot    zpool/proxmox/subvol-1008-disk-1@2016-08-13_01-07-19    96K   217M      0    361M  1.86x  -

NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpool   175G  1,32G   174G     18,0G     0%     0%  1.99x  ONLINE  -

  pool: zpool
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool       ONLINE       0     0     0
          sdb2      ONLINE       0     0     0

errors: No known data errors

dedup: DDT entries 72123, size 527 on disk, 170 in core

bucket              allocated                       referenced         
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    44,9K   1,22G    758M    816M    44,9K   1,22G    758M    816M
     2    12,5K    224M    131M    152M    25,8K    479M    278M    321M
     4    12,7K    374M    204M    222M    64,8K   1,93G   1,05G   1,14G
     8      291   9,80M   6,71M   7,20M    2,72K   88,9M   60,4M   65,1M
    16       44    966K    750K    848K      957   18,3M   14,3M   16,4M
    32        6     70K     60K     76K      220   2,46M   2,11M   2,69M
    64        4    130K   5,50K     16K      403   15,9M    642K   1,57M
   128        1    128K      4K      4K      132   16,5M    528K    528K
   512        2      1K      1K      8K    1,20K    616K    616K   4,81M
Total    70,4K   1,81G   1,08G   1,17G     141K   3,75G   2,14G   2,34G


DDT size in memory: 11.69 MB (72123 * 170 bytes)
DDT size on disk: 36.21 MB (72123 * 527 bytes)
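
The last two lines are produced by my zfs_dedup_memory_usage.rb script. A rough shell equivalent, assuming the "dedup: DDT entries ..." summary line that zpool status -D prints above, would be:

Code:
# multiply the DDT entry count by the per-entry sizes reported by zpool status -D
zpool status -D zpool | awk '/dedup: DDT entries/ {
    printf "DDT size in memory: %.2f MB (%d * %d bytes)\n", $4*$9/1048576, $4, $9
    printf "DDT size on disk:   %.2f MB (%d * %d bytes)\n", $4*$6/1048576, $4, $6
}'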

Still, it strongly depends on what you have. Also, try to use it only on SSDs, because it gets very, very slow for big files. I use it on e.g. Hetzner/OVH servers with 2x250 GB or 2x500 GB SSDs. You need deduplication there to pack more servers onto the bare metal, and it works great.
 
