Compression in RBD

Alexey Petrenko

New Member
Aug 13, 2018
Dear Members,
Is it possible to enable compression in rbd?

ceph osd pool set mypool compression_algorithm snappy
ceph osd pool set mypool compression_mode aggressive

These commands don't seem to work.
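(As far as I know, pool-level compression only takes effect on BlueStore OSDs, so it is worth checking the object store backend first; a quick check, assuming an OSD id of 10 as below:)

ceph osd metadata 10 | grep osd_objectstore
# should report "osd_objectstore": "bluestore"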
 
I created a pool "mypool"
[screenshot: pool "mypool" created]

Then I changed compression_algorithm and compression_mode and added the pool as storage.

root@proxmox01:/var/lib/ceph/osd/ceph-10# ceph osd pool set mypool compression_algorithm lz4
set pool 12 compression_algorithm to lz4
root@proxmox01:/var/lib/ceph/osd/ceph-10# ceph osd pool set mypool compression_mode force
set pool 12 compression_mode to force
root@proxmox01:/var/lib/ceph/osd/ceph-10# ceph osd pool get mypool all
size: 3
min_size: 2
crash_replay_interval: 0
pg_num: 64
pgp_num: 64
crush_rule: replicated_rule
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
auid: 0
fast_read: 0
compression_mode: force
compression_algorithm: lz4


[screenshot]
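(A way to check whether BlueStore actually compresses anything is to look at the perf counters on the OSD node, e.g.:)

ceph daemon osd.10 perf dump | grep bluestore_compressed
# bluestore_compressed / _allocated / _original stay at 0 if nothing gets compressed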


Then I create a virtual machine with RHEL7 in "mypool". After the virtual machine is created, the "used total" is ~ 2GB.
upload_2018-8-14_11-30-56.png

Then I created a 6 GB zero-filled file.

[screenshot]
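(Such a file can be created with dd; the path here is just an example:)

dd if=/dev/zero of=/root/zero.img bs=1M count=6144   # 6 GB of zeros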

And the used space increased by 6 GB.
[screenshot]
 
root@proxmox01:/# ceph df detail
GLOBAL:
SIZE AVAIL RAW USED %RAW USED OBJECTS
14307G 14209G 100040M 0.68 20394
POOLS:
NAME ID QUOTA OBJECTS QUOTA BYTES USED %USED MAX AVAIL OBJECTS DIRTY READ WRITE RAW USED
VMs 1 N/A N/A 65193M 0.94 6737G 18309 18309 1118k 764k 127G
mypool 12 N/A N/A 8280M 0.12 4491G 2085 2085 17195 28064 16560M
 
It looks correct to me. You have 127 GB + 16.5 GB = 143.5 GB raw used by the pools, but only ~100 GB raw used in the global cluster-wide view.
 
I added a second 32 GB disk to the virtual machine and wrote a 31 GB zero-filled file to it.
[screenshot]
The raw used in the global cluster-wide view increased by 17 GB. This is very strange, because with compression enabled it should not grow by that much.
[screenshot]
 
32 GB * 3 (replicas) = 96 GB; depending on the filesystem in the VM, the amount that actually gets written varies. If you run fstrim, you will see the same amount disappear again, because the filesystem in the VM already takes care of zero writes.
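(fstrim usage in the guest, for reference; note that in Proxmox the VM disk needs the discard option enabled for trims to reach the RBD layer:)

fstrim -v /   # trim the root filesystem and print the amount discarded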

To have a proper test, run fio directly against the cluster.
http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
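(A sketch of such a test with fio's rbd engine; the image name and the compressibility setting are examples, and the image has to exist first:)

rbd create mypool/fio_test --size 4096
fio --ioengine=rbd --clientname=admin --pool=mypool --rbdname=fio_test \
    --rw=write --bs=4M --size=2G --name=compress_test \
    --refill_buffers --buffer_compress_percentage=50   # ~50% compressible data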

Compression options in luminous
http://docs.ceph.com/docs/luminous/rados/operations/pools/#set-pool-values
http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/#inline-compression
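(For completeness, the OSD-wide defaults can also be set in ceph.conf; pool-level settings override them. The values below are just illustrations:)

[osd]
bluestore compression algorithm = lz4
bluestore compression mode = aggressive
bluestore compression required ratio = .875   # store compressed only if blob shrinks to <= 87.5%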
 
I deleted the old virtual machine and created a new one with a single disk. The new virtual machine uses 1.65 GiB in "mypool".

[screenshot]

"ceph df detail" shows:
[screenshot]

Then I created a 6 GB zero-filled file in the virtual machine.
[screenshot]

Now the virtual machine uses 7.63 GiB in the pool.
[screenshot]

In "ceph df detail", the global raw used increased by 3 GB.
[screenshot]

Then I ran the fstrim command in the virtual machine...
[screenshot]
But the space was not released.
[screenshot]
[screenshot]

After the file was deleted, the used space went down.
[screenshot]
[screenshot]
The "fstrim" command works, but compression does not. :(
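(To cross-check actual allocation per image, rbd du is available in Luminous; the image name here is hypothetical:)

rbd du mypool/vm-100-disk-1   # shows provisioned vs. actually used size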
 
