You missed this bit:
You also need to copy the keyring to a predefined location.
Note that the file name needs to be the storage ID + .keyring. The storage ID is the expression after 'rbd:' in /etc/pve/storage.cfg, which is my-ceph-storage in the current example.
# cd /etc/pve/priv/
# mkdir ceph
#...
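For illustration only, assuming the default admin keyring at /etc/ceph/ceph.client.admin.keyring and the my-ceph-storage ID from above, the full sequence would look something like this (adjust the paths and the storage ID to your own setup):

# cd /etc/pve/priv/
# mkdir ceph
# cp /etc/ceph/ceph.client.admin.keyring ceph/my-ceph-storage.keyring   (assumed default keyring location)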
You need to mount the disk in Linux with the discard option; this can be set in the fstab file.
The filesystem will then automatically issue a discard on every delete/change. However, there is a high overhead to this, so it is often best left to the weekly fstrim cron job built into most Linux OS's.
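As a rough sketch, assuming an ext4 root filesystem on /dev/sda1 inside the VM, the fstab entry with the discard option would look like:

/dev/sda1   /   ext4   defaults,discard   0   1

Alternatively, leave discard out of fstab and rely on a periodic trim instead, e.g. on systemd-based distributions:

# systemctl enable fstrim.timer
# fstrim -av   (one-off manual trim of all mounted filesystems that support it)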
It's the same issue as the post I linked from 2014, so I wanted to check whether that had since been resolved and this is a bug, or whether it is still the same behaviour as in the 2014 post.
Happy to log a bug if that's what is required. As I said, if it's an empty RBD image with no file-system it is fine, however...
A 2-node Ceph cluster does work, however it's recommended to always use a replication of 3 with Ceph.
That is something you can't really do too well with 2 nodes, and you will always require a 3rd small node for quorum.
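For reference, replication is set per pool. A minimal sketch, assuming a pool named rbd:

# ceph osd pool get rbd size
# ceph osd pool set rbd size 3
# ceph osd pool set rbd min_size 2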
Correct. So an RBD disk with a size of 200GB, but which for example only has 50GB in use before a disk move, will show as only using 50GB within Ceph ("rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'"), yet when doing a disk move the progress bar will show it "moving" the full 200GB...
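On reasonably recent Ceph releases you can also compare provisioned vs. actual usage directly (same example image as above, rbd/toto):

# rbd info rbd/toto   (provisioned size)
# rbd du rbd/toto     (provisioned and used size per image)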
So I have done a few tests, and it seems that whether the VM is on or off the full image is migrated. From what I have heard it is possible with Ceph to ignore "whitespace" (unallocated space) when doing an RBD copy/move; see the sketch after the list below.
1/ If it's an empty RBD disk with no file-system it remains 0.
2/ If it's an RBD disk with a file-system it will take up...
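For comparison, an RBD-level copy can be sketched like this (assuming a pool named rbd and the image toto; the target name toto-copy is hypothetical, and whether unallocated extents are skipped depends on the Ceph release):

# rbd cp rbd/toto rbd/toto-copy
# rbd diff rbd/toto-copy | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'   (check how much was actually written)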
Hello,
Wondering if the drive move (drive mirror) from RBD to RBD is thin provisioned, as Ceph is by default, or if the whole disk size is written to the new RBD storage mount.
I have found this : http://pve.proxmox.com/pipermail/pve-devel/2014-October/012898.html
However it does not state if...
With Ceph you need to have free storage / disks available on each node that you can convert into OSDs.
You can then set up a storage pool across multiple servers.
If you use Ceph and have 4 nodes you can lose any one of them without issue; if you use a single node for storage and you lose that node, you will lose the VMs no matter what.
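A minimal sketch of that workflow on Proxmox VE (the disk /dev/sdb and the pool name my-ceph-pool are placeholders; older releases use pveceph createosd / createpool instead of the sub-command form):

# pveceph osd create /dev/sdb                             (repeat on each node with a spare disk)
# pveceph pool create my-ceph-pool --size 3 --min_size 2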