btw, you are running snapshot-based mirroring and not journal-based on the pool.
{
  "name": "<image>",
  "global_id": "0af1a493-f8dd-483c-8b38-779f1f10be15",
  "state": "up+stopped",
  "description": "local image is primary",
  "daemon_service": {
    "service_id": "1271169854"...
@hainh I might actually be wrong here. I tried it out now and rbd_discard_granularity_bytes cannot be set to < 4096. Setting rbd_skip_partial_discard = false sets rbd_discard_granularity_bytes = 0 internally.
It's easy to see this with:
rbd journal inspect --image $IMAGE --verbose
Entry: tag_id=1...
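In other words, as far as I can tell these two client-side settings end up meaning the same thing (e.g. under [global] in the client's ceph.conf):
rbd skip partial discard = false
rbd discard granularity bytes = 0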
@hainh
The file is /etc/pve/priv/ceph/<storage_name>.conf and the config is rbd discard granularity bytes = 0
A good place to add it is under [global].
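So, as an example, /etc/pve/priv/ceph/<storage_name>.conf would end up with something like this (on top of whatever is already in the file):
[global]
rbd discard granularity bytes = 0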
@hainh thanks. Does it work with the setting? Note that it's a client config, not a server one, so you need to set it in the conf on the Proxmox host.
If you could confirm the fix that would be awesome!
I should note that there may be hidden dragons here; nothing confirmed, but at least this...
Note that this in effect disables discards, since it produces the same outcome as setting rbd discard granularity bytes = 0.
I made a fix that stops the misaligned discards from entering the journal. Hopefully that will make sense, or it will be modified into a proper fix for the problem.
This has been an issue since...
@hainh Yeah,
Add
to the ceph conf.
In our case, AioDiscard requests generated by fstrim (and, I guess, the ext4 journal) made JournalMetadata not update the rbd image properly. Thus journal_data objects were added and never removed.
There's some more info here: https://tracker.ceph.com/issues/57396...
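If you want to check whether you are hitting the same thing, listing the journal objects in the pool should show them piling up over time (pool name is a placeholder):
rados -p <pool> ls | grep journal_data | wc -l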
I'm trying to get some answers about the journal.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/SEOZ7Y2NTQQAKMA3PAN7WD2N7AUEIMH3/
Did a lot of debugging today; the local journal replay hits an error at some point and dies. Probably related to some I/O that the rbd-mirror handles...
Any chance people with issues could paste the output of
rbd journal status and rbd journal info?
It should show more info about the journal right now.
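Something along these lines, with your own pool and image names:
rbd journal status --pool <pool> --image <image>
rbd journal info --pool <pool> --image <image>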
So I just had an issue where a VM would not start (the start gets a timeout); the KVM process, however, remains.
I checked the above command and turns...
Hi,
I saw that the cluster name for Ceph is hard-wired in the Cephtools.pm file; is it possible to open it up so that you can specify the cluster name in storage.cfg?
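Something like this is what I have in mind; the cluster-name option is purely hypothetical (it doesn't exist today), the rest is just a normal rbd storage entry:
rbd: ceph-external
    monhost 192.168.1.10 192.168.1.11 192.168.1.12
    pool rbd
    content images
    cluster-name mycluster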
Thanks,
Josef
Hi,
I was wondering if it's possible to use the storage network instead of the cluster-network for migration, since you often have a lot more bandwidth there.
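To illustrate what I mean, a dedicated setting along these lines in /etc/pve/datacenter.cfg would be ideal (the option name is purely hypothetical):
migration-network: 10.20.30.0/24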
Cheers,
Josef
Hi,
Using Ceph and Proxmox VE, removing a VM's snapshot with a memory snapshot enabled fails miserably. A normal snapshot without a memory snapshot works splendidly.
The error at the end sounds scary; does it try to remove the base image?
Cheers,
Josef
dpkg -l pve*...
Hi,
It seems that doing a full clone is quite intensive and you lose sparseness on the images. Any reason for not using rbd flatten instead of qemu-img convert? I.e., first clone the image and then flatten it.
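Roughly what I mean, with pool, image and snapshot names being just examples:
rbd snap create rbd/vm-100-disk-0@clone-base
rbd snap protect rbd/vm-100-disk-0@clone-base
rbd clone rbd/vm-100-disk-0@clone-base rbd/vm-101-disk-0
rbd flatten rbd/vm-101-disk-0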
Cheers,
Josef