Update:
The same is true when adding a new HDD to an existing VM or when creating an altogether new VM.
So, this is not limited to migration.
If this has to do with wrong permissions (as the error message suggests it might), where would I start looking?
ceph auth is set to optional...
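My first guess would be to check the auth settings and the caps of the client the storage uses, roughly like this (client.admin is just an example, it may be a different client in your setup):
# which auth modes does the cluster actually ask for?
~# grep auth /etc/ceph/ceph.conf
# caps of the client the rbd storage uses
~# ceph auth get client.admin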
Hi all,
I just upgraded my cluster to Proxmox VE 6.1 and wanted to give the updated krbd integration a spin.
So, I set my rbd storage to use krbd and then tried to migrate a VM to a different node. This is what I got:
2019-12-08 11:49:22 starting migration of VM 211 to node 'srv01'...
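For reference, the rbd entry in my /etc/pve/storage.cfg now looks roughly like this (storage ID and pool name are placeholders for what I actually use):
rbd: ceph-vm
        content images
        krbd 1
        pool rbd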
I used the UI to set up the cephfs storage. Is there really no mention of kernel vs. fuse anywhere in the UI? (It is not mentioned in the help page either.)
I manually set fuse to 1 in storage.cfg and remounted. Now every client is shown as luminous.
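For the record, the cephfs entry in /etc/pve/storage.cfg now looks roughly like this (storage ID and content types are simply what I happen to use):
cephfs: cephfs
        path /mnt/pve/cephfs
        content backup,iso,vztmpl
        fuse 1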
Thanks!
I have rebooted each node at least once since the upgrade. Still, the cephfs mounts show up as coming from a jewel client.
Also, deleting the cephfs storage from the cluster and recreating it does not change this behavior either.
What is weird, though: With the cephfs removed, I could carry out...
Not only VMs as clients: I mount a cephfs as well (via cluster storage), and if I disable and unmount those, all clients are listed as luminous (as shown by running ceph features).
Now the big question: Can I upgrade the cluster requirements to luminous and still somehow use cephfs?
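If it really is just the require-min-compat-client setting, I assume it would be something along these lines, but I would rather ask before running it:
~# ceph osd set-require-min-compat-client luminous
# I believe this refuses (unless forced) while clients reporting older features are still connected
# afterwards, verify what connected clients report:
~# ceph features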
There is still something amiss. I have upgraded my cluster to:
~# pveversion
pve-manager/6.0-9/508dcee0 (running kernel: 5.0.21-3-pve)
The ceph-packages are all nautilus:
~# dpkg -l | grep ceph
ii ceph 14.2.4-pve1 amd64...
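To rule out an old daemon still running somewhere, I assume something like this should list the version every daemon actually runs:
~# ceph versions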
If you mean safe as in 'knowing that the VM is actually issuing flush commands', then I guess that is true.
But I was more concerned with minimizing the risk of data-loss.
That is what I'll go with, then.
Hi there,
I use VMs with ceph / rbd backend for storage and am confused about the cache settings:
On the wiki (https://pve.proxmox.com/wiki/Performance_Tweaks) the different caching options are explained.
And from the description there I would have thought that writethrough is the thing to use...
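Just to make sure we are talking about the same knob: I set the cache mode per disk, roughly like this (VM ID, bus and volume name are just examples, and the whole scsi0 line gets rewritten, so existing disk options would have to be repeated):
~# qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=writethrough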
I have VMs with disk images either on ceph or on lvm.
When restoring from a backup to ceph, I benefit from space reduction due to 4K zero blocks, as unused / empty blocks will not be allocated on ceph.
So far so good.
Now, I wanted to move a disk image from lvm (NOT lvm-thin!) to ceph.
And in...
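What I am considering as a workaround, though I am not sure it is the intended way: enable discard on the moved disk and trim from inside the guest, so ceph can drop the zeroed extents again. Roughly (VM ID and volume name are just examples):
# on the host
~# qm set 100 --scsi0 ceph-vm:vm-100-disk-0,discard=on
# inside the guest, once the change is active
~# fstrim -av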
xcdr, do you know if qemu-kvm provide the COMPRESSIBLE / INCOMPRESSIBLE hint to the ceph backend?
E.g. will the compression_mode passive result in any compression or do I have to use aggressive mode for that?
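In case the answer turns out to be 'just set it per pool', I assume that would look roughly like this (pool name is a placeholder):
~# ceph osd pool set rbd compression_mode aggressive
# check what is currently set
~# ceph osd pool get rbd compression_mode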
So you are saying that the non-subscription repository is unstable, or that it prevents easy upgrades? That is wild speculation, at best.
Would you 'trust' the subscription repo more than the non-subscription one? That would be naive either way.
For my part, I don't trust either: I test packages on a staging...
I still don't see how this is even a topic:
The entire project is open source, of high quality and very well documented.
No one forces you to buy a subscription.
You do not lose any functionality by not having a subscription.
That bit of a nag when you log into the web interface without a...
Twice, I waited for the pveceph command to time out on its own:
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum...
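While it sits in 'probing', this is roughly what I have been running on the node being added (the admin socket should answer even without quorum; the mon ID is as in my setup):
~# ceph daemon mon.1 mon_status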
-- Logs begin at Sun 2017-09-03 03:23:10 UTC, end at Tue 2017-09-05 11:29:31 UTC. --
Sep 05 11:00:00 srv01 systemd[1]: Starting Proxmox VE replication runner...
Sep 05 11:00:01 srv01 systemd[1]: Started Proxmox VE replication runner.
Sep 05 11:00:14 srv01 pveceph[482]: <root@pam> starting task...
I am not very used to systemd, but I guess journalctl should have been obvious to me. Sorry.
Attached are the journals of both srv01 (mon.0) and srv02 (mon.1) during an attempt, as well as mon.0's log for the same period.
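This is roughly how I pulled them, in case I grabbed the wrong window (the timestamps are just the span of the last attempt):
~# journalctl --since "2017-09-05 11:00" --until "2017-09-05 11:30" > srv01-journal.txt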
The setting (applied on both the existing node and the node to be added) did not change the behavior. :-(
Btw, the store does not look overly large to me. ;-)
# du -sh /var/lib/ceph/mon/ceph-1
98M ceph-1
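If it ever did grow, I assume it could be compacted with something like the following, but at 98M that hardly seems necessary:
~# ceph tell mon.1 compact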