Thank you, Rokaken. I am trying to convince my client to buy a support subscription rather than relying on community support only.
Indeed "ceph health detail" returns just "HEALTH OK", so tonight I can sleep quietly.
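For anyone following along, a quick sketch of the read-only checks I use to confirm the cluster is really healthy (standard Ceph CLI commands; no changes are made by any of them):

```shell
# Read-only health checks -- safe to run on a production cluster
ceph health detail   # should print "HEALTH_OK" with no warning lines below it
ceph -s              # overall status: mon quorum, OSDs up/in, PG states
ceph osd tree        # confirm every OSD is "up" and weighted as expected
```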
So far I managed everything myself manually before migrating to Proxmox VE, but now I need backup.
Is it too much to ask how to do that? ;-) Time is running out...
I do not see any warnings anymore, even when running "ceph health detail".
I have never felt as unsure about instructions as I do now.
I executed all the instructions as precisely as I could.
I live-migrated the VMs (5 in total) to another node and back.
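For reference, this is roughly how I did the round trip with the standard `qm` tool (the VMID 101 and node name pve2 are examples; substitute your own):

```shell
# Live-migrate VM 101 to node pve2 while it keeps running
qm migrate 101 pve2 --online

# Afterwards, confirm the VM is still running on the target node
qm status 101
```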
Is that enough to be sure no problems will show up?
To be sure I understand this correctly: what exactly does "cut-off" mean?
Does it mean the client is shut down, or that another problem shows up after 72 hours?
Or that the connection is renewed and keeps working automatically with krbd enabled?
I just want to be sure I understand this right.
On my Proxmox VE nodes I have almost 20 TiB of ZFS storage.
I want to create an iSCSI configuration over a ZFS volume on each node.
Then use these as storage devices in one VM or CT.
Not on the Ceph pool: that one is SSD-based, and I want this storage for backups.
I am looking for the best iSCSI target solution to...
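One option I am considering is the Linux LIO target via `targetcli`. A minimal sketch, assuming a zvol under a pool named `rpool` and example IQNs (all names here are placeholders, not an endorsement of one target solution over another):

```shell
# Create a 2 TiB ZFS volume (zvol) to serve as the backing store
zfs create -V 2T rpool/backup-lun0

# Export the zvol as a block backstore, then publish it as an iSCSI LUN
targetcli /backstores/block create lun0 /dev/zvol/rpool/backup-lun0
targetcli /iscsi create iqn.2024-01.local.pve1:backup
targetcli /iscsi/iqn.2024-01.local.pve1:backup/tpg1/luns create /backstores/block/lun0

# Allow the VM's initiator to connect (initiator IQN is an example)
targetcli /iscsi/iqn.2024-01.local.pve1:backup/tpg1/acls create iqn.2024-01.local.vm1:initiator

# Persist the configuration across reboots
targetcli saveconfig
```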
I just searched Google for "how to remove ceph from proxmox" and got the following:
https://forum.proxmox.com/threads/remove-ceph.59576/
https://forum.proxmox.com/threads/removing-ceph-completely.62818/
https://forum.proxmox.com/threads/not-able-to-use-pveceph-purge-to-completely-remove-ceph.59606/
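As far as I can tell, the sequence those threads converge on looks roughly like this. This is destructive and only safe once the OSDs and monitors have already been destroyed (e.g. via `pveceph osd destroy` / `pveceph mon destroy`) and no pool is needed anymore:

```shell
# Stop all Ceph daemons on this node
systemctl stop ceph.target

# Remove the Ceph configuration from the node
pveceph purge

# Clear leftover state on disk, as suggested in the threads above
rm -rf /etc/ceph /var/lib/ceph
```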
I...