Good evening,
after some tests, I discovered that if 1 of my 4 nodes goes down, disk I/O hangs.
The VMs and CTs are still up, but none of their disks are available for I/O.
I have 3 ceph monitors.
When I reboot the node, the ceph logs show:
2019-01-24 10:28:08.240463 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0...
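In case it helps to narrow this down, a minimal set of checks, assuming a replicated pool (the pool name "rbd" below is only an example, substitute your own):

ceph -s
ceph health detail
ceph osd pool get rbd size
ceph osd pool get rbd min_size

If size and min_size are equal (for example 3/3), losing a single node can drop some PGs below min_size, and client I/O on those PGs blocks until the node is back.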
I have a routing problem. None of my VMs or CTs can reach the outside network after the latest upgrade.
root@bluehub-prox02:~# pveversion -v
proxmox-ve: 4.3-72 (running kernel: 4.4.24-1-pve)
pve-manager: 4.3-12 (running version: 4.3-12/6894c9d9)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.21-1-pve: 4.4.21-71...
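For reference, a minimal set of checks on the node that might narrow this down, assuming a standard bridged setup with vmbr0 (the bridge name is only an example):

ip route
cat /etc/network/interfaces
brctl show
sysctl net.ipv4.ip_forward

This only shows whether the bridge and the default route survived the upgrade; it is not a definitive diagnosis.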
Good morning,
I cannot destroy a CT. I receive the message:
"TASK ERROR: lvremove 'vmdata/vm-111-disk-1' error: Logical volume vmdata/vm-111-disk-1 contains a filesystem in use."
Even from the console:
lvremove -f /dev/vmdata/vm-111-disk-1
Logical volume vmdata/vm-111-disk-1 contains a filesystem...
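A minimal sketch of how to find what is holding the volume open, assuming the LV path above (the dmsetup name is the usual LVM mangling of vmdata/vm-111-disk-1, double-check it with dmsetup ls):

findmnt | grep vm-111-disk-1
fuser -vm /dev/vmdata/vm-111-disk-1
lsof /dev/vmdata/vm-111-disk-1
dmsetup info vmdata-vm--111--disk--1

If none of these show the device in use, a leftover mount inside a still-running container or a stale device-mapper mapping is a common culprit.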
Is there a way to convert a raw virtual disk into a Logical Volume on an LVM-thin storage without going through a backup and restore to a different storage?
Thank you
For example:
from storage local:
/var/lib/vz/images/991/vm-991-disk-1.raw
to storage local-lvm:
LV VG Attr LSize Pool Origin...
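A minimal sketch of one way to do this, assuming VM 991 with the disk attached as scsi0 (adjust the bus/slot to the real config; see man qm for the exact options in your version):

qm move_disk 991 scsi0 local-lvm --delete 1

move_disk copies the raw image into a new thin LV on the target storage and, with --delete, removes the source file afterwards, so no backup/restore cycle is needed.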