Hi there,
I am learning Proxmox and I have got myself stuck. This is a single-node PVE 6.1-7 setup.
I did not mark the OSDs as Out before Stopping them. This, I think, broke the storage my sole Linux container uses: the container locked up and could not be stopped. Following another thread, I killed the monitor process and that did stop the container. Now I am trying to remove the container, but I get the following error:
TASK ERROR: error with cfs lock 'storage-ceph-lxc-storage': can't unmap rbd device /dev/rbd/ceph-lxc-storage/vm-301-disk-0: rbd: sysfs write failed
How can I destroy this LXC so I can start fresh? Or at this point should I just reinstall PVE and be done with it?
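My rough guess at what I need to run is below, but I have not tried it yet and I am not sure whether force-unmapping is safe at this point. The 301 comes from the disk name in the error; everything else is just my assumption:

  # check which RBD devices are still mapped
  rbd showmapped
  # try to force-unmap the stuck device (not sure this is safe here)
  rbd unmap -o force /dev/rbd/ceph-lxc-storage/vm-301-disk-0
  # then try to remove the container again
  pct destroy 301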
Background:
I am running a single-node learning setup. When creating individual pools under Ceph > Pools, I can set both size and minimum size to 1, so that part works. However, as far as I can tell, such pools can only be used for container and VM disk storage, not for general-purpose storage.
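(For reference, I believe the rough CLI equivalent of what the GUI does here is something like the following; the pool name and PG count are just placeholders I made up:)

  # create a pool and drop its replication to a single copy
  ceph osd pool create lxc-pool 64
  ceph osd pool set lxc-pool size 1
  ceph osd pool set lxc-pool min_size 1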
I tried to create a CephFS for general storage, but the data and metadata pools that CephFS creates always end up with size 3 and minimum size 2. That does not work on my single-node setup, and I don't see where these values can be adjusted when creating a CephFS from the PVE web UI.
I also tried the terminal: first creating the pools with the desired size / min_size values, then creating a CephFS on top of them. However, it seems that pools created individually get tagged with the "rbd" application and cannot be used for CephFS. I never found a way to change this, so I have decided to give up on Ceph for now, until I can get more nodes, and just use BTRFS RAID0 instead.
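This is roughly what I ran, reconstructed from memory, so the pool names, PG count and filesystem name are just what I think I used:

  # create data and metadata pools with a single replica
  # (these are the pools that end up tagged with the rbd application)
  pveceph createpool cephfs_data --size 1 --min_size 1
  pveceph createpool cephfs_metadata --size 1 --min_size 1
  # then try to build a filesystem on top of them (metadata pool first)
  ceph fs new cephfs cephfs_metadata cephfs_data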
Finally, two somewhat off-topic questions:
* Is it possible to adjust pool size and minimum size when creating CephFS?
* If not, how does this scale on clusters of 5+ nodes?