The dashboard and smb modules are, as the term suggests, OPTIONAL MODULES. They are not required for "basic functionality" and provide no utility to a ceph installation as a component of PVE.
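If you want to see what's actually loaded, the standard ceph mgr commands cover it (dashboard shown here as an example; enabling smb works the same way):
ceph mgr module ls
ceph mgr module enable dashboard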
The short answer is yes. The longer answer is that you need to take into consideration what ceph daemons are running on the node and account for them in the interim.
Moving everything but OSDs is trivial- just create new ones on other nodes and delete the...
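As a rough sketch for the mon/mgr part (assuming PVE-managed ceph and that the mon/mgr IDs match the node names; substitute your own):
pveceph mon create           # run on the new node
pveceph mgr create           # run on the new node
pveceph mon destroy oldnode  # once the new mon has joined quorum
pveceph mgr destroy oldnode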
Interwebs say this happens when the on-disk block size is going from a 4k source to a 512b destination.
Is reformatting the destination volume a possibility?
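If you want to confirm the mismatch first, lsblk shows the logical and physical sector sizes on both ends (which devices to check is up to you):
lsblk -o NAME,PHY-SEC,LOG-SEC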
I didn't know that, but that kind of raises the question: what does the dashboard offer you beyond what PVE presents? If it's really something necessary, I'd probably just set up ceph with cephadm separate from PVE. PVE doesn't consider the entirety of...
make sure that only the node that actually has this store is in this box:
Also, you need a third node or a qdevice in your cluster, or you can have quorum issues if any node is down.
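If you go the qdevice route, PVE has a helper for it; roughly (the IP is a placeholder for an external host running corosync-qnetd):
pvecm qdevice setup 192.168.1.10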
pvesm remove SATAPool
Out of curiosity- why do you keep bringing up the other node? Are they clustered? If clustered, don't delete the store; you need to go to Datacenter > Storage and make sure you EXCLUDE the node that doesn't have that pool in it...
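The CLI equivalent of that restriction would be roughly (storage and node names here are examples):
pvesm set SATAPool --nodes nodeA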
Let's only discuss the node in question, since the other one is working. I assume that the end result here is a zpool named saspool?
If so, zpool create SASPool [raidtype] [device list]
If something else, please note what your desired end result is.
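For illustration only, with made-up device paths and a raidz1 layout (pick whatever raid level and disks match your hardware):
zpool create SASPool raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
pvesm add zfspool SASPool --pool SASPool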
I am completely confused.
If the drives that host saspool are NOT PRESENT, then it's a foregone conclusion that a defined store looking for the filesystem is going to error... What are you asking?
Let's start over.
Do you have the drives...
Don't worry about attaching/detaching it. Simply remove and replace it.
Edit- DON'T PULL IT YET. There's a resilver in progress; even though the disk is bad, the vdev partner is not in a "complete" state. Let the resilver finish, and THEN replace...
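Once zpool status shows the resilver complete, the replacement itself is roughly (pool name and device paths are placeholders for yours):
zpool status yourpool
zpool replace yourpool /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk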
The likely culprit is a pre-existing filesystem on your local drive. The installer is usually pretty good at dealing with it, but in your case it clearly isn't.
Use a Linux live CD like https://gparted.org/download.php and completely wipe the...
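Once booted into the live environment, the wipe itself can be done with standard tools, e.g. (double-check the device name before running either):
wipefs -a /dev/sdX
sgdisk --zap-all /dev/sdX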
Yes, lvm/thin is not cluster-aware, so you risk data loss. It's not supported for a reason; see also the table at https://pve.proxmox.com/wiki/Storage, which clearly states that lvm/thin is NOT a shared storage.
No, for exactly the...
Actual errors would be useful- this can refer to either the ZFS pool or the storage pool in PVE.
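If you're not sure which layer is complaining, paste the output of both (run on the node in question):
zpool status
pvesm status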
No. Each pool must have a different name on the same host. The same goes for PVE stores.