The number of OSDs isn't relevant to a pool as long as it is larger than the minimum required by the CRUSH rule. For example, if you have an EC profile of k=8, m=2 (Ceph calls the parity-chunk count m), you need a minimum of 10 OSDs DISTRIBUTED ACROSS 10 NODES. so 1 OSD per node...
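A minimal sketch of setting up such a profile with the failure domain at host level, which is what forces the chunks onto 10 separate nodes. The profile name `ec-8-2` and pool name `ecpool` are placeholders, not anything from the thread.

```shell
# Create an 8+2 erasure-code profile whose failure domain is the host,
# so each of the 10 chunks must land on a different node.
ceph osd erasure-code-profile set ec-8-2 \
    k=8 m=2 crush-failure-domain=host

# Create a pool using that profile. With fewer than 10 hosts, the
# pool's PGs can never become active+clean.
ceph osd pool create ecpool erasure ec-8-2
```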
from understanding failure domains. damn @UdoB beat me to the punch. I won't "professor" you on this. You can either read and understand, or deploy your preconceived notions and learn on your flesh and blood. I would also note that if your...
Ok, let's touch on this. From my perspective, there are two types of storage (there are more, but these are the two in scope). There is payload storage (think OS and application) and bulk storage. Bulk storage can most efficiently be served by a single device such...
Caching occurs in multiple layers of the presentation stack. By the time a virtual disk is presented to a guest, the multiple caching layers can conflict and actually SLOW the guest's storage performance. See...
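One knob for this in PVE is the per-disk cache mode; `cache=none` bypasses the host page cache so the guest's own cache is the only one in play. The VMID and volume name below are placeholders — a sketch, not a prescription for every workload.

```shell
# Set the guest disk's cache mode to 'none' so host-side page caching
# doesn't stack on top of the guest's caching (VMID 100 and the
# volume name are placeholders).
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none
```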
you don't need PCIe passthrough for LXC- you just need to install the proper NVIDIA driver based on the hardware and kernel deployed. You are better off creating an installation script, especially if you intend on having multiple nodes with GPUs...
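A rough sketch of the usual pattern, assuming an NVIDIA GPU: the kernel module lives on the host, the device nodes are bind-mounted into the container, and only the matching userspace driver is installed inside. The CTID, device majors, and driver filename are assumptions — verify them against your own host before using any of this.

```shell
# --- on the PVE host: lines to append to /etc/pve/lxc/<CTID>.conf ---
# (195 is the usual NVIDIA char-device major; confirm with
#  `ls -l /dev/nvidia*` on your host)
#
#   lxc.cgroup2.devices.allow: c 195:* rwm
#   lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
#   lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
#   lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file

# --- inside the container: install the SAME driver version as the
#     host, userspace only (no kernel module in the container) ---
./NVIDIA-Linux-x86_64-<version>.run --no-kernel-module
```

If you script this, pinning the driver version in one place is what keeps multiple GPU nodes from drifting apart.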
I think you need to carefully consider what your end goal is. PCIe passthrough is not a good citizen in a PVE cluster, since VMs with pinned PCIe devices not only cannot move anywhere, but are also liable to hang the host. if you MUST use PCIe passthrough...
in the many years I've been using PVE, I haven't had much call for using Windows guests, and when I did it was usually Windows 2016 (and older before that) and I had reasonably good results. In the last few weeks, I had need of a Windows guest for a...
In a cluster you don't need or even want to back up a host. Everything important lives in /etc/pve, which exists on all nodes. If you DID back up a host (or hosts), you'd open the possibility of restoring a node that has been removed from the cluster and...
The dashboard and SMB modules are, as the name suggests, OPTIONAL MODULES. They are not required for "basic functionality" and provide no utility to a Ceph installation as a component of PVE.
The short answer is yes. The longer answer is that you need to take into consideration which Ceph daemons are running on the node and account for them in the interim.
Moving everything but the OSDs is trivial- just create new ones on other nodes and delete the...
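For the MON/MGR/MDS daemons that "create new, delete old" step looks roughly like the sketch below; "oldnode" and the OSD id are placeholders, and the OSD step is the slow one because you have to wait for the data to rebalance first.

```shell
# On a surviving node: create the replacement daemons first...
pveceph mon create
pveceph mgr create
pveceph mds create

# ...then, once they are up and in quorum, destroy the ones on the
# node being retired ("oldnode" is a placeholder).
pveceph mon destroy oldnode
pveceph mgr destroy oldnode
pveceph mds destroy oldnode

# OSDs take longer: mark out, wait for rebalance, then remove.
ceph osd out osd.7           # osd.7 is a placeholder
# wait until `ceph -s` shows all PGs active+clean, then:
pveceph osd destroy 7 --cleanup
```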
Interwebs say this happens when the on-disk block size is going from a 4K source to a 512B destination.
Is reformatting the destination volume a possibility?
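A quick way to confirm the mismatch before reformatting anything — the device paths below are placeholders:

```shell
# Logical sector size of source vs destination
# (e.g. 4096 on one, 512 on the other)
blockdev --getss /dev/sda
blockdev --getss /dev/sdb

# Physical sector size, for comparison (512e drives report
# 512 logical / 4096 physical)
blockdev --getpbsz /dev/sda
```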
I didn't know that, but that rather raises the question of what the dashboard offers you beyond what PVE presents; if it's really something necessary, I'd probably just set up Ceph with cephadm separate from PVE. PVE doesn't consider the entirety of...
make sure that only the node that actually has this store is in this box:
also, you need a third node or a QDevice in your cluster, or you can have quorum issues if any node is down.
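For a two-node cluster, the QDevice route looks roughly like this; the third machine can be any small box that is not a cluster member, and the IP below is a placeholder.

```shell
# On the third machine (NOT a cluster member): run the vote daemon
apt install corosync-qnetd

# On each PVE node: install the client side
apt install corosync-qdevice

# From one PVE node, register the QDevice (IP is a placeholder)
pvecm qdevice setup 192.0.2.10

# Confirm the extra vote shows up
pvecm status
```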
pvesm remove SATAPool
out of curiosity- why do you keep bringing up the other node? Are they clustered? If clustered, don't delete the store; you need to go to Datacenter → Storage and make sure you EXCLUDE the node that doesn't have that pool in it...
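The same restriction can be applied from the CLI instead of the Datacenter → Storage dialog; `pve1` is a placeholder for whichever node actually has the pool.

```shell
# Restrict the storage definition to the node(s) that actually have
# the pool, instead of deleting it ("pve1" is a placeholder).
pvesm set SATAPool --nodes pve1

# Verify which storages each node now sees
pvesm status
```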