I'm really feeling the noob sense right now.
One of my clustered hosts has a lot of extra capacity, so I've created a couple of extra ZFS pools on that host, intending to use them for ISOs and backups.
Now I see that /etc/pve/storage.cfg is identical on all hosts, and it defines which pool(s) are used for backups, ISOs, images, templates and more... even for non-shared "local" storage.
By any chance, is the answer a more general reading of the "nodes" entry, which is currently worded as if it assumes shared storage?
I am guessing that for a simple non-shared ZFS pool that exists on only one host, I just need to add a new storage definition with a "nodes" entry naming that host?
(Perhaps the documentation could be adjusted to say "List of cluster node names where this storage exists, and is usable/accessible. One can use this property to describe or restrict storage to be available on a limited set of nodes.")
For reference, the current documentation for "nodes" reads:

    nodes
    List of cluster node names where this storage is usable/accessible. One can use this property to restrict storage access to a limited set of nodes.
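In case it helps, here is a sketch of what I think the storage.cfg entries would look like. The pool name "tank2" and node name "pve3" are just placeholders for my extra pool and the host that has it. One wrinkle, if I'm reading the docs right: a zfspool storage type only accepts "images" and "rootdir" content, so for ISOs and backups a "dir" storage pointed at a dataset's mountpoint seems to be the usual approach:

```
# Hypothetical names: "tank2" = the extra ZFS pool, "pve3" = the host it lives on.

# VM/container disks on the local-only pool, visible only on pve3:
zfspool: tank2-vmdata
        pool tank2/vmdata
        content images,rootdir
        nodes pve3
        sparse 1

# ISOs and backups via a directory storage on a dataset's mountpoint
# (zfspool storages don't take iso/backup content):
dir: tank2-files
        path /tank2/files
        content iso,backup
        nodes pve3
```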