Add a new node without Ceph, for management only

silvered.dragon

Renowned Member
Nov 4, 2015
I have a 5.1 3-node cluster with Ceph. Everything works great. Now I want to add another node to the cluster only for management, without Ceph. After adding the node it appears in the list in the web UI, but it is unresponsive and the Ceph storages appear under it (I didn't add them). The Ceph network is on a 10G network that is inaccessible from this last node; maybe this is the problem. How can I remove Ceph storage from the last node?
 
The storage config is cluster-wide, and by default PVE assumes a storage is available on all nodes. You can limit storages to certain nodes with the "nodes" option (also available on the GUI).
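
For example (the storage name, pool, and node names below are only placeholders), a Ceph RBD storage restricted to three nodes ends up with a "nodes" line in /etc/pve/storage.cfg roughly like this:

Code:
rbd: ceph_vm
    pool rbd
    content images
    nodes nodo1,nodo2,nodo3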
 
Edit the storage at the datacenter level (where you can also add/remove storages); the node restriction is the first box in the right column.
 
I can't see this option (see attachment), but the option does appear if I create a new shared storage. Apart from this, I can't understand how to specify local storage settings on this last node. storage.cfg is replicated on all nodes as if they were identical. The last node, for example, has a ZFS pool that has now disappeared from the local storage list, and I cannot add it again from Datacenter --> Add --> ZFS. Can I simply edit storage.cfg, or will it be overwritten?
 

Attachments

  • shared.JPG
Seems like we removed that from the GUI for hyperconverged Ceph setups (assuming that the whole cluster has access). You can still set the 'nodes' option using 'pvesm set STORAGE nodes NODE1,NODE2,NODE3' (replace STORAGE and NODEX with actual values).

If you have a local storage that is only available on one node, add it and limit it to that specific node. For some storage types (like ZFS), you need to be connected to a node where the underlying storage is available when adding via the GUI, because it does a scan of local resources (to populate the drop-down). This limitation does not exist when adding with "pvesm" or the API.
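
For example, a node-local ZFS pool could be added and restricted from the command line like this (the node name "nodo4" is only a placeholder for the new management node):

Code:
# add the ZFS pool as a storage and limit it to the node that actually has the pool
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --nodes nodo4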
 
You were very clear, but the command gives this output:

Code:
root@nodo1:~# pvesm set ceph_ct nodes nodo1,nodo2,nodo3
400 too many arguments
pvesm set <storage> [OPTIONS]
 
The right command is:

Code:
root@nodo1:~# pvesm set ceph_ct --nodes nodo1,nodo2,nodo3
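
To double-check that the restriction was applied, the storage entry in /etc/pve/storage.cfg should now contain a "nodes" line, for example:

Code:
# show the modified storage definition (adjust the storage name if needed)
grep -A 6 'ceph_ct' /etc/pve/storage.cfg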

Now I'm going with the ZFS pool... let's hope it will work!
 
OK my friend, now everything is stable and looks very good, and I understood some very useful things. You were right about the availability of ZFS from the web UI! Only one last question:
My original storage.cfg on the last node was:

Code:
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

zfspool: local-zfs
    pool rpool/data
    sparse
    content images,rootdir

but if I try to add zfspool: local-zfs from the web UI, restricted to only the last node, the only content types I can select are

Disk image, Container

rootdir is not listed.
How can I fix this?
 
Sorry, very noob of me, I didn't know that Container = rootdir; I found it now in the help.
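
For reference, the same thing can be done from the CLI, where the GUI's "Container" corresponds to the rootdir content type:

Code:
# equivalent of ticking "Disk image" and "Container" in the GUI
pvesm set local-zfs --content images,rootdir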
OK, thank you so much. I hope you add the option to set node restrictions back to the GUI in the next releases.
Thank you again!!!
 
