Add new storage only to one node in cluster

Nikolay K.

Aug 21, 2019
Hello.
I need to add a new disk to only one existing node, not to the whole cluster.
On the forum, I only found a solution for adding a disk to all nodes via Datacenter -> Storage.
What is the best way to do that?
Or do I just need to initialize the disk as /dev/sdb* on my node?
 


If you create the filesystem on it via our WebGUI (Node -> Disks -> Directory -> Create), it will already be added for that node only.

Otherwise, if the filesystem was already created, add it via DC -> Storage, give it a unique name, and set the "Nodes" field in the create or edit dialog to the respective node.
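If you prefer the CLI, the same node restriction can be set with pvesm or directly in /etc/pve/storage.cfg. A minimal sketch, assuming a storage ID newdisk, a mount point /mnt/newdisk, and a node called node1 (all placeholders):

```shell
# Add a directory storage that is only active on a single node
# (storage ID, path, and node name are examples):
pvesm add dir newdisk --path /mnt/newdisk --nodes node1

# The matching entry in /etc/pve/storage.cfg would look like:
#
#   dir: newdisk
#       path /mnt/newdisk
#       nodes node1
```

The nodes option is what keeps the storage from showing up as unavailable on the other cluster members.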
 
Thanks, I'll try it.
 
I'm experiencing a similar issue:
Prior to joining the node to a cluster, it had a directory /4x6raid5, which was an EXT4-formatted LVM volume (one PV, one VG, one LV), on top of a hardware RAID5.

Now I can either do:
  • Datacentre > Storage > Add Directory > /4x6raid5
    This adds the directory to all servers (but it is only valid on 2 of them, because they are identical servers with identical directory names)
  • Node > Disks > Directory > ERROR: No unused disks!
Why would it think that?
fdisk -l (summarised):
/dev/sda - Proxmox LVM on top of RAID1
/dev/sdb - manually added LVM on top of RAID5

df -h
Filesystem Size Used Avail Use% Mounted on
udev 54G 0 54G 0% /dev
tmpfs 11G 18M 11G 1% /run
/dev/mapper/pve-root 94G 2.6G 87G 3% /
tmpfs 54G 60M 54G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 54G 0 54G 0% /sys/fs/cgroup
/dev/mapper/4x6raid5-4x6raid5 1.7T 77M 1.6T 1% /4x6raid5
/dev/fuse 30M 44K 30M 1% /etc/pve
192.168.1.82:/mnt/xpool/NFS/Servers 3.4T 65G 3.3T 2% /mnt/pve/Servers
192.168.1.83:/mnt/xpool/NFS/Backups 3.6T 336G 3.3T 10% /mnt/pve/Backups
tmpfs 11G 0 11G 0% /run/user/0


Any help is greatly appreciated. Thank you.
 
When you add a storage, you can select which nodes should be able to use it.
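Applied to the setup above, that could look like the following sketch (the node names are examples, and is_mountpoint is an option for dir storages worth verifying against your PVE version):

```shell
# Add the pre-existing mount as a directory storage, visible only on
# the two nodes that actually have /4x6raid5 (node names are examples):
pvesm add dir 4x6raid5 --path /4x6raid5 --nodes pve1,pve2

# Since the filesystem is mounted outside of Proxmox's control, marking
# it as a mount point should avoid PVE writing into an empty directory
# if the mount ever fails:
pvesm set 4x6raid5 --is_mountpoint yes
```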