Shared storage between Nodes in Cluster Question

Discussion in 'Proxmox VE: Installation and configuration' started by Joe Sudhoff, Jan 11, 2019.

  1. Joe Sudhoff

    Joe Sudhoff New Member

    Joined:
    Jan 6, 2019
    Messages:
    5
    Likes Received:
    0
    I have created a cluster and was wondering why there is a "?" next to each other's storage. When I try to create a VM while logged into Node 1, I cannot use Node 2's storage. Am I doing something wrong? In this state it will not let me migrate VMs between nodes.

    Attached is a picture of what I mean. I am running 5.3-6 w/o subscription.
     


  2. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,177
    Likes Received:
    155
    What types of storage do you have? Can you please post your storage.cfg?
    Code:
    cat /etc/pve/storage.cfg
     
  3. Joe Sudhoff

    Joe Sudhoff New Member

    Joined:
    Jan 6, 2019
    Messages:
    5
    Likes Received:
    0
    Thomas,

    Equipment: two Dell PowerEdge R710s, each with dual Xeon L5520s; one has 32 GB of RAM and the other 40 GB.

    Not sure if you wanted that command on one node or both, so below are the results from each:

    Code:
    root@zeus:~# cat /etc/pve/storage.cfg
    dir: local
            path /var/lib/vz
            content vztmpl,backup,iso

    lvmthin: local-lvm
            thinpool data
            vgname pve
            content rootdir,images

    lvm: Z1000
            vgname zeus_b
            content rootdir,images
            shared 1

    nfs: ZX100
            export /volume2/ZeusXpenologyA
            path /mnt/pve/ZX100
            server 192.168.4.27
            content images,backup,vztmpl,rootdir,iso
            maxfiles 1
            options vers=3

    lvm: A1000
            vgname apollo_b
            content rootdir,images
            nodes zeus,apollo
            shared 1

    nfs: ZX101
            export /volume3/ZeusXpenologySpare
            path /mnt/pve/ZX101
            server 192.168.4.27
            content rootdir,iso,vztmpl,images,backup
            maxfiles 1
            options vers=3
    ------------------------------------------------------------------------------------------------

    Code:
    root@apollo:~# cat /etc/pve/storage.cfg
    dir: local
            path /var/lib/vz
            content vztmpl,backup,iso

    lvmthin: local-lvm
            thinpool data
            vgname pve
            content rootdir,images

    lvm: Z1000
            vgname zeus_b
            content rootdir,images
            shared 1

    nfs: ZX100
            export /volume2/ZeusXpenologyA
            path /mnt/pve/ZX100
            server 192.168.4.27
            content images,backup,vztmpl,rootdir,iso
            maxfiles 1
            options vers=3

    lvm: A1000
            vgname apollo_b
            content rootdir,images
            nodes zeus,apollo
            shared 1

    nfs: ZX101
            export /volume3/ZeusXpenologySpare
            path /mnt/pve/ZX101
            server 192.168.4.27
            content rootdir,iso,vztmpl,images,backup
            maxfiles 1
            options vers=3
     
    #3 Joe Sudhoff, Jan 11, 2019
    Last edited: Jan 11, 2019
  4. tschanness

    tschanness Member

    Joined:
    Oct 30, 2016
    Messages:
    291
    Likes Received:
    21
    Because that's local storage, not shared storage. Marking a storage as "shared" does not share it - you'd need to do that yourself.
     
  5. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,177
    Likes Received:
    155
    Thanks! Just for your information: everything in /etc/pve is replicated between all cluster nodes in real time, so as long as the cluster is healthy, one output is enough. :)

    And yes, @tschanness is correct. You have a local LVM, so the other server cannot access it. The "shared" flag is meant for the case where you have set up genuinely shared storage whose mounting PVE does not manage itself; it only tells PVE that this storage really is shared, even if it does not look like it.
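
    If you only want the question marks to go away, you could instead drop the "shared 1" lines and restrict each LVM storage to the node that actually has that volume group, e.g. in /etc/pve/storage.cfg:
    Code:
    lvm: Z1000
            vgname zeus_b
            content rootdir,images
            nodes zeus

    lvm: A1000
            vgname apollo_b
            content rootdir,images
            nodes apollo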

    If you want to provide shared storage on this two-node setup you could try to set up GlusterFS or the like. If you had a third node you could also set up Ceph, which we highly recommend.
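
    A very rough sketch of what that could look like, assuming the GlusterFS packages on both nodes, a brick directory carved out of the existing space, and hostnames that resolve on both sides (the volume name gv0, the brick paths and the storage ID GFS100 are just examples; note that replica 2 across two nodes is prone to split-brain, so read up on arbiter/quorum settings first):
    Code:
    # on both nodes
    apt install glusterfs-server
    systemctl enable --now glusterd

    # on zeus only: form the trusted pool and create a replicated volume
    gluster peer probe apollo
    gluster volume create gv0 replica 2 zeus:/bricks/gv0 apollo:/bricks/gv0
    gluster volume start gv0

    # register the volume as a PVE storage, visible to both nodes
    pvesm add glusterfs GFS100 --server zeus --server2 apollo --volume gv0 --content images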
     
  6. Joe Sudhoff

    Joe Sudhoff New Member

    Joined:
    Jan 6, 2019
    Messages:
    5
    Likes Received:
    0
    Thank you for the replies and all the information. The A1000 and Z1000 are secondary arrays (RAID 5 via the PERC 6/i in each R710); for those I had to go into the shell and use fdisk, then pvcreate and vgcreate. Does that matter?
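
    From memory it was roughly this (the device name may have been different on each box):
    Code:
    fdisk /dev/sdb      # created one partition, type 8e (Linux LVM)
    pvcreate /dev/sdb1
    vgcreate zeus_b /dev/sdb1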

    So could I take Z1000 and A1000 and put them into a GlusterFS volume?

    Thanks again.

    Joe
     
  7. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,696
    Likes Received:
    331
    Are these shared (connected) to both servers?
     