Search results

  1.

    gluster fsyncing/closing Input/output error

    Fresh install, getting a 2-node gluster volume going on a dedicated network. Cluster network (1G NICs to a switch): p1 10.100.100.10, p2 10.100.100.11, p3 10.100.100.12 (quorum/management only). Gluster network (10G NICs direct between nodes): p1g 10.100.101.10, p2g 10.100.101.11. Create a volume 'vm' in...
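
    A minimal sketch of how such a volume might be created over the dedicated gluster network (the hostnames come from the snippet; the brick path /tank/brick and the replica count are assumptions):

    ```sh
    # On p1: peer with p2 over the dedicated 10G gluster network
    gluster peer probe p2g

    # Create a 2-way replicated volume 'vm' with one brick per node
    # (/tank/brick is a hypothetical brick path)
    gluster volume create vm replica 2 p1g:/tank/brick p2g:/tank/brick
    gluster volume start vm
    ```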
  2.

    4.2 ZFS and Gluster 3.8 issue (NFS)

    well, I think this one is technically solved. I used the gluster from the prox repo and now everything comes up and I can add it in the UI. I'm getting an fsync/closing error trying to use it, but I'll open a separate thread on that topic.
  3.

    4.2 ZFS and Gluster 3.8 issue (NFS)

    well, I was installing prox on an APU last night and realized that gluster is in the prox repo :/ I've been adding the gluster repo, so I think I'm having some version mismatch. I'm going to re-install the two primary nodes and use the 'native' repo and try again. and yes, nfs-common was...
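
    To confirm which repository a gluster package would actually be installed from (a small sketch; package names assumed to match the usual Debian/Proxmox naming):

    ```sh
    # Show installed/candidate versions and the repo each one comes from
    apt-cache policy glusterfs-server glusterfs-client
    ```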
  4.

    2.5 node setup with JUST corosync on the .5?

    I like Pis; I have many of them deployed for service kiosks... but I'm a bit wary of running them as a key part of a virtualization cluster. How do you use the APUs? Are they just for quorum, or are you actually hosting VMs/containers on them? Also, how did you install? I just did a debian8...
  5.

    2.5 node setup with JUST corosync on the .5?

    I'm looking for a nice solution to a 2-node cluster with a proper 3rd node for quorum. How about a small box that just runs corosync to provide quorum? It would also serve as the 3rd node for Ceph, but without local storage. My target box would be a PC Engines APU unit. Thoughts?
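
    On later Proxmox releases, the supported way to get a tie-breaking third vote without a full node is a corosync QDevice; a rough sketch, assuming the APU runs Debian and is reachable at 10.100.100.12:

    ```sh
    # On the APU (arbiter): install the qnetd daemon
    apt install corosync-qnetd

    # On one cluster node: install the client side and register the arbiter
    apt install corosync-qdevice
    pvecm qdevice setup 10.100.100.12
    ```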
  6.

    4.2 ZFS and Gluster 3.8 issue (NFS)

    I'm having a terrible time getting this configuration to work. The Proxmox UI can't add the Gluster volume; it doesn't 'see' it. Some troubleshooting shows me that glusterfs isn't exporting the volume via NFS. The docs show that prox is going to use the fuse mount to view the files, but then...
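
    If I recall correctly, Gluster 3.8 disables its built-in NFS server on newly created volumes by default, which would match this symptom; a hedged check-and-fix sketch (volume name 'vm' taken from the earlier thread):

    ```sh
    # Check whether an NFS server is listed for the volume
    gluster volume status vm

    # Re-enable the built-in (gNFS) server for this volume
    gluster volume set vm nfs.disable off
    ```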
  7.

    how to tear down a cluster?

    I am desperately trying to figure out how to tear down a cluster. I have tried deleting the contents of /etc/cluster/ and /var/lib/pve-cluster/, but no matter what, when I restart pve-cluster and do pvecm nodes I see the cluster nodes. grrrr, why is there no pvecm delnode nodename -force...
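
    For reference, a teardown sketch along the lines of the later Proxmox documentation (corosync-era clusters; this permanently destroys cluster membership, so treat it as a rough guide rather than the definitive procedure):

    ```sh
    # Stop the cluster stack
    systemctl stop pve-cluster corosync

    # Start pmxcfs in local mode so /etc/pve becomes writable
    pmxcfs -l

    # Remove the corosync configuration
    rm /etc/pve/corosync.conf
    rm -rf /etc/corosync/*

    # Restart the (now standalone) cluster filesystem
    killall pmxcfs
    systemctl start pve-cluster
    ```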