2 Node Cluster w/ Shared Disk

cmkrs - Jan 18, 2021
Hi there,
I am relatively new to Proxmox clustering. I am trying to set up a 2-node cluster with a shared disk (iSCSI from a TrueNAS SCALE box).
This is a v7.2.3 setup.

1. I added the iSCSI resources (100 MB for quorum and 14 TB for VMs and ISO files) from the TrueNAS to both nodes under Datacenter/Storage.
2. Created LVM storage for both iSCSI resources under Datacenter/Storage.
3. Created a cluster on the first node and joined the second node to it.
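For reference, the equivalent CLI steps would be roughly the following (the storage IDs, IP addresses, and target IQN are placeholders, not my actual values):

pvecm create pve-cluster                  # on the first node
pvecm add 192.168.1.10                    # on the second node, pointing at the first
pvesm add iscsi tn-iscsi --portal 192.168.1.20 --target iqn.2005-10.org.freenas.ctl:pve
pvesm add lvm vm-store --vgname vg_vmstore --shared 1   # after pvcreate/vgcreate on the LUN

Storage definitions live in /etc/pve, so they only need to be added once and sync to all cluster members.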

What I'd like to find out is:
1. Why don't I have the /etc/pve/cluster.conf file on either node? (There is a corosync.conf in this location.)
2. Can I build this cluster without a third node (Raspberry Pi or otherwise) and still maintain failover in the event of a node failure?
3. How do I set up the iSCSI-based (100 MB) disk for quorum?
4. How do I create a directory for VMs and ISO files on the iSCSI-based disk (14 TB)?

Any help would be greatly appreciated.

Thank you
 
What I'd like to find out is:
1. Why don't I have the /etc/pve/cluster.conf file on either node? (There is a corosync.conf in this location.)
https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)
Corosync cluster configuration file (prior to Proxmox VE 4.x, this file was called cluster.conf)
2. Can I build this cluster without a third node (Raspberry Pi or otherwise) and still maintain failover in the event of a node failure?
Can you? Yes. Will you be protected from all cases of split-brain failures? No.
3. How do I set up the iSCSI-based (100 MB) disk for quorum?
https://forum.proxmox.com/threads/san-iscsi-as-quorum-disk-proxmox-4-4.35187/
4. How do I create a directory for VMs and ISO files on the iSCSI-based disk (14 TB)?
You have decisions to make. Do you want to store your VMs as QCOW2 files? Then format the storage, mount it, point PVE at it as directory storage, and mark it for VM images and ISOs. Do you want to use LVM? Then either split your storage between ISOs and VMs, or find another place for the VMs.
https://pve.proxmox.com/wiki/Storage
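For the directory route, a minimal sketch (the device path and storage ID are placeholders for your 14 TB LUN):

mkfs.ext4 /dev/sdX
mkdir -p /mnt/isostore
mount /dev/sdX /mnt/isostore
pvesm add dir isostore --path /mnt/isostore --content iso,images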


 
Thank you bbgeek17 for all your answers.
Given the feedback, I have decided to use a third node, a small VM on TrueNAS, to run as a QDevice (quorum vote), which should eliminate the need for the iSCSI-based quorum disk.
https://www.danatec.org/2021/05/21/two-node-cluster-in-proxmox-ve-with-raspberry-pi-as-qdevice/
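If I read that guide right, the setup boils down to roughly this (the IP is a placeholder for the QDevice VM):

apt install corosync-qnetd        # on the QDevice VM
apt install corosync-qdevice      # on each PVE node
pvecm qdevice setup 192.168.1.30  # on one PVE node, pointing at the QDevice VM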

On the third question, I have created two iSCSI resources, 14 TB for VMs and 300 GB for ISOs, with one LVM on each. Then I followed the instructions from the link below to create the VM and ISO spaces.
https://kb.vander.host/operating-sy...-in-ubuntu-and-mount-it-all-the-way-to-fstab/
All went well up to this point, but when the PVE nodes restarted, I got fstab errors for the mount points I had created.
Even though those mount points work fine when mounted manually, the boot process fails on both nodes if I include those mount entries. (I got the UUIDs via blkid.) The fstab entry:
UUID=long_uuid_number /mnt/iso ext4 defaults 0 2
The only way to boot properly was to comment out those lines in fstab on both nodes.

Any advice?

Thank you
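 
The likely cause is that the iSCSI block devices are not yet present when fstab is processed at boot. Assuming those mount points live on the iSCSI LUNs, mark the entries as network-dependent and tolerant of a missing device:

UUID=long_uuid_number /mnt/iso ext4 defaults,_netdev,nofail 0 2

_netdev orders the mount after the network (and thus the iSCSI session) is up, and nofail keeps an absent device from dropping the node into emergency mode.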
 
Thank you very much, that definitely helped with getting both nodes to boot. Here is where I am stuck now.
I am able to upload an ISO file to the isostore from the first node via the web GUI, which places it under /mnt/isostore/template/iso. The file is only visible on the first node; I can navigate to the same folder on the second node, but the ISO file is not there. What might cause this issue?
Here is a snapshot of the Datacenter/Storage section:

[screenshot of the Datacenter/Storage configuration]
 
If you want a single shared filesystem across the cluster for file storage (ISOs), then LVM is not the solution.
You need one of:
NFS
CIFS
Cluster Aware Filesystem

Each one comes with its own set of pluses and minuses. Some require additional infrastructure, others advanced systems knowledge.


 
I was under the impression that Proxmox cluster storage had something similar to Hyper-V Cluster Shared Volumes (CSV), where all nodes can access the iSCSI resource at the same time. TrueNAS, for instance, offers a cluster-aware iSCSI service (as do most Dell and Lenovo storage servers, as well as StarWind, unlike the Ubuntu iSCSI service), which is what I use for Hyper-V CSV. The ultimate goal is to migrate from a Hyper-V cluster to a Proxmox cluster.
 
I was under the impression that Proxmox cluster storage had something similar to Hyper-V Cluster Shared Volumes (CSV), where all nodes can access the iSCSI resource at the same time.
No, Proxmox does not have a built-in CSV or VMFS equivalent.
iSCSI has no direct relationship to "shared" functionality. CSV/VMFS can run on iSCSI, DAS, Fibre Channel SAN, and other types of technology.
You may be thinking of "SCSI Persistent Reservation" - however, that is at the SCSI protocol level, not the "i".
This page contains a table of supported storage technologies: https://pve.proxmox.com/wiki/Storage
Find "File + Shared" - those are your Proxmox "CSV/VMFS".
TrueNAS, for instance, offers a cluster-aware iSCSI service (as do most Dell and Lenovo storage servers, as well as StarWind, unlike the Ubuntu iSCSI service), which is what I use for Hyper-V CSV.
CSV/VMFS are file systems that live on top of a raw SCSI device (among others). Proxmox does not come with a built-in _supported_ cluster-aware file system. You are welcome to install, configure, and support one on your own: https://en.wikipedia.org/wiki/Clustered_file_system .
PVE will consume such a file system as "directory storage" where VMs and other data can be placed.
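Once such a filesystem is mounted at the same path on every node, registering it would look roughly like this (the storage ID and path are placeholders):

pvesm add dir shared-store --path /mnt/clusterfs --content iso,images --shared 1

Note that --shared 1 only tells PVE that all nodes already see the same files there; it does not do any sharing by itself.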

The ultimate goal is to migrate from a Hyper-V cluster to a Proxmox cluster.
It's certainly possible, but competing things are rarely equivalent in life...

Why don't you use the NFS functionality of your TrueNAS for ISO storage?
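That would reduce the ISO problem to a single storage entry, something like this (the server address and export path are placeholders for your TrueNAS values):

pvesm add nfs tn-iso --server 192.168.1.20 --export /mnt/tank/iso --content iso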


 
Thank you. I will explore NFS (or CIFS), though the lack of snapshot support is a bummer. I guess I may also go for CephFS, which checks all the boxes, but it seems that I would need to acquire decent SSDs for that solution to work well enough, based on the Proxmox docs.
 
Just out of curiosity, how big a difference would it make if I used HDDs instead of SSDs for CephFS? I intend to use enterprise-grade HDDs.
 
