How can two hosts in a PVE cluster share files?

CorwinWang

New Member
Jun 14, 2024
I have PVE installed on two of my computers and both hosts successfully joined the cluster. My host B does not have much hard disk capacity, and I want host B to be able to use the storage space of host A for virtual machine disk images; both hosts have a single hard disk. How should I proceed?
 
First: Don't build two-node "clusters". In the default configuration, if one host goes down, you don't have a cluster anymore and the remaining computer will not work due to quorum issues. The minimum is 3 nodes, and the preferred shared storage is either distributed shared storage with Ceph or dedicated shared storage with a fourth computer hosting NFS/CIFS or any type of SAN. If you now want to share the disk space of ONE host with the cluster, this is even more useless than before. Use two machines with local storage and replicate, or don't use a cluster at all.
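To sketch the replication route (assuming ZFS-backed local storage with the same storage name on both nodes; the VM ID 100, node name pve-b and the schedule are placeholder values):

Code:
# create a storage replication job for guest 100 to node pve-b,
# running every 15 minutes (requires ZFS on both nodes)
pvesr create-local-job 100-0 pve-b --schedule '*/15'

# check the state of all replication jobs
pvesr status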

If you want to play around with a cluster, virtualize one on your beefier machine.
 
> First: Don't build two-node "clusters".

That's a bit harsh. I am guessing the OP has this in a homelab setup, in which case he is not even using e.g. high availability, ...

> In the default configuration, if one host goes down, you don't have a cluster anymore and the remaining computer will not work due to quorum issues.

... in which case a node down is not an issue.

> The minimum is 3 nodes

... or 2 with a QDevice.

> and the preferred shared storage is either distributed shared storage with Ceph

Ceph cannot possibly be something the OP is even looking at. Shared storage with 2 nodes defeats the purpose; it sounds good, but for this setup he would be better off with replication alone, if he needs any at all.

> or dedicated shared storage with a fourth computer hosting NFS/CIFS or any type of SAN.

What's the point of a cluster when the single point of failure becomes a single machine? And if the OP meant non-OS storage, he might as well share it from the "bigger" node, making it "shared" with the same SPOF in the architecture.

> If you now want to share the disk space of ONE host with the cluster, this is even more useless than before.

Unless I misread something, it's equally as useless as a 3-node cluster with a 4th machine providing shared storage.

> Use two machines with local storage and replicate, or don't use a cluster at all.

He might still do this, but he has extra storage on one of the machines that he actually could share.

> If you want to play around with a cluster, virtualize one on your beefier machine.

I just don't believe the OP wanted to play; he just wanted to use what he has efficiently.

@CorwinWang You could simply turn the extra space of one of the hosts into a share on the host itself in that case, with NFS [1] or SMB [2]; the latter would be easier to get working if you are new. See the short sketch after the links below.

[1] https://www.debian.org/doc/manuals/debian-handbook/sect.nfs-file-server.en.html
[2] https://wiki.debian.org/Samba/ServerSimple

* You will find better guides from web searches yourself.
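If you go the SMB route, it boils down to something like this (a minimal sketch, not a hardening guide; the share name, directory, user and addresses are assumed placeholders):

Code:
# on host A: install Samba and export a directory
apt install samba
mkdir -p /srv/vmshare
cat >> /etc/samba/smb.conf <<'EOF'
[vmshare]
   path = /srv/vmshare
   read only = no
   valid users = pveshare
EOF
adduser --disabled-password --gecos "" pveshare
smbpasswd -a pveshare
systemctl restart smbd

# on host B: attach it as a PVE storage
# (or use Datacenter -> Storage -> Add -> SMB/CIFS in the GUI)
pvesm add cifs hostA-share --server 192.168.1.10 --share vmshare \
  --username pveshare --password '<secret>' --content images,iso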
 
Thank you for your responses. I apologize for any confusion; my English might have caused some misunderstandings. Let me clarify my network setup.

I have a home network with two computers. Host A has a 1TB hard drive where the Proxmox VE (PVE) system is installed, while host B has only a 256GB hard drive, also running PVE. I want host B to be able to use the hard drive of host A as storage, such as for virtual machine disks.

Apart from using NFS sharing on host A, is there a more elegant solution?
 
Between NFS and CIFS/Samba, which one offers better performance? I appreciate your help.
This depends on the implementation and setup complexity. CIFS with Multipath TCP will give the best performance, yet it is very complicated to set up. I read that QCOW2 on NFS will not give the best performance, yet I never tried it (I don't use filesystems to store virtual machines). A simple NFS or CIFS setup is totally sufficient for such a simple use case. Better performance would be to just buy another disk for your second host, if you value the time you would spend trying to get the most out of your CIFS/NFS network setup.

Oh, I just remembered, there is a third option: use ZFS-over-iSCSI if you have ZFS on your first machine. You can add it to your second machine and you'll get good speed, snapshots and thin provisioning. Yet it is also more complicated to set up.
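For reference, the resulting entry in /etc/pve/storage.cfg looks roughly like this (a sketch only; the pool, portal IP, IQN, and the LIO provider with its target portal group are assumed placeholders, and the iSCSI target plus SSH access on the first machine must be configured beforehand):

Code:
zfs: hostA-zfs
    pool tank/vmstore
    portal 192.168.1.10
    target iqn.2024-06.local.hostA:vmstore
    iscsiprovider LIO
    lio_tpg tpg1
    content images
    sparse 1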
 
> I don't use filesystems to store virtual machines

So this is why I thought the OP was originally asking to use the extra space to store ISOs.

> A simple NFS or CIFS setup is totally sufficient for such a simple use case.

If the OP has to ask, probably Samba is the way to go. Everything else being equal, I would do NFS amongst Linux boxes for _file_ sharing. But this is different.

> Better performance would be to just buy another disk for your second host, if you value the time you would spend trying to get the most out of your CIFS/NFS network setup.

This is not nicely said, but it's actually true. Or optimise how you set up your VMs. Do you really run 50 VMs in a homelab? Or do you give each VM 20G for root just because? Did you consider running them as containers, using separate partitions to store data only, and having that shared over the network?

Most importantly, the network is 1G, I suppose, so that is going to kill it off completely.

> Oh, I just remembered, there is a third option: use ZFS-over-iSCSI if you have ZFS on your first machine. You can add it to your second machine and you'll get good speed, snapshots and thin provisioning. Yet it is also more complicated to set up.

This came to my mind the moment it was not about ISOs, but then again, the network is 1G?
 
Your HW sounds pretty limited - I imagine the same goes for your NW. I estimate you're going to be looking at some really shoddy performance by the time you get it working (if at all). So either you invest, or to quote the excellent advice above: just using one host is the most efficient way (less watts, less complexity, less error-prone).

If you want (to play), you can move VMs between nodes with the good old backup/restore method.
 
Thank you for your detailed response, esi_y.

My home network setup is as follows: Host A is a Chinese ChangWang N100 with four 2.5G network interfaces. I have set up both a soft router and a bypass router on it. It has two 1TB NVMe drives configured with ZFS for the PVE system. Host B is my high-performance storage server, based on an X99 server motherboard and equipped with two 10G network cards. I use it to run a Synology system for storing my data. However, since the platform of host B is quite old, it only supports one NVMe interface, and all my SATA controllers are passed through to the Synology system. The single NVMe drive cannot be used in a RAID, so I think storing virtual machine disks on it is not very safe. Therefore, I am looking for a way to securely store the virtual machine disk files of host B on host A.
 
I realized that my operating system is indeed installed with ZFS. Host A has two 1TB NVMe SSDs configured in a ZFS RAID1 setup. Could you provide a guide for the ZFS-over-iSCSI setup you mentioned? I think I can give it a try.
 
> Just make daily backups to the Synology and you should be good.
Yes, I am currently doing that. I have mounted the virtualized Synology via SMB on host B and regularly back up the virtual hard drives. However, the 256GB of space is indeed too small, and at the moment I don't have enough money to replace it with a larger hard drive. I still want to try using the storage space of host A; the 1TB is more than that host needs. If I remove one of the hard drives from host A and put it in host B, then the RAID on host A will be lost... so I am a bit conflicted right now, haha.
 
> However, the 256GB space is indeed too small
IDK the specs of that drive (I imagine an SSD of some sort), but if you are running the PVE host on that drive & it is more than 66% full, then it isn't going to last that long (from experience, unfortunately!). Here is a cheap & nasty 2TB SSD for $96; I've bought many of them & have had only good experiences. PLEASE NOTE THIS IS FAR FROM ENTERPRISE GRADE - SO BACKUP EVERYTHING (ANYWAY)!
 
> Thank you for your detailed response, esi_y.
>
> My home network setup is as follows: Host A is a Chinese ChangWang N100 with four 2.5G network interfaces. I have set up both a soft router and a bypass router on it. It has two 1TB NVMe drives configured with ZFS for the PVE system. Host B is my high-performance storage server, based on an X99 server motherboard and equipped with two 10G network cards.

So the networking is not bad at all.

> I use it to run a Synology system for storing my data. However, since the platform of host B is quite old, it only supports one NVMe interface, and all my SATA controllers are passed through to the Synology system. The single NVMe drive cannot be used in a RAID, so I think storing virtual machine disks on it is not very safe. Therefore, I am looking for a way to securely store the virtual machine disk files of host B on host A.

I have never run it this way myself, but since you might be open to playing around with the setup a bit, did you consider e.g. putting ZFS on top of an iSCSI target and setting up a partition of that local 256GB drive as a ZFS cache device (L2ARC)?
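Something along these lines (a rough sketch; the portal IP, IQN, device path and partition are all assumed placeholders, and host A must already export a LUN over iSCSI):

Code:
# on host B: discover and log in to the iSCSI target exported by host A
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2024-06.local.hostA:vmstore -p 192.168.1.10 --login

# create a ZFS pool on the remote LUN (verify the path under /dev/disk/by-path/)
zpool create tank-remote \
  /dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2024-06.local.hostA:vmstore-lun-0

# add a spare partition of the local 256GB NVMe as L2ARC read cache
zpool add tank-remote cache /dev/nvme0n1p4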

I am sure my post will inspire some comments too. :D

EDIT: I also do not see why you run the two nodes in a "cluster". They could be standalone just fine, and there's the qm remote-migrate feature; a sketch is below.
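A sketch of that (the feature is still marked experimental; the VM ID, address, API token, fingerprint and the storage/bridge names are assumed placeholders):

Code:
# migrate VM 100 from one standalone node to another, keeping the same VMID
qm remote-migrate 100 100 \
  'host=192.168.1.10,apitoken=PVEAPIToken=root@pam!mytoken=<secret>,fingerprint=<cert-fingerprint>' \
  --target-bridge vmbr0 --target-storage local-zfs --online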
 
> I have a home network with two computers. Host A has a 1TB hard drive where the Proxmox VE (PVE) system is installed, while host B has only a 256GB hard drive, also running PVE. I want host B to be able to use the hard drive of host A as storage, such as for virtual machine disks.
>
> Apart from using NFS sharing on host A, is there a more elegant solution?

You don't want one machine in a cluster to depend on the other; what happens when you schedule downtime for upgrades?

If B's primary storage on A goes offline, the cluster goes kaput, because you don't have enough resources to run it solo while the other node is down. And you shouldn't run a 2-node cluster anyway; you need at least a QDevice (if not actually a 3rd full node) for quorum. A QDevice sketch follows below.
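Adding one is quick if you have any third box around, e.g. a Raspberry Pi (a sketch; the IP is an assumed placeholder):

Code:
# on the external third box:
apt install corosync-qnetd

# on both PVE nodes:
apt install corosync-qdevice

# on one PVE node: register the QDevice with the cluster
pvecm qdevice setup 192.168.1.20
pvecm status    # should now report 3 expected votes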

> My host B does not have much hard disk capacity, and I want host B to be able to use the storage space of host A for virtual machine disk images

No. Just no. You need to re-think your whole plan here, and start thinking in terms of continuous operation and Disaster Recovery.

Both A and B can use storage on (hypothetical) machine C; redundancy (usually RAID) is preferred here in the interest of continuous uptime. You'll still have something of a single point of failure, but that's an acceptable risk in a homelab as long as you have backups. Your alternative is likely something like Ceph.

The thing to do in reality is upgrade the storage on Machine B so it is capable of running all cluster services while Node A is down, or getting package upgrades and rebooting, or undergoing maintenance. 256GB is just barely enough for the OS and a bit of LVM-thin.

You might get away with a bit more by using ZFS boot/root with compression, but you should still separate OS and data, because you don't want to run low on (or out of) free disk space. It's a hassle to expand. A small sketch is below.
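For instance (a sketch; rpool is the PVE installer's default pool name, while the dataset name and quota are assumed placeholders):

Code:
# check and enable compression on the root pool
zfs get compression rpool
zfs set compression=zstd rpool

# keep guest data on its own dataset with a quota, so a full guest
# store cannot starve the OS of free space
zfs create -o quota=150G rpool/guest-data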
 
> My host B does not have much hard disk capacity, and I want host B to be able to use the storage space of host A for virtual machine disk images

> No. Just no. You need to re-think your whole plan here, and start thinking in terms of continuous operation and Disaster Recovery.

> Both A and B can use storage on (hypothetical) machine C; redundancy (usually RAID) is preferred here in the interest of continuous uptime.

I take it that there's no requirement on the OP's side for any kind of guaranteed uptime; it's about efficiency of using compute and storage resources. I also took the liberty of guessing by now (after the clarification) that the whole setup is a "cluster" out of convenience for "migrations". If the ample-storage machine happens to be able to share out part of it over iSCSI, there is his "shared" storage. It just happens to be on a special kind of Debian.

But yes, there's no clustering benefit. Also I assume backups are out of scope of this post.

> The thing to do in reality is upgrade the storage on Machine B so it is capable of running all cluster services while Node A is down, or getting package upgrades and rebooting, or undergoing maintenance. 256GB is just barely enough for the OS and a bit of LVM-thin.

This depends on the OP's objective(s). He would learn a lot by e.g. setting up that iSCSI and running 2 standalone PVEs.

If I wanted to just use the setup, I would use Incus instead and stick to containers where I can, experiment with that iSCSI-plus-ZFS-cache setup locally, etc. ... but this is the PVE forum.

(Also, if I had to use PVE for any reason, I would install on top of Debian and not use the ZFS on root at all.)
 
