Help Understand Replication for an SMB

Sparda88

New Member
Feb 5, 2024
Hello,

I am relatively new to Proxmox, so bear with me as I am still learning the right terms, but so far Proxmox is amazing.
I just managed to get 3 PCs into a cluster. I have an extra drive in each of them set up as a ZFS pool, so I can move containers between them without an issue. I have a TurnKey LXC running a Samba server with a mount point for the share data. When I try to replicate it, I get an error that mount points cannot be replicated. I did go to the container and chose "skip replication", but that only copies the LXC itself and not the data in the mount point.
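For reference, this is roughly how the mount point shows up from the command line (container ID 101 and the storage name local-zfs are just placeholders for my setup; the replicate flag is what the "skip replication" checkbox toggles, as far as I can tell):

    # show the container config, including mount points
    pct config 101
    # a ZFS-backed mount point that takes part in replication looks roughly like:
    #   mp0: local-zfs:subvol-101-disk-1,mp=/srv/share,replicate=1
    # with "skip replication" ticked it becomes replicate=0 and the data is left out
    pct set 101 -mp0 local-zfs:subvol-101-disk-1,mp=/srv/share,replicate=1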
Is there a way to replicate the mount point data between the cluster nodes in the interface? The best I can come up with is a cron job or something to just copy the data.
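Something like this is what I had in mind, just a nightly copy (the share path /srv/share and the node name pve2 are placeholders, and I know this is not real replication):

    # crontab entry: copy the share to the second node every night at 02:00
    0 2 * * * rsync -a --delete /srv/share/ root@pve2:/srv/share/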


Any suggestions on the best way to back up/replicate the data to the other drives? Any advice is appreciated.
 

Okay, I did some research and the responses are all over the place. I saw rsync, but that is not great for something that is running. Would it be better to put the files in the container directly instead of mapping the mount point, or to use zfs send/receive? I cannot figure out exactly how to use that.
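From what I have gathered so far, the basic pattern would be something like this, with the pool name tank and the dataset name being guesses for my setup:

    # snapshot the dataset that backs the mount point
    zfs snapshot tank/subvol-101-disk-1@copy1
    # first run: full send to the other node
    zfs send tank/subvol-101-disk-1@copy1 | ssh root@pve2 zfs receive tank/subvol-101-disk-1
    # later runs: incremental send between two snapshots
    zfs snapshot tank/subvol-101-disk-1@copy2
    zfs send -i @copy1 tank/subvol-101-disk-1@copy2 | ssh root@pve2 zfs receive tank/subvol-101-disk-1

Is that roughly how it is supposed to work?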
I am really not sure how to set up a file share without too much manual work. Is there a way? I also see OpenMediaVault as a VM mentioned, but people seem very torn on that.
The drives I got for free are 8 TB, so even 1 TB for a container is overkill for now; I am at 130 GB. I just have some pictures that are dear to me and that I really do not want to lose.
 
For me, a cluster only makes sense if you have (dedicated or distributed) shared storage. Replication is simply not at the same level as real shared storage. I love ZFS, yet I would not want to set up a two-way replicating system with it in a cluster the way you describe it. It can be done, but you have to do everything by hand, which is not only error prone, you will also have a hard time monitoring it.

If you have the hardware for it, or if your performance requirements are not that high, look into Ceph, which needs three nodes and dedicated disks (at least one per node, better two) and replicates everything in such a way that you can live migrate or even fail over with no more than crash-consistent data loss.
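On the Proxmox side the setup is not much more than a handful of commands, roughly like this depending on your PVE version (the network and disk names are just examples for your own values):

    # on all nodes: install the ceph packages
    pveceph install
    # on one node: write the initial ceph config (10.10.10.0/24 is an example cluster network)
    pveceph init --network 10.10.10.0/24
    # on each node: create a monitor and turn the dedicated disk into an OSD
    pveceph mon create
    pveceph osd create /dev/sdb
    # once: create a replicated pool and add it as a storage for guests
    pveceph pool create vmpool --add_storages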