First Proxmox replication mode

Benoit

Renowned Member
Jan 17, 2017
Hello all from France! ;)

I have been running Proxmox for a few years now, and I would like to use it in a different way. I have some questions about the Proxmox pve-zsync feature.

I'm running Proxmox 4.4 on two Dell R530 servers, each with 64 GB RAM and dual 10-core Xeons.
Both have two 10 Gb network cards and eight 1 Gb LAN network cards.
Here is the storage config of each (RAID handled by a PERC H730 on each):

[attachment: proxmox.JPG — storage configuration]

I installed Proxmox on the 300 GB RAID1.
On the first 2 TB RAID1 I will have 4 Windows VMs (print server, SolidWorks licensing server, WSUS server, and one more dedicated to school grading software).
On the second 2 TB RAID1 I will have just one VM: a specific DC running an Ubuntu distribution made for the French school environment.
This DC requires two disks: one for LDAP, user data, mandatory profiles, etc., and another physical one for specific user data, which will be the last 1 TB RAID1.

The question is:

I want to zsync all the VMs of Proxmox1 to Proxmox2, to ensure that if Proxmox1 goes down I can launch all my VMs on Proxmox2 (a normal requirement, you'll tell me!).

But how can I create my Ubuntu DC VM with two different disks, based on 2 different RAID arrays, that will be zsynced to the second Proxmox?

Also, I want to use two RJ45 ports dedicated to a direct link between the two nodes, for zsync only.

I'm not sure I'm explaining what I want to do correctly...

Should I create one ZFS pool named "VM-STORAGE" based on sdb and another named "VM-DC" based on sdc and sdd?

I'm a bit lost... o_O :confused:

Thanks for helping !
 
Do you think I should set up my host disks another way, such as putting the four 2 TB disks into RAID5? That would give me 6 TB available, so I could create my "DC-VM" with a larger 3 TB disk: 2 TB for the system and a 1 TB partition dedicated to specific data.

Would the overall performance of all my VMs be degraded?

The "DC-VM" does a lot of disk access on its system disk.

With RAID5 it would be easier for me to zsync my VMs... but I must not lose performance on my DC!
 
Hi,

in general it is not recommended to run ZFS on top of a hardware RAID.

ZFS includes its own RAID system, which you should use if you want to run ZFS.

I don't know this RAID card, but some people write that it is possible to activate non-RAID pass-through and run ZFS on it.

And if you run ZFS on the pass-through disks, you can make one large RAID10 pool and performance will not be a problem.
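As a sketch, once the controller exposes the raw disks, a RAID10-style pool is two mirrored pairs striped together (the device names /dev/sdb through /dev/sde here are assumptions; adjust to what your system actually shows):

```shell
# Create a RAID10-style pool from four pass-through disks:
# two mirror vdevs, which ZFS stripes across automatically.
zpool create VM-STOCKAGE mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# Verify the layout
zpool status VM-STOCKAGE
```

In practice it is safer to reference disks by /dev/disk/by-id/ paths rather than sdX names, since the latter can change between boots.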
 
Thanks for your quick reply.

How can I configure RAID10 any other way than with the RAID controller card?
 
Stupid question, but...

why is it not recommended to use hardware RAID with ZFS?
 
You tell pve-zsync via

--dest string

where the destination target looks like [IP]:<Pool>[/Path]
 
Thanks,

I did not explain what I want correctly.

I have many network cards in my Proxmox hosts.

I want to dedicate a bond of two 1 Gb network cards on each host only for zsync; that bond will be connected directly from host to host, not through a network switch.
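A minimal sketch of such a direct bond in /etc/network/interfaces (the interface names eno3/eno4 and the 10.10.10.x addressing are assumptions; substitute your own):

```shell
# /etc/network/interfaces fragment on the first host (10.10.10.1);
# mirror it on the second host with address 10.10.10.2.
auto bond1
iface bond1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves eno3 eno4
    bond-mode balance-rr
    bond-miimon 100
```

Since the link is host-to-host with no switch, a simple mode such as balance-rr or active-backup avoids any LACP configuration.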
 
Yes.

Server A 10.10.10.1 — VM active
Server B 10.10.10.2 — VM replica

Command on Server A:
--source 100 --dest 10.10.10.2:<pool>/subset

Command on Server B:
--source 10.10.10.1:100 --dest <pool>/subset
 
Here are the complete values.

Proxmox1:
name: svr-07-hve
network bond: 10.10.10.1
pool: VM-STOCKAGE
VM name: svr-08-hve
VM ID: 100


Proxmox2:
name: svr-09-hve
network bond: 10.10.10.2
pool: VM-STOCKAGE
VM name: the same as on Proxmox1
 
On the source:

pve-zsync create --source 10.10.10.1:100 --dest VM-STOCKAGE --verbose --maxsnap 2 --name svr-08-hve

Is that right?
 
Correct, but that command must run on the target.

On the source it would be:
pve-zsync create --source 100 --dest 10.10.10.2:VM-STOCKAGE --verbose --maxsnap 2 --name svr-08-hve
 
OK!

I tried it, but I get an error message:

COMMAND:
ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.10.10.2
GET ERROR:
mktemp: failed to create file via template '~/.ssh/ssh-copy-id_id.XXXXXXXXXXXXXXXXXX': No such file or directory


I guess I forgot to configure SSH first... I will look that up on the web.

Maybe you have the commands at hand?
 
The key already exists on the other node.

You only have to add the IP to the known hosts:
on 10.10.10.1
Code:
ssh 10.10.10.2

on 10.10.10.2
Code:
ssh 10.10.10.1
 
Me again!

I made a mistake while creating my pve-zsync command.

I ran the following command to delete the zsync job:

pve-zsync destroy --source 10.10.10.1:100 --name svr-05-hpe

It works.

When I run pve-zsync list, there is nothing.

But when I look into my VM-STOCKAGE pool I still see the vm-100-disk-1 disk, and I cannot delete it...

How can I do that?

Thanks again for your help!
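As far as I know, pve-zsync destroy only removes the sync job itself, not the already-replicated data. Assuming the leftover is an ordinary ZFS volume under the pool, something like the following should remove it (the dataset name is taken from the post above; double-check with zfs list first, as zfs destroy is irreversible):

```shell
# See exactly what is left in the pool
zfs list -r VM-STOCKAGE

# Remove the replicated disk volume, including any snapshots under it
zfs destroy -r VM-STOCKAGE/vm-100-disk-1
```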
 
@wolfgang

Hello,

I tried the pve-zsync command to send the VM to the destination Proxmox, as you told me.

On the source:
pve-zsync create --source 100 --dest 10.10.10.2:VM-STOCKAGE --verbose --maxsnap 2 --name svr-05-hve

Failed!

It tells me there is no VM 100 to replicate.


So I tried the command on the target:
pve-zsync create --source 10.10.10.1:100 --dest VM-STOCKAGE --verbose --maxsnap 2 --name svr-05-hve

The replication started.


I do not understand why????


Another question...

when I run the command

pve-zsync list

the result shows pve-zsync jobs only on the host where I launched the replication command. Is that normal?
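If I understand pve-zsync correctly, that is expected: the job is stored as a cron entry on the node where you created it, so pve-zsync list only shows that node's local jobs. A quick way to check (the path below is the usual one, but verify it on your version):

```shell
# pve-zsync jobs are scheduled via cron on the node that created them
cat /etc/cron.d/pve-zsync

# List the jobs known to this node
pve-zsync list
```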
 
