Question about PVE + NAS Syno + Veeam

Mar 19, 2025
Hello everyone,

We're installing a new infrastructure. I'm used to KVM/virt-manager (on CentOS).

When I started working here, we decided as a team to migrate from VMware to Proxmox.

Although I've used KVM since around 2011/2012, I've never used Proxmox.

We've got two Synology servers configured in HA mode, as well as various servers.

We would like to use the Synology servers over iSCSI, but Veeam doesn't support iSCSI.

I saw the ZFS over iSCSI option, but it's not compatible with Synology NASes.

I'd like to be able to back up the VMs with Veeam and then migrate them from one PVE node to another.

I see two options:

- NFS, but I'm worried about performance.

- An iSCSI LUN (per VM) connected from the CLI and formatted with ZFS, but I have some doubts about migrating the VMs.

Could I have your opinion?

Thanks in advance.
 
Hi @doys, welcome to the forum.

We would like to use the Synology servers over iSCSI, but Veeam doesn't support iSCSI.
This does not sound correct. What exactly do you mean by "use iSCSI"? iSCSI+LVM, direct iSCSI, something else?

I saw the ZFS over iSCSI option, but it's not compatible with Synology NASes.
This is correct.
- NFS, but I'm worried about performance.
The best way is to try. It may be sufficient for you.
- An iSCSI LUN (per VM) connected from the CLI and formatted with ZFS, but I have some doubts about migrating the VMs.
This sounds more like direct iSCSI, although it is unclear whether the ZFS in this scheme lives inside the VM. If you coordinate everything perfectly it may work, but as this is outside of PVE's control, it will be up to you to maintain.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks Bbgeek17,

I just tested the performance of NFS versus iSCSI + ZFS.

The difference in performance is significant.

I don't have any issue managing things manually, but can VMs migrate automatically (hot failover) from one host to another in the cluster with this method? (I don't have the hardware to test this yet.)
 
I created a LUN on the Synology.


In the CLI:

# Discover the targets exposed by the Synology portal
iscsiadm --mode discoverydb --type sendtargets --portal 172.16.101.100 --discover
# Log in to the discovered target
iscsiadm --mode node --targetname iqn.2000-01.com.synology:SAN.default-target.7067ca2bc93 --portal 172.16.101.100:3260 --login
# Create a ZFS pool directly on the attached LUN
zpool create TEST2 /dev/sdb

In the GUI, at the datacenter level:

Storage -> Add -> ZFS
Select the ZFS pool "TEST2"
Tick the "Thin provision" checkbox
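
For reference, the same storage definition can presumably also be created from the shell; a minimal sketch assuming the TEST2 pool from the steps above (storage ID and content types are illustrative):

# Hypothetical CLI equivalent of the GUI steps: register the existing pool
# as a PVE storage with thin provisioning enabled
pvesm add zfspool TEST2 --pool TEST2 --sparse 1 --content images,rootdir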
 
zpool create TEST2 /dev/sdb

In the GUI, at the datacenter level:

Storage -> Add -> ZFS
Select the ZFS pool "TEST2"
Tick the "Thin provision" checkbox
No, failover will not work.
For all intents and purposes you've created LOCAL storage. ZFS is not a cluster-aware file system, meaning it can't be accessed simultaneously by multiple hosts; you will get data corruption.

Once you connect the iSCSI LUN, make sure you've set it to reconnect on reboot. Then pass it through completely to a VM as a VM-dedicated LUN. Repeat the iSCSI connection steps on the second host.
This is NOT the preferred way of doing things, and I don't know whether it will align with your Veeam goals, but it will work for VM transfer.
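
Roughly, the reconnect-on-reboot and pass-through steps could look like this; the IQN and portal are taken from your earlier post, while the VM ID and by-id name are placeholders you'd need to adjust:

# Tell open-iscsi to log back in to the target automatically after a reboot
iscsiadm --mode node --targetname iqn.2000-01.com.synology:SAN.default-target.7067ca2bc93 --portal 172.16.101.100:3260 --op update --name node.startup --value automatic

# Pass the whole LUN through to a VM (VM ID 100 and the by-id name are examples);
# prefer the stable /dev/disk/by-id path over /dev/sdb, which can change between reboots
qm set 100 --scsi1 /dev/disk/by-id/scsi-SYNOLOGY_iSCSI_Storage_example

Check ls -l /dev/disk/by-id/ for the actual device name on each host.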

Frankly, you are veering into unsupported territory to satisfy a 3rd party app. There are better ways to handle it.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I'm not necessarily looking to use this method.

My constraints are:
1- 2 Synology servers in HA
2- Hot failover of VMs
3- Backup with Veeam

Ideally with the least possible performance loss.

iSCSI is better than NFS in terms of IO. I have some SQL servers and some badly written applications.

I like LVM, which I'm quite familiar with, but Proxmox doesn't support snapshots on (thick) LVM.

To make Veeam work, snapshots must work.

I liked my KVM solution (without a dedicated distribution) + CLI + SAN under Linux + scripts, but my new employer wants support.

If you have another solution, I'm interested :-)
 
but it doesn't matter, my question is what are the possible solutions
I respectfully disagree. It does matter.

The storage where the worker's system files are located is not, or does not need to be, the same storage as the one you are backing up.
For example, you can place your workers on NFS. The workers are spun up per PVE node and do not fail over.
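
As an illustration, registering an NFS export for the worker might look like this; a sketch only, with a made-up storage ID, server address and export path:

# Illustrative only: register an NFS export where the worker disks can live
pvesm add nfs syno-nfs --server 172.16.101.100 --export /volume1/pve --content images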

You have a few options:
a) Use PBS
b) Reach out to your backup vendor's support and ask them to assist with the setup.
c) Understand the various requirements and try your configuration in a lab
d) Ask for suggestions from your storage vendor
e) Find a different storage vendor who supports PVE+Veeam+Storage combination
f) Use your current storage, place the worker on NFS, use thick LVM for the shared storage pool, and back up your data with Veeam (a rough sketch of this follows below)
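
A rough sketch of option f on the PVE side, assuming the Synology target quoted earlier in the thread; the storage IDs, VG name and device path are illustrative and will differ in your setup:

# Register the Synology iSCSI target as a PVE storage (used only as a base for LVM)
pvesm add iscsi syno-iscsi --portal 172.16.101.100 --target iqn.2000-01.com.synology:SAN.default-target.7067ca2bc93 --content none

# Create the volume group on the LUN once, from a single node
# (the device name will likely differ; check lsblk after logging in)
pvcreate /dev/sdb
vgcreate vg_vmdata /dev/sdb

# Expose the thick-LVM volume group to the whole cluster as shared storage
pvesm add lvm vm-lvm --vgname vg_vmdata --shared 1 --content images,rootdir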

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Indeed, my English isn't very good, and I understood the term "worker" to mean something else.

As for your suggestions:
PBS doesn't support S3.

We'll use LVM.

Thank you.
 
As for your suggestions:
PBS doesn't support S3.
When a requirement isn’t stated upfront, it can’t be factored in when providing options.

We'll use LVM.

Thank you.
You are welcome. I am happy you were able to settle on an acceptable solution for your infrastructure.

P.S. this may help with setting up your iSCSI/LVM combination: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 