Best practices for PBS

subjectx

Member
Nov 4, 2020
Greetings,

A quick background on my small infrastructure:
Proxmox server with multiple running VMs and LXCs, and a free SFP+ port. All incoming traffic from the public IP is routed via iptables: either to a custom SSH port for direct server access, to port 8006, or (the rest) towards the VM running pfSense.

Another physical server is meant for running PBS. It has 2x 2TB NVMe drives and 8x 10TB HDDs.

I would like to install Proxmox on this fresh server and virtualize PBS on it (1. should I go with a VM or an LXC?), but I'm concerned that I will run into connectivity problems because of the iptables routing setup; I've never done this before.
Let's say the servers get connected to each other via the free SFP+ ports. I set the IP of the main server's SFP+ NIC to 192.168.5.2 and the IP of the backup server's SFP+ NIC to 192.168.5.3.
2. Which IP do I now give the PBS VM? 3. And where do I configure it?

4. Can I connect from the main Proxmox server, which runs as host on IP 10.0.0.2, to this PBS VM on IP 192.168.5.4 running on the host at 192.168.5.3?

5. If I have 2x 2TB NVMe drives meant for the OS, how can I organize them so that I can also use a ZFS special device? Maybe the question is more like: can I partition the NVMe drives in half (before installing Proxmox or PBS), use the first two 1TB partitions in RAID1 for the OS (either Proxmox or PBS), and the second two 1TB partitions as a ZFS special device?

6. Does PBS need RAID10 for its IOPS needs, or can I go with RAID6/RAIDZ2?

I know this is a lot of questions, so if needed, we can arrange a paid https://www.buymeacoffee.com/ session via some videoconference call or something.
 
Great ambition ;) Perhaps there are as many best practices as there are PBS users. I have no idea of the best practices, but I do have a lot of opinions. Maybe your questions can start a best-practice thread, who knows.
- First, and I really, really mean first: try it, try it again, play with it. Both PBS and Proxmox are so easy to install and try that you will most likely change your ideas along the way. Keep playing with it while building your strategy.
- If PBS is virtualized, start with a VM to have full control. Once you know what you like, you can think about a container; I would use a VM anyway. You probably have your VMs on the vmbr bridge, so give it a new IP on the same subnet as the physical PBS host to begin with.
- Your question no. 4: you can connect anything to anywhere. The physical host can have several subnet IPs on one NIC, the VM can have plenty of NICs, etc. Just do as you like; yes, you can connect exactly as in your question.
- Your no. 5: nooo, don't put anything else on your special-device SSDs. As far as I know they cannot be removed from ZFS once added, and it is no good idea to have the OS depending on the same SSDs as the special devices. I am not sure if it can even be done, but in my world it's an awful idea asking for trouble. I understand the budget, but please no.
- No. 6: PBS loves IOPS, but your special devices will cover most of the IOPS, so go with whatever RAID/mirror layout you like. Since mirrored vdevs can now be removed, I absolutely love mirrored vdevs, so if you have 10 disks, 5 mirrors would be great (roughly like the sketch below), but that's only me; others may dislike mirrors. I like ZFS a lot, so being able to keep the pool while adding/removing vdevs is an absolutely fantastic feeling.
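
Just to illustrate nos. 5 and 6: a rough sketch of what a pool of mirrored HDD vdevs plus a dedicated mirrored special device could look like. The pool name "backup" and the disk names are placeholders, not your actual hardware, and the special_small_blocks value is just an example:

Code:
# data pool: mirrored HDD vdevs (add more mirror pairs the same way)
zpool create backup mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# dedicated SSDs as a mirrored special device for metadata
zpool add backup special mirror /dev/nvme2n1 /dev/nvme3n1

# optional: also store small blocks on the special device
zfs set special_small_blocks=4K backup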

Why not run both Proxmox and PBS (not as a VM) on the standalone physical server? In that case you can even restore to that Proxmox host if your other host is down. You can of course still have a PBS VM as well.
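
If you want to try that, installing PBS on top of an existing PVE host is basically adding the PBS package repository and installing the package. A rough sketch, using the no-subscription repository as an example (adjust the Debian codename to your release):

Code:
# add the Proxmox Backup Server no-subscription repository
echo "deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription" > /etc/apt/sources.list.d/pbs.list

apt update
apt install proxmox-backup-server

# the PBS web UI is then reachable on https://<host>:8007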

my 3 cents, don't have more today!
 
Thank you for your reply.

If I can be a bit more specific, since I've tried a lot already and simply cannot get a ping through, this is my current networking situation:

main server:
- integrated firewall completely off
- network tab:
[attachment: Screenshot_1.png]
- iptables routes everything incoming, apart from a few ports, to the pfSense VM
- cannot ping 192.168.3.x nor 192.168.4.1
- physical connection via SFP+ port

backup server:
- integrated firewall completely off
- network tab:
[attachment: Screenshot_2.png]
- no iptables routing yet
- internet access on the host, no internet access on the guest (PBS VM) yet
- the PBS VM is attached to vmbr1 and has IP 192.168.3.2
- physical connection via SFP+ port


Main question: how do I configure this so that the main server has access to the PBS VM?
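
To make it concrete, this is roughly what I imagine has to happen, but I'm not sure it's right. The interface name enp67s0f0, the assumption that 192.168.4.1 is the backup host's SFP+ address, and 192.168.3.1 as vmbr1's address on the backup host are all guesses; the real config is in the screenshots:

Code:
# main server: reach the PBS VM's subnet over the SFP+ link
ip route add 192.168.3.0/24 via 192.168.4.1 dev enp67s0f0

# backup server: forward between the SFP+ NIC and vmbr1
sysctl -w net.ipv4.ip_forward=1

# PBS VM: route replies back towards the main server via the bridge
ip route add 192.168.4.0/24 via 192.168.3.1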
 
I am sorry, but I am no good with iptables and pfSense. I thought iptables was mostly for filtering/NAT and that pfSense was good at routing besides the firewall stuff. Did you mean your routing is done with iptables?
 
Well, iptables is there to do the first routing: traffic on specific ports stays on the host (PVE; custom SSH port, 8006, 123, 53), and the rest (TCP and UDP) is routed to pfSense (well, onto vmbr1), where I'll do NAT or port forwarding or whatever.

For example, I also control ICMP packets with iptables, turning ping off when I don't need it.
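
Very roughly, the idea is something like this (heavily simplified, not my actual rules; eth0 as the WAN NIC, $SSH_PORT and $PFSENSE_IP are placeholders):

Code:
# keep management traffic on the PVE host itself (skip the DNAT below)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $SSH_PORT -j ACCEPT
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8006 -j ACCEPT
# ports 123 and 53 are kept on the host the same way

# everything else (TCP and UDP) goes to the pfSense VM on vmbr1
iptables -t nat -A PREROUTING -i eth0 -p tcp -j DNAT --to-destination $PFSENSE_IP
iptables -t nat -A PREROUTING -i eth0 -p udp -j DNAT --to-destination $PFSENSE_IP

# toggling ping: drop incoming echo requests when I don't want to answer
iptables -A INPUT -i eth0 -p icmp --icmp-type echo-request -j DROP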

My iptables config is quite long; I can post it here if needed.

The question still remains how to configure the hosts (networking) so that VMs on each host can communicate with each other and so that the main host can communicate with the PBS VM on the backup host.

To get internet access on the backup host I had to set up the network organization as shown in the OP, but that means the VMs don't have internet access.
For internet access on the VMs I'll need to do masquerading (only one public IP available!) or somehow reroute all the internet traffic for the VMs through the main host via the physical 1:1 SFP+ connection (enp67s0f* NICs). Both things I imagine will involve iptables?
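
For the masquerading option, I imagine it would be roughly something like this on the backup host (sketch only; eno1 as the uplink NIC is a placeholder, 192.168.3.0/24 is the vmbr1 subnet the PBS VM sits on):

Code:
# let the host forward packets for the VMs
sysctl -w net.ipv4.ip_forward=1

# NAT everything coming from the VM bridge out of the host's uplink
iptables -t nat -A POSTROUTING -s 192.168.3.0/24 -o eno1 -j MASQUERADE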
 
5. If I have 2x 2TB NVMe drives meant for the OS, how can I organize them so that I can also use a ZFS special device? Maybe the question is more like: can I partition the NVMe drives in half (before installing Proxmox or PBS), use the first two 1TB partitions in RAID1 for the OS (either Proxmox or PBS), and the second two 1TB partitions as a ZFS special device?

Hi,

If you use NVMe, you simply do not need special devices for normal usage. A ZFS special device makes sense in two situations:
- if you have rotational HDDs
- if you have a very high load on your NVMe storage and want to get better speed/IOPS (offloading the NVMe) - this is a most improbable case for you

Also note that it is recommended to have separate storage for your OS and for your data (this is general advice, not only for PBS)!
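
In your case that could mean: OS on the mirrored NVMe drives (as chosen in the installer), and the 8 HDDs as a separate pool used only for backup data. A minimal sketch, reusing the example pool name "backup" from the mirror sketch above (names and paths are just examples):

Code:
# dataset on the HDD pool for backup data
zfs create backup/pbs

# register it as a PBS datastore
proxmox-backup-manager datastore create hdd-store /backup/pbs

# on the PVE side it can then be added as storage
# (it will also need --username, --password and --fingerprint, omitted here)
pvesm add pbs pbs-backup --server 192.168.5.4 --datastore hdd-store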

Good luck / Bafta !
 
