I would say that using ZFS sync for this kind of setup is fairly OK!
1. I would suggest using SSDs on PVE Prod 1 if you plan to run systems that are resource-hungry and need fast IO;
2. I would leave it as PBS, because it is better to have a real copy of the VM that you can recover in case of...
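For reference, the ZFS sync part can be sketched roughly like this. The dataset name (tank/vmdata), snapshot naming, and target host (pve-prod-2) are placeholders I made up, not from your setup, and the commands are guarded so the sketch is a no-op on a machine without ZFS:

```shell
#!/bin/sh
# Minimal ZFS replication sketch: snapshot locally, send to the second node.
# "tank/vmdata" and "pve-prod-2" are assumptions -- substitute your own.
SNAP="tank/vmdata@sync-$(date +%Y%m%d%H%M)"
if command -v zfs >/dev/null 2>&1; then
    zfs snapshot "$SNAP"
    # First run: full send. Subsequent runs: incremental send, e.g.
    #   zfs send -i <previous-snapshot> "$SNAP" | ...
    zfs send "$SNAP" | ssh pve-prod-2 zfs recv -F tank/vmdata
fi
```

In practice Proxmox's built-in storage replication (pvesr) does this same snapshot-and-send job on a schedule, so hand-rolling it is usually only worth it for special cases.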
The same problem here! I have 2.5 TB on CephFS with a lot of small files that have to be backed up to PBS each day. ~50 GB of extra data is generated each week. The backup task using proxmox-backup-client takes ~3 days. The CephFS sits on 16 SAS 1 TB HDDs.
Any ideas on how to speed up this process? Servers are with...
Hi!
First of all, download the newest driver from Intel. For example, for the X710: https://downloadcenter.intel.com/download/24411/Intel-Network-Adapter-Driver-for-PCIe-40-Gigabit-Ethernet-Network-Connections-under-Linux-
Unpack it and install kernel headers, development tools, gcc, etc.
cd i40e-<x.x.x>/src/...
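The remaining steps look roughly like this; the version number below is a placeholder (use the release you actually downloaded), and the build is guarded so it only runs when the tarball is present:

```shell
#!/bin/sh
# Typical out-of-tree build of the Intel i40e driver.
# VER is a placeholder -- substitute the release you downloaded.
VER="2.14.13"
if [ -f "i40e-${VER}.tar.gz" ]; then      # guard: only build if the tarball is here
    tar xzf "i40e-${VER}.tar.gz"
    cd "i40e-${VER}/src"
    make install                          # needs kernel headers and build tools
    modprobe -r i40e && modprobe i40e     # reload the module (or simply reboot)
fi
```

On a production node a reboot is the safer way to pick up the new module, since unloading i40e drops all its interfaces at once.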
No, Dell servers are used in our case.
Still, first check the X710 FW and driver version using ethtool -i $interface_name. I suggest having FW version 8.30 and a driver version of at least 2.14.13.
I had FW version 6.01 and driver version 2.8.40 at the beginning!
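A quick way to run that check across all interfaces at once, with a small helper for the version comparison (the 8.30 / 2.14.13 thresholds are the ones from this post; everything else is just sketch code):

```shell
#!/bin/sh
# Report driver and firmware version for every interface, and compare
# version strings. Relies on GNU "sort -V" for version ordering.
version_ge() {
    # True when $1 >= $2 in version ordering.
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

for ifc in /sys/class/net/*; do
    ethtool -i "$(basename "$ifc")" 2>/dev/null \
        | grep -E '^(driver|version|firmware-version):'
done

version_ge "2.14.13" "2.8.40" && echo "driver version OK"
```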
Actually, nobody checked whether all IFs were up at the beginning. They are bonded, so at least one IF was up at all times. Only later did I find out on the switch that a random IF was down after a reboot.
One more thing I did: upgraded the NIC FW to the latest Intel-provided FW.
Problem solved!
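Since a bond hides a dead member, a small check like this after each reboot would have caught it earlier. This is only a sketch built on the standard /proc/net/bonding output format:

```shell
#!/bin/sh
# After a reboot, list bond members whose link is down, so a dead
# interface hidden behind a bond is noticed immediately.
down_slaves() {
    # Parses /proc/net/bonding/<bond> text on stdin; prints slave
    # interfaces whose "MII Status" is down.
    awk '/^Slave Interface:/ { slave = $3 }
         /^MII Status: down/ && slave != "" { print slave; slave = "" }'
}

for b in /proc/net/bonding/*; do
    if [ -r "$b" ]; then
        echo "== $b =="
        down_slaves < "$b"
    fi
done
```

Cross-checking the result against "ip -br link" (or the switch side, as here) confirms whether the OS and the switch agree on link state.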
1. We have X520-DA2 and X710-DA2 Ethernet controllers in our servers;
2. pve-kernel-5.4 uses ixgbe v5.1.0-k for the X520-DA2 and i40e v2.8.20-k for the X710-DA2;
3. The latest ixgbe driver version available on the Intel website is 5.11.3;
4. The latest i40e driver version available on the Intel website is...
And it works as expected after a server reboot?
I still can't find the reason why one or more interfaces are randomly not detected correctly during bootup!
https://forum.proxmox.com/threads/no-network-after-proxmox-kernel-upgrade.86216/
Here you go! Besides, I'm back on kernel version 5.4.78-2-pve.
What I tried:
Disabled all vmbrs and VLANs, so only physical interfaces and bonds are enabled. Still the same with the new kernel, while with the old one all interfaces come up. The N3K debug log has no error messages or anything like that!
1...
Hi!
I have a 6-server cluster:
3 servers are hybrid nodes with a lot of OSDs, and the other 3 are VM processing nodes.
Everything is backed by 2x2-port 10G NICs in the hybrid nodes, a 1x2-port 10G NIC in the processing nodes, and two stacked N3K switches.
Ceph does the thing for VM storage and...
Hi, symmcom!
Thanks for sharing your experience. We are using eve4pve-barc for Ceph backup purposes!
Can you share your experience with backy2? Does it install on the host OS?
Hi!
I have a pretty similar case.
At the moment I have a 4-server Proxmox cluster hosted in DC-A. As shared storage I'm using Ceph.
And then there is DC-B, connected via fiber optics.
So my idea is to join another 4 servers (similar to those in DC-A), located in DC-B, to the existing cluster, expand...