OK, so it would have been clever to just let tcpdump run and wait for the first TCP connect error...
2020-12-10T22:00:00+01:00: Starting datastore sync job 'pbs02:pve-infra-onsite:pve-infra:s-920ce45c-1279'
2020-12-10T22:00:00+01:00: task triggered by schedule 'mon..fri 22:00'...
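If I get to redo this, I would simply leave a filtered capture running on the PBS host for the whole sync window. A rough sketch, where the interface choice, the remote IP and the output path are only example values (PBS traffic goes over TCP port 8007):

# capture only traffic to/from the remote PBS, rotate files so the capture does not fill the disk
tcpdump -ni any -w /root/pbs-sync.pcap -C 100 -W 10 host 192.0.2.10 and tcp port 8007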
The IPsec tunnel is always up and both firewalls are well below their maximum session limits.
I have seen the 10-second wait time between the error and the next action.
Is there something like a retry setting I could use?
I could run a ping/date job during sync job time to rule out network problems.
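A minimal sketch of such a job (remote IP, 10-second interval and log path are just example values):

# log a timestamp and one ping result every 10 seconds during the sync window
while true; do
    if ping -c 1 -W 2 192.0.2.10 >/dev/null 2>&1; then state=ok; else state=FAIL; fi
    echo "$(date -Is) $state" >> /var/log/pbs-sync-ping.log
    sleep 10
done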
Hi,
I'm playing around with PBS v1.0-5, using a sync job against a remote second installation of the same version.
Versions:
proxmox-backup: 1.0-4 (running kernel: 5.4.78-1-pve)
proxmox-backup-server: 1.0.5-1 (running version: 1.0.5)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3...
We are currently migrating from VMware/NetApp with Veeam to Proxmox VE with Proxmox BS.
This is our use case:
- Site 1 has a 4U server with lots of spinning disks and ZFS as PBS1 data sink and PVE cluster 1.
- Site 2 has PVE clusters 2 and 3 with a PBS2 installation on a QEMU VM which holds...
@wolfgang, so what would you recommend if you want ZFS inside the VM in order to take snapshots on a regular basis, e.g. every 15 minutes? Is ZFS inside the VM on top of CephFS OK?
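For context, the schedule I have in mind is simply a cron entry inside the VM, something like this (the dataset name and the auto- prefix are just example values):

# /etc/cron.d/zfs-snap - snapshot the dataset inside the VM every 15 minutes
*/15 * * * * root /sbin/zfs snapshot tank/data@auto-$(date +\%Y\%m\%d-\%H\%M)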
As a private home user I want a backup but am OK with minor bugs <- so no money, no enterprise repository...
For the company I want a working, stable solution with manufacturer support: test systems using a community license, production systems Basic and up...
With a 3-node cluster, invest in dual-port 100 GBit network cards and use DAC cables to make point-to-point connections.
Then use this to set it up:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
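From memory, the simple routed variant from that page boils down to an /etc/network/interfaces fragment like this on every node (interface names and the 10.15.15.x addresses are only example values; nodes 2 and 3 get their own address and routes; check the wiki page for the authoritative version):

# node 1 - direct links to node 2 (.51) and node 3 (.52)
auto ens1
iface ens1 inet static
        address 10.15.15.50/24
        up   ip route add 10.15.15.51/32 dev ens1
        down ip route del 10.15.15.51/32 dev ens1

auto ens2
iface ens2 inet static
        address 10.15.15.50/24
        up   ip route add 10.15.15.52/32 dev ens2
        down ip route del 10.15.15.52/32 dev ens2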
Just started an old Win 2008 R2 Enterprise system to have a look...
I am not using SCSI since I want to use FSTRIM - which only works with SATA on these old Windows versions...
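The only other thing needed on the PVE side is discard on that SATA disk, e.g. (VMID, storage and volume name are example values):

# pass TRIM/discard through to the guest on the SATA disk
qm set 100 --sata0 local-zfs:vm-100-disk-0,discard=on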
Another idea:
- Use ceph-volume first to create the OSDs
- Then out, stop and purge them
- Then recreate them with pveceph using the already existing LVs...
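Very roughly what I have in mind, with example device path and OSD id, and without having verified yet that pveceph actually accepts an existing LV path:

ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1   # create 4 OSDs on the NVMe
ceph osd out 0                                           # take one OSD out again
systemctl stop ceph-osd@0                                # stop its daemon
ceph osd purge 0 --yes-i-really-mean-it                  # remove it from the cluster, the LV stays
pveceph osd create /dev/<vg>/<lv>                        # recreate it on the existing LV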
What I tried to say was:
- Use pvcreate on your NVMe to create an LVM physical volume (and vgcreate to put it into a volume group)
- Use lvcreate on that volume group to create 4 logical volumes
- Use these LVs in pveceph plus your WAL device (rough sketch below)
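A rough sketch of those steps; the device path, VG/LV names, the 25% split and the WAL device are example values, and whether pveceph takes the LV path directly is exactly what I would test first:

pvcreate /dev/nvme0n1                                         # LVM physical volume on the NVMe
vgcreate nvme_osd_vg /dev/nvme0n1                             # volume group on top of it
lvcreate -l 25%VG -n osd1 nvme_osd_vg                         # four (roughly) equal logical volumes
lvcreate -l 25%VG -n osd2 nvme_osd_vg
lvcreate -l 25%VG -n osd3 nvme_osd_vg
lvcreate -l 100%FREE -n osd4 nvme_osd_vg
pveceph osd create /dev/nvme_osd_vg/osd1 --wal_dev /dev/sdb   # repeat per LV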
You could achieve this more easily, see https://forum.proxmox.com/threads/recommended-way-of-creating-multiple-osds-per-nvme-disk.52252/post-242117
When you look at the result, it creates an LVM volume group with 4 logical volumes. You should be able to use those volumes with pveceph.
The ThomasKrenn RA1112 1U pizza box uses an Asus KRPA-U16 motherboard which runs on an AMI BIOS.
The only settings I changed are:
- Pressed F5 for Optimized Defaults
- Disabled CSM support (we only use UEFI)
We wanted to run benchmarks to compare results and to identify problems in the setup. We did...
So I updated the Zabbix templates used for the Proxmox nodes and switched to Grafana to render additional graphs. We now have graphs for single CPU threads and NVMe utilization percentage across all three nodes, with all items in one graph.
This is a benchmark run with 4 OSDs per NVMe.
The order is: 4M...
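For anyone who wants to produce comparable numbers: a plain rados bench sequence looks like this (the tool choice, pool name, 60-second runtime and 4M block size are only example parameters, not necessarily what we ran):

rados bench -p bench_pool 60 write -b 4M --no-cleanup   # 4M writes
rados bench -p bench_pool 60 seq                        # sequential reads of the written objects
rados bench -p bench_pool 60 rand                       # random reads
rados -p bench_pool cleanup                             # remove the benchmark objects afterwards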