So I'm trying to achieve the following schedules:
Job 1: Monday through Friday - once per hour - First run at 06:30 - last run at 19:30
Job 2: Monday through Saturday - once per hour - First run at 06:40 - last run at 19:40
Job 3: Wednesday to Sunday - once per hour - First run at 06:50, last...
Ideally I would like to get the following setup done for most production VMs:
Monday to Friday: "mon..fri"
between 06:30 and 19:30: ?
Once per hour: */1:00
Starting time: does this have to be set via the 06:30 parameter or the 19:30 parameter? So if I want to have Job 1 run at 06:30 and Job 2 run at...
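If I'm reading the PVE calendar-event syntax right (it follows the systemd calendar-event format), both the first and the last run are encoded in the hour range, and the fixed minute gives the offset within each hour. An untested sketch of the three jobs:

```
# Job 1: Mon-Fri, hourly at minute 30, from 06:30 through 19:30
mon..fri 6..19:30

# Job 2: Mon-Sat, hourly at minute 40, from 06:40 through 19:40
mon..sat 6..19:40

# Job 3: Wed-Sun, hourly at minute 50, from 06:50 through 19:50
wed..sun 6..19:50
```

i.e. no separate "once per hour" field should be needed - listing every hour in the range with one minute value already produces one run per hour.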
Since the wiki article on building Windows guest ISOs with built-in VirtIO drivers seems to be a bit out of date, here is a guide on how to build Win2K19 and Win2K22 Server ISOs.
Disclaimer: This is based on the following guidance...
Your bottleneck should be the single 10G line (1.25 GByte per second) over which your SAN serves the VMs/host.
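As a trivial sanity check on that number (line speeds are specified in decimal bits per second):

```shell
# 10 Gbit/s line rate divided by 8 bits per byte = 1.25 GByte/s
awk 'BEGIN { printf "%.2f GByte/s\n", 10 / 8 }'
```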
For reference's sake:
We use a PBS installed side by side on a Proxmox host (for quorum; it hosts only lightweight test VMs)
CPU: AMD EPYC 7313 (16 cores at 3.0 GHz)
Proxmox <-> PBS...
Out of curiosity, what kind of storage / CPU resources do you have dedicated to your PBS?
Ours runs on 2x Intel M.2 SSDs in a ZFS RAID 1 and 8x 14 TB WD disks in a ZFS RAIDZ1, and backs up about 20 TB of VM data. Prune and GC do not take longer than 4 hours (that is the window we have allotted for prune, GC...
I am in no way an expert on the subject matter.
All I know is this:
1. When running iperf you can find yourself CPU-bottlenecked unless you use multiple instances (see https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/).
2. For us using...
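To make point 1 concrete: the fasterdata page linked above suggests either multiple parallel streams or multiple iperf3 processes pinned to separate cores, since a single iperf3 sender runs on one thread. A rough sketch - addresses and core numbers are placeholders:

```
# single process, 4 parallel TCP streams (still one sender thread)
iperf3 -c <receiver-ip> -P 4

# three independent iperf3 processes on separate ports, pinned to
# separate cores (run matching servers first: iperf3 -s -p 5201 etc.)
iperf3 -c <receiver-ip> -p 5201 -A 0 &
iperf3 -c <receiver-ip> -p 5202 -A 1 &
iperf3 -c <receiver-ip> -p 5203 -A 2 &
wait
```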
Is it possible to use PBS to back up the configuration files of multiple Proxmox hypervisors?
I was unable to find the relevant part in the hypervisor's GUI,
e.g. Datacenter > Backup only allows me to add VMs/CTs to the backup schedule.
Is this an unsupported feature, or does one need to...
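As far as I know the GUI schedule indeed only covers guests, but proxmox-backup-client can push arbitrary host paths to a datastore as a file-level (pxar) archive, so the node configuration can be backed up from a cron job on each node. A sketch - the repository and archive names are placeholders, not anything official:

```
# run on each PVE node; root@pam@<pbs-host>:<datastore> is a placeholder
proxmox-backup-client backup pve-etc.pxar:/etc/pve \
    --repository root@pam@<pbs-host>:<datastore>
```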
Have you benchmarked the link speeds from host to host?
e.g. using this method? https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/
Did you use "Linux" bonds or Open vSwitch-based bonds?
I ran into a similar issue a while back...
That leaves the following question: as far as performance is concerned, would you rather
Option 1: use 4 disks per ZFS RAID 10 pool (1x 8K and 1x 64K pool)
Option 2: use nvme-cli to create separate namespaces for the 16K pool and 64K pool (and use them for separate ZFS RAID 10s that each have 8...
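For Option 2, the namespace split would look roughly like this with nvme-cli. All sizes and IDs are placeholders; check `nvme id-ctrl` / `nvme id-ns` for what your drives actually support, and note that the pre-existing namespace usually has to be deleted first to free the capacity:

```
# free the capacity held by the default namespace
nvme delete-ns /dev/nvme0 --namespace-id=1

# create and attach two namespaces (sizes in LBAs are placeholders)
nvme create-ns /dev/nvme0 --nsze=<lba-count> --ncap=<lba-count> --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=<ctrl-id>
nvme create-ns /dev/nvme0 --nsze=<lba-count> --ncap=<lba-count> --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=<ctrl-id>

nvme reset /dev/nvme0
```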
What is the recommended block size for ZFS volumes backing Windows Server 2019/2022 guests (KVM)?
The default NTFS cluster size is 4K; the guidance for Exchange and SQL Server is 64K. I don't know what the guidance for file servers is. These are 2019 and maybe 2022 servers.
I have 8 NVMe drives running...
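For context on which knob this is: for zvol-backed KVM disks the relevant property is volblocksize, which is fixed at creation time (the Proxmox storage "Block Size" setting maps to it). A sketch - pool and volume names are made up:

```
# zvols: volblocksize can only be set at creation time
zfs create -V 100G -o volblocksize=64k tank/vm-101-disk-1

# Proxmox: set per storage in /etc/pve/storage.cfg, e.g.
#   zfspool: tank-64k
#       pool tank
#       blocksize 64k
```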
The issue has been fixed.
The following steps were taken:
Set up the 100 Gbit/s mesh network - using Open vSwitch and the RSTP loop setup - according to this guide: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#RSTP_Loop_Setup
Tested using iperf on 3 separate CPU threads and ports...
So, I ran the following tuning:
It resulted in the following speeds:
Combined speed: ~55 Gbit/s of unidirectional transfer
Roughly a 30 Gbit/s increase, but a far cry from AMD's own test results with a X-5.
Also at 55 Gbit/s unidirectionally - but with a bunch of retries.
If I am reading...
All 3 nodes show
Speed: 100000Mb/s
Shows 40 Gbit/s.
Is the readout of lshw wrong?
Any iperf3 I run comes down to a summed-up bandwidth of 21 Gbit/s or less.
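One way to cross-check what the link actually negotiated, rather than relying on lshw (interface and PCI IDs below are placeholders):

```
# negotiated link speed/duplex as the driver sees it
ethtool <iface> | grep -E 'Speed|Duplex'

# driver and firmware versions
ethtool -i <iface>

# negotiated PCIe link width/speed - a x16 card trained at
# fewer lanes or a lower PCIe generation also caps throughput
lspci -vv -s <pci-id> | grep LnkSta
```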
Out of curiosity, and based on this thread with a X-6...
Hi there;
I have 3 Proxmox nodes connected via Mellanox 100GbE ConnectX-5 EN QSFP28 cards in cross-connect mode using 3 meter 100G DAC-Cables.
Card is a
MCX516A-CCAT
The card is recognized at 40 Gbit/s.
Any ideas how to change the speed to 100 Gbit/s?
Am I missing a driver? First time...
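Two things I would check first - hedged, since I can't test against this exact card/DAC combination: what speeds both ends actually advertise, and whether forcing the speed via ethtool changes anything (the interface name is a placeholder):

```
# advertised and link-partner-advertised link modes
ethtool <iface>

# try forcing 100G; revert if the link drops
ethtool -s <iface> speed 100000 duplex full autoneg off
```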