Search results

  1. [SOLVED] Proxmox to PBS via Schedule (Simulator) - Hourly Backups starting at a specific time?

    So I'm trying to achieve the following schedules: Job 1: Monday through Friday, once per hour, first run at 06:30, last run at 19:30. Job 2: Monday through Saturday, once per hour, first run at 06:40, last run at 19:40. Job 3: Wednesday to Sunday, once per hour, first run at 06:50, last...
  2. [SOLVED] Proxmox to PBS via Schedule (Simulator) - Hourly Backups starting at a specific time?

    Ideally I would like to get the following setup done for most production VMs: Monday to Friday: "mon..fri"; between 06:30 and 19:30: ?; once per hour: */1:00. Starting time: does this have to be set by the 06:30 parameter or the 19:30 parameter? So if I want to have Job 1 run at 06:30 and Job 2 run at...
  3. [TUTORIAL] Build Windows Server ISO with Built-in Virtio Drivers

    Changelog: 2022/05/23 - fixed an error with Windows Server 2019 ISO label generation.
  4. [TUTORIAL] Build Windows Server ISO with Built-in Virtio Drivers

    Since the wiki article on building Windows guest ISOs with built-in Virtio drivers seems to be a bit out of date, here is a guide on how to build Win2k19 and Win2k22 Server ISOs. Disclaimer: This is based on the following guidance...
  5. What-if I delete index folders?

    Your bottleneck should be the single 10G line (1.25 GByte per second) over which your SAN serves the VM/host. For reference's sake: we use a PBS installed side by side on a Proxmox host (for quorum; it hosts only lightweight test VMs). CPU: AMD EPYC 7313 (16 cores at 3.0 GHz). Proxmox <-> PBS...
  6. What-if I delete index folders?

    Out of curiosity, what kind of storage / CPU cycles do you have dedicated to your PBS? Ours runs on 2x Intel M.2s in a ZFS RAID1 and 8x 14 TB WD in a ZFS RAIDZ1 and backs up about 20 TB of VM data. Prune and GC do not take longer than 4 hours (that is the window we have allotted for prune, GC...
  7. 3 Bridges for Management, Ceph and SDN/VMs over 1 bond with 2x physical 100 GbE NICs

    I am in no way an expert on the subject matter. All I know is this: 1. When running iperf you can find yourself CPU-bottlenecked unless you use multiple instances (see https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/). 2. For us, using... [a multi-instance iperf3 sketch follows after these results]
  8. Use PBS to Backup Proxmox Hypervisor Configuration

    Is it possible to use PBS to back up the configuration files of multiple Proxmox hypervisors? I was unable to find that part in the GUI of the hypervisor; e.g. Datacenter > Backup only allows me to add VMs/CTs to the backup schedule. Is this an unsupported feature, or does one need to...
  9. 3 Bridges for Management, Ceph and SDN/VMs over 1 bond with 2x physical 100 GbE NICs

    Have you benchmarked the link speeds from host to host, e.g. using this method? https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/ Did you use "linux" bonds or did you use openvswitch-based bonds? I ran into a similar issue a while back...
  10. 3 Bridges for Management, Ceph and SDN/VMs over 1 bond with 2x physical 100 GbE NICs

    Did you benchmark the local reads/writes inside the VM? I am assuming the read/write figures you gave are transfer speeds via the 100 Gig Mellanox.
  11. ZFS Block Size recommendation for IO optimisation?

    That leaves the following question as far as performance is concerned: would you rather Option 1: use 4 disks per ZFS RAID10 pool (1x 8k and 1x 64k pool), or Option 2: use nvme-cli to create separate namespaces for the 16K pool and 64K pool (and use them for separate ZFS RAID10s that each have 8...
  12. ZFS Block Size recommendation for IO optimisation?

    What is the recommended "Block Size" for ZFS with Windows Server 2019/2022 guests (KVM)? The standard NTFS block size is 4K; guidance for Exchange and SQL Server is 64K. I don't know what the guidance for file servers is. These are 2019 and maybe 2022 servers. I have 8 NVMes running... [a volblocksize sketch follows after these results]
  13. [SOLVED] Mellanox ConnectX-5 EN - 100G running at 40G

    The issue has been fixed. The following steps were taken: set up the 100 Gbit/s mesh network - using Open vSwitch and the RSTP loop setup - according to this guide: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#RSTP_Loop_Setup; test using iperf on 3 separate CPU threads and ports...
  14. [SOLVED] Mellanox ConnectX-5 EN - 100G running at 40G

    So, I ran the following tuning, which resulted in the following speeds: combined speeds of ~55 Gbit/s of unidirectional transfer - roughly a 30 Gbit/s increase, but a far cry from AMD's own test results with an X-5. Also at 55 Gbit/s unidirectionally, but with a bunch of retries. If I am reading...
  15. [SOLVED] Mellanox ConnectX-5 EN - 100G running at 40G

    All 3 nodes show Speed: 100000Mb/s, while lshw shows 40 Gbit/s. Is the readout of lshw wrong? Any iperf3 I run comes down to a summed-up bandwidth of 21 Gbit/s or less. Out of curiosity and based on this thread with an X-6... [a link-speed check sketch follows after these results]
  16. [SOLVED] Mellanox ConnectX-5 EN - 100G running at 40G

    Hi there; I have 3 Proxmox nodes connected via Mellanox 100GbE ConnectX-5 EN QSFP28 cards in cross-connect mode using 3-meter 100G DAC cables. The card is an MCX516A-CCAT. The card is recognized at 40 Gbit/s. Any ideas how to change the speed to 100 Gbit/s? Am I missing a driver? First time...
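For the multi-instance iperf testing mentioned in results 7 and 13, a minimal sketch of the technique described on the linked fasterdata.es.net page is shown below. The IP address, ports, core numbers, and test duration are placeholders, not values taken from the threads above.

    # On the receiving node: start one iperf3 server per port (placeholder ports).
    iperf3 -s -p 5201 &
    iperf3 -s -p 5202 &
    iperf3 -s -p 5203 &

    # On the sending node: one client per server, each pinned to its own CPU core
    # with taskset so a single core does not become the bottleneck.
    # 10.10.10.1 is a placeholder address for the receiving node.
    taskset -c 1 iperf3 -c 10.10.10.1 -p 5201 -t 30 &
    taskset -c 2 iperf3 -c 10.10.10.1 -p 5202 -t 30 &
    taskset -c 3 iperf3 -c 10.10.10.1 -p 5203 -t 30 &
    wait
    # Add the three reported bitrates together for the aggregate throughput.

iperf3 also offers -P for parallel streams within one process, but older iperf3 releases handle all streams in a single thread, which is why the linked guide suggests separate instances on separate ports.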
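For the block-size question in results 11 and 12: on Proxmox VE the "Block Size" field of a zfspool storage sets the volblocksize used for newly created VM disks (zvols), and volblocksize can only be chosen at zvol creation time. The following is a rough sketch with placeholder pool, storage, and disk names, not the setup from the threads.

    # /etc/pve/storage.cfg - a hypothetical second zfspool storage entry whose new
    # zvols get a 64k volblocksize (e.g. for Exchange/SQL Server data disks).
    zfspool: nvme-64k
            pool nvmepool
            blocksize 64k
            content images
            sparse 1

    # Check what an existing VM disk was created with (dataset name is a placeholder):
    zfs get volblocksize nvmepool/vm-100-disk-0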
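For the 40G-vs-100G readings in results 13 to 16, the negotiated link speed and the modes the driver advertises can be checked with ethtool. This is only a generic way to verify what lshw and the NIC disagree about, not the fix described in the thread; the interface name below is a placeholder.

    # Show the negotiated speed plus supported/advertised link modes
    # (interface name is a placeholder - use `ip link` to find the real one).
    ethtool enp65s0f0np0

    # If autonegotiation settled on 40G, the rate can be forced - verify first
    # that the DAC cable and both link partners actually support 100G.
    ethtool -s enp65s0f0np0 speed 100000 autoneg off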
