Search results

  1. Verify jobs - Terrible IO performance

    Is there a guide on setting up this special udev rule? I don't see it mentioned on the Proxmox ZFS page.
  2. Verify jobs - Terrible IO performance

    Are you talking about a pool with a cache (L2ARC)? I plan on getting 2 M.2 NVMe drives for that in RAID 1. The question is how big they need to be. If 250GB is enough, that would be great; I will get Gen4 drives.
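    For context, attaching an L2ARC to an existing pool is a single command; here is a minimal sketch, assuming a pool named `tank` and a placeholder device path. Note that ZFS stripes cache devices rather than mirroring them, so a RAID 1 of L2ARC devices is not actually possible.

    ```
    # Add an NVMe device as L2ARC (read cache) to the pool "tank".
    # Pool name and device path are placeholders.
    zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE

    # Watch how much of the cache device actually gets used:
    zpool iostat -v tank 5
    ```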
  3. Verify jobs - Terrible IO performance

    SSD storage is too expensive, so I will see if ZFS speeds things up. I am in the process of building my own storage server. Would having more drives improve the time taken to verify backups (presuming the disks are the same speed)? Also, is there an optimal ZFS RAID type and ashift/block size for PBS storage...
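    On the layout question: verify jobs are mostly random reads of many chunk files, so more vdevs generally means more IOPS. A minimal sketch of one commonly suggested layout, with placeholder pool name and disk IDs (an illustration, not a confirmed recommendation for this hardware):

    ```
    # Striped mirrors (RAID10-style) with ashift=12 (4 KiB sectors);
    # pool name and disk IDs are placeholders.
    zpool create -o ashift=12 tank \
        mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

    # Dataset used as the PBS datastore; lz4 compression is cheap and usually helps.
    zfs create -o compression=lz4 tank/pbs
    ```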
  4. Verify jobs - Terrible IO performance

    I don't recall what it's set to. How do I check whether it's ext4 or XFS on the hardware RAID? Are there also any posts about ZFS vs. hardware RAID for PBS? I presume hardware is better; maybe @Dunuin knows?
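    Checking the filesystem type does not depend on the RAID controller; either of these standard commands shows it (the mount point is a placeholder):

    ```
    # Filesystem type of every block device:
    lsblk -f

    # Filesystem type and usage of the datastore mount point (path is a placeholder):
    df -Th /mnt/datastore
    ```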
  5. Verify jobs - Terrible IO performance

    I have about 100 virtual machines that I back up daily, and I experience terrible IO performance during verifications, which take 10+ hours. The disks used are 4x RAID 10 (hardware RAID) WUH721414AL5201...
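    To confirm whether the array is the bottleneck, it can help to watch per-disk statistics while a verify job runs; a minimal sketch (needs the sysstat package):

    ```
    # Extended device statistics every 5 seconds; %util near 100 and high r_await
    # on the RAID volume during a verify job point at the disks being saturated.
    iostat -x 5
    ```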
  6. Dual EPYC CPU nodes vs Single CPU performance for virtual machines

    I'm searching for some newer, upgraded hardware and am stuck on whether I should buy a single-socket or a dual-socket system with regard to virtual machine performance. When looking at CPU benchmarks, I have noticed that dual-CPU systems always have a lower score than two separate nodes. For example, if you look at...
  7. OpenVZ Import to LXC - open '/bin/sh' failed: No such file or directory

    OpenVZ release 7.0.18. How else could I back up the data other than vzdump 1145?
  8. OpenVZ Import to LXC - open '/bin/sh' failed: No such file or directory

    I am attempting to convert from OpenVZ (ploop) to Proxmox LXC. Export: vzctl stop 1145 && vzdump 1145 --bwlimit 9999999999999. Import: pct restore 2136 /var/lib/vz/dump/vzdump-1145.tar. The following errors are shown: recovering backed-up configuration from '/var/lib/vz/dump/vzdump-1145.tar'...
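    One workaround sometimes suggested for this error is to skip the OpenVZ dump format entirely: archive the container's root filesystem as a plain tarball and create the LXC container from that. A rough sketch with assumed paths and storage names (the ploop root is typically mounted under /vz/root/<CTID>, but verify on your host):

    ```
    # On the OpenVZ host: mount the stopped container and tar its root filesystem.
    vzctl stop 1145
    vzctl mount 1145
    tar -czpf /root/ct-1145-rootfs.tar.gz --numeric-owner -C /vz/root/1145 .
    vzctl umount 1145

    # Copy the archive to the Proxmox host, then create a new CT from it.
    # Storage name, rootfs size, hostname and network settings are placeholders.
    pct create 2136 /root/ct-1145-rootfs.tar.gz \
        --rootfs local-lvm:8 --hostname ct1145 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp
    ```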
  9. OVH - Use the main IP in virtual machine?

    This is why I mentioned OVH. They do not allow you to create a virtual MAC on the main interface IP. I made the changes above but still can't get it to work.
  10. OVH - Use the main IP in virtual machine?

    I can't seem to get this working. Proxmox node:

    auto eth0
    iface eth0 inet manual

    auto eth1
    iface eth1 inet manual

    auto vmbr0
    iface vmbr0 inet static
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

    auto vmbr1
    iface vmbr1 inet static
        address 192.168.137.5/24
        bridge-ports none
    ...
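    For reference, the generic routed pattern for handing a public IP to a VM over an internal bridge looks roughly like the sketch below; 203.0.113.10 stands in for the public IP, and this is only an illustration of the technique, not a confirmed fix for the main-IP case.

    ```
    # On the Proxmox node: enable forwarding and route the guest IP onto vmbr1.
    sysctl -w net.ipv4.ip_forward=1
    ip route add 203.0.113.10/32 dev vmbr1

    # Inside the VM: put the /32 on eth0 and point the default route at the
    # node's vmbr1 address (192.168.137.5) via an explicit host route.
    ip addr add 203.0.113.10/32 dev eth0
    ip route add 192.168.137.5/32 dev eth0
    ip route add default via 192.168.137.5 dev eth0
    ```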
  11. OVH - Use the main IP in virtual machine?

    OK, thanks, I will give it a go. So if the OVH IP is 1.1.1.1 and I use the example you set above, what IP and gateway do I configure the virtual machine with?
  12. OVH - Use the main IP in virtual machine?

    I have come across multiple threads on NAT-routing the main IP into a virtual machine, but is it possible to use the IP directly inside a virtual machine?
  13. Unofficial Proxmox Discord server

    https://discord.gg/Zmss6x6Z7a is another
  14. i9-12900K - Poor performance in VM?

    I believe this is because the i9-12900K has performance cores (8 cores, 16 threads, 3.2 GHz base, 5.2 GHz turbo) and efficient cores (8 cores, 8 threads, 2.4 GHz base, 3.9 GHz turbo). How do I assign the virtual machine a performance core? pveversion -v: proxmox-ve: 7.1-2 (running kernel...
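    One way this is commonly handled on hybrid CPUs is to pin the VM's QEMU threads to the P-cores with taskset. A minimal sketch, assuming VMID 100 and that the P-core threads enumerate as CPUs 0-15 (check first, e.g. with `lscpu -e` or `cat /sys/devices/cpu_core/cpus`):

    ```
    # Pin all threads of VM 100's QEMU process to CPUs 0-15 (assumed P-cores).
    taskset -a -cp 0-15 "$(cat /var/run/qemu-server/100.pid)"
    ```

    The pinning is lost when the VM restarts, so it would have to be reapplied, for example from a hookscript.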
  15. i9-12900K - Poor performance in VM?

    https://browser.geekbench.com/v5/cpu/14712640 - ran on the node (2012)
    https://browser.geekbench.com/v5/cpu/14712821 - ran in the VM (1438)
    Can anyone tell me why the virtual machine's performance seems to be significantly lower?
  16. Firewall error related to ipset

    Having the same issue. Is a reboot the only way?
  17. GRE tunnel public IP in virtual machine - Traceroute showing full route

    `88.198.49.xxx` = Hetzner (will run virtual machines on this)
    `141.94.176.xxx` = OVH (contains the block below)
    `164.132.xxx.0/28` = IP block to use on Hetzner for virtual machines
    To get GRE set up I ran the following on the OVH side: ip tunnel add gre1 mode gre remote 88.198.49.xxx local...
  18. Route OVH IP block to Hetzner to be used by virtual machines?

    I have tried the following, which gets 164.132.xxx.1 pinging on the OVH node but not publicly.
    Public IPv4 server 1 (OVH): 141.94.176.xxx
    Public IPv4 server 2 (Hetzner): 5.9.105.xxx
    IP block I want to use on server 2 (OVH IP block): 164.132.xxx.0/28
    Bridge interface on server 2: vmbr0
    Run this on...
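    For what it's worth, a minimal GRE sketch under the assumptions above (141.94.176.xxx = OVH, 5.9.105.xxx = Hetzner, 164.132.xxx.0/28 routed to the OVH IP; the 10.10.10.0/30 transfer addresses are made up):

    ```
    # OVH node: bring up the tunnel and forward the /28 towards Hetzner.
    ip tunnel add gre1 mode gre remote 5.9.105.xxx local 141.94.176.xxx ttl 255
    ip addr add 10.10.10.1/30 dev gre1
    ip link set gre1 up
    ip route add 164.132.xxx.0/28 via 10.10.10.2
    sysctl -w net.ipv4.ip_forward=1

    # Hetzner node: terminate the tunnel and hand the /28 to the VM bridge.
    ip tunnel add gre1 mode gre remote 141.94.176.xxx local 5.9.105.xxx ttl 255
    ip addr add 10.10.10.2/30 dev gre1
    ip link set gre1 up
    ip route add 164.132.xxx.0/28 dev vmbr0
    sysctl -w net.ipv4.ip_forward=1
    ```

    The VMs would then take addresses from the /28 on vmbr0, with one address from the block configured on vmbr0 itself to act as their gateway.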
  19. Route OVH IP block to Hetzner to be used by virtual machines?

    Public IPv4 server 1 (OVH bare metal): 141.94.199.xxx
    Public IPv4 server 2 (Hetzner bare metal): 5.9.105.xxx
    IP block I want to use on server 2: 164.132.xxx.xxx/28 (OVH IP block)
    Can someone please assist me with how I can do this via a GRE tunnel? Both have Proxmox installed. Also what would...
  20. Clear cache buildup inside VMs

    How did the reading/test go with hot-plugging memory?