Recent content by harmonyp

  1. Live migration (host flag) between AMD EPYC

    Wondering if anyone has tested live migration with the host CPU type between different AMD EPYC models (7001 series). It would be good to know what works and what doesn't if anyone has done tests.
  2. Verify jobs - Terrible IO performance

    Makes sense, thanks for the replies. I was wondering why I could not create a virtual machine when I set the whole pool to 4MB in the GUI. For now I have run the following commands: zpool create -f zfs -o ashift=12 mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf...
  3. Verify jobs - Terrible IO performance

    I am fairly new to ZFS. What do you mean by "don't store your datastore on the pool's root, but create a new dataset for it first. You can then set the recordsize to 1M just for that dataset, so only the PBS datastore with all of its 4MB chunk files will use that big recordsize"? How would I go...
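The advice quoted in that reply can be sketched roughly as follows, assuming the pool is named zfs as in the commands above and the datastore dataset is called backup (a hypothetical name):

```shell
# Create a child dataset instead of putting the datastore on the pool root
zfs create zfs/backup

# Raise the recordsize only for that dataset, so the pool root and any
# other datasets keep the 128K default
zfs set recordsize=1M zfs/backup

# Point the PBS datastore at the dataset's mountpoint
proxmox-backup-manager datastore create backup /zfs/backup
```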
  4. Verify jobs - Terrible IO performance

    Sorry, I misread. One last question: what do you suggest I set the blocksize to for an average PBS backup-only server? I have only set the blocksize for the special device, not the pool: zfs set special_small_blocks=4K zfs
  5. Verify jobs - Terrible IO performance

    Ok, thanks for the info. Any idea why the command below creates the pool with only 26TB? I'm also not sure how to confirm whether it's RAID 10. sudo zpool create -f zfs -o ashift=12 mirror /dev/sdb /dev/sdc /dev/sdd /dev/sda mirror /dev/sdf /dev/sdg /dev/sde /dev/sdh special mirror /dev/nvme0n1 /dev/nvme1n1...
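If I'm reading the command right, 26TB is expected: each `mirror` keyword groups all the disks that follow it into one mirror vdev, so `mirror /dev/sdb /dev/sdc /dev/sdd /dev/sda` is a single 4-way mirror with the usable capacity of one disk. Two 4-way mirrors of 14TB drives stripe to roughly 2 x 14TB, i.e. about the 26TB shown. A sketch of how to check the layout:

```shell
# Show the vdev tree: a RAID 10 layout should list four mirror vdevs
# with two disks each, not two mirror vdevs with four disks each
zpool status zfs

# Show raw and usable capacity for the pool
zpool list zfs
```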
  6. Verify jobs - Terrible IO performance

    To create RAID-10 I will run: sudo zpool create -f zfs -o ashift=12 mirror /dev/sdb /dev/sdc /dev/sdd /dev/sda mirror /dev/sdf /dev/sdg /dev/sde /dev/sdh special mirror /dev/nvme0n1 /dev/nvme1n1; zfs set special_small_blocks=4K zfs; zfs set compression=lz4 zfs. The sda drives are 14TB and NVMe...
  7. Verify jobs - Terrible IO performance

    I changed that to mirror, which is RAID 1. What is the correct command for RAID 10? Are the other commands good for PBS too?
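For reference, ZFS builds RAID 10 out of several two-disk mirrors striped together; repeating the `mirror` keyword starts a new mirror vdev. A sketch using the disks from the earlier posts (double-check the device names against your own system before running anything with -f):

```shell
# Four 2-way mirrors striped together = RAID 10,
# plus a mirrored special vdev on the NVMe drives
sudo zpool create -f zfs -o ashift=12 \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf \
    mirror /dev/sdg /dev/sdh \
    special mirror /dev/nvme0n1 /dev/nvme1n1
```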
  8. Verify jobs - Terrible IO performance

    I have the following devices: /dev/nvme0n1 /dev/nvme1n1 /dev/sdb /dev/sdc /dev/sdd /dev/sda /dev/sdf /dev/sdg /dev/sde /dev/sdh. I'm going to run the following; I just want to make sure that's all I need to do to create the pool: sudo zpool create -f zfs -o ashift=12 mirror /dev/sdb /dev/sdc...
  9. Block VM from accessing private IPs (proxmox nodes and switches)

    Does this look OK?
    [group blockbackend]
    OUT DROP -dest 10.0.10.229 -log nolog
    OUT DROP -dest 10.0.10.222 -log nolog
    OUT DROP -dest 10.0.10.221 -log nolog
    OUT DROP -dest 10.0.10.220 -log nolog
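For what it's worth, a security group defined like that in /etc/pve/firewall/cluster.fw only takes effect once it is referenced from a VM's own firewall config and the firewall is enabled for that VM. A sketch, assuming VM ID 100 (hypothetical):

```
# /etc/pve/firewall/100.fw  (100 = hypothetical VM ID)
[OPTIONS]
enable: 1

[RULES]
GROUP blockbackend
```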
  10. Block VM from accessing private IPs (proxmox nodes and switches)

    Ok, thanks. This might be a silly question, but if I block 10.0.10.0/24, would that cause any issues if the virtual machine wanted to run something locally on any IP in that range? For example, a lot of VPN install scripts use 10.0.10.x as an IP.
  11. Block VM from accessing private IPs (proxmox nodes and switches)

    I want to block virtual machines from being able to connect to Proxmox interfaces, e.g. on https://10.0.12.100:8006. I've only tried the following, which blocked all access, not just the VMs: [code][RULES] IN ACCEPT -i vmbr1 -source 10.0.12.0/24 -log nolog[/code] If possible I want it to...
  12. NIC errors on boot Couldn't write '1' to 'net/ipv4/conf/vmbr0/arp_ignore'

    I am seeing a bunch of errors when my system boots. I am not sure what they all mean, except for the /net/ lines below. ASRock X570D4U, NIC: Solarflare SF432-1012, OS: installed Proxmox after Debian 11. https://pastebin.com/Qe9HJgYv Nov 19 11:29:52 ENT1 systemd-sysctl[820]: Couldn't write '1' to...
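A sysctl error like `Couldn't write '1' to 'net/ipv4/conf/vmbr0/arp_ignore'` usually just means systemd-sysctl ran before vmbr0 existed, since bridges are only created later in boot when the network comes up. One way to sidestep the ordering problem (a sketch, assuming the setting is actually wanted on all interfaces; the filename is hypothetical):

```
# /etc/sysctl.d/99-arp.conf
# The 'all' and 'default' keys exist from early boot, so writing them
# does not depend on vmbr0 having been created yet; the effective
# per-interface value is the maximum of 'all' and the interface key.
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
```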
  13. Verify jobs - Terrible IO performance

    @Dunuin @Neobin How much data does the special vdev write? I am going to dedicate two disks; I just don't know if I should go with a drive like the PM9A1, which is cheaper but has lower total endurance (600TBW), or a high-endurance drive. Trying to weigh the pros/cons.
  14. Verify jobs - Terrible IO performance

    Where is a guide on setting up this special vdev? I don't see it mentioned on the Proxmox ZFS page.
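In case it helps, a special vdev can either be declared at pool creation (as in the earlier posts) or added to an existing pool afterwards. A sketch, assuming the pool is named zfs as above:

```shell
# Add a mirrored special vdev for metadata; it must be mirrored,
# because losing the special vdev loses the whole pool
sudo zpool add zfs special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally also store data blocks up to 4K on the special vdev
zfs set special_small_blocks=4K zfs
```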
  15. Verify jobs - Terrible IO performance

    Are you talking about a pool with cache (L2ARC)? I plan on getting 2 M.2 NVMe drives for that in RAID 1. The question would be how big they need to be; if 250GB is enough, that would be great, and I will get Gen4 drives.
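A rough way to size a special vdev (as opposed to an L2ARC cache) is to look at how much metadata the existing pool already holds. A sketch, assuming a pool named zfs; the output format varies between OpenZFS versions:

```shell
# Print per-block-type statistics; the metadata lines give an idea of
# how much space a special vdev would need today
# (-L skips leak checking so the scan finishes faster)
zdb -Lbbbs zfs
```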
