Search results

  1.

    Verify jobs - Terrible IO performance

    Great, here is mine:
    NAME        SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP    DEDUP  HEALTH  ALTROOT
    zfs         51.3T  8.75T  42.6T  -        -         2%    17%    1.00x  ONLINE  -
      mirror-0  12.7T  2.15T  10.6T  -        -         2%    16.9%  -      ONLINE
        sda...
  2.

    Verify jobs - Terrible IO performance

    @Dunuin Question regarding the special devices /dev/nvme0n1 and /dev/nvme1n1: how can I check how full they are?
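
    One way to check, as a sketch assuming the pool is named zfs as in the other posts: the per-vdev view of zpool list reports size, allocation and capacity for the special mirror as its own row.

      # per-vdev capacity; the special mirror shows up as its own vdev with its own CAP column
      zpool list -v zfs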
  3.

    Sync ZFS to remote ZFS datastore

    @Dunuin Any experience with that, or anyone else?
  4.

    Sync ZFS to remote ZFS datastore

    I think this may be what I am looking for: https://www.conproly.com/blogs/20180630-zfs-backups-on-proxmox-with-znapzend/ which uses https://github.com/oetiker/znapzend. What do you think @Dunuin? My concern now is: how do I mount a remote ZFS? What's the optimal way?
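
    A rough sketch of what a znapzend plan could look like here, assuming a source dataset zfs/pbs and a target backup/prox1 on root@storage (all placeholder names; check the exact plan/unit syntax against the znapzend docs):

      # local snapshots every 15 minutes, replicated over SSH to the remote pool
      znapzendzetup create --recursive \
        SRC '1d=>15min,7d=>1h,30d=>1d' zfs/pbs \
        DST:remote '7d=>1h,30d=>1d' root@storage:backup/prox1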
  5.

    Sync ZFS to remote ZFS datastore

    I want to replicate the whole zfs pool. ZFS replication would be perfect, but the issue is it does not support external servers unless they are in the cluster? I have two different clusters I want to do this for, so that won't really work. Ideally my nodes' ZFS would have an additional copy for...
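
    For a box outside the cluster, plain zfs send/receive over SSH is one option; a minimal sketch, assuming a target dataset backup/prox1 on root@storage (placeholder names):

      # one-off full replication of the pool, including child datasets and their properties
      zfs snapshot -r zfs@repl-1
      zfs send -R zfs@repl-1 | ssh root@storage zfs receive -F backup/prox1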
  6.

    Sync ZFS to remote ZFS datastore

    @Dunuin Maybe you know? You are the ZFS master :)
  7.

    Sync ZFS to remote ZFS datastore

    I would like to sync my ZFS datastore (root@prox1) to my remote storage server (root@storage), preferably every 15 minutes. It must only sync new data, just like ZFS replication (which is local only???).
    root@prox1:~# zfs list
    NAME  USED  AVAIL  REFER  MOUNTPOINT
    zfs...
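
    Continuing the send/receive sketch above, follow-up runs only transfer the delta since the last common snapshot, which is roughly what a 15-minute cron job (or znapzend) would automate; snapshot names are placeholders:

      # incremental run: new snapshot, then send only what changed since the previous one
      zfs snapshot -r zfs@repl-2
      zfs send -R -i zfs@repl-1 zfs@repl-2 | ssh root@storage zfs receive -F backup/prox1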
  8.

    Live migration (host flag) between AMD EPYC

    Wondering if anyone has tested live migration with the host CPU flag between different AMD EPYC (7001 series) CPUs. It would be good to know what works and what doesn't, if anyone has test results.
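
    For context, the CPU type in question is set per VM; a hedged example with a placeholder VMID of 100:

      qm set 100 --cpu host   # exposes the host CPU model; fastest, but live migration expects (near-)identical CPUs
      qm set 100 --cpu EPYC   # generic named model, usually the safer choice for migrating between different EPYC hosts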
  9.

    Verify jobs - Terrible IO performance

    Makes sense, thanks for the replies. I was wondering why I could not create a virtual machine when I set the whole pool to 4MB in the GUI. For now I have run the following commands:
    zpool create -f zfs -o ashift=12 mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf...
  10.

    Verify jobs - Terrible IO performance

    I am fairly new to ZFS. What do you mean by "don't store your datastore on the pool's root, but create a new dataset for it first. You can then set the recordsize to 1M just for that dataset, so only the PBS datastore with all of its 4MB chunk files will use that big recordsize"? How would I go...
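
    As a sketch of what that advice amounts to, assuming the pool is named zfs and using a hypothetical dataset name zfs/pbs for the datastore:

      # create a child dataset for the PBS datastore and raise recordsize only there
      zfs create zfs/pbs
      zfs set recordsize=1M zfs/pbs
      # then create the PBS datastore on /zfs/pbs instead of on the pool root /zfs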
  11.

    Verify jobs - Terrible IO performance

    Sorry, I misread. One last question: what do you suggest I set the blocksize to for an average PBS backup-only server? I have only set the blocksize for the special device, not the pool:
    zfs set special_small_blocks=4K zfs
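
    To see what is currently in effect, something like:

      # shows the pool-root recordsize and the special_small_blocks threshold set above
      zfs get recordsize,special_small_blocks zfs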
  12.

    Verify jobs - Terrible IO performance

    Ok, thanks for the info. Any ideas why the command below creates the pool with only 26TB? Not sure how to confirm if it's RAID 10.
    sudo zpool create -f zfs -o ashift=12 mirror /dev/sdb /dev/sdc /dev/sdd /dev/sda mirror /dev/sdf /dev/sdg /dev/sde /dev/sdh special mirror /dev/nvme0n1 /dev/nvme1n1...
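
    One way to check the resulting layout (a sketch, not from the thread):

      # zpool status shows the vdev tree; RAID 10 would be several mirror vdevs with two disks each,
      # while "mirror sdb sdc sdd sda" is a single four-way mirror, so only ~1/4 of the raw capacity is usable
      zpool status zfs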
  13.

    Verify jobs - Terrible IO performance

    To create RAID-10 I will run:
    sudo zpool create -f zfs -o ashift=12 mirror /dev/sdb /dev/sdc /dev/sdd /dev/sda mirror /dev/sdf /dev/sdg /dev/sde /dev/sdh special mirror /dev/nvme0n1 /dev/nvme1n1
    zfs set special_small_blocks=4K zfs
    zfs set compression=lz4 zfs
    The sda drives are 14TB and NVMe...
  14.

    Verify jobs - Terrible IO performance

    I changed that to mirror, which is RAID 1. What is the correct command for RAID 10? Are the other commands good for PBS too?
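
    A hedged sketch of a striped-mirror (RAID 10) layout with the same disks, two per mirror vdev (the pairing below is arbitrary):

      sudo zpool create -f zfs -o ashift=12 \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf \
        mirror /dev/sdg /dev/sdh \
        special mirror /dev/nvme0n1 /dev/nvme1n1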
  15.

    Verify jobs - Terrible IO performance

    I have the following devices: /dev/nvme0n1 /dev/nvme1n1 /dev/sdb /dev/sdc /dev/sdd /dev/sda /dev/sdf /dev/sdg /dev/sde /dev/sdh
    I'm going to run the following, just want to make sure that's all I need to do to create the pool?
    sudo zpool create -f zfs -o ashift=12 mirror /dev/sdb /dev/sdc...
  16.

    Block VM from accessing private IPs (proxmox nodes and switches)

    Does this look ok?
    [group blockbackend]
    OUT DROP -dest 10.0.10.229 -log nolog
    OUT DROP -dest 10.0.10.222 -log nolog
    OUT DROP -dest 10.0.10.221 -log nolog
    OUT DROP -dest 10.0.10.220 -log nolog
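
    For completeness, a sketch of how such a security group is typically attached to a guest, with <vmid> as a placeholder:

      # /etc/pve/firewall/<vmid>.fw
      [OPTIONS]
      enable: 1

      [RULES]
      GROUP blockbackend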
  17.

    Block VM from accessing private IPs (proxmox nodes and switches)

    Ok, thanks. Might be a silly question, but if I block 10.0.10.0/24, would that cause any issues if the virtual machine wanted to run something locally on any IP in that range? For example, a lot of VPN install scripts use 10.0.10.x as an IP.
  18.

    Block VM from accessing private IPs (proxmox nodes and switches)

    I want to block virtual machines from being able to connect to Proxmox interfaces on https://10.0.12.100:8006 for example. I've only tried the following, which blocked all access, not just the VMs:
    [RULES]
    IN ACCEPT -i vmbr1 -source 10.0.12.0/24 -log nolog
    If possible I want it to...
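
    A sketch of the opposite approach to the ACCEPT rule quoted above (not the thread's final answer): drop only the guest's outgoing traffic to the web UI address, at the VM level:

      # in a VM's firewall config, OUT matches traffic leaving the guest
      [RULES]
      OUT DROP -dest 10.0.12.100 -p tcp -dport 8006 -log nolog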
  19.

    NIC errors on boot Couldn't write '1' to 'net/ipv4/conf/vmbr0/arp_ignore'

    I am seeing a bunch of errors when my system boots. I am not sure what they all mean, except for the /net/ lines below. ASRock X570D4U, NIC: Solarflare SF432-1012, OS: installed Proxmox after Debian 11.
    https://pastebin.com/Qe9HJgYv
    Nov 19 11:29:52 ENT1 systemd-sysctl[820]: Couldn't write '1' to...
  20.

    Verify jobs - Terrible IO performance

    @Dunuin @Neobin How much data does the special vdev write? I am going to dedicate two disks, I just don't know if I should go with a drive like the PM9A1, which is cheaper but has lower total endurance (600TBW), or a high-endurance drive; trying to weigh the pros/cons.
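
    To get a feel for the actual write volume before picking a drive, per-vdev iostat over time is one option; a sketch assuming the pool is still named zfs:

      # per-vdev bandwidth sampled every 60 seconds; the special mirror is listed as its own vdev
      zpool iostat -v zfs 60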