Search results

  1. 6uellerbpanda

    Proxmox & Packer: VM quit/powerdown failed during a Packer build. Anyone have any ideas why?

    you've set "qemu_agent": false, but you need "qemu_agent": true, and of course qemu-guest-agent needs to be installed in the guest OS
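    A minimal sketch of the relevant bits of a Packer Proxmox builder template (JSON style, matching the quote above; URL, credentials, node and ISO path are placeholders, not from the original post - older plugin versions register the builder as "proxmox", newer ones as "proxmox-iso"):

      {
        "builders": [
          {
            "type": "proxmox",
            "proxmox_url": "https://pve.example.com:8006/api2/json",
            "username": "root@pam",
            "password": "secret",
            "node": "pve01",
            "iso_file": "local:iso/debian-10.iso",
            "qemu_agent": true
          }
        ],
        "provisioners": [
          {
            "type": "shell",
            "inline": ["apt-get update && apt-get install -y qemu-guest-agent"]
          }
        ]
      }

    With qemu_agent enabled and the agent running inside the guest, Packer can shut the VM down cleanly instead of failing on quit/powerdown.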
  2. 6uellerbpanda

    [SOLVED] nfs mounts using wrong source ip/interface

    downgrading to NFSv3 isn't an option for us. I will try a tcpdump the next time I have a maintenance window to do it
  3. 6uellerbpanda

    [SOLVED] nfs mounts using wrong source ip/interface

    yes, we downgraded the kernel due to https://forum.proxmox.com/threads/kernel-oops-with-kworker-getting-tainted.63116/page-2#post-299247 - upgrading to the latest PVE, though, isn't something I want to do atm
  4. 6uellerbpanda

    [SOLVED] nfs mounts using wrong source ip/interface

    @Stoiko Ivanov thanks for your time, here you go:

      root@hv-vm-01:/root# ip route
      default via 10.0.100.254 dev vmbr0 onlink
      10.0.11.0/25 dev enp9s0.11 proto kernel scope link src 10.0.11.3
      10.0.12.0/28 dev enp1s0f0 proto kernel scope link src 10.0.12.1
      10.0.100.0/24 dev vmbr0 proto kernel scope...
  5. 6uellerbpanda

    [SOLVED] nfs mounts using wrong source ip/interface

    since the upgrade to PVE 6.1 (it was working fine with 6.0) we have the problem that NFS mounts are using random source IPs/interfaces and not the one in the same VLAN. our current config looks like this:

      pve-manager/6.1-7/13e58d5e (running kernel: 5.0.21-5-pve)

      # /etc/pve/storage.cfg
      nfs...
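    The actual storage.cfg entry is cut off in the snippet; for reference, a typical NFS entry in /etc/pve/storage.cfg looks roughly like this (storage ID, server IP and export path are made-up placeholders):

      nfs: backup-nfs
              server 10.0.12.10
              export /tank/backup
              path /mnt/pve/backup-nfs
              content backup
              options vers=4.1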
  6. 6uellerbpanda

    ZFS Tests and Optimization - ZIL/SLOG, L2ARC, Special Device

    I can only speak for ZFS on FreeBSD, but I guess it's the same for Linux... that will be difficult to almost impossible, because you have the txg groups and compression in between (if enabled)... but it's also not necessary from a performance point of view 'cause there won't be much to gain. zfs...
  7. 6uellerbpanda

    Kernel Oops with kworker getting tainted.

    we also hit this problem:

      Feb 25 01:49:57 hv-vm-01 kernel:[123814.413163] #PF: supervisor read access in kernel mode
      Feb 25 01:49:57 hv-vm-01 kernel:[123814.413735] #PF: error_code(0x0000) - not-present page
      Feb 25 01:49:57 hv-vm-01 kernel:[123814.414312] PGD 0 P4D 0
      Feb 25 01:49:57 hv-vm-01...
  8. 6uellerbpanda

    PVE 6 ZFS, SLOG, ARC, L2ARC- Disk Configuration

    basically I would always recommend ZFS, except when it comes to speed... you need to understand what kind of IOPS your workload will produce and then decide if ZFS will help you or fight against you. in your case I guess it will be random IOPS with 50/50 read/write and I also guess...
  9. 6uellerbpanda

    ZFS sync=disabled safeness

    if you have an SSD zpool... no, you don't need a SLOG, unless your SLOG is faster than the slowest SSD in your zpool. so what you're saying is that your SSD zpool isn't performing as you want it to?
  10. 6uellerbpanda

    ZFS sync=disabled safeness

    well, disabling sync makes all IO asynchronous - regardless of the protocol. I don't know what the default txg commit interval is, but that is roughly the timeframe of data you will lose. depending on the application this can/will result in inconsistent data, and for instance linux will probably run...
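    For reference (not from the post itself): on ZFS on Linux the txg commit interval defaults to 5 seconds, and you can check both it and the current sync setting like this (pool/dataset name is a placeholder):

      # current sync policy on the dataset
      zfs get sync tank/vm-images

      # txg commit interval in seconds (default 5 on ZFS on Linux)
      cat /sys/module/zfs/parameters/zfs_txg_timeout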
  11. 6uellerbpanda

    ZFS sync=disabled safeness

    why do you even want to disable it?
  12. 6uellerbpanda

    ZFS bad Performance!

    arc_summary looks fine. is there any reason why you didn't add a SLOG?
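    If sync writes do turn out to be the bottleneck, adding a SLOG is a one-liner; a sketch with a made-up pool name and device (ideally a fast SSD/NVMe with power-loss protection):

      # attach a dedicated log (SLOG) device to the pool
      zpool add tank log /dev/disk/by-id/nvme-EXAMPLE-SLOG

      # verify it shows up under "logs"
      zpool status tank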
  13. 6uellerbpanda

    ZFS bad Performance!

    arc_summary output pls. with a mirrored vdev you get the write performance of one HDD, and for reads it can read from both. your Seagate HDDs are also not very fast - you will get max ~60 raw IOPS (IOPS = 1000 / (seek latency + rotational latency)) and when I look at your fio stuff it looks fine...
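    As a worked example of that formula (the drive figures are assumptions, not from the thread): for a ~5,900 rpm Seagate with ~12 ms average seek time,

      rotational latency ≈ (60,000 ms/min / 5,900 rpm) / 2 ≈ 5.1 ms
      IOPS ≈ 1000 / (12 ms + 5.1 ms) ≈ 58

    which is roughly the ~60 raw IOPS mentioned above.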
  14. 6uellerbpanda

    ZFS slow write performance

    well, let me tell you that you won't get the same write performance as with ext4 at all (at least with the same hardware specs)... that is by "design" for any CoW filesystem. you're the only person who can tell if that is suitable for your use case. what's the physical connection to the...
  15. 6uellerbpanda

    Need help with high CPU IOwait during heavy IO operations

    pls post the output of:

      zpool status
      arc_summary
      sysctl -a | grep -i meta
  16. 6uellerbpanda

    SSD configuration for new Proxmox install?

    It basically depends on your workload. For your VM storage it will probably be random IO = mirrored vdevs. For your bulk storage, if sequential IO - RAIDZ1... If a mix of seq and random IO, you can choose whichever fits your expected workload best.
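    A minimal sketch of the two layouts (pool and disk names are placeholders; use /dev/disk/by-id paths in practice):

      # VM storage: striped mirrors for random IO
      zpool create vmpool mirror sda sdb mirror sdc sdd

      # bulk storage: a single raidz1 vdev for mostly sequential IO
      zpool create bulkpool raidz1 sde sdf sdg sdh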
  17. 6uellerbpanda

    High I/O slow guests

    why do you have an L2ARC??? check https://forum.proxmox.com/threads/zfs-worth-it-tuning-tips.45262/page-2#post-217209 - for random read/write IO your RAID-Z2 is the worst choice, check Dr. Google why. use a mirrored zpool unless you only have seq IO. every other "tuning" you do now is "useless"...
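    If you do decide to drop the cache device, that can be done online; a sketch with placeholder pool/device names (check arc_summary's L2ARC hit rate first):

      # remove the L2ARC (cache) device from the pool
      zpool remove tank nvme0n1
      zpool status tank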
  18. 6uellerbpanda

    High I/O slow guests

    this may sound strange, but which best practices etc. did you base your zpool configuration on? and what is your typical workload? arc_summary output please, as well
  19. 6uellerbpanda

    Very Slow IO and Fsync Low Dell r720 VE 5.3 Latest

    pls post:

      arc_summary
      zpool status
      zfs get all <YOUR ZPOOL>

    do you have an HBA or a RAID controller? if a RAID controller, did you put it in "HBA" mode?
  20. 6uellerbpanda

    About high load and hard drives

    I do hope you have an HBA and not a HW RAID controller, and also your SLOG is degraded. first, pls check https://forum.proxmox.com/threads/zil-l2arc-question.47266/#post-222861 about your L2ARC - this caps you as well. ZFS mirrors are good for random IO but not so good for sequential IO. in your case you...
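    For the degraded SLOG, a sketch of the usual check-and-replace steps (pool and device names are placeholders):

      # show only pools with problems and the affected device
      zpool status -x

      # swap the failed log device for a new one
      zpool replace tank old-log-dev new-log-dev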