Recent content by 6uellerbpanda

  1. 6uellerbpanda

    nfs mounts using wrong source ip/interface

    so I made an upgrade to 6.2 (5.4.60-1-pve) yesterday and the outcome is the same. I also made an strace and a tcpdump, see tar. interesting is that the initial nfs session is going via the correct interface and then, beginning with SECINFO_NO_NAME (according to the rfc this handles the sec between client...
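
    A capture along these lines could show which source address the nfs traffic actually leaves on. A minimal sketch, not from the post: the interface name enp1s0f0 is taken from the route table quoted further down in this thread.

        # watch NFS (port 2049) traffic on all interfaces; -nn skips name resolution
        tcpdump -i any -nn port 2049

        # or restrict to the interface that should be carrying the traffic
        tcpdump -i enp1s0f0 -nn port 2049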
  2. 6uellerbpanda

    monitoring plugin - check_pve

    it's for checking updates - https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/apt/update
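
    For reference, the data behind that endpoint can also be fetched by hand with pvesh; <node> is a placeholder for the node name.

        # list available package updates for a node via the Proxmox API
        pvesh get /nodes/<node>/apt/update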
  3. 6uellerbpanda

    High IO Delay

    lvm would be enough just to see if the io problem is zfs specific.
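
    A sketch of such a comparison, not from the thread: run the same fio job once against an LVM volume and once against a file on the zfs dataset. Paths and parameters are placeholders; note that older zfs versions reject O_DIRECT, so --direct=1 may need to be dropped on the zfs side.

        # 4k random-write test; --direct=1 bypasses the page cache so the
        # numbers reflect the storage stack itself
        fio --name=randwrite --filename=/dev/mapper/testvg-testlv \
            --rw=randwrite --bs=4k --iodepth=16 --runtime=60 \
            --time_based --direct=1 --ioengine=libaio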
  4. 6uellerbpanda

    High IO Delay

    Did you try it with something other than zfs?
  5. 6uellerbpanda

    High IO Delay

    so with the other server you've the same hardware and the same zfs setup, and "only" the ssd is different?
  6. 6uellerbpanda

    High IO Delay

    was the problem always there, or when did it start to appear? did you change something before that? is it your first rodeo with zfs? but the IO delay could also come from the NFS part, couldn't it? can you give us an example with actual numbers please
  7. 6uellerbpanda

    Proxmox & Packer: VM quit/powerdown failed during a Packer build. Anyone have any ideas why?

    I don't have much experience with preseed but I'm using the following and it's working in my case (without any systemctl magic):

        # Software Selections
        tasksel tasksel/first multiselect ssh-server minimal
        d-i pkgsel/include string lsof strace openssh-server...
  8. 6uellerbpanda

    Proxmox & Packer: VM quit/powerdown failed during a Packer build. Anyone have any ideas why?

    you've set "qemu_agent": false but you need "qemu_agent": true, and of course qemu-guest-agent needs to be installed in the OS
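
    In the Packer builder definition that would look roughly like this; a sketch showing only the relevant key, with the rest of the builder config omitted and the builder type assumed to be the Proxmox one.

        "builders": [
          {
            "type": "proxmox",
            "qemu_agent": true
          }
        ]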
  9. 6uellerbpanda

    nfs mounts using wrong source ip/interface

    downgrading to nfsv3 isn't an option for us. I will try a tcpdump the next time I've a maintenance window to do it
  10. 6uellerbpanda

    nfs mounts using wrong source ip/interface

    yes, we downgraded the kernel due to https://forum.proxmox.com/threads/kernel-oops-with-kworker-getting-tainted.63116/page-2#post-299247 - upgrading to the latest pve isn't something I want to do atm, though
  11. 6uellerbpanda

    nfs mounts using wrong source ip/interface

    @Stoiko Ivanov thanks for your time, here you go:

        root@hv-vm-01:/root# ip route
        default via 10.0.100.254 dev vmbr0 onlink
        10.0.11.0/25 dev enp9s0.11 proto kernel scope link src 10.0.11.3
        10.0.12.0/28 dev enp1s0f0 proto kernel scope link src 10.0.12.1
        10.0.100.0/24 dev vmbr0 proto kernel scope...
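
    A related check, as a sketch: ip route get shows which route and source address the kernel picks for a given destination. The NFS server address below is hypothetical.

        # ask the kernel which route/source ip it would use for the NFS server
        ip route get 10.0.12.10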
  12. 6uellerbpanda

    nfs mounts using wrong source ip/interface

    since the upgrade to pve 6.1 (it was working fine with 6.0) we've the problem that nfs mounts are using random source ips/interfaces and not the one in the same vlan. our current config looks like this:

        pve-manager/6.1-7/13e58d5e (running kernel: 5.0.21-5-pve)

        # /etc/pve/storage.cfg
        nfs...
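
    The truncated entry would follow the usual storage.cfg shape for NFS storages; the values below are hypothetical, not the poster's actual config.

        nfs: nfs-store
                server 10.0.12.10
                export /export/pve
                path /mnt/pve/nfs-store
                content images
                options vers=4.1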
  13. 6uellerbpanda

    ZFS Tests and Optimization - ZIL/SLOG, L2ARC, Special Device

    I can only speak for zfs on freebsd but I guess it's the same for linux... that will be difficult to almost impossible because you have the txg groups and compression in between (if enabled)... but it's also not necessary from a performance point of view 'cause there won't be much to gain. zfs...
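
    To illustrate the compression part: the ratio actually achieved can be read per dataset, which is why the on-disk size differs from what the application wrote. The dataset name is a placeholder.

        # show whether compression is enabled and the ratio achieved on disk
        zfs get compression,compressratio tank/data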
  14. 6uellerbpanda

    Kernel Oops with kworker getting tainted.

    we also hit this problem:

        Feb 25 01:49:57 hv-vm-01 kernel:[123814.413163] #PF: supervisor read access in kernel mode
        Feb 25 01:49:57 hv-vm-01 kernel:[123814.413735] #PF: error_code(0x0000) - not-present page
        Feb 25 01:49:57 hv-vm-01 kernel:[123814.414312] PGD 0 P4D 0
        Feb 25 01:49:57 hv-vm-01...
  15. 6uellerbpanda

    PVE 6 ZFS, SLOG, ARC, L2ARC- Disk Configuration

    basically I would always recommend zfs except when it comes to speed... you basically need to understand what and which IOPS your workload will produce and then decide if zfs will help you or fight against you. in your case I guess it will be random IOPS with 50/50 read/write and I also guess...
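
    One way to see what a pool is actually doing, as a sketch: zpool iostat reports read/write operations per second; the interval below is arbitrary.

        # per-vdev read/write IOPS and bandwidth, refreshed every 5 seconds
        zpool iostat -v 5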
