Search results

  1. Proxmox VE 8.2.2 - High IO delay

    Well the EXTREME Iowait 60%-80% seems to be more on those Podman Systems (ZFS on top of ZVOL), so maybe it's a similar Issue to what the User reported on #openzfs (ZFS Pool Deadlock), although not to the same extent. On the other Systems it might indeed be lower, but still somewhere around 20% or so.
  2. Proxmox VE 8.2.2 - High IO delay

    Uhm I never got that Memory Issue. But that's also probably because I force ZFS to do what I want instead of leaving it on a very loose leash ;) . I reduced the amount of ARC I allow ZFS to use. Maximum 4GB for the Proxmox VE Host on a 32GB System (otherwise ZFS can eat up to 50%, i.e. 16GB...
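Capping the ARC as described above comes down to one module option; a minimal sketch, assuming the 4 GiB limit mentioned in the post (paths are the standard OpenZFS ones):

```shell
# Cap the ZFS ARC at 4 GiB (4 * 1024^3 = 4294967296 bytes), persistent
# across reboots via a modprobe option file:
echo "options zfs zfs_arc_max=$((4 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all

# Apply immediately without rebooting:
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```

Note that a running ARC only shrinks down to the new cap gradually as memory pressure reclaims it.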
  3. Proxmox VE 8.2.2 - High IO delay

    Weird ... The GUI being picky about the Kernel Version :oops: ? I mean, if the Kernel is good to run VMs/CTs, then it should also be for the GUI (IMHO). I don't see how the Kernel Version would break the GUI in that regard ... Did you check Services pvestatd and pveproxy ? I think I'll try a...
  4. Proxmox VE 8.2.2 - High IO delay

    Any further Discoveries ? I'm quite disappointed that all the Proxmox VE Team and other Users say "Do NOT use Consumer SSD", when the Issue arises after a Package (Kernel and/or ZFS) Upgrade ... I guess I could maybe take the Kernel 6.5.x config from Proxmox VE, download the Kernel 6.6.41 Sources, then...
  5. Proxmox VE 8.2.2 - High IO delay

    The only feedback I got from the OpenZFS IRC Channel is that Kernel 6.8 changed MANY THINGS. Not very specific I know, but that's what I know. Not sure if the Issue was already on Kernel 6.5.x. Granted my Workload might have changed since (and was probably fairly light on Kernel 6.5.x), so the...
  6. Error while installing ifupdown2 version 3.2.0-1+pmx7

    Just experienced this as well on my latest Proxmox VE Upgrade to PVE 8.x. In my case removing /tmp/.ifupdown2-first-install fixed / bypassed this Issue.
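The workaround above can be scripted defensively; a sketch (the reinstall command is illustrative, the marker path is the one named in the post):

```shell
# ifupdown2's install step can fail if a stale first-install marker was
# left behind by an aborted upgrade; removing it lets the install proceed.
rm -f /tmp/.ifupdown2-first-install

# Then retry the upgrade, e.g.:
#   apt-get install --reinstall ifupdown2
```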
  7. [TUTORIAL] Proxmox VE 8.0 Mainline Kernel Builds

    It's a weird choice (by Ubuntu Developers and Proxmox VE Developers) to NOT use LTS Kernels (LTS as defined on the kernel.org Website), like 6.1 and 6.6 instead. Given the IOwait Issue I am currently experiencing, the ideal scenario would have been to build a custom Kernel based on 6.6 which is the...
  8. Proxmox VE 8.2.2 - High IO delay

    Well you might have a point, at least to some extent, I am not debating that. I just think that if a new Issue shows up on 3-4 Servers of mine after an Update, it's a BUG, not a feature. I am debating that being the only cause. And while I tend to agree that the Issue seems more predominant in...
  9. Proxmox VE 8.2.2 - High IO delay

    LVM is an absolute PITA to manage. I tried to recover previous systems. Never again :rolleyes: ! Why are you so focused on PBS ? I am talking about Proxmox VE, not Proxmox Backup Server. So is having to change a Partition Layout when you have Data on it already ... Let alone to setup backups...
  10. Proxmox VE 8.2.2 - High IO delay

    What exactly do you mean there ? I don't want to change my entire infrastructure just for the fun of it whenever there is a new BUG popping up ....
  11. Proxmox VE 8.2.2 - High IO delay

    Again you seem to miss the Point ... It is a new occurrence with Proxmox 8 and Kernel 6.8.x (at least for me), it didn't happen before. Never before was I getting hangups when saving a 100KB file with nano ! As for the Write Amplification I can sort of agree, I looked at the TBW and I was VERY...
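Checking TBW boils down to converting the drive's LBA counter into bytes; a sketch, assuming a SATA SSD that reports `Total_LBAs_Written` in 512-byte units (the smartctl pipeline in the comment is illustrative, and the attribute name varies by vendor):

```shell
# Convert a Total_LBAs_Written SMART value (512-byte LBAs) to TB written.
lbas_to_tb() {
    awk -v lbas="$1" 'BEGIN { printf "%.2f", lbas * 512 / 1e12 }'
}

# Typical source of the value (device and attribute are drive-specific):
#   smartctl -A /dev/sda | awk '/Total_LBAs_Written/ { print $NF }'
lbas_to_tb 1953125000   # 1953125000 LBAs * 512 B = exactly 1.00 TB
```

Comparing this number against the drive's rated TBW over a few weeks gives a rough write-amplification trend.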
  12. Proxmox VE 8.2.2 - High IO delay

    I don't recall 6.5 being *THIS* Problematic. You could be right though, as I'm doing some more work now than I used to do back then ... Definitely 6.8 is an Issue. But having to install Kernel 6.2 on the latest Proxmox VE ... That seems really a Hack. Did you have to recompile the Kernel ...
  13. Proxmox VE 8.2.2 - High IO delay

    Any update from Proxmox Developers would be appreciated. I am experiencing this on SEVERAL Servers. And it cannot be that I need NVME Drives for the very limited amount of work that I am currently doing :rolleyes: .
  14. Proxmox VE Cluster vs Non-Cluster

    I am (re)debating this Issue, as I have done in the Past. My use Case, as a Homelab user, is that some/many Hosts are up only when needed (at specific Times, in order to reduce Power Consumption), and so the "normal" way with Quorum doesn't really work. I had a look at...
  15. Configuration Management System (Saltstack) Permission Errors but working for files in /etc/pve/nodes/<mynode> ???

    But even if I remove the chown command I still get one error. See attached files in the original Post. It works, but it's weird to have failure logs in that case ...
  16. Configuration Management System (Saltstack) Permission Errors but working for files in /etc/pve/nodes/<mynode> ???

    salt-minion is definitely running as REAL root, otherwise it wouldn't be able to do anything on the minion:
    root@PVE:~# ps aux | grep salt
    root 6257 0.2 0.0 131264 26368 ? Ss 12:14 0:00 /opt/saltstack/salt/bin/python3.10 /usr/bin/salt-minion
    root 6266 1.6 0.0 734016...
  17. Configuration Management System (Saltstack) Permission Errors but working for files in /etc/pve/nodes/<mynode> ???

    I am a bit surprised (and I think it's only been happening for a few Weeks/Months) by the Error, but it seems to be working somehow. I use certbot in a Podman/Docker Container to generate ALL Certificates for my Infrastructure. No, I do NOT use the ACME "plugin" of Proxmox VE, since I have wildcard...
  18. Proxmox generates 2 MAC Addresses visible on the Switch, not allowed by the Data Center

    On another Note, as soon as I enable "Outbound NAT" on OPNSense using one of the Additional IPv4 Addresses, everything breaks down :rolleyes: . It seems Inbound (Port-forwarding) NAT works correctly with the Additional IPs (configured in OPNSense -> Interfaces -> Virtual IPs), but for Outbound...
  19. ZFS root booting to busybox, but no displayed command, message or error?

    I have this ...
    cat /etc/default/grub.d/zfs.cfg
    GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} root=ZFS=\"rpool/ROOT/debian\""
    GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} root=ZFS=\"rpool/ROOT/debian\""
    Optionally you could also add these (I add to each line) in case of a headless...
  20. Proxmox VE 8.2.2 - High IO delay

    I am observing some very high (>40%, sometimes 80%) IO Delay on Proxmox VE 8.2.2 with pve-no-subscription Repository. Looking at some Posts over this Forum, this may be due to not using Enterprise-Grade SSD, although to be honest I don't necessarily "buy" this justification. I am using Crucial...
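When chasing IO delay like this, it helps to quantify it outside the GUI; a minimal sketch reading the cumulative iowait share straight from `/proc/stat` (for live per-device numbers, `iostat -x` from the sysstat package or `zpool iostat -w` on a ZFS pool show latency rather than just bandwidth):

```shell
# /proc/stat "cpu" fields: user nice system idle iowait irq softirq steal.
# Print the share of CPU time spent waiting on IO since boot.
awk '/^cpu / {
    total = $2 + $3 + $4 + $5 + $6 + $7 + $8 + $9
    printf "iowait since boot: %.1f%%\n", 100 * $6 / total
}' /proc/stat
```

A since-boot average smooths out spikes, so sampling twice a few seconds apart and differencing the counters gives a better picture of the 40-80% bursts described above.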