Search results for query: ZFS

  1. R

    KVM processes in SWAP despite 35-55 GiB free RAM – Proxmox + ZFS

    Despite reducing the ZFS ARC size significantly, 44% of swap was used overnight — even though there was always plenty of free RAM available on the host. What makes this even more confusing is that it also happened on VMs where ballooning is completely disabled (balloon: 0), so Proxmox was never...
  2. W

    TrueNAS Storage Plugin

    I guess I'm not aware, is the native ZFS-iSCSI unable to store TPM disks?
  3. B

    TrueNAS Storage Plugin

    @warlocksyno have you managed to solve the tpm storage issue that exists with the native ZFS over iSCSI plugin?
  4. R

    KVM processes in SWAP despite 35-55 GiB free RAM – Proxmox + ZFS

    I tried setting the ARC limit at runtime. However, arc_summary showed no change. Further research reveals that this is expected behavior — zfs_arc_max can technically be changed at runtime, but the internal value arc_c_max that actually controls the ARC size only gets recalculated under memory...
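Assuming a standard OpenZFS setup, the behavior described in this post can be sketched as follows (the 16 GiB cap is an illustrative value, not from the thread):

```shell
# Write a new ARC cap (in bytes) to the live module parameter; 16 GiB shown as an example.
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

# The parameter itself reflects the new value immediately...
cat /sys/module/zfs/parameters/zfs_arc_max

# ...but arc_summary may keep reporting the old arc_c_max until memory
# pressure forces a recalculation, as the post describes.
arc_summary | grep -i "max size"
```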
  5. R

    KVM processes in SWAP despite 35-55 GiB free RAM – Proxmox + ZFS

    Since my root filesystem is on ZFS, there is no way to change the ARC limit at runtime. Or is there any way? The only option is to update /etc/modprobe.d/zfs.conf and run update-initramfs -u to rebuild the initramfs. After a reboot the new limits will be active. Since the machine has 96 GB of...
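The persistent route described here can be sketched like this (the 16 GiB limit is an assumed example value):

```shell
# Persist the ARC cap in the ZFS module options (16 GiB chosen purely as an example):
cat <<'EOF' > /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=17179869184
EOF

# With root on ZFS, the option must be baked into the initramfs:
update-initramfs -u

# The new limit takes effect after the next reboot.
```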
  6. D

    KVM processes in SWAP despite 35-55 GiB free RAM – Proxmox + ZFS

...I think lowering the ARC is a good first step to try. An 8 TiB ZFS pool would only need around 10 GiB of ARC, so 96 GiB should be more than safe. You could probably start with an even lower value and tune it from there.
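As a sanity check on that estimate, a common rule of thumb is roughly 1 GiB of ARC per 1 TiB of pool plus a small base; that heuristic (an assumption, not a guarantee) lands in the same ballpark:

```shell
# Rough ARC sizing heuristic: ~1 GiB per TiB of pool, plus ~2 GiB base.
pool_tib=8
arc_gib=$(( pool_tib + 2 ))
echo "${arc_gib} GiB"   # 10 GiB for an 8 TiB pool
```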
  7. R

    KVM processes in SWAP despite 35-55 GiB free RAM – Proxmox + ZFS

System: Proxmox VE (8.4.17, updated, no fresh install), ZFS 2.2.9-pve1, Linux 6.8.12-20-pve 256 GiB RAM, Single CPU (no NUMA) vm.swappiness = 1 - set from 60 to 5 and then to 1 ZFS ARC: default limit of 50% RAM = 125.7 GiB, currently used at 125.6 GiB (99.9%) - but no /etc/modprobe.d/zfs.conf...
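For reference, the swappiness change mentioned here would typically be applied like this (the drop-in filename is an example, not from the post):

```shell
# Change vm.swappiness for the running system:
sysctl vm.swappiness=1

# Persist it across reboots (filename is arbitrary):
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
```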
  8. A

    [SOLVED] Proxmox freezes with high IO, maybe ZFS related

I've changed the suspected failed drive (sdb); for now it's all okay, no more freezes since 36 hours ago. I hope the problem can be marked as solved. I came across this topic, and in the end the topic starter changed the hardware too.
  9. leesteken

    ZFS Storage bad NVME Performance

    ...is also not a good fit for VMs due to the low (random) IOPS and mirrors are indeed better: https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/ EDIT: And ZFS has read/write amplification and overhead (compared to LVM) which allow it to have many useful features like...
  10. B

    ZFS Storage bad NVME Performance

    ...specs: CPU: AMD EPYC 7303P Motherboard: H12SSW-NTR Storage: OS: 2x 256GB SSD Data: 3x Samsung OEM Datacenter NVMe PM9A3 3.84TB (PCIe 4.0 x4) ZFS setup: I created a RAIDZ1 pool and configured the following settings: zfs set recordsize=8K rpool zfs set compression=lz4 rpool zfs set...
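The settings quoted in this post correspond to commands like the following; note (as a caveat added here, not something stated in the post) that recordsize only affects datasets, not the zvols Proxmox uses for VM disks:

```shell
# Pool-wide tuning as described in the post (pool name "rpool" from the excerpt):
zfs set recordsize=8K rpool
zfs set compression=lz4 rpool

# Caveat: recordsize applies to filesystems/datasets. VM disks are zvols and use
# volblocksize instead, which can only be set when the zvol is created
# (in Proxmox, via the storage's "Block Size" setting).
```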
  11. Z

    new drive setup, considering RAIDZ1

...NVMe with other files, not a dedicated device, so it's not a big deal if a few GB go to that just for the heck of it. Mostly the use case for the ZFS raid will just be file storage and not-often-used VMs/some CTs; most CTs that need low latency and handle many small files are on SSD/NVMe and...
  12. J

    [PVE 9/ZFS-Based VM/LXC Storage] Why Shouldn't I Disable Swap Inside Linux VMs?

...take into account that for zswap you always need physical swap as a backing device. For example, if you use the defaults of PVE's installer for ZFS, you will end up without space for a dedicated swap device. And swapfiles are not recommended on ZFS since they caused problems in the past (not sure...
  13. Z

    new drive setup, considering RAIDZ1

That is good to know, just writes, right? I am wary of using special devices; if one of those fails, the pool goes down too, right? So it's best to use mirrors there too? I planned to use L2ARC from an NVMe, but I wanted to go low-risk routes that won't cause the pool to fail if anything happens...
  14. S

    [SOLVED] Proxmox freezes with high IO, maybe ZFS related

...IO thread for a few VMs' Hard Disk attributes, rebooted the VMs, and now there are no more IO delays and IO pressure stalls. This does not appear to be ZFS related, as one of my PVE hosts is on EXT4 partitions, I have a mix of AMD and Intel CPU hosts that it was happening with, and I have both...
  15. UdoB

    new drive setup, considering RAIDZ1

...IOPS for writing data and four times the IOPS for reading data. And... for rotating rust I highly recommend adding two fast, but small, SSD/NVMe as a so-called "Special Device". It really speeds things up. Also: https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/
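A hedged sketch of what adding such a special device looks like (pool and device names are placeholders; the mirror matters because losing the special vdev loses the whole pool):

```shell
# Add a mirrored special vdev for metadata (and optionally small blocks):
zpool add tank special mirror /dev/disk/by-id/nvme-EXAMPLE-A /dev/disk/by-id/nvme-EXAMPLE-B

# Optionally route small blocks to the special vdev as well:
zfs set special_small_blocks=16K tank
```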
  16. J

    [SOLVED] Issue: Unable to reach Ceph after upgrade to Squid (19.2)

Good day. I have recently updated my Ceph from Reef (18.2.8) to Squid (19.2) in preparation for upgrading my Proxmox systems from v8 to v9. I followed the documentation below to prepare: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#In-place_upgrade I then completed these steps...
  17. D

    [SOLVED] Proxmox freezes with high IO, maybe ZFS related

    ...does not necessarily directly identify the disk that first went bad. If a severe stall occurs on one drive or along one I/O path, the resulting ZFS write stall can also show up as deadman logs on other disks. Based on the iostat results, this looks more like a problem with sdb itself or with...
  18. P

    [HELP] Windows Server 2016 VM (migrated from VMware) - CRITICAL_PROCESS_DIED after restore from PBS and Veeam

    ...Replication: v13 VM role: Windows Server 2016 Domain Controller + File Server VM disk: ~559 GB LVM volume on iSCSI NAS Backup targets: PBS (local ZFS datastore) + Veeam (SMB NAS target) Problem: Following a crash, we attempted to restore the VM from both PBS and Veeam backups. All restore...
  19. leesteken

    [PVE 9/ZFS-Based VM/LXC Storage] Why Shouldn't I Disable Swap Inside Linux VMs?

Swap inside (Linux) VMs is just as advantageous. Having ZFS (or BTRFS) underneath those VMs is not ideal, but should not be a problem (with enterprise drives) unless they start thrashing, which is a problem in and of itself. Writing every once in a while to swap is good for performance and...
  20. SInisterPisces

    [PVE 9/ZFS-Based VM/LXC Storage] Why Shouldn't I Disable Swap Inside Linux VMs?

I am aware that on bare-metal Linux, or a non-ZFS-based Proxmox VE host, there are advantages to having swap enabled. So I'm asking specifically about using swap inside VMs stored as zvols on a thin-provisioned ZFS mirror pool. I have a 4 GiB Debian 13 VM that I've never seen use more than 1.2...