...NVMe shared with other files, not a dedicated device, so it's not a big deal if a few GB go to that, just for the heck of it.
Mostly the use case for the ZFS RAID will just be file storage and rarely used VMs/some CTs; most CTs that need low latency and handle many small files are on SSD/NVMe and...
...take into account that for zswap you always need physical swap as a backing device. For example, if you use the defaults of PVE's installer for ZFS, you will end up without space for a dedicated swap device. And swapfiles are not recommended on ZFS since they caused problems in the past (not sure...
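As a quick way to check both points on a host (standard Linux interfaces, nothing PVE-specific assumed):

cat /sys/module/zswap/parameters/enabled   # is zswap enabled at all
swapon --show                              # the physical swap zswap would write back to; empty output means no backing device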
that is good to know, just write right?
I am wary of using special devices; if one of those fails, the pool goes down too, right? So it's best to use mirrors there too?
I planned to use L2ARC on an NVMe, but I wanted to go the low-risk route that won't cause the pool to fail if anything happens...
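As a rough sketch of that low-risk route (pool and device names below are placeholders): an L2ARC cache vdev can fail or be removed without faulting the pool, unlike a special vdev.

# add an NVMe partition as a read cache (L2ARC) to a hypothetical pool "tank"
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-part4
# if it ever misbehaves it can simply be removed again; the pool stays healthy
zpool remove tank /dev/disk/by-id/nvme-EXAMPLE-part4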
...IO thread in the Hard Disk attributes of a few VMs, rebooted the VMs, and now there are no more IO delays or IO pressure stalls. This does not appear to be ZFS-related, as one of my PVE hosts is on EXT4 partitions, it was happening on a mix of AMD and Intel CPU hosts, and I have both...
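As a side note on how that setting is applied (VMID, storage, and volume names below are made up): IO threads are enabled per virtual disk, and for SCSI disks the controller generally needs to be virtio-scsi-single for the option to take effect.

# hypothetical VM 100: one dedicated IO thread per SCSI disk
qm set 100 --scsihw virtio-scsi-single
# re-declare the existing disk with iothread enabled (volume name is just an example)
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1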
...IOPS for writing data and four times the IOPS for reading data.
And... for rotating rust I highly recommend adding two fast - but small - SSDs/NVMes as a so-called "Special Device". It really speeds things up.
Also: https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/
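For reference, a minimal sketch of what adding such a special device looks like (pool and device names are made up); since losing the special vdev loses the whole pool, it must be mirrored:

# add a mirrored special vdev for metadata (and optionally small blocks)
zpool add tank special mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B
# optionally also store small records on the special vdev
zfs set special_small_blocks=16K tank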
Good day. I have recently updated my Ceph from Reef (18.2.8) to Squid (19.2) in preparation for upgrading my Proxmox systems from v8 to v9. I followed the documentation below to prepare:
https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#In-place_upgrade
Following that, I completed these steps...
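The steps themselves are cut off above; as a generic sanity check after such a Ceph upgrade (not part of the original post), it is worth confirming that every daemon already reports the Squid version:

ceph versions   # mon/mgr/osd entries should all show 19.2.x
ceph -s         # overall health, and that all mons/OSDs are up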
...does not necessarily directly identify the disk that first went bad. If a severe stall occurs on one drive or along one I/O path, the resulting ZFS write stall can also show up as deadman logs on other disks.
Based on the iostat results, this looks more like a problem with sdb itself or with...
...Replication: v13
VM role: Windows Server 2016 Domain Controller + File Server
VM disk: ~559 GB LVM volume on iSCSI NAS
Backup targets: PBS (local ZFS datastore) + Veeam (SMB NAS target)
Problem:
Following a crash, we attempted to restore the VM from both PBS and Veeam backups. All restore...
Swap inside (Linux) VMs is just as advantageous. Having ZFS (or BTRFS) underneath those VMs is not ideal, but it should not be a problem (with enterprise drives) unless they start thrashing, which is a problem in and of itself.
Writing every once in a while to swap is good for performance and...
I am aware that on bare metal Linux, or a non-ZFS-based Proxmox VE host, there are advantages to having swap enabled.
So, I'm asking specifically about using swap inside VMs stored as zvols on a thin-provisioned ZFS mirror pool. I have a 4 GiB Debian 13 VM that I've never seen use more than 1.2...
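For what it's worth, inside such a guest (the commands below assume a Debian VM) you can see how much swap is actually being touched and keep the kernel from swapping eagerly:

free -h                      # actual RAM and swap usage in the guest
sysctl -w vm.swappiness=10   # swap only under real pressure; persist via /etc/sysctl.d/ if kept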
...sdb3 out of the pool
zpool offline rpool sdb3
The problem seems to be fixed: load average getting lower, IO delay drops to almost zero.
Bringing the sdb drive back into the pool
zpool online rpool sdb3
IO delay is rising again.
What is this, another faulty drive? Or faulty ZFS logic that unfairly loads only one drive?
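One way to narrow this down (a sketch, assuming the pool is rpool and the suspect disk is /dev/sdb) is to compare per-vdev latency while the disk is online and to check its SMART data:

zpool iostat -v -l rpool 5   # per-disk I/O and latency every 5 seconds; a failing disk usually stands out
smartctl -x /dev/sdb         # full SMART attributes and error log for the suspect drive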
...https://forum.proxmox.com/threads/voting-for-feature-request-for-zfs-over-iscsi-storage.169947/ or https://forum.proxmox.com/threads/powerloss-protection-plp-mythos.157003/ ), so that the OP can put this "overview" into context
...is "all" on the two destination PBS (old one and new one)
recordsize is 128K on both PBS too.
These are actually the default parameters when the ZFS pool is created through the PBS web interface.
I'm definitely missing something 8-)
Why is there such a storage size (and/or dedup) difference...
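For comparison, the properties I'd check on each datastore dataset (the dataset name below is just an example) are:

# logical vs. physical usage shows the effective compression on each datastore
zfs get used,logicalused,compressratio,compression,recordsize tank/pbs-datastore

Keep in mind that PBS only frees space on a datastore after garbage collection has run, which can account for part of a size difference between an old and a new datastore.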
...as beta, since Ubuntu 26.04 has not yet been released. Proxmox leverages the Ubuntu kernel, enhanced with custom compile flags, built-in ZFS support, and patches optimized for virtual machines and LXC containers. (If you are curious about the patches, check out the proxmox-kernel git.)
Ubuntu...
...But: the Debian kernel is then of course missing things that are present in the Ubuntu or Proxmox VE kernel, such as ZFS
And if you test Ubuntu, you should take the release that has a comparable kernel; according to...
...6.17.13-1-pve, 6.17.2-2-pve)
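If you want to check which of those kernels are installed and which one is actually running on a PVE host (assuming the host is managed by proxmox-boot-tool), a quick check is:

proxmox-boot-tool kernel list   # installed proxmox-kernel versions
uname -r                        # the kernel currently running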
Then I'll have to do:
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
Next, will I have to use proxmox-boot-tool or grub-install?
Many thanks for your help :)
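On that last question: which tool is right depends on how the host boots, and proxmox-boot-tool status tells you. A sketch for the proxmox-boot-tool (UEFI) case, assuming the ESP is partition 2 of the new disk:

proxmox-boot-tool status             # shows whether the host uses proxmox-boot-tool or plain grub
proxmox-boot-tool format /dev/sdX2   # format the new ESP (partition number is an assumption)
proxmox-boot-tool init /dev/sdX2     # install the boot loader and register the partition
# on a legacy BIOS/grub-only host you would instead run: grub-install /dev/sdX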
...thought was to move away from TrueNAS altogether and run PVE on the primary storage server and PBS on the backup storage server. I have a ~24 TB ZFS pool on the primary storage server that I intend to export from TrueNAS and import into PVE; I do not intend to wipe this pool and rebuild a pool to...
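For what it's worth, a sketch of that move with a made-up pool name (the pool itself stays untouched, only its ownership changes):

# on TrueNAS: cleanly export the pool first
zpool export tank
# on the PVE host: list importable pools, then import (add -f only if it was not exported cleanly)
zpool import
zpool import tank
# register it as a PVE storage so VMs/CTs can use it
pvesm add zfspool tank-storage --pool tank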
...your setup - leading to boot problems. These names are not static; they do not point to the same device under all circumstances. "By-id" names will.
ZFS can consume any block storage device. You may specify an already existing partition on a disk, for example. If you give it a brand-new disk it...
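If an existing (non-root) data pool was created with sdX names, it can usually be switched to stable names by re-importing it with by-id paths; a sketch with a made-up pool name:

ls -l /dev/disk/by-id/                 # the stable names for each disk
zpool export tank
zpool import -d /dev/disk/by-id tank   # re-import using by-id paths
zpool status tank                      # vdevs now show the stable names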
...it happens again
In the system logs I can see many records like this
Meanwhile, zpool events -v shows this
Apr 13 2026 15:54:06.617975577 ereport.fs.zfs.deadman
class = "ereport.fs.zfs.deadman"
ena = 0x819c1944f5406001
detector = (embedded nvlist)
version...
Did you verify that it works:
cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-6.17.13-2-pve root=ZFS=/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs processor.max_cstate=1
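As a general note (not from the original post): where those kernel parameters are maintained depends on the boot setup, and a refresh is needed after editing them:

cat /etc/kernel/cmdline                             # proxmox-boot-tool setups keep the cmdline here
proxmox-boot-tool refresh                           # re-writes the boot entries after a change
grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub   # grub setups keep it here
update-grub                                         # regenerates the grub config after a change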