zfs

  1. PVE migration – Samba file server shadow copy: which technology should be used (vm with zfs vs. lxc with zfs vs. vm with ext4/lvm)?

    Hello, we are currently in the process of migrating our VMware ESXi environment to Proxmox. One of the next steps is migrating the Samba file server; since it is ~3TB large, I want to set up a new VM/LXC on the PVE node and transfer the data with rsync so I can switch to the new system more or less “on the...
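For the rsync transfer described in this thread, a staged copy keeps downtime short: bulk-copy while the old server is still live, re-run to catch changes, then do a final pass with the share offline. A minimal sketch, assuming hypothetical paths and hostname; the flag choice is one reasonable option, not the only one:

```shell
# Hedged sketch of a staged rsync cutover. SRC and DST are hypothetical
# placeholders; adjust flags to your ACL/xattr requirements.
SRC="root@oldserver:/srv/samba/"   # assumed path on the old file server
DST="/tank/samba/"                 # assumed ZFS dataset mountpoint on PVE

# Initial bulk copy while the old server stays live; safe to re-run to
# pick up files that changed since the last pass.
bulk_copy() {
    rsync -aHAX --numeric-ids --info=progress2 "$SRC" "$DST"
}

# Final pass with the old share offline; --delete makes DST an exact mirror.
final_sync() {
    rsync -aHAX --numeric-ids --delete "$SRC" "$DST"
}
```

Run bulk_copy as often as needed; only final_sync (with --delete) requires the old Samba service to be stopped first.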
  2. All ZFS Pools Showing as 97% Full in PVE

    Hello everyone, I use Proxmox and Proxmox Backup Server. I have a single ZFS-formatted 8TB archive HDD that I use with Nextcloud that's showing 97% full, a RAID1 ZFS 8TB HDD pool that I use with PBS for backups that's showing 97% full, and a RAID6 ZFS 8TB HDD pool used as a secondary PBS backup...
  3. [SOLVED] I broke HBA passthrough to TrueNAS VM by adding 2nd HBA

    First real post to this forum, so please be gentle. The hardware: SuperMicro 846 36xHDD chassis with dual 1250W PSUs, Gigabyte MZ72-HB2 dual-socket AMD motherboard with 256GB ECC RAM, 2x EPYC 7F52 16-core processors, LSI SAS2308 HBA #1, 4x Samsung PM9A1 512GB SSD boot drives in a ZFS stripe/mirror...
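A second HBA often shifts PCI bus addresses, so a passthrough entry that worked before can end up pointing at the wrong device afterwards. A quick sanity check (a generic sketch; the grep pattern is an assumption about how the HBAs identify themselves):

```shell
# Print SAS HBAs with their current PCI addresses; compare them against the
# hostpciX lines in /etc/pve/qemu-server/<vmid>.conf after adding the card.
list_hbas() {
    lspci -nn | grep -i -e 'sas' -e 'lsi'
}
```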
  4. Helpful hints for users moving from TrueNAS Scale to Proxmox (zfs, Samba/SMB share)

    As a new user to Proxmox (coming from TrueNAS Scale), I made a bunch of mistakes that cost me an aggravating week of time and effort. The two biggest issues I had were: not realizing I had to export my TrueNAS ZFS pool before importing it into Proxmox (which ultimately corrupted my zpool and made...
  5. ZFS - How do I stop resilvering a degraded drive and replace it with a cold spare?

    The replacement HDD gets here tomorrow. In the meantime, I'd like to spare the good mirrored drive from working overtime to resilver the failing one. I also don't understand how to swap the new drive with the old one, since I don't have any open drive bays to add the new drive to the pool...
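One commonly suggested sequence for this situation is: offline the failing disk to halt the resilver, physically swap the drives in the same bay, then resilver onto the newcomer with zpool replace. A sketch with hypothetical pool and device names:

```shell
POOL="tank"                              # assumed pool name
FAILED="/dev/disk/by-id/ata-FAILING"     # assumed failing disk
NEW="/dev/disk/by-id/ata-REPLACEMENT"    # assumed cold spare

# Taking the failing disk offline stops ZFS from resilvering onto it.
stop_resilver() {
    zpool offline "$POOL" "$FAILED"
}

# After physically swapping drives in the same bay, resilver onto the new disk.
swap_in_spare() {
    zpool replace "$POOL" "$FAILED" "$NEW"
}
```

zpool offline only succeeds while the surviving mirror half keeps the vdev available; check zpool status before and after each step.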
  6. [ANN] bzfs 1.18.0 near real-time ZFS replication tool is out

    It improves operational stability by default. Also runs nightly tests on AlmaLinux-10.1 and AlmaLinux-9.7, and ties up some loose ends in the docs. If you missed 1.17, it also improves handling of snapshots that carry a `zfs hold`. Also improves monitoring of snapshots, especially the timely...
  7. ZFS error in journal log

    Building/testing a server running proxmox-ve 9.1.0 (kernel 6.17.9-1-pve). The machine has 5 disks: 3x SSD, 1x NVMe, 1x SATA. On disk 1 (SSD, OS), the EFI and boot partitions are not encrypted, the root file system (ext4) is on a LUKS-encrypted partition, and clevis/tang decrypts the LUKS partition so the machine...
  8. ZFS error in journal log

    Building/testing a server running proxmox-ve 9.1.0 on kernel 6.17.9-1-pve on top of Debian 13. The machine has 5 disks (3x SSD, 1x NVMe, 1x SATA). On disk 1 (SSD, OS), the EFI and boot partitions are not encrypted, the root filesystem (ext4) is on a LUKS-encrypted partition, and clevis/tang is configured so the root file system gets...
  9. Ditching Unraid in favor of Proxmox for ZFS storage shares?

    Hi there, I'm currently running Unraid in a VM on Proxmox, passing through the SATA controllers as well as one NVMe drive on my motherboard. I started using Unraid as an easy way to use disks of different sizes with a cache in front, but over time I came to realize that the vanilla...
  10. Inconsistent file copy on Windows + ZFS

    We have recently run up against an issue where large (>4GB) copies cause lockups within a Windows guest. These can be copies within the same folder, or across disks. The original issue was seen on a RAID Z-1 pool (6x 870 EVO SSD), but has been replicated on ZFS RAID-1 pool with Micron 7450. The...
  11. [SOLVED] No zfstools in the latest kernel 6.17.9-1-pve

    Hello everyone, "Und täglich grüßt das Murmeltier" I just upgraded my kernel from 6.17.4-2 to 6.17.9-1. Sadly now I'm missing following binaries/scripts: - arcstat - arc_summary Is it possible to include them in the next releases and how can I install them now? Thanks in advance
  12. HPE P816i-a SR Gen 10 - SCSI Resets and ZFS failures

    Hi everyone, I have kind of a weird behaviour with a PBS based on an HP DL380 Gen10. Since the upgrade to PBS 4, the system sporadically "spits out" one of its 10 SAS disks. Interestingly enough, it's a different disk every time, and mostly just one. These disks get marked as failed in the ZFS...
  13. [SOLVED] RAIDz2 zpool not showing full capacity after zpool attach

    I had a zpool with 4x8TB SATA drives in a RAIDz2 with a total capacity of 16TB. I updated to PVE 9 and ZFS 2.3 and attached 2 additional drives to the zpool using:

        zpool upgrade <poolname>
        zpool attach <poolname> raidz2-0 /dev/disk/by-id/<drive1>
        zpool attach <poolname> raidz2-0...
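With raidz expansion in ZFS 2.3, extra space is only reported once each expansion pass has finished, and blocks written before the expansion keep their old data-to-parity ratio, so reported capacity can stay below the naive expectation even afterwards. A sketch for checking where things stand (pool name hypothetical):

```shell
POOL="tank"   # assumed pool name

check_expansion() {
    zpool status "$POOL"    # shows an expansion progress line while a raidz
                            # expansion is still running
    zpool list -v "$POOL"   # SIZE/FREE only grow once the expansion finishes
}
```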
  14. random kernel hang in zfs

    Hi, we have a Proxmox 8.3 host running on a Fujitsu Primergy RX2540 M4 server. It's running ZFS on top of hardware RAID, which I know is not recommended, but I don't think it should lead to this behavior. It has crashed three times during the last 4 months. I was only able to capture the call trace...
  15. [SOLVED] Proxmox VE 8 to 9 in-place update but kernel still on 6.8

    I did an in-place upgrade from PVE8 to PVE9, but after the update the kernel and ZFS are still showing old versions even though they both seem to have updated:

        # pveversion
        pve-manager/9.1.4/5ac30304265fbd8e (running kernel: 6.8.12-18-pve)
        # uname -a
        Linux ghar80 6.8.12-18-pve #1 SMP...
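When an in-place upgrade leaves the old kernel running, the usual causes are a missed reboot or stale boot entries; on ZFS-root systems proxmox-boot-tool manages those entries. A generic check, not a guaranteed fix:

```shell
check_boot() {
    uname -r                          # kernel actually running now
    proxmox-boot-tool kernel list     # kernels the bootloader will offer
}

refresh_boot() {
    proxmox-boot-tool refresh         # rewrite boot entries, then reboot
}
```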
  16. bindmount ZFS dataset and children to unprivileged LXC

    On my host I have ZFS datasets laid out like this:

        NAME                 MOUNTPOINT
        Data                 /Data
        Data/Nas             /Data/Nas
        Data/Nas/Documents   /Data/Nas/Documents
        Data/Nas/Photos      /Data/Nas/Photos

    I have an LXC...
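Because every child dataset is its own mount, bind-mounting only /Data/Nas leaves Documents and Photos looking empty inside the container; one mount point per dataset avoids that. A sketch with a hypothetical container ID and target paths:

```shell
CTID=101   # assumed container ID

# One mpX entry per child dataset: bind mounts do not recurse into the
# separate mounts of Data/Nas/Documents and Data/Nas/Photos.
add_bind_mounts() {
    pct set "$CTID" -mp0 /Data/Nas/Documents,mp=/mnt/Documents
    pct set "$CTID" -mp1 /Data/Nas/Photos,mp=/mnt/Photos
}
```

For an unprivileged container, host-side file ownership must also match the shifted uid range (container uid 0 is host uid 100000 by default) or be granted via ACLs.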
  17. Help understanding ZFS disk usage by VM disks

    Hi everybody, I have a ZFS pool that is filling up, but I can't understand why: the sum of all VM disks doesn't match the used space reported by the pool. Only vm-1107 has snapshots.

        # zpool list rpoolData
        NAME        SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
        rpoolData...
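For questions like this, the gap is usually snapshots, refreservation on thick-provisioned zvols, or raidz padding rather than the disk images themselves. A per-dataset breakdown (using the pool name from the post) can be sketched as:

```shell
usage_breakdown() {
    # usedbydataset vs. usedbysnapshots vs. usedbyrefreservation shows where
    # the space actually goes for each vm-XXXX-disk-N zvol in the pool
    zfs list -r -o name,used,usedbydataset,usedbysnapshots,usedbyrefreservation,refreservation rpoolData
}
```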
  18. [SOLVED] Proxmox 9 - IO error ZFS

    Hi everyone, We’ve been deploying several new Proxmox 9 nodes using ZFS as the primary storage, and we’re encountering issues where virtual machines become I/O locked. When it happens, the VMs are paused with an I/O error. We’re aware this can occur when a host runs out of disk space, but in...
  19. [TUTORIAL] QuantaStor Proxmox Storage Plugin

    Hello all! I'm a developer with OSNEXUS. We’re excited to announce the pre-release version of the QuantaStor Proxmox Storage Plugin, designed to simplify the management of QuantaStor storage within Proxmox VE clusters. This plugin enables seamless creation and management of ZFS-over-iSCSI...
  20. ZFS Pool Error remains after replacement of Disk

    We had an issue with one disk of our ZFS pool. The disk was replaced and the pool resilvered, all fine. Two errors are still remaining:

        root@pve1:~# zpool status -v
          pool: rpool
         state: ONLINE
        status: One or more devices has experienced an error resulting in data corruption. Applications may...
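Permanent-error entries often persist until the pool has been fully re-verified after the resilver; the commonly suggested sequence is a scrub followed by zpool clear, sometimes needing two scrub passes before the message disappears. A sketch using the pool name from the post:

```shell
POOL="rpool"

retire_errors() {
    zpool scrub "$POOL"   # re-reads and re-verifies every block in the pool
    # after the scrub completes with no new errors:
    zpool clear "$POOL"   # reset error counters and the status message
}
```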