zfs

  1. Does Proxmox 8 have new zfs_arc_max settings?

    Hello everyone, I had a Proxmox 7 box running with 64GB of RAM and a 32GB ARC. To switch to volblocksize 16k I reinstalled Proxmox, and now zfs_arc_max appears to be set to 8GB. Was the default changed in version 8? The manual still says 50%...
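
    For reference, a minimal sketch of checking the current limit and raising it persistently; the 32 GiB value is only an example, not a recommendation:

        # current limit in bytes (0 means the built-in default)
        cat /sys/module/zfs/parameters/zfs_arc_max

        # raise it to 32 GiB at runtime, effective immediately
        echo $((32 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max

        # persist across reboots via a module option
        echo "options zfs zfs_arc_max=$((32 * 1024**3))" > /etc/modprobe.d/zfs.conf
        update-initramfs -u
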
  2. How can I use efficient daily ZFS snapshots to keep backups of containers and VMs?

    I would like to keep ZFS-based backups/snapshots at daily (kept 1 month), weekly (kept 5 weeks), and monthly (kept 6 months) intervals on my servers, so that I can readily roll back if the need arises. How can this be automated so that it both takes the snapshots and purges older...
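
    One way to automate this kind of rotation is zfs-auto-snapshot driven from cron; a sketch, where the cron entries and the tank/vmdata dataset name are hypothetical:

        # hypothetical /etc/cron.d entries; '//' means every dataset with
        # com.sun:auto-snapshot=true, and --keep prunes older snapshots
        0 0 * * *  root zfs-auto-snapshot --quiet --syslog --label=daily   --keep=31 //
        0 1 * * 0  root zfs-auto-snapshot --quiet --syslog --label=weekly  --keep=5  //
        0 2 1 * *  root zfs-auto-snapshot --quiet --syslog --label=monthly --keep=6  //

        # opt a dataset (and its children) in
        zfs set com.sun:auto-snapshot=true tank/vmdata
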
  3. DMAR: IOMMU enabled / Error: cannot prepare PCI pass-through, IOMMU not present

    Hi, I am trying to understand why the system behaves this way and to find out where the issue lies; system specifications and the steps taken to fix it are below: CPU: G4400T / i5-6500 (tried both) RAM: 16GB (2x 8GB) Motherboard: ASRock H110M-DGS R3.0 (BIOS: P7.4) NetworkPVE: RTL8111 GbE...
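
    A sketch of the usual first step on Intel boards, assuming GRUB is the bootloader and VT-d is enabled in the BIOS:

        # /etc/default/grub: add the IOMMU flags to the kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

        # apply and reboot, then verify the IOMMU came up
        update-grub
        reboot
        dmesg | grep -e DMAR -e IOMMU
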
  4. LXC, ZFS, nested datasets, bind mount

    Hello, I have a ZFS dataset named "tank/work". I passed it into my LXC as a bind mount, so inside the LXC I can access the data under "/srv/work", including the snapshots and so on. A snapshot is taken every 15 minutes with zfs-auto-snapshot. Great! Now I would like...
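
    Bind mounts do not cross filesystem boundaries, so a nested dataset needs its own mount point entry; a sketch with a hypothetical container ID 101 and a hypothetical child dataset tank/work/projects:

        # bind-mount the parent dataset into the container
        pct set 101 -mp0 /tank/work,mp=/srv/work

        # a nested dataset is a separate filesystem and needs its own entry
        pct set 101 -mp1 /tank/work/projects,mp=/srv/work/projects
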
  5. IO delay

    I upgraded my hosts and removed all swap from the LXC containers, and this is the result. Now I wonder whether the problem was the code or the swap. I'm betting LXC doesn't like ZFS with swap.
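
    To test that theory one container at a time, swap can be removed per container; a sketch with a hypothetical container ID 101:

        # give the container no swap at all (value is in MB)
        pct set 101 -swap 0

        # confirm from the host
        pct exec 101 -- free -m
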
  6. Enable Storage to store qcow2

    I created a ZFS RAIDZ storage. What must I do to make this storage support qcow2?
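
    A zfspool storage only holds raw zvols; qcow2 needs a file-based storage. A sketch, with 'tank' as a placeholder pool name: create a dataset and register it as a Directory storage.

        # dataset to hold the image files
        zfs create tank/images

        # directory storages support qcow2
        pvesm add dir zfs-images --path /tank/images --content images

    Note that qcow2 files on a dataset trade the zvol's block-level handling for file-level flexibility; snapshots then live inside the qcow2 image.
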
  7. ZFS RAID

    Hi all! I want to understand at a high level how to arrange a mirrored RAID using ZFS in Proxmox, and how to do it step by step. According to the documentation: RAID1, also called "mirroring": data is written identically to all disks. This mode requires at least 2 disks of the same size. The...
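
    A minimal command-line sketch of the same thing; the /dev/disk/by-id names are placeholders for your two disks:

        # two-disk mirror; ashift=12 assumes 4K-sector drives
        zpool create -o ashift=12 tank mirror \
          /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

        # register the pool as a Proxmox storage and check its health
        pvesm add zfspool tank-zfs --pool tank --content images,rootdir
        zpool status tank
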
  8. Proxmox VE 8.1.4 - watchdog: BUG: soft lockup - CPU#X stuck for Xs

    Hello there. Since I joined my nodes to a cluster, I've noticed some Linux VMs are getting this error: I couldn't find any working solution for this. I suppose it has something to do with ZFS, since on the one node where ZFS is not running these VMs work without any issues. Do you have any...
  9. could not activate storage 'XXXX', zfs error: cannot import 'XXXXX': no such pool available

    Good evening, I just installed some updates on my node that were shown under "Updates". VE is version 8.1.4. Since then my ZFS pools are no longer recognized: in the datacenter under "Storage" they are still listed, but not on the node. The storage.cfg looks like...
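
    When a pool is configured but not imported, it can usually still be found on disk; a diagnostic sketch, with 'tank' as a placeholder pool name:

        # list pools the system can see but has not imported
        zpool import

        # import by name, scanning stable device paths
        zpool import -d /dev/disk/by-id tank

        # refresh the cachefile so the pool imports at boot again
        zpool set cachefile=/etc/zfs/zpool.cache tank
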
  10. EMERGENCY: Proxmox Server Failure

    Help!! I recently upgraded to Proxmox 8 and now my entire server fails to start. Boot usually begins with a few ACPI GPIO failures, and then it tells me that the ZFS pool holding all my home's data is failing to import. Then, about 10 seconds after reaching the login screen, the entire server...
  11. Performance problems

    Hello, I have a strange problem on one of our servers that I think is related to the ZFS volume. If I run du or ncdu on this server to read the files, it is extremely slow, taking about 30 to 40 minutes per TB. I don't notice the problem when writing to it. Before making the backup, our...
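
    Slow du/ncdu on ZFS is usually dominated by random metadata reads; a first-pass diagnostic sketch, with 'tank' as a placeholder pool name:

        # capacity, fragmentation and per-vdev layout
        zpool list -v tank
        zpool status -v tank

        # metadata scans live or die by ARC hits; watch the hit rate while du runs
        arcstat 1 10
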
  12. RAID storage advice please

    I am rebuilding my office server and would like to host it as a VM on Proxmox. Currently I am running software RAID10 with 4x 1TB disks and 16GB of RAM. I need good read/write performance with some redundancy. I was thinking of adding an additional drive for RAID6, but I understand the performance...
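
    For comparison, the ZFS equivalent of RAID10 is a stripe of mirrors; a sketch, with placeholder /dev/disk/by-id names:

        # two mirror vdevs striped together ("RAID10")
        zpool create -o ashift=12 tank \
          mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
          mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

        # capacity can later be grown by adding another mirror pair
        zpool add tank mirror /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
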
  13. Can't wipe drive & ZFS (8,4,4) fails to be created

    I tried to format 3 drives (4TB, 4TB, 8TB) into a ZFS array. This did not work. I then tried to format the 3 drives into the other array types. This did not work. I then formatted only 1 drive as a default RAID0 volume, which did work. Now I want to wipe this empty drive. I then try wiping it...
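
    Leftover ZFS labels often survive a simple format and block re-use of the disk; a cleanup sketch, where /dev/sdX is a placeholder and everything on the disk is destroyed:

        # clear old ZFS labels, filesystem signatures, then the partition table
        zpool labelclear -f /dev/sdX
        wipefs --all /dev/sdX
        sgdisk --zap-all /dev/sdX
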
  14. another volume 'dpool2' already exists

    Hi all. I've searched this forum but no solution has helped me. Here's the situation: I have 2 nodes, PVE-01 and PVE-04, and virtual machine 103 on pve01 with 3 storages / 3 disks. I have a replication job from pve01 to pve04 that does not work. 2024-02-14 09:59:00 103-0: start replication job 2024-02-14...
  15. ZFS RAID showing incorrect usage?

    I have a ZFS RAID volume on each of my hosts, and when I look at the node's ZFS window I see the correct utilization. To be clear, each host has 12 drives configured as RAID10 using ZFS. I named the pools identically because, as far as I could tell, that is a requirement for replication to function in a cluster...
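
    The pool-level and dataset-level views account for space differently (zvol refreservation in particular inflates dataset usage); comparing both usually explains a mismatch. A sketch, with 'tank' as a placeholder pool name:

        # raw vdev accounting
        zpool list -v tank

        # usable space per dataset, including zvol reservations
        zfs list -o name,used,avail,refreservation -r tank
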
  16. pfSense VM with UFS leads to overfull VM disk until reboot

    Hello, I recently moved to running pfSense in a VM on pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve). My Proxmox storage is ZFS, so using VirtIO block storage for the VM creates a zvol. In pfSense, in order to avoid ZFS-on-ZFS and write amplification, I am using UFS as the...
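
    For the zvol to shrink when the guest frees blocks, the discard path has to be enabled end to end; a sketch with a hypothetical VMID 100, storage name, and device path:

        # Proxmox side: SCSI disk with discard so guest TRIM reaches the zvol
        qm set 100 --scsihw virtio-scsi-pci
        qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

        # FreeBSD/pfSense side: enable TRIM on the UFS filesystem
        # (run against the unmounted filesystem, e.g. from single-user mode)
        tunefs -t enable /dev/da0p2
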
  17. RaidZ1 performance: ZFS on host vs VM

    Hi, I am currently testing my RaidZ1 setup. The plan was to create the ZFS pool (4x 1TB PCIe NVMe SSDs) on Proxmox and then pass it as a disk to a VM, among other things. In my benchmarks with fio, however, I noticed that performance on the host was significantly higher (approx. 50%)...
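
    For host-vs-guest comparisons the fio job has to be identical on both sides; a sketch, where the file path is a placeholder and the same command is run once on the host dataset and once inside the VM:

        fio --name=randwrite --filename=/tank/fio.test --size=4G \
            --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
            --direct=1 --runtime=60 --time_based --group_reporting
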
  18. Feature request: Don't delete volumes on restore which were excluded from backup

    On my old Proxmox instance I didn't use ZFS but directory storage (.vmdk / .qcow2 files). I had VMs on which I excluded drives from backup. When I restored a VM, the disk was then marked as unused but still available to remount on that VM (after doing a qm rescan). I also had trouble with...
  19. ZFS advice for Hetzner bigger boxes

    Hi. Some time ago I picked up a machine with 15x 6TB HDDs from the Hetzner auction. Somewhat naively (if not plain stupidly), I configured it as one raidz2 vdev. Ouch. It runs a few VMs (very slowly), a few LXCs, and some backups of my substantial photo library, and predictably the machine is slowly dying. I even got...
  20. Formatting a 10TB WD Ultrastar DC HC330 hard drive (part: 0B42266 and 0B42305) in Proxmox does not work, but formatting in Windows 10 works.

    I can't format my hard drive in Proxmox; it gives an error (file attached below). The latest Proxmox updates are installed. But in Windows 10, through Disk Management, formatting it as NTFS works correctly. Files on the disk run without...
