zpool

  1. Proxmox VE created ZFS pool, passed through into VM online on both host and guest

    My current NAS setup has Proxmox VE running on bare metal and TrueNAS Scale running in a VM. I created a ZFS pool "appspool" from the UI (Datacenter -> Storage -> Add -> ZFS). I then created a TrueNAS Scale VM and passed through the disk: qm set 900 --scsi2...
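
    A minimal sketch of that kind of passthrough, assuming a zvol named appspool/truenas-data (name and size are hypothetical):

      # create a zvol on the pool to act as the VM's disk
      zfs create -V 500G appspool/truenas-data
      # attach it to VM 900 as an additional SCSI disk
      qm set 900 --scsi2 /dev/zvol/appspool/truenas-data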
  2. Strange ZFS performance issue

    Hello, I have an old-ish HP ProLiant MicroServer Gen8. It has 4x 3.5" bays (currently populated with 2x WD Reds = /dev/sda, /dev/sdb) + 1x old laptop 2.5" HDD (I installed the Proxmox OS only = /dev/sdc). My setup is that I have Proxmox installed on the single 2.5" HDD, where the whole disk was...
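
    A common first step for diagnosing this kind of issue, assuming the data pool is named tank (hypothetical), is to watch per-device I/O while the slowdown happens:

      # per-vdev throughput and ops, refreshed every second
      zpool iostat -v tank 1
      # per-device latency histograms help spot a single slow disk
      zpool iostat -w tank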
  3. ZFS suddenly inactive, unable to access the zpools

    Hello gents, I would like to ask you for advice after hours of googling without success. After restarting my home server while playing with VPN, my 1-month-old SSD, which I am using as a NAS (I know, 1 disk, NAS, ZFS... who does this), stopped working. I was unable to access the storage, so...
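
    The usual recovery path for a pool that went inactive, with datapool as an assumed pool name, is to list what is importable and then import it by name:

      # show pools that are visible but not imported
      zpool import
      # import by name; -f forces it if the pool looks in use by another system
      zpool import -f datapool
      zpool status datapool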
  4. [SOLVED] Help me fix my rpool - need to remove a partition from the pool and need to make another one bootable

    Hi all, while working through this problem here, I realized that I needed to fix the layout of the partitions in my rpool before proceeding. /dev/sdb3 and /dev/sda3 are now mirrored partitions (correct), but sda2 should be an EFI boot partition. I'm trying to remove sda2 from the rpool so I have...
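
    A sketch of the likely fix, assuming sda2 is currently attached as one side of a ZFS mirror and should become an ESP instead:

      # remove the stray partition from the mirror
      zpool detach rpool /dev/sda2
      # reformat it as an EFI system partition and register it for booting
      proxmox-boot-tool format /dev/sda2
      proxmox-boot-tool init /dev/sda2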
  5. [SOLVED] Multi node cluster, local ZFS per node

    Hi, what is the correct way to set up local ZFS pools on each node? I am not looking to share them/HA across nodes; I just want to use a local pool on each node for VMs, and use my shared storage (NFS) for HA across the cluster. I have the cluster set up, and when I add a zpool to a node, it looks...
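
    One way to set this up, assuming a pool named tank already exists on node pve1 (names are assumptions), is to register the storage restricted to that node:

      # add the local pool as a storage entry visible only on pve1
      pvesm add zfspool tank-pve1 --pool tank --nodes pve1 --content images,rootdir

    If every node names its local pool the same way, a single storage entry listing all nodes also works.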
  6. [SOLVED] Huge backup size but small used disk on VM

    Hello, I have a simple problem: I have a Windows Server 2019 VM. Space used on the disk of the VM: 179 GB. Size of the VM disk: 1.65 TB. When I back up the VM with Proxmox, my backup size is 919 GB. Why? I cannot defrag the disk on the VM. For information, I use a zpool storage for the disk...
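
    A frequent cause is that blocks freed inside the guest were never discarded back to ZFS; a sketch of the usual remedy, with VM ID and disk name as assumptions:

      # enable discard on the disk so the guest can release freed blocks
      qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on
      # then retrim inside Windows, e.g. Optimize-Volume -DriveLetter C -ReTrim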
  7. Proxmox VM - Windows 10 [One Disk Image, multiple VMs]

    Hi people, what I wanted to achieve is one central master VM disk, with multiple VMs using it in read-only mode. My base VM, let's say VM 100, has Windows 10 and lots of other software, and my other VMs, VM 101 to VM 105, use the same disk image of the base Windows VM...
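
    Linked clones are the closest supported pattern to this; a sketch, assuming VM 100 can be frozen as a template:

      # convert the base VM into a template
      qm template 100
      # create linked clones that share the template's base image
      qm clone 100 101 --name win10-01
      qm clone 100 102 --name win10-02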
  8. Datastore - emulating one by another? (UdoB)

    Good morning, here is a little Sunday morning story with a question regarding datastore emulation: My homelab is growing - while I am evaluating options to shrink it down for power consumption reasons. Growing is not always linear, and with different new (in this case: used) hardware come new...
  9. ZFS pool on mdadm raid0

    I am wondering if it is possible to create a ZFS pool on an mdadm RAID 0; I love the performance of the latter and the functionality of the former with Proxmox.
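
    It is technically possible, though generally discouraged because ZFS loses direct control of the member disks; a minimal sketch with device names as assumptions:

      # build the mdadm stripe
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
      # create a single-device pool on top of it (no ZFS self-healing this way)
      zpool create tank /dev/md0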
  10. ZFS Pool Performance vs LXC

    Hi all, hoping to gain a better understanding of why I'm seeing a difference in performance running the same fio command in different contexts. A little background: my pool originally contained 4 vdevs, each with 2x 3TB disks. However, due to a failing disk on mirror-3 and wanting to...
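
    For a fair host-vs-container comparison the fio invocation has to be identical in both contexts; a typical random-write probe (all parameters are assumptions):

      # 4k random writes with direct I/O, run once on the host and once inside the LXC
      fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
          --size=1G --direct=1 --numjobs=1 --runtime=60 --time_based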
  11. [SOLVED] Second ZFS pool failed to import on boot

    Hello, I have 2 ZFS pools on my machine (PVE 7.1): rpool (mirror SSD) and datapool (mirror HDD). Every time I boot the machine, I get an error that the import of datapool failed. A look in the syslog shows the same entry every time: Jan 3 13:31:29 pve systemd[1]: Starting Import ZFS pool datapool...
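
    The fix that usually resolves this is re-recording the pool in the cache file that the boot-time import services read; a sketch:

      # re-register the pool in the ZFS cache file
      zpool set cachefile=/etc/zfs/zpool.cache datapool
      # rebuild the initramfs so early boot sees the updated cache
      update-initramfs -u -k all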
  12. [SOLVED] zpool disk replacement - already in replacing/spare config; wait for completion or use 'zpool detach'

    Hello everyone, I have a zpool (raidz1) built with 3 x 2TB disks. Some disks started to report errors, so I've taken 3 x 6TB disks and I want to replace the old ones. I've followed these steps: :~# zpool status -v 'storagename' pool: 'storagename' state: ONLINE status: Some supported...
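
    The message in the title usually means an earlier replace is still resilvering; the safe pattern is one replacement at a time (device names are assumptions):

      # start a single replacement and let it finish
      zpool replace storagename /dev/sdb /dev/sde
      # only start the next replace once this no longer shows "replacing"
      zpool status -v storagename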
  13. Replacing drives in Zpool

    Hi all, I currently have 4x 1TB NVMe drives running as a striped mirror for my VM disk storage. I am in the process of replacing them all with 2TB drives. I'm happy with using the zpool replace command. I have replaced two so far; the second one is just resilvering now. I currently only have...
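
    Once all disks in a vdev have been swapped for larger ones, the extra capacity appears only after expansion; a sketch, pool name assumed to be tank:

      # grow vdevs automatically when every member disk is larger
      zpool set autoexpand=on tank
      # or expand one device in place after its replace finished
      zpool online -e tank /dev/nvme0n1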
  14. [SOLVED] zpool with 3 disks

    Hello dear Proxmox forum. I would like to combine my 2x 1TB SATA SSDs in a "RAID0" stripe, mirrored with my 1x 2TB NVMe SSD, so that I end up with 2TB of SSD storage and either the NVMe or one of the SATA SSDs can fail. Many thanks in advance for the info.
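
    ZFS itself cannot mirror a two-disk stripe against a single disk; a workaround sometimes suggested is to build the stripe with mdadm and mirror the resulting device (device names are assumptions):

      # stripe the two 1TB SATA SSDs into one 2TB device
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
      # mirror the stripe against the 2TB NVMe
      zpool create tank mirror /dev/md0 /dev/nvme0n1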
  15. Windows VM shows MUCH less space used than ZFS pool it's on

    I created a Windows Server 2019 VM with a 120GB C: drive. After the VM was created, I added a 60TB drive and formatted it with NTFS. This 60TB drive is a raw file on a ZFS pool. The issue I'm having is that even though in Windows the D: drive is showing 11TB used and 49TB free, when I look at the...
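
    To pin down where the difference comes from, comparing logical and allocated usage on the backing dataset is a good start; the dataset name is an assumption:

      # logicalused is what the guest wrote, used is what ZFS allocates for it
      zfs list -o name,used,logicalused,refreservation tank/vm-100-disk-1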
  16. [SOLVED] Accidentally deleted root=ZFS=rpool in kernel/cmdline and boot hangs

    Hello, I urgently need help. I have research work data on it :oops: Story: my GPU passthrough has been working well on both 6.2.55 and .65. Because I used update-initramfs -u -k all, both .55 and .65 have the GPU framebuffer disabled, thus no host display out. I was trying to revert GPU passthrough...
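
    The usual recovery is booting the PVE installer's rescue shell, restoring the root= line, and refreshing the boot entries; a sketch assuming the default rpool layout:

      # import the pool under an alternate root
      zpool import -f -R /mnt rpool
      # restore the kernel command line (default dataset name assumed)
      echo "root=ZFS=rpool/ROOT/pve-1 boot=zfs" > /mnt/etc/kernel/cmdline
      # rewrite the boot entries from a chroot (bind-mount /dev, /proc, /sys first)
      chroot /mnt proxmox-boot-tool refresh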
  17. No email notification for zfs status degraded

    Hello, I installed zfs-zed via # apt-get install zfs-zed and edited /etc/zfs/zed.d/zed.rc to uncomment ZED_EMAIL_ADDR="root", as described here: https://pve.proxmox.com/wiki/ZFS_on_Linux#_activate_e_mail_notification. zpool status is degraded, but I do not get an email. Other notifications from...
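
    Two things worth checking in a case like this: zfs-zed only reports events it sees after a restart, and the daemon must actually be running with the edited zed.rc; a sketch:

      # make sure the daemon picked up the new configuration
      systemctl restart zfs-zed
      systemctl status zfs-zed
      # generate a fresh event ZED can mail about, e.g. a scrub
      zpool scrub rpool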
  18. ZFS: Failed to replace failing disk

    Hi, one of my disks reported a SMART error, so I set out to replace the failing disk - but failed. Here's what I did, as reported by zpool history, with some remarks indented: 2021-04-12.09:48:43 zpool offline rpool sdc2 I removed the failed disk and shut down the system, replacing the drive...
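
    For a bootable rpool the new disk needs the partition layout and boot partitions before the ZFS replace; a sketch of the documented pattern (device names assumed; on current layouts proxmox-boot-tool format/init is also needed to make the new disk bootable):

      # copy the partition table from a healthy member (sdb) to the new disk (sdc)
      sgdisk /dev/sdb -R /dev/sdc
      sgdisk -G /dev/sdc
      # replace the offlined member with the matching partition on the new disk
      zpool replace -f rpool sdc2 /dev/sdc2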
  19. ZFS big overhead

    Hi, I have a ZFS rpool with 4 x 10 TB drives on raidz1-0: root@pve:~# zpool status -v pool: rpool state: ONLINE scan: scrub repaired 0B in 1 days 04:24:17 with 0 errors on Mon Mar 15 04:48:19 2021 config: NAME STATE READ WRITE CKSUM rpool...
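
    A likely culprit on raidz1 is the zvol block size: with ashift=12 (4K sectors), an 8K block becomes 2 data sectors plus 1 parity sector, padded to 4 sectors, so 16K is allocated for every 8K of data. Checking this (dataset name is an assumption):

      # small volblocksize on raidz1 wastes space to parity and padding
      zfs get volblocksize rpool/data/vm-100-disk-0
      # compare guest-visible data with allocated space
      zfs list -o name,used,logicalused rpool/data/vm-100-disk-0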
  20. Issue with large ZFS volume, CIFS share and VM storage

    We have a working 4-host PVE 6.3-3 cluster installed with ZFS. All those machines are enterprise gear in working condition. With those machines we archive, digitize and produce very large video files in different formats. These are often intensive processes that require a lot of resources and...
