zfs

  1. powersupport

    Required command to remove a ZFS snapshot via the CLI

    Hi, we have listed the snapshots using 'zfs list -t snapshot', but could you please let me know how to remove a snapshot from the command line? Looking forward to your reply. Regards
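
    On the CLI a snapshot is removed with 'zfs destroy' on its full dataset@snapshot name. A minimal sketch; the dataset and snapshot names below are placeholders, not taken from the thread:

      zfs list -t snapshot                               # list existing snapshots
      zfs destroy -nv rpool/data/vm-100-disk-0@mysnap    # dry run: show what would be freed
      zfs destroy rpool/data/vm-100-disk-0@mysnap        # actually delete the snapshot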
  2. powersupport

    ZFS pool reported full although actual usage is lower

    Hi, while checking my virtual machine I got an IO error. I also checked the ZFS pool and it shows as full, but the hard disk configured for the virtual machine is only 1.55 TB. I am not sure why the ZFS disk consumes more than what is assigned to the VM...
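
    One way to see where the space actually goes is ZFS's own space accounting; zvol-backed VM disks can consume far more than their nominal size through refreservation, snapshots, or raidz padding. A hedged sketch; the pool and zvol names are placeholders:

      zfs list -o space -r rpool                         # usage broken down per dataset
      zfs get volsize,refreservation,used,usedbysnapshots rpool/data/vm-100-disk-0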
  3. ZFS space consumption

    Hello, I'm posting here because we are seeing some odd numbers on our Proxmox cluster. We use 4 identical OVH servers in a cluster. The servers have been in use for 10 months. The cluster is composed of 2 “main” servers and 2 “replication” servers. All the servers have been upgraded from Proxmox...
  4. [SOLVED] Question about how Proxmox uses ZFS pools

    Hello, I just bought some new disks and thought it would be a good time to try out ZFS. Some people explained to me that first you create a zpool, and then on top of that you create a filesystem dataset, after which you can store your files on the mounted location. I added my zfs...
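
    The two-step flow described above (pool first, then a dataset on top) looks roughly like this; the pool name and device ids are placeholders:

      zpool create -o ashift=12 tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
      zfs create tank/data                               # filesystem dataset, mounted automatically
      zfs get mountpoint tank/data                       # -> /tank/data by default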
  5. Avoiding fixed IP limitation from pve-zsync

    I've been playing with pve-zsync to back up my datasets and I was quite happy, but the fixed-IP limitation was a bummer, because paying for a fixed IP costs almost as much as another fiber connection. One of my problems was changing the ssh port for security reasons, and I've found...
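
    Since pve-zsync transfers over plain ssh, a generic ssh-side workaround for a non-standard port is a per-host entry in root's client config rather than a pve-zsync option. A sketch only; the hostname and port are made up:

      # /root/.ssh/config on the sending node
      Host backup-target.example.com
          Port 2222
          User root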
    Samsung 860 EVO 250GB for Proxmox OS only

    Hi, during the process of setting up a new Proxmox server I asked some questions around the forums, and another situation arose that caused me some concern. The question was related to the use of SSDs for ZFS on Proxmox. I was thinking that I had a pair of Samsung 860 Pro 250GB drives to use...
  7. ZFS Pool Not Found Error 500

    Last year I created a ZFS pool, local-hdd: 2021-09-10.19:57:15 zpool create -o ashift=12 local-hdd /dev/disk/by-id/ata-ST500LM021-1KJ152_W62BB0MS 2021-09-10.19:57:20 zfs set compression=on local-hdd 2021-09-10.20:18:06 zpool import -c /etc/zfs/zpool.cache -aN 2021-09-10.20:35:11 zfs create -V...
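
    A quick way to tell whether the 500 error is just the storage pointing at a pool that is not currently imported (pool name taken from the post, everything else generic):

      zpool status local-hdd                             # is the pool imported and healthy?
      zpool import                                       # if not: can it still be found on disk?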
  8. zfs error: cannot import 'raid1': no such pool available (500)

    Many months ago I created a ZFS pool, local-hdd: 2021-09-10.19:57:15 zpool create -o ashift=12 local-hdd /dev/disk/by-id/ata-ST500LM021-1KJ152_W62BB0MS 2021-09-10.19:57:20 zfs set compression=on local-hdd 2021-09-10.20:18:06 zpool import -c /etc/zfs/zpool.cache -aN This is the output of zpool...
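
    When the pool exists on disk but is not imported at boot, one hedged sequence is to import it via its stable device ids and record it in the cachefile that the boot-time import service reads (pool name from the post, the rest generic):

      zpool import -d /dev/disk/by-id local-hdd
      zpool set cachefile=/etc/zfs/zpool.cache local-hdd
      systemctl enable --now zfs-import-cache.service    # if that unit is present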
  9. Shutdown VM on failover to free resources

    The following scenario: nodes 1 and 2 form an HA cluster. Node 1 hosts VM 1; node 2 hosts VM 2 and VM 3. Node 1 crashes and VM 1 is handed over to node 2 by the HA job. Is there a way to automatically shut down VM 3 (low priority) to free resources on node 2 while node 1 is down (i.e. for the duration of the failover)?
  10. Power hit on server and now system unable to mount nfs shares. Services failing including sssd.

    OK, so I'm an idiot. Let's just get that out of the way. It's a new server and I don't have it plugged into a battery backup. We had some bad weather yesterday and there was a power blip. The server's power supply tripped from a spike and the system was powered down when I got home. I started up the system and...
  11. Newb question about ZFS/CEPH etc

    Hi, I'm playing with a small home lab. I would like to use Ceph to ensure that I have replicated VMs for redundancy. I have 2 servers, each with 1 HDD. The first server (HP MicroServer) with 16 GB RAM has been running Proxmox for years, hand-cranked from Debian, running LVM on a 500G SATA...
  12. [SOLVED] LXC Snapshot Details?

    From my wiki reading (perhaps I missed details, hence my confusion) some things aren't entirely clear, so I wanted to get some more details here. I have an LXC guest on Proxmox 7.2 with ZFS compression enabled: zfs set compress=zstd rpool/data/subvol-211-disk-0 The volume is currently...
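
    To see what the compressed subvolume and its snapshots actually occupy, ZFS reports both the compression ratio and per-snapshot space (commands are generic; only the dataset name is from the post):

      zfs get compression,compressratio rpool/data/subvol-211-disk-0
      zfs list -t snapshot -o name,used,refer rpool/data/subvol-211-disk-0
      # 'used' is space unique to each snapshot, 'refer' the (compressed) data it references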
  13. Best Two Node Setup Sharing A ZFS Pool

    Hey y'all, I wanted to gauge some opinions here on a homelab setup I am in the process of creating with 2 PVE nodes that have already been set up in a cluster. Here are the 2 nodes: 1. R430 (pve1) with 2x Xeon E5-2620 v3 CPUs, 32GB RAM, and 8x 800GB SAS SSDs in RAID-Z1 (~6TB usable)...
  14. [SOLVED] ZFS Pool lost after disk failure

    Ok, I know this is a long read, but I guess it could be interesting to know the background. I'm running Proxmox with a few VMs; Proxmox is running on an NVMe disk and the VMs are running from a ZFS pool consisting of 2x 3TB mirrored disks. I also have a 1TB disk that one of the VMs...
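
    For reference, a degraded two-way mirror can usually still be imported and then repaired by replacing the dead member. A sketch only; the pool name and device ids are placeholders:

      zpool import                                       # list pools found on attached disks
      zpool import -f tank
      zpool status tank
      zpool replace tank /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk
      zpool status tank                                  # watch the resilver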
  15. Win19 VM: IO errors, but PVE ZFS and SMART clean

    Dear Proxmox users, I have a problem with a Windows Server 2019 VM on PVE 5.4.143-1. I am looking for the cause of this event, and a solution to avoid it in the future. Maybe you can help to figure it out. In the morning we noticed the Windows server was slow and MSSQL was not running. I tried to restart...
  16. Unable to export zfs pool.

    Hi, long story short: I've just received a server and it has a ZFS pool (tank) on it with dying disks. I've put in 2 new, larger drives in a mirrored pool configuration (tank2). Now I'm trying to export the original/old pool (tank) so I can import it back under a new name (tankold), and then rename...
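
    Renaming a pool is done by exporting it and importing it under the new name; if the export refuses with "pool is busy", something still holds its mountpoints open. A sketch using the pool names from the post:

      fuser -vm /tank                                    # what still uses the old mountpoint?
      zpool export tank
      zpool import tank tankold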
  17. Advice for a new installation with ZFS + a question on RAID/mdadm

    Hello, I have decided to transfer all my NAS HDDs to my Proxmox server, and after a little issue (see here: Failed to import pool ‘rpool’; ssd ko ? No go to X570D4I-2T BIOS!) it's done: Proxmox sees the new HDDs (and one new SSD). First, one question: Proxmox sees my new HDDs … and also the RAID 5...
  18. Change ssh user on ZFS over ISCSI

    Hello guys, is there a way to change the user from root to admin when connecting ZFS over iSCSI? command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/10.10.1.100_id_rsa root@10.10.1.100 zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -Hrp' failed: exit code 255 (500)...
  19. [SOLVED] Failed to import pool ‘rpool’; ssd ko ? No go to X570D4I-2T BIOS!

    Hello, my Proxmox server worked very well, but I moved it to a different computer case to add more hard drives, and now I get the following message when I start it: Begin: Importing ZFS root pool ‘rpool’ … Begin: Importing pool ‘rpool’ using defaults … Failure: 1 Failure: 1 Command: /sbin/zpool...
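
    When a root pool stops importing after the disks were moved to a different case or controller, a common recovery attempt is to import it manually from the initramfs prompt, scanning the stable by-id device names (a hedged sketch; whether it helps depends on why the import fails):

      # at the initramfs/busybox prompt of the failed boot
      zpool import -d /dev/disk/by-id -N rpool
      exit                                               # continue booting once the import succeeds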
  20. Confused On ZFS Failed Disk Replacement Because Proxmox Created Three Partitions

    Hello everyone, I installed Proxmox (7.2-4) using ZFS (RAID1) and one of the two drives has failed. I thought Proxmox would use both entire disks for the ZFS rpool during the installation, but it appears Proxmox created three partitions and included only one partition in the ZFS rpool. With...
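
    On a default ZFS RAID1 install the three partitions are the BIOS boot partition, the ESP, and the ZFS member, so only partition 3 belongs to rpool. A hedged outline of the usual replacement steps, with placeholder device ids:

      sgdisk /dev/disk/by-id/healthy-disk -R /dev/disk/by-id/new-disk    # copy partition table
      sgdisk -G /dev/disk/by-id/new-disk                                 # new GUIDs for the copy
      zpool replace -f rpool /dev/disk/by-id/failed-disk-part3 /dev/disk/by-id/new-disk-part3
      proxmox-boot-tool format /dev/disk/by-id/new-disk-part2            # if the install uses proxmox-boot-tool
      proxmox-boot-tool init /dev/disk/by-id/new-disk-part2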
