[SOLVED] Why does my ZFS pool show each disk with a 2.15 GB unformatted partition?

backdoc

New Member
Apr 12, 2022
I was looking at the Disks menu/option and noticed that every disk in my pool had a 4 TB partition and an unformatted 2.15 GB partition. I'm new to ZFS. And, I originally created my pool in TrueNAS SCALE. So, I wasn't sure if this was a ZFS thing or a TrueNAS thing.

Since /dev/sdX2 is showing 4 TB, it looks like I'm not losing any space. But, before I went much further with my Proxmox setup, I wanted to make sure that I wasn't losing disk space and see if there was a relatively safe (and not too manual) way to back up my current pool and then recreate it without splitting each disk into 2 partitions. I'm a bit of a perfectionist. I finally have my datasets set up where I think I'll be happy and 3 LXCs that are working just like I want, so I don't want to jeopardize that. But it's killing my OCD.

Per this post and this documentation, if I'm interpreting them correctly, it appears that TrueNAS does create a swap partition on every disk.
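If you want to double-check what that small partition actually holds before touching anything, listing the partition table is harmless. A quick look, assuming one of the pool disks is /dev/sda (adjust the device name for your disks):

Code:
# show the partitions, their sizes, and any filesystem signature on one pool member
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda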

If this is the case, could I take a disk offline, reformat it, then bring it back online and let it resilver? Or, do all disks have to be partitioned exactly the same way with ZFS?

Or alternatively, would this work? Take a disk offline, then use gparted to delete the unwanted partition and then bring it back online with the good partition and the content left intact?

If you can't tell, I am very new to ZFS and ZFS raids. I don't know if normal disk utilities that I've used a million times are okay to use here.
 
Or, do all disks have to be partitioned exactly the same way with ZFS?
I know for sure that this is not necessary.
Or alternatively, would this work? Take a disk offline, then use gparted to delete the unwanted partition and then bring it back online with the good partition and the content left intact?
I have done this to resize a ZFS pool with mirrored vdevs in the past. It's not clear to me what kind of redundancy your ZFS pool(s) have (stripe, mirror or RAIDZ1/2/3?). If there is no redundancy, then this will destroy your data. If there is only one redundant copy, it will put your data at risk during this operation.
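If you are not sure what layout a pool has, zpool status shows the vdev structure; the pool name below is a placeholder:

Code:
# the vdev names in the output (mirror-N, raidzN-N, or bare disks) tell you the redundancy level
zpool status <mypoolname>
zpool list -v <mypoolname>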
 
I know for sure that this is not necessary.

I have done this to resize a ZFS pool with mirrored vdevs in the past. It's not clear to me what kind of redundancy your ZFS pool(s) have (stripe, mirror or RAIDZ1/2/3?). If there is no redundancy, then this will destroy your data. If there is only one redundant copy, it will put your data at risk during this operation.
I have six 4 TB drives in RAIDZ2, so I should have two-disk redundancy. So, you think taking one offline, using gparted or partimage to delete the unwanted partition, resizing the remaining partition and then bringing it back online would be a good approach?
 
I thought I'd follow up to my post with how I resolved it; a consolidated sketch of the whole per-disk sequence follows the steps.
  1. Do
    Code:
    zpool status
    to show devices. In my case, my devices were listed by PARTUUID, which gave me one extra step because I wanted to replace each device by-id.
  2. For no real reason, I wanted to replace them in device order like /dev/sda, /dev/sdb... /dev/sdf. So, to figure out which one mapped to the PARTUUID, I used
    Code:
    blkid | grep 'the PARTUUID'
    .
  3. Then, I took the device offline with
    Code:
    zpool offline <mypoolname> <PARTUUID listed in zpool status>
    Then I used parted to remove the partitions:
    Code:
    parted /dev/sda
    (parted) p     # just to review the partitions and be sure I saw what I expected
    (parted) rm 1
    (parted) rm 2
    (parted) q
    Then, I had to find the "by-id" name by looking in /dev/disk/by-id/ata*. That listed all of my drives with 2 partitions except for one, which had no partitions, so I knew that was the device I had just wiped. At that point, I did
    Code:
    zpool replace <mypoolname> <PARTUUID> <my by-id value>
  4. Code:
    zpool status
    to check progress. It shows the progress and what's happening.
Each disk took about 20 min to resilver.
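Putting the steps above together, the whole per-disk sequence looks roughly like this (pool name, device names, and IDs are placeholders; doing one disk at a time and letting the resilver finish keeps the RAIDZ2 redundancy intact):

Code:
zpool status <mypoolname>                 # note the PARTUUID of the disk to redo
blkid | grep '<PARTUUID>'                 # map that PARTUUID to a /dev/sdX device
zpool offline <mypoolname> <PARTUUID>     # stop ZFS from using the disk
parted -s /dev/sdX rm 1                   # remove the old 2 GB swap partition
parted -s /dev/sdX rm 2                   # remove the old data partition
ls -l /dev/disk/by-id/ata*                # the disk with no partitions left is the one just wiped
zpool replace <mypoolname> <PARTUUID> /dev/disk/by-id/<disk-id>
zpool status <mypoolname>                 # watch the resilver progress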

NOTE: A couple of times, I forgot to take the device offline before deleting the partitions with parted, and I couldn't figure out why parted was complaining that it couldn't inform the kernel and that the device might be in use. But ZFS was very forgiving. I just rebooted, then proceeded as normal.
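If you hit the same "unable to inform the kernel" complaint, it usually means the partition is still held open because the pool is still using it. Taking the disk offline first and then asking the kernel to re-read the partition table may save the reboot; a sketch, assuming the disk is /dev/sdX:

Code:
zpool offline <mypoolname> <PARTUUID>   # release the disk before editing its partitions
partprobe /dev/sdX                      # ask the kernel to re-read the partition table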

Hope this helps somebody.
 