Increase the size of the ZFS datastore

Feb 8, 2022
Good evening everyone. I have installed PBS as a virtual machine on a QNAP NAS.
I configured two disk images as HDDs, one for PBS and one for the datastore.
The datastore virtual disk is mounted in PBS as a ZFS pool.

[Attached screenshot: NAS1DATACENTER]
I expanded the image size from 2 TB to 15 TB. PBS sees the new disk size, but the datastore stays at its original size.


Code:
root@pbs1:~# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
Datastore1   795G  1.10T      795G  /mnt/datastore/Datastore1

Code:
root@pbs1:~# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Datastore1  1.94T   795G  1.16T        -     11.7T     9%    40%  1.00x    ONLINE  -

Code:
root@pbs1:~# zpool status
  pool: Datastore1
 state: ONLINE
config:

        NAME                                             STATE     READ WRITE CKSUM
        Datastore1                                       ONLINE       0     0     0
          scsi-0QEMU_QEMU_HARDDISK_4dc5dda05c4645f19202  ONLINE       0     0     0

errors: No known data errors

If I run the command zpool online -e Datastore1 sda, I get:
Code:
root@pbs1:~# zpool online -e Datastore1 sda
cannot expand sda: no such device in pool

What am I doing wrong? I can't figure it out.
 
Hi,

it looks like sda is not a full device path. Have you tried it with /dev/sda? If that does not work, list the devices with zpool list -v -H -P. It should look something like this:

Code:
Datastore1     36.5G   4.53G   32.0G   -       -       1%      12%     1.00x   ONLINE  -

        /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_4dc5dda05c4645f19202-part1     36.5G   4.53G   32.0G   -       -       1%      12.4%   -       ONLINE

Note that the numbers won't match yours. You can then take /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_4dc5dda05c4645f19202 as the device parameter, as in: zpool online -e Datastore1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_4dc5dda05c4645f19202. Be sure not to include the trailing "-part1" unless you explicitly set up your pool to use only a partition.
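To illustrate, here is a small shell sketch (using the example device path from above) showing how the trailing partition suffix can be stripped with parameter expansion to get the whole-disk path for zpool online -e:

```shell
# Device path as reported by `zpool list -v -H -P` (example from above)
dev='/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_4dc5dda05c4645f19202-part1'

# Strip a trailing "-partN" suffix to get the whole-disk path
disk="${dev%-part*}"
echo "$disk"
# prints /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_4dc5dda05c4645f19202

# Then expand onto the grown disk (shown as a comment only, not run here):
# zpool online -e Datastore1 "$disk"
```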

Hope that helps.
 
Hello,

I am running PBS in a VM on my PVE host, and I am also trying to expand my PBS datastore. To achieve this, I added a new 5 TB disk to the node and passed it through to the VM. I attached it as a mirror to my ZFS pool and detached the old 300 GB disk after resilvering completed.

Code:
root@pbs:~# zpool status
  pool: pbs-zfs
 state: ONLINE
  scan: resilvered 33.0G in 00:10:02 with 0 errors on Mon Feb  5 20:48:02 2024
config:

        NAME                                 STATE     READ WRITE CKSUM
        pbs-zfs                              ONLINE       0     0     0
          usb-Seagate_Portable_NT3668TM-0:0  ONLINE       0     0     0

errors: No known data errors

Then I ran the command:
Code:
root@pbs:~# zpool online -e pbs-zfs /dev/disk/by-id/usb-Seagate_Portable_NT3668TM-0:0

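For reference, the swap-and-expand sequence I described above can be sketched roughly as follows. The pool and new-device names are taken from my setup; the old disk's path is a hypothetical placeholder, and the zpool calls are shown as comments only since they modify the pool:

```shell
#!/bin/sh
# Sketch of the mirror-swap expansion described above; do not run blindly.
set -eu

POOL='pbs-zfs'
OLD='/dev/disk/by-id/old-300g-disk'   # hypothetical placeholder for the old disk
NEW='/dev/disk/by-id/usb-Seagate_Portable_NT3668TM-0:0'

# 1. Attach the new disk as a mirror of the old one:
#      zpool attach "$POOL" "$OLD" "$NEW"
# 2. Wait for resilvering to complete (check with):
#      zpool status "$POOL"
# 3. Detach the old disk, leaving only the new one:
#      zpool detach "$POOL" "$OLD"
# 4. Expand the pool onto the new disk's full capacity:
#      zpool online -e "$POOL" "$NEW"
echo "sequence sketched for pool: $POOL"
```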
zpool list shows that the pool has been expanded, but the datastore stays at the same old value of 300 GB.

Code:
root@pbs:~# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pbs-zfs  4.55T  32.5G  4.52T        -         -     0%     0%  1.00x    ONLINE  -

root@pbs:~# df -h 
Filesystem            Size  Used Avail Use% Mounted on
udev                  950M     0  950M   0% /dev
tmpfs                 197M  720K  196M   1% /run
/dev/mapper/pbs-root  3.0G  1.9G  993M  66% /
tmpfs                 983M     0  983M   0% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
pbs-zfs               300G   33G  268G  11% /mnt/datastore/pbs-zfs
tmpfs                 197M     0  197M   0% /run/user/0

It seems the filesystem is somehow not expanding? Any ideas?

Many thanks in advance