Datastore switched to new smaller drive w/o requesting it.

hemna

New Member
Jun 3, 2025
Hello,
I have PBS version 3.4.1 installed on a machine (not as a VM). I had a 5 TB drive mounted as a datastore called "Primary".
I shut the machine down and installed a 1 TB SSD that had an existing ZFS partition on it. Upon boot, the Primary
datastore switched to the SSD, making all the backups on my 5 TB volume unavailable. PBS shouldn't mount a new drive
under the same datastore name. Shouldn't it mount it based on the SCSI UUID?


```
root@pbs:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pbs/root  /      ext4  errors=remount-ro  0  1
/dev/pbs/swap  none   swap  sw                 0  0
proc           /proc  proc  defaults           0  0

root@pbs:~# df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
udev                   12G     0    12G    0%  /dev
tmpfs                 2.4G  1.7M   2.4G    1%  /run
/dev/mapper/pbs-root  892G  4.4G   842G    1%  /
tmpfs                  12G     0    12G    0%  /dev/shm
tmpfs                 5.0M     0   5.0M    0%  /run/lock
Primary               900G   49G   851G    6%  /mnt/datastore/Primary
tmpfs                 2.4G     0   2.4G    0%  /run/user/0

root@pbs:~# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda              8:0    0 931.5G  0 disk
├─sda1           8:1    0  1007K  0 part
├─sda2           8:2    0     1G  0 part
└─sda3           8:3    0 930.5G  0 part
  ├─pbs-swap   252:0    0     8G  0 lvm  [SWAP]
  └─pbs-root   252:1    0 906.5G  0 lvm  /
sdb              8:16   0   5.5T  0 disk
├─sdb1           8:17   0   5.5T  0 part
└─sdb9           8:25   0     8M  0 part
sdc              8:32   0 931.5G  0 disk
├─sdc1           8:33   0 931.5G  0 part
└─sdc9           8:41   0     8M  0 part
```

```
root@pbs:~# blkid /dev/sdb1
/dev/sdb1: LABEL="Primary" UUID="1282641853788701463" UUID_SUB="15980374347302783384" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-5af93235168bbb25" PARTUUID="eba042e8-bf09-344b-9a09-e128e25e03c4"

root@pbs:~# blkid /dev/sdc1
/dev/sdc1: LABEL="Primary" UUID="8478406210991036267" UUID_SUB="17229537871035985973" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-f0dce505979d3f76" PARTUUID="f7782438-e485-9848-a74a-c788283782df"
```
 
Hi,
from the output you posted it seems that both storages use ZFS. Check the output of `zfs get mountpoint [dataset]` to see where the respective filesystems are mounted. You might need to set a new mountpoint for one of the datasets so they are not mounted on top of each other.
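A minimal sketch of that check, assuming the imported pool's root dataset is named Primary; the alternate mountpoint path below is only an example:

```
# Show where the dataset is configured to mount.
zfs get mountpoint Primary

# If two datasets claim the same path, move one aside (example path).
zfs set mountpoint=/mnt/datastore/Primary-old Primary
```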
 
zfs get mountpoint wasn't very helpful.

```
root@pbs:~# zfs get mountpoint
NAME     PROPERTY    VALUE                   SOURCE
Primary  mountpoint  /mnt/datastore/Primary  local

root@pbs:~# zpool status -T d Primary
Wed Jun 4 08:43:30 AM EDT 2025
  pool: Primary
 state: ONLINE
config:

        NAME                      STATE   READ WRITE CKSUM
        Primary                   ONLINE     0     0     0
          wwn-0x5002538e4978de42  ONLINE     0     0     0
```



sdb is the 5 TB disk.
sdc is the 1 TB SSD that's being used as Primary, for whatever reason.


```
root@pbs:~# ls -al /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 540 Jun 3 14:07 .
drwxr-xr-x 9 root root 180 Jun 3 14:07 ..

...
lrwxrwxrwx 1 root root  9 Jun 3 14:07 wwn-0x5000c500f73d4777 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 3 14:07 wwn-0x5000c500f73d4777-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 3 14:07 wwn-0x5000c500f73d4777-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 Jun 3 14:07 wwn-0x5002538e4978de42 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jun 3 14:07 wwn-0x5002538e4978de42-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jun 3 14:07 wwn-0x5002538e4978de42-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Jun 3 14:07 wwn-0x588891410021f21a -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 3 14:07 wwn-0x588891410021f21a-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 3 14:07 wwn-0x588891410021f21a-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 3 14:07 wwn-0x588891410021f21a-part3 -> ../../sda3
```
 
So your zpool on disk sdb is not imported. What is the output of `zpool import`?
 
```
root@pbs:~# zpool import
   pool: Primary
     id: 1282641853788701463
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        Primary                   ONLINE
          wwn-0x5000c500f73d4777  ONLINE
```
 
Okay, so your pools have the same name, and one was imported before the other; the second one fails to import because of the name clash.

You can export the currently imported one via `zpool export Primary`, making sure beforehand that the pool is not in use, e.g. by the Proxmox Backup Server, in which case you should first set the datastores located on this pool to the offline maintenance mode. Once the pools are exported, you can re-import them by their id (as listed in the output of `zpool import`) and rename one of them with `zpool import <id> <poolname>`. Further, make sure that the ZFS datasets on the pools have different mountpoints, so one does not overmount the other. You can check that with the `zfs get mountpoint` command.
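A minimal sketch of that sequence, using the pool ids from the `blkid` and `zpool import` output above; the new pool name `Primary-ssd`, the new mountpoint path, and the exact maintenance-mode invocation are assumptions to adapt to your setup:

```
# 1. Take the datastore offline so PBS stops using the pool
#    (check the exact maintenance-mode syntax for your PBS version).
proxmox-backup-manager datastore update Primary --maintenance-mode offline

# 2. Export the currently imported (1 TB) pool.
zpool export Primary

# 3. Re-import the 1 TB pool under a new, non-clashing name, using its id.
zpool import 8478406210991036267 Primary-ssd

# 4. Give its dataset a distinct mountpoint so it cannot overmount the 5 TB pool.
zfs set mountpoint=/mnt/datastore/Primary-ssd Primary-ssd

# 5. Import the original 5 TB pool; it can keep the name Primary.
zpool import 1282641853788701463 Primary

# 6. Clear maintenance mode once /mnt/datastore/Primary shows the old backups again.
proxmox-backup-manager datastore update Primary --delete maintenance-mode
```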
 
You got a lot of very on-point advice here. It's all exactly correct.

But ... nobody told you to do the simple thing.
The simple thing is to shut down the system. Take out the new 1 TB SSD. Put it somewhere else. Wipe it. Put it back.

And then it's fixed. No more conflict. No exporting. No mucking about at all.
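If you go that route, here is a hedged sketch of the wipe step on another machine; /dev/sdX is a placeholder, so triple-check the device name before running anything destructive:

```
# Clear the old ZFS pool labels from the SSD's data partition.
zpool labelclear -f /dev/sdX1

# Remove any remaining partition-table and filesystem signatures.
wipefs -a /dev/sdX
```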