[SOLVED] Need mkfs before zpool add for cache and log partitions?

dater

New Member
Jul 9, 2022
Step 1: I created the zpool rpool.
Step 2: I used gdisk to create a GPT label and two partitions on sdb: /dev/sdb1 and /dev/sdb2.
Step 3: I ran zpool add -f rpool log /dev/sdb1 cache /dev/sdb2.
My question: after step 2, do I need to run mkfs.xfs on /dev/sdb2 (or otherwise format sdb1 and sdb2) before step 3?
Right now the partition type of sdb1 and sdb2 is "Linux filesystem". Isn't that incorrect? Are the log and cache working?
Thank you very much.
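For reference, a minimal shell sketch of the three steps as described (device names are from this thread; the 10G partition sizes are taken from the fdisk output below):

# Step 2: create a GPT label and two partitions on /dev/sdb
sgdisk --zap-all /dev/sdb
sgdisk --new=1:0:+10G --new=2:0:+10G /dev/sdb
# Step 3: hand the raw partitions straight to ZFS; no mkfs in between
zpool add -f rpool log /dev/sdb1 cache /dev/sdb2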
==================================================
root@bk:~# zpool status rpool
  pool: rpool
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        rpool                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part3  ONLINE       0     0     0
            scsi-0QEMU_QEMU_HARDDISK_drive-scsi3-part3  ONLINE       0     0     0
            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part3  ONLINE       0     0     0
            scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3  ONLINE       0     0     0
        logs
          sdb1                                          ONLINE       0     0     0
        cache
          sdb2                                          ONLINE       0     0     0

errors: No known data errors
=========================================================
root@bk:~# fdisk -l
Disk /dev/sda: 128 GiB, 137438953472 bytes, 268435456 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 223C1C54-A909-4A4C-9DC3-06D91446F4F5

Device        Start       End   Sectors   Size Type
/dev/sda1        34      2047      2014  1007K BIOS boot
/dev/sda2      2048   1050623   1048576   512M EFI System
/dev/sda3   1050624 268435422 267384799 127.5G Solaris /usr & Apple ZFS


Disk /dev/sdb: 32 GiB, 34359738368 bytes, 67108864 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C7D276AB-A173-4E21-A06C-C8C5C6C0D4AA

Device        Start      End  Sectors Size Type
/dev/sdb1      2048 20973567 20971520  10G Linux filesystem
/dev/sdb2  20973568 41945087 20971520  10G Linux filesystem


Disk /dev/sdc: 128 GiB, 137438953472 bytes, 268435456 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 09099E05-80A0-484B-9FBF-1F6A8F4931DC

Device        Start       End   Sectors   Size Type
/dev/sdc1        34      2047      2014  1007K BIOS boot
/dev/sdc2      2048   1050623   1048576   512M EFI System
/dev/sdc3   1050624 268435422 267384799 127.5G Solaris /usr & Apple ZFS


Disk /dev/sdd: 128 GiB, 137438953472 bytes, 268435456 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D28A10DB-FBC9-4C27-95EA-4CA03D57A9A4

Device        Start       End   Sectors   Size Type
/dev/sdd1        34      2047      2014  1007K BIOS boot
/dev/sdd2      2048   1050623   1048576   512M EFI System
/dev/sdd3   1050624 268435422 267384799 127.5G Solaris /usr & Apple ZFS


Disk /dev/sde: 128 GiB, 137438953472 bytes, 268435456 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 708B477A-9D65-4A29-9B4D-6F309315FC6E

Device        Start       End   Sectors   Size Type
/dev/sde1        34      2047      2014  1007K BIOS boot
/dev/sde2      2048   1050623   1048576   512M EFI System
/dev/sde3   1050624 268435422 267384799 127.5G Solaris /usr & Apple ZFS
 
Step 3: I ran zpool add -f rpool log /dev/sdb1 cache /dev/sdb2.
My question: after step 2, do I need to run mkfs.xfs on /dev/sdb2?
No!
The moment you add sdb2 to the pool it is "owned" by ZFS.
The output of zpool status confirms that it is in use. (By the way: please use Code tags; they make posts more readable.)

The fact that sdb2 has the "wrong" partition type can be ignored as it doesn't matter.

Best regards
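If you want to verify that ownership directly, here is a small sketch (zdb and sgdisk are normally present on a PVE/PBS host; the type-code change is an assumption and purely cosmetic):

# A pool member should print a ZFS vdev label here, not an error:
zdb -l /dev/sdb1 | head
# Optional cosmetics only; ZFS does not care about the GPT type code:
sgdisk -t 1:BF01 -t 2:BF01 /dev/sdb   # BF01 = "Solaris /usr & Apple ZFS"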
 
What's the output of zpool get ashift rpool? Make sure it is at least 12 in case your physical disks use a 4K physical sector size, because you created that pool with virtual disks only, and those will report that they use 512B sectors.
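To check what the disks report, and to pin ashift explicitly when adding vdevs, a sketch (the zpool add line repeats the command from this thread with ashift forced; this only helps before the vdevs are added, since ashift is fixed per vdev):

# QEMU virtual disks typically report 512B logical/physical sectors:
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdb
# Force 4K alignment regardless of what the virtual disk reports:
zpool add -o ashift=12 -f rpool log /dev/sdb1 cache /dev/sdb2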
 
No!
The moment you add sdb2 to the pool it is "owned" by ZFS. [...]
I understand. Thank you very much again!
 
What's the output of zpool get ashift rpool? [...]
1. I did this in a VM, so the disks are virtual disks. rpool was created when I installed Proxmox Backup Server; after PBS was installed, the path /rpool already existed, so I just created a datastore on /rpool and used it.
2. Output of zpool get ashift rpool:
root@bk:~# zpool get ashift rpool
NAME   PROPERTY  VALUE   SOURCE
rpool  ashift    12      local
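Note that zpool get ashift shows the pool-level property; to confirm the later-added log and cache vdevs as well, zdb can show ashift per vdev. A sketch, assuming the default pool cache file is in use:

# Dump the cached pool config and show the per-vdev ashift values in context:
zdb -C rpool | grep -B 2 ashift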
 
No!
The moment you add sdb2 to the pool it is "owned" by ZFS. [...]
1. As above: this is a VM with virtual disks, the PBS installer created rpool, and I created the datastore on /rpool.
2. Is there a problem with that approach, i.e. letting the PBS installer create the ZFS pool and then creating a datastore on it and using it?
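For completeness, the datastore step can also be done from the CLI; a sketch (the datastore name store1 and the dedicated dataset are assumptions, not taken from this thread):

# Optionally give the datastore its own dataset instead of using /rpool directly:
zfs create rpool/datastore
# Register it with Proxmox Backup Server:
proxmox-backup-manager datastore create store1 /rpool/datastore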
 
