About ZFS storage upgrade

jjc27017

Member
Dec 14, 2017
Hello,
I am already running Proxmox as a 3-node cluster, and I would like to ask how to add a hard disk to increase ZFS space. Can you give some advice or steps, from initializing the hard disk to adding it into the ZFS pool? I have checked the documentation from Oracle, and its example configuration differs somewhat from Proxmox. I don't want to make a mistake and break the cluster... many thanks...
 
Hi,
You have to be a bit more specific about your configuration and what you would like to attach.
Also, what difference do you mean between Oracle and PVE?
 
Hi, what I have now is something like this:
root@pve:~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19.9G 860M 19.0G - 1% 4% 1.00x ONLINE -
root@pve:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  sda2      ONLINE       0     0     0

errors: No known data errors
root@pve:~# fdisk -l
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 90D6EAFA-F75C-47D8-93D5-C702297C4531

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 41926621 41924574 20G Solaris /usr & Apple ZFS
/dev/sda9 41926622 41943006 16385 8M Solaris reserved 1


Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

How can I initialize /dev/sdb for ZFS and add it into the zpool? I looked at the documentation from Oracle, but it uses a "mirror" label, which confuses me...

The Oracle documentation shows:
# zpool add zeepool mirror c2t1d0 c2t2d0
# zpool add -n zeepool mirror c3t1d0 c3t2d0
would update 'zeepool' to the following configuration:
zeepool
mirror
c1t0d0
c1t1d0
mirror
c2t1d0
c2t2d0
mirror
c3t1d0
c3t2d0
 
Hi,
c2t1d0 is a Solaris device name (controller 2, target 1, disk 0) - in this case the whole device (an appended s1 would mean partition (slice) 1).

On Linux the equivalent is something like sdb...

Udo
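
A minimal sketch of what the Oracle example could look like with Linux device names, assuming two spare disks /dev/sdc and /dev/sdd (placeholders, not disks from this thread):

# preview the resulting layout without changing anything (dry-run)
zpool add -n rpool mirror /dev/sdc /dev/sdd
# if the preview looks right, add the mirrored vdev for real
zpool add rpool mirror /dev/sdc /dev/sdd

This grows the pool by striping over an additional mirrored vdev; attaching a disk to an existing vdev is a different operation.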

May I just add the disk into the zpool directly, for example:
zpool add rpool sdb

I tried it in my test environment and it works, but I don't know if this is the normal way to add a new device.
 
The problem is that with add you create a span (like striping),
but ZFS does not support rebalancing, so it is strongly recommended to add a mirror to the pool instead.
This means you would have the reliability of a RAID0 but not the speed of it.
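
For the single-disk rpool shown earlier, a minimal sketch of converting it into a mirror rather than a stripe, assuming the new, empty disk is /dev/sdb:

# pair /dev/sdb with the existing sda2 vdev (ZFS partitions the disk itself)
zpool attach rpool sda2 /dev/sdb
# wait for the resilver to finish before relying on the redundancy
zpool status rpool

Unlike zpool add, attach gives redundancy: the new disk holds a copy of the existing data rather than extra capacity.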
 

I followed the instructions and got the result below:
root@pve:~# zpool status
pool: rpool
state: ONLINE
scan: resilvered 824M in 0h4m with 0 errors on Mon Dec 18 17:49:46 2017
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    sdb1    ONLINE       0     0     0
    sda2    ONLINE       0     0     0

errors: No known data errors
root@pve:~# fdisk -l
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 611079DB-ABD2-4BA8-B393-94B20AB2914A

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 41926621 41924574 20G Solaris /usr & Apple ZFS
/dev/sda9 41926622 41943006 16385 8M Solaris reserved 1


Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8F2691ED-163E-0D46-82A3-62823AE87C8D

Device Start End Sectors Size Type
/dev/sdb1 2048 41924607 41922560 20G Solaris /usr & Apple ZFS
/dev/sdb9 41924608 41940991 16384 8M Solaris reserved 1


Disk /dev/zd0: 2.4 GiB, 2550136832 bytes, 4980736 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

but in the end it didn't increase the space...
root@pve:~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19.9G 825M 19.1G 16.0E 1% 4% 1.00x ONLINE -

root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 466M 0 466M 0% /dev
tmpfs 97M 5.2M 92M 6% /run
rpool/ROOT/pve-1 17G 786M 16G 5% /
tmpfs 485M 25M 460M 6% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 485M 0 485M 0% /sys/fs/cgroup
rpool 16G 128K 16G 1% /rpool
rpool/ROOT 16G 128K 16G 1% /rpool/ROOT
rpool/data 16G 128K 16G 1% /rpool/data
/dev/fuse 30M 16K 30M 1% /etc/pve
tmpfs 97M 0 97M 0% /run/user/0

May I ask the reason....?
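
With a mirror, the pool SIZE stays at the capacity of a single member disk, because the second disk only stores a redundant copy of the same data, which matches the unchanged 19.9G above. One way to inspect the layout and per-vdev sizes (assuming the pool from this thread):

# show the pool together with its vdevs and their sizes
zpool list -v rpool

This should show rpool with a single mirror-0 vdev of roughly 20G; a zpool add of a second top-level vdev would instead have shown two vdevs whose sizes add up.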