[SOLVED] ZFS Pool (2x2 Mirror) too small

Moalti

New Member
Jan 13, 2022
Hi everyone,

I created a 2x2 mirror via the CLI using 4x 3 TB disks.
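
The create command was roughly like this (a sketch; the disk names are placeholders, in reality I used the wwn-* device paths shown below):

Code:
zpool create tank mirror <disk-1> <disk-2> mirror <disk-3> <disk-4>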

Code:
zpool status
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jan 13 22:05:05 2022
    2.28T scanned at 1.67G/s, 1.31T issued at 986M/s, 2.28T total
    178G resilvered, 57.62% done, 00:17:07 to go
config:

    NAME                        STATE     READ WRITE CKSUM
    tank                        ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x50014ee2678ac596  ONLINE       0     0     0
        wwn-0x50014ee21235835d  ONLINE       0     0     0
      mirror-1                  ONLINE       0     0     0
        wwn-0x50014ee20ce5d1eb  ONLINE       0     0     0
        wwn-0x5000c5003cf7ddec  ONLINE       0     0     0  (resilvering)

errors: No known data errors

After a reboot one of the disks was not detected and the pool went into the "DEGRADED" state.
So I removed the disk from the pool (detach) and then added it back (attach).
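
If I remember correctly it was roughly these commands (a sketch; I am assuming the resilvering disk wwn-0x5000c5003cf7ddec is the one I re-attached to mirror-1):

Code:
# syntax: zpool attach <pool> <existing-device> <new-device>
zpool detach tank wwn-0x5000c5003cf7ddec
zpool attach tank wwn-0x50014ee20ce5d1eb wwn-0x5000c5003cf7ddec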

Now my problem is that the pool size doesn't look right. I expect 5.44 TB.
But "df -h" only shows me 3.1 TB.

Code:
zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  5.44T  2.28T  3.16T        -         -     0%    41%  1.00x    ONLINE  -
Code:
zfs get available
NAME                PROPERTY   VALUE  SOURCE
tank                available  3.03T  -
tank/backup         available  3.03T  -
tank/downloads      available  3.03T  -
tank/media          available  3.03T  -
tank/nextcloud      available  3.03T  -
tank/nextcloud-old  available  3.03T  -
tank/proxmox        available  3.03T  -
tank/samba-public   available  50.0G  -
Code:
zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
tank                2.28T  3.03T      120K  /tank
tank/backup          619G  3.03T      619G  /tank/backup
tank/downloads       207G  3.03T      207G  /tank/downloads
tank/media          1.43T  3.03T     1.43T  /tank/media
tank/nextcloud        96K  3.03T       96K  /tank/nextcloud
tank/nextcloud-old  11.5G  3.03T     11.5G  /tank/nextcloud-old
tank/proxmox        29.3G  3.03T     29.3G  /tank/proxmox
tank/samba-public    104K  50.0G      104K  /tank/samba-public
Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
tank                  3.1T  128K  3.1T   1% /tank
tank/backup           3.7T  619G  3.1T  17% /tank/backup
tank/samba-public      50G  128K   50G   1% /tank/samba-public
tank/nextcloud-old    3.1T   12G  3.1T   1% /tank/nextcloud-old
tank/nextcloud        3.1T  128K  3.1T   1% /tank/nextcloud
tank/downloads        3.3T  208G  3.1T   7% /tank/downloads
tank/media            4.5T  1.5T  3.1T  33% /tank/media
tank/proxmox          3.1T   30G  3.1T   1% /tank/proxmox

How can I correct this?
Thanks for your tips.

Regards
 
Hi,
Hi everyone,

I created a 2x2 mirror via the CLI using 4x 3 TB disks.

Code:
zpool status
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jan 13 22:05:05 2022
    2.28T scanned at 1.67G/s, 1.31T issued at 986M/s, 2.28T total
    178G resilvered, 57.62% done, 00:17:07 to go
config:

    NAME                        STATE     READ WRITE CKSUM
    tank                        ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x50014ee2678ac596  ONLINE       0     0     0
        wwn-0x50014ee21235835d  ONLINE       0     0     0
      mirror-1                  ONLINE       0     0     0
        wwn-0x50014ee20ce5d1eb  ONLINE       0     0     0
        wwn-0x5000c5003cf7ddec  ONLINE       0     0     0  (resilvering)

errors: No known data errors

After a reboot one of the disks was not detected and the pool went into the "DEGRADED" state.
So I removed the disk from the pool (detach) and then added it back (attach).

Now my problem is that the pool size doesn't look right. I expect 5.44 TB.
But "df -h" only shows me 3.1 TB.
df is not reliable with ZFS: for each mounted dataset it reports that dataset's own usage plus the pool's free space as "Size", so every filesystem looks almost as big as the whole pool and the per-filesystem numbers don't add up.

Code:
zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  5.44T  2.28T  3.16T        -         -     0%    41%  1.00x    ONLINE  -
On the difference between what zpool and zfs report, a quote from man zpoolprops:
Code:
     free    The amount of free space available in the pool.  By contrast, the
             zfs(8) available property describes how much new data can be
             written to ZFS filesystems/volumes.  The zpool free property is
             not generally useful for this purpose, and can be substantially
             more than the zfs available space.  This discrepancy is due to
             several factors, including raidz parity; zfs reservation, quota,
             refreservation, and refquota properties; and space set aside by
             spa_slop_shift (see zfs(4) for more information).
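
On Linux the slop factor mentioned above can be checked via the module parameter (a sketch; the default value of 5 means up to 1/32 of the pool is held back):

Code:
cat /sys/module/zfs/parameters/spa_slop_shift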

Code:
zfs get available
NAME                PROPERTY   VALUE  SOURCE
tank                available  3.03T  -
tank/backup         available  3.03T  -
tank/downloads      available  3.03T  -
tank/media          available  3.03T  -
tank/nextcloud      available  3.03T  -
tank/nextcloud-old  available  3.03T  -
tank/proxmox        available  3.03T  -
tank/samba-public   available  50.0G  -
Code:
zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
tank                2.28T  3.03T      120K  /tank
tank/backup          619G  3.03T      619G  /tank/backup
tank/downloads       207G  3.03T      207G  /tank/downloads
tank/media          1.43T  3.03T     1.43T  /tank/media
tank/nextcloud        96K  3.03T       96K  /tank/nextcloud
tank/nextcloud-old  11.5G  3.03T     11.5G  /tank/nextcloud-old
tank/proxmox        29.3G  3.03T     29.3G  /tank/proxmox
tank/samba-public    104K  50.0G      104K  /tank/samba-public
There you can see that /tank is larger (USED + AVAIL) than df believes.
For even more detailed information (usage by snapshots/reservations etc.) there is zfs list -o space.
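
For example (per the zfs(8) man page this prints the AVAIL, USED, USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD columns; actual output not shown here):

Code:
zfs list -r -o space tank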

 
Hi Fabian,

so the sizes are actually fine then, if I understood you correctly?
 
The values from the zfs commands should be accurate ;)
So 2.28T is already in use in total and 3.03T is still free, although the latter is always an estimate because of compression, metadata, etc.
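
As a rough sanity check (how much ZFS holds back internally depends on spa_slop_shift and the OpenZFS version, so this is only approximate):

Code:
# zpool list SIZE:    5.44T
# zfs USED + AVAIL:   2.28T + 3.03T = 5.31T
# difference:         ~0.13T, roughly the internal slop reservation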
 