Attached NAS as backup, lost capacity

rjcab

Member
Mar 1, 2021
Hello,

I'm running the latest version of Proxmox with several VMs.
One of them is TrueNAS, which I use for backing up various things such as camera recordings.

[Attachment: bcp.jpg]
I created the hddintbkp storage by passing through the 2nd HDD of the Proxmox server to the TrueNAS VM, which is used only for backup storage, and mounting it back on PVE as an NFS mount point.
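For reference, a whole-disk passthrough like this is usually set up on the PVE host with something along the lines below; the VM ID and the by-id path are placeholders, not my exact values.

Code:
# attach the physical disk to the TrueNAS VM as an additional SCSI device
qm set <vmid> --scsi1 /dev/disk/by-id/<disk-id>
# confirm the disk is present in the VM configuration
qm config <vmid> | grep scsi1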

Code:
root@pve:~# lsblk -o +MODEL,SERIAL,WWN
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT MODEL               SERIAL           WWN
sda            8:0    0   1.8T  0 disk            WDC_WD20SPZX-08UA7  WD-WX12E90D6Y78  0x50014ee2139aed6c
├─sda1         8:1    0     2G  0 part                                                 0x50014ee2139aed6c
└─sda2         8:2    0   1.8T  0 part                                                 0x50014ee2139aed6c
nvme0n1      259:0    0 953.9G  0 disk            INTEL SSDPEKNU010TZ BTKA204007L61P0B eui.0000000001000000e4d25c247cf35401
├─nvme0n1p1  259:1    0  1007K  0 part                                                 eui.0000000001000000e4d25c247cf35401
├─nvme0n1p2  259:2    0     1G  0 part /boot/efi                                       eui.0000000001000000e4d25c247cf35401
└─nvme0n1p3  259:3    0 952.9G  0 part                                                 eui.0000000001000000e4d25c247cf35401
  ├─pve-swap 253:0    0     8G  0 lvm  [SWAP]                                         
  └─pve-root 253:1    0 944.9G  0 lvm  /                                              
root@pve:~#


root@pve:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=53FE-879F /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
192.168.1.19:/mnt/diskint/pve /mnt/hddintbkp/   nfs     auto,_netdev,nofail    0 0
root@pve:~#
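A quick way to test that fstab entry without rebooting (assuming the mount point directory already exists, otherwise create it first):

Code:
mkdir -p /mnt/hddintbkp
mount /mnt/hddintbkp      # mounts using the options from /etc/fstab
df -h /mnt/hddintbkp      # shows the size reported by the NFS share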

On the TrueNAS VM:

[Attachment: ckp.jpg]
If you have any ideas, they are welcome :)
 
Hi,
What is your actual question? That you don't see the full 1.8T as reported by the output of lsblk? Is the full disk exposed as an NFS share from within the VM? What filesystem are you using in TrueNAS? Maybe you created different partitions on the disk, or you are using different ZFS datasets on the same zpool (in case of ZFS).
 
Hello,

ZFS, as a zpool. The HDD has a capacity of 1.8T, but when I mount it I only see 1.39 ...
 
You will have to check from within the TrueNAS VM how much storage space the share actually has available. Note that the output you are showing only reflects the full disk size. Please check the output of zpool list -v within the VM.
 
Thanks Chris for your time. I am a newbie.

Good idea, and maybe the start of a clue:

Code:
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
diskint    1.81T  1.35T   469G        -         -    14%    74%  1.00x    ONLINE  -
  sda2     1.82T  1.35T   469G        -         -    14%  74.7%      -    ONLINE
root@pve:~#

Weird, why isn't the total capacity usable?
 
Okay, so the zpool has the full 1.82T at its disposal. What does a zfs list -o space show you?

Note that not the full capacity of the pool is available for storage, as ZFS needs some additional space for checksums etc. and also reserves some space for internal housekeeping, see e.g. https://openzfs.github.io/openzfs-docs/Performance and Tuning/Module Parameters.html#spa-slop-shift
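As a rough illustration, assuming the default spa_slop_shift of 5 (i.e. 1/32 of the pool is kept back as slop space; the exact amount depends on the OpenZFS version):

Code:
slop reserve ≈ pool size / 2^spa_slop_shift = 1.81T / 32 ≈ 58G
usable by datasets ≈ zpool FREE (469G) - 58G ≈ 411G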
 
Here it is:
Code:
root@pve:~# zfs list -o space
NAME                                                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
diskint                                                    411G  1.35T        0B    104K             0B      1.35T
diskint/.system                                            411G   879M        0B    852M             0B      26.2M
diskint/.system/configs-3a30009ae08f4d17abd5898e9ae66fa4   411G  5.34M        0B   5.34M             0B         0B
diskint/.system/cores                                     1024M    96K        0B     96K             0B         0B
diskint/.system/rrd-3a30009ae08f4d17abd5898e9ae66fa4       411G  15.6M        0B   15.6M             0B         0B
diskint/.system/samba4                                     411G   360K        0B    360K             0B         0B
diskint/.system/services                                   411G    96K        0B     96K             0B         0B
diskint/.system/syslog-3a30009ae08f4d17abd5898e9ae66fa4    411G  4.68M        0B   4.68M             0B         0B
diskint/.system/webui                                      411G    96K        0B     96K             0B         0B
diskint/camera                                             411G  88.2G        0B   88.2G             0B         0B
diskint/iocage                                             411G  4.77G        0B   8.21M             0B      4.76G
diskint/iocage/download                                    411G   691M        0B     96K             0B       691M
diskint/iocage/download/13.1-RELEASE                       411G   435M        0B    435M             0B         0B
diskint/iocage/download/13.2-RELEASE                       411G   256M        0B    256M             0B         0B
diskint/iocage/images                                      411G    96K        0B     96K             0B         0B
diskint/iocage/jails                                       411G  1.93G        0B     96K             0B      1.93G
diskint/iocage/jails/Portier                               411G  1.91G      336K    312K             0B      1.91G
diskint/iocage/jails/Portier/root                          411G  1.91G     45.5M   1.87G             0B         0B
diskint/iocage/jails/camera                                411G   284K        0B    108K             0B       176K
diskint/iocage/jails/camera/root                           411G   176K        0B    176K             0B         0B
diskint/iocage/jails/camerafolder                          411G  12.6M       76K    104K             0B      12.4M
diskint/iocage/jails/camerafolder/root                     411G  12.4M      224K   12.2M             0B         0B
diskint/iocage/log                                         411G   112K        0B    112K             0B         0B
diskint/iocage/releases                                    411G  2.16G        0B     96K             0B      2.16G
diskint/iocage/releases/13.1-RELEASE                       411G  1.51G        0B     96K             0B      1.51G
diskint/iocage/releases/13.1-RELEASE/root                  411G  1.51G        0B   1.51G             0B         0B
diskint/iocage/releases/13.2-RELEASE                       411G   668M        0B     96K             0B       668M
diskint/iocage/releases/13.2-RELEASE/root                  411G   668M        8K    668M             0B         0B
diskint/iocage/templates                                   411G    96K        0B     96K             0B         0B
diskint/pcloud                                             411G   409G        0B    409G             0B         0B
diskint/pve                                                411G   885G        0B    885G             0B         0B
root@pve:~#
 
diskint/pve 411G 885G 0B 885G 0B 0B
I assume you share only the diskint/pve dataset as an NFS share, is that correct? Then the sum of AVAIL and USED is what you should see as storage space on the PVE side. The other datasets consume space too, subtracting from the available storage space for your NFS-shared dataset.
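With the numbers from your zfs list output, that would be roughly (a back-of-the-envelope figure; the value reported on the PVE side can differ a bit due to rounding and will change as the other datasets grow or shrink):

Code:
diskint/pve: AVAIL + USED ≈ 411G + 885G = 1296G ≈ 1.27T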
 
I shared 3 folders as below:
[Attachment 55661: screenshot of the three shared folders]
But only the last one is mounted on PVE as NFS storage, I would assume. The main point is: your disk space is fixed, and different datasets consume space from the same total available pool space, so your dataset cannot have the full disk size available. The total size you see on the PVE side is only the used plus available space for that specific dataset, not the whole zpool.
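If you want to see what each of those shares will report, you can check the individual datasets from within the TrueNAS VM, e.g. (the dataset names here are just examples taken from your zfs list output, substitute the ones you actually share):

Code:
zfs list -o name,used,avail diskint/pve diskint/camera diskint/pcloud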
 
