Storage

Toni Blanch

Hello, let's see if you can help me. I have a disk created with more than 9TB in raw format; inside the VM its usage never exceeds 4TB, yet Proxmox tells me the storage has consumed all of its space. What has happened?
Can I shrink the raw file so the VM stops giving me problems?

[Attached screenshot: Captura de pantalla 2022-02-18 a las 14.07.40.png]
 
Please provide the output of pveversion -v, qm config 184, pvesm status and cat /etc/pve/storage.cfg.
 
I would guess you are using a raidz1/2/3 pool and didn't increase the blocksize of your pool (in other words, you are using too low a volblocksize for your zvols). In that case you get a lot of padding overhead and everything needs far more storage than expected.
If that's the case, search the forum for "volblocksize" or "padding overhead"; I have explained it dozens of times here.

In case you have a raidz1/2/3 pool, the output of zpool status and zpool list would also be useful.
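For reference, this is roughly how you could check the current values yourself (pool and zvol names below are placeholders, replace them with yours):

zpool get ashift <yourpool>
zfs get volblocksize <yourpool>/vm-<vmid>-disk-<n>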
 
root@ALMACENDATOS:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-24-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-12
pve-kernel-4.15.18-24-pve: 4.15.18-52
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-41
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-55
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

**************************************************
root@ALMACENDATOS:~# qm config 184
agent: 1
bootdisk: scsi0
cores: 4
description: Copias%3A%0ASemanales/mensuales/Anuales%0A%0AFtp propio
ide0: ISOS:iso/VMware_vmxnet3.iso,media=cdrom,size=95984K
memory: 8192
name: FTPSTORAGE3
net0: vmxnet3=02:00:00:8A:56:67,bridge=vmbr2
numa: 0
onboot: 1
ostype: win10
scsi0: ALMACENDATOS:vm-184-disk-0,cache=writethrough,discard=on,size=1100G
scsi1: ALMACENDATOS:vm-184-disk-1,cache=directsync,discard=on,size=9600G
scsihw: virtio-scsi-pci
smbios1: uuid=18f10bd1-9866-4f6e-a580-98870a375bb8
sockets: 2
vmgenid: c841f582-aaab-4294-993a-1c2c5f95d9e8
***************************************
root@ALMACENDATOS:~# pvesm status
Name                        Type      Status     Total         Used          Available   %
ALMACENDATOS                zfspool   active     10958536704   10957596328   940375      99.99%
FTP24TB                     zfspool   disabled   0             0             0           N/A
ISOS                        nfs       active     524288000     423349248     100938752   80.75%
SNAPSHOT71                  nfs       disabled   0             0             0           N/A
SNAPSHOT81                  nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_2          nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_3          nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_3_15DIAS   nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_4          nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_5          nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_6          nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_7          nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_8          nfs       disabled   0             0             0           N/A
SNAPSHOT_STORAGE_9          nfs       disabled   0             0             0           N/A
SSD                         zfspool   disabled   0             0             0           N/A
Snapshot_10                 nfs       disabled   0             0             0           N/A
almacenftp                  zfspool   disabled   0             0             0           N/A
local                       dir       active     478212864     1573632       476639232   0.33%
mcdisco                     rbd       active     1775249823    986264031     788985792   55.56%


******************
root@ALMACENDATOS:~# cat /etc/pve/storage.cfg
nfs: ISOS
        export /export/ftpbackup/ns3147355.ip-51-91-15.eu
        path /mnt/pve/ISOS
        server ftpback-rbx7-642.ovh.net
        content iso
        maxfiles 1
        options vers=3

dir: local
        path /var/lib/vz
        content backup,images,snippets,vztmpl,rootdir,iso
        maxfiles 0
        shared 0

rbd: mcdisco
        content rootdir,images
        krbd 0
        nodes TRANSPORTESPRATS,MCPROXMOX1,MCPROXMOX2,ALMACENDATOS,MCPROXMOX3,RAPALO
        pool mcdisco

zfspool: SSD
        pool SSD
        content images,rootdir
        nodes LAPIEMONTESA

zfspool: almacenftp
        pool almacen
        blocksize 8K
        content images
        nodes FTPSERVER
        sparse 1

zfspool: FTP24TB
        pool almacenftp
        content images
        nodes FTP24TB
        sparse 1

zfspool: ALMACENDATOS
        pool ALMACENDATOS
        content images
        nodes ALMACENDATOS
        sparse 1

nfs: SNAPSHOT_STORAGE_2
        export /export/ftpbackup/ns3147356.ip-51-91-15.eu
        path /mnt/pve/SNAPSHOT_STORAGE_2
        server ftpback-rbx3-303.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX2,MCPROXMOX3

nfs: SNAPSHOT_STORAGE_3
        export /export/ftpbackup/ns3147357.ip-51-91-15.eu
        path /mnt/pve/SNAPSHOT_STORAGE_3
        server ftpback-rbx3-500.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3,MCPROXMOX1

nfs: SNAPSHOT_STORAGE_4
        export /export/ftpbackup/ns3136329.ip-51-77-246.eu
        path /mnt/pve/SNAPSHOT_STORAGE_4
        server ftpback-rbx2-58.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3,TRANSPORTESPRATS

nfs: SNAPSHOT_STORAGE_5
        export /export/ftpbackup/ns3142069.ip-51-77-246.eu
        path /mnt/pve/SNAPSHOT_STORAGE_5
        server ftpback-rbx3-272.ovh.net
        content backup
        maxfiles 2
        nodes LAPIEMONTESA,MCPROXMOX2,MCPROXMOX3

nfs: SNAPSHOT_STORAGE_6
        export /export/ftpbackup/ns3136209.ip-51-77-246.eu
        path /mnt/pve/SNAPSHOT_STORAGE_6
        server ftpback-rbx3-242.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3,RAPALO

nfs: SNAPSHOT_STORAGE_7
        disable
        export /export/ftpbackup/ns3142015.ip-51-77-246.eu
        path /mnt/pve/SNAPSHOT_STORAGE_7
        server ftpback-rbx3-431.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3,GRUPO1800

nfs: SNAPSHOT_STORAGE_8
        export /export/ftpbackup/ns31107983.ip-51-91-7.eu
        path /mnt/pve/SNAPSHOT_STORAGE_8
        server ftpback-rbx3-146.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3,TRANSPORTESPRATS

nfs: SNAPSHOT_STORAGE_9
        export /export/ftpbackup/ns3143355.ip-51-83-3.eu
        path /mnt/pve/SNAPSHOT_STORAGE_9
        server ftpback-rbx7-698.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3,ICAR1

nfs: SNAPSHOT71
        export /export/ftpbackup/ns3136345.ip-51-77-118.eu
        path /mnt/pve/SNAPSHOT71
        server ftpback-rbx7-796.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3,MCPROXMOX1,ICAR1,GRUPO1800,LAPIEMONTESA

nfs: SNAPSHOT81
        export /export/ftpbackup/ns3186800.ip-51-195-105.eu
        path /mnt/pve/SNAPSHOT81
        server ftpback-rbx7-850.ovh.net
        content backup
        maxfiles 2
        nodes TRANSPORTESPRATS,MCPROXMOX3

nfs: Snapshot_10
        export /export/ftpbackup/ns3136345.ip-51-77-118.eu
        path /mnt/pve/Snapshot_10
        server ftpback-rbx7-796.ovh.net
        content backup
        maxfiles 2
        nodes MCPROXMOX3

nfs: SNAPSHOT_STORAGE_3_15DIAS
        export /export/ftpbackup/ns3147357.ip-51-91-15.eu
        path /mnt/pve/SNAPSHOT_STORAGE_3_15DIAS
        server ftpback-rbx3-500.ovh.net
        content backup
        maxfiles 15
        nodes GRUPO1800

***********************************************
 

Thank you very much for your help. I can't see what the correct values should be...

**********
root@ALMACENDATOS:~# zpool list
NAME            SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ALMACENDATOS   14.5T  13.2T  1.35T         -    39%    90%  1.00x  ONLINE  -
rpool           476G  6.39G   470G         -    44%     1%  1.00x  ONLINE  -


**********
root@ALMACENDATOS:~# zpool status
  pool: ALMACENDATOS
 state: ONLINE
  scan: scrub repaired 272K in 48h43m with 0 errors on Tue Feb 15 01:07:50 2022
config:

        NAME            STATE     READ WRITE CKSUM
        ALMACENDATOS    ONLINE       0     0     0
          raidz1-0      ONLINE       0     0     0
            sda         ONLINE       0     0     0
            sdb         ONLINE       0     0     0
            sdc         ONLINE       0     0     0
            sdd         ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Feb 13 00:24:52 2022
config:

        NAME         STATE     READ WRITE CKSUM
        rpool        ONLINE       0     0     0
          nvme0n1p2  ONLINE       0     0     0

errors: No known data errors
 
In case your pool "ALMACENDATOS" was created with ashift=12 (you can check that with zpool get ashift ALMACENDATOS) and 4 disks in a raidz1, your volblocksize needs to be at least 16K. It basically looks like this:
Volblocksize    Parity + padding loss    Usable raw capacity for zvols
4K / 8K         50%                      40%
16K / 32K       33%                      54%
64K / 128K      27%                      58%
256K / 512K     26%                      59%
1M              25%                      60%
"Usable raw capacity for zvols" includes the 20% of the ZFS pool that always should be kept free for good performance and to minimize fragmentation.

So I would probably go with the 16K volblocksize even if you lose some capacity, because otherwise running stuff like MySQL would cause terrible overhead. In that case, of your 14.5T raw, around 7.83T (14.5T x 54% from the table above) would be usable for virtual disks. I personally would create a quota so these 20% always stay free: zfs set quota=7.83T ALMACENDATOS
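If you set that quota, you can verify it and keep an eye on the remaining space with something like:

zfs get quota,used,available ALMACENDATOS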

The volblocksize of virtual disks (zvols) can only be set at creation time, so to change it you would need to change your pool's blocksize in the storage config ("WebUI: Datacenter -> Storage -> ALMACENDATOS -> Edit -> Block size") from the default 8K to something like 16K. After that you have to destroy and recreate all of your VMs. The easiest way to do this would be to create vzdump/PBS backups of all VMs, destroy them, and restore them from the backups, as sketched below. Migrating VMs between nodes would work too if you have a cluster.
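Just as a rough sketch of that cycle for VM 184, not a ready-made recipe (the backup storage and backup file names are placeholders, and the CLI blocksize option is my assumption here; double-check everything before destroying anything):

# set the new blocksize on the storage first (WebUI as above, or via CLI)
pvesm set ALMACENDATOS --blocksize 16k
# full backup of the VM while it is powered off
vzdump 184 --storage <your_backup_storage> --mode stop
# remove the old VM (this deletes its zvols!)
qm destroy 184
# restore from the backup file written to the storage's dump directory;
# the disks are recreated with the new blocksize
qmrestore <path_to_backup_file> 184 --storage ALMACENDATOS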
 
