2 TB of VMs, but 4.4 TB of storage used

jzuck74

I have 20 VMs running on one server; each VM is 110 GB to 137 GB in size. That works out to roughly 2.2 TB to 2.4 TB of VM disks, yet my storage pool is at 89.41% (4.40 TB of 4.92 TB used). I'm not sure where I lost about 2 TB of storage.
 
Name       Type     Status  Total       Used        Available   %
local      dir      active  564892288   206909568   357982720   36.63%
local-zfs  zfspool  active  357982884   96          357982788   0.00%
storage    zfspool  active  4804575232  4295878038  508697193   89.41%


NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 558.9G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 558.4G 0 part
sdb 8:16 0 558.9G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 512M 0 part
└─sdb3 8:19 0 558.4G 0 part
sdc 8:32 0 1.6T 0 disk
├─sdc1 8:33 0 1.6T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 1.6T 0 disk
├─sdd1 8:49 0 1.6T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 1.6T 0 disk
├─sde1 8:65 0 1.6T 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 0 1.6T 0 disk
├─sdf1 8:81 0 1.6T 0 part
└─sdf9 8:89 0 8M 0 part
sdg 8:96 0 1.6T 0 disk
├─sdg1 8:97 0 1.6T 0 part
└─sdg9 8:105 0 8M 0 part
sdh 8:112 0 1.6T 0 disk
├─sdh1 8:113 0 1.6T 0 part
└─sdh9 8:121 0 8M 0 part
zd0 230:0 0 100G 0 disk
├─zd0p1 230:1 0 450M 0 part
├─zd0p2 230:2 0 99M 0 part
├─zd0p3 230:3 0 16M 0 part
├─zd0p4 230:4 0 99G 0 part
└─zd0p5 230:5 0 502M 0 part
zd16 230:16 0 1M 0 disk
zd32 230:32 0 100G 0 disk
├─zd32p1 230:33 0 450M 0 part
├─zd32p2 230:34 0 99M 0 part
├─zd32p3 230:35 0 16M 0 part
├─zd32p4 230:36 0 99G 0 part
└─zd32p5 230:37 0 503M 0 part
zd48 230:48 0 100G 0 disk
├─zd48p1 230:49 0 450M 0 part
├─zd48p2 230:50 0 99M 0 part
├─zd48p3 230:51 0 16M 0 part
├─zd48p4 230:52 0 99G 0 part
└─zd48p5 230:53 0 501M 0 part
zd64 230:64 0 100G 0 disk
├─zd64p1 230:65 0 450M 0 part
├─zd64p2 230:66 0 99M 0 part
├─zd64p3 230:67 0 16M 0 part
├─zd64p4 230:68 0 99G 0 part
└─zd64p5 230:69 0 501M 0 part
zd80 230:80 0 1M 0 disk
zd96 230:96 0 100G 0 disk
├─zd96p1 230:97 0 450M 0 part
├─zd96p2 230:98 0 99M 0 part
├─zd96p3 230:99 0 16M 0 part
├─zd96p4 230:100 0 99G 0 part
└─zd96p5 230:101 0 500M 0 part
zd112 230:112 0 1M 0 disk
zd128 230:128 0 100G 0 disk
├─zd128p1 230:129 0 450M 0 part
├─zd128p2 230:130 0 99M 0 part
├─zd128p3 230:131 0 16M 0 part
├─zd128p4 230:132 0 99G 0 part
└─zd128p5 230:133 0 500M 0 part
zd144 230:144 0 1M 0 disk
zd160 230:160 0 1M 0 disk
zd176 230:176 0 1M 0 disk
zd192 230:192 0 100G 0 disk
├─zd192p1 230:193 0 450M 0 part
├─zd192p2 230:194 0 99M 0 part
├─zd192p3 230:195 0 16M 0 part
├─zd192p4 230:196 0 99G 0 part
└─zd192p5 230:197 0 502M 0 part
zd208 230:208 0 1M 0 disk
zd224 230:224 0 1M 0 disk
zd240 230:240 0 100G 0 disk
├─zd240p1 230:241 0 450M 0 part
├─zd240p2 230:242 0 99M 0 part
├─zd240p3 230:243 0 16M 0 part
├─zd240p4 230:244 0 99G 0 part
└─zd240p5 230:245 0 500M 0 part
zd256 230:256 0 100G 0 disk
├─zd256p1 230:257 0 450M 0 part
├─zd256p2 230:258 0 99M 0 part
├─zd256p3 230:259 0 16M 0 part
├─zd256p4 230:260 0 99G 0 part
└─zd256p5 230:261 0 501M 0 part
zd272 230:272 0 1M 0 disk
zd288 230:288 0 1M 0 disk
zd304 230:304 0 100G 0 disk
├─zd304p1 230:305 0 450M 0 part
├─zd304p2 230:306 0 99M 0 part
├─zd304p3 230:307 0 16M 0 part
├─zd304p4 230:308 0 99G 0 part
└─zd304p5 230:309 0 500M 0 part
zd320 230:320 0 1M 0 disk
zd336 230:336 0 127G 0 disk
├─zd336p1 230:337 0 450M 0 part
├─zd336p2 230:338 0 99M 0 part
├─zd336p3 230:339 0 16M 0 part
├─zd336p4 230:340 0 126G 0 part
└─zd336p5 230:341 0 502M 0 part
zd352 230:352 0 1M 0 disk
zd368 230:368 0 1M 0 disk
zd384 230:384 0 1M 0 disk
zd400 230:400 0 1M 0 disk
zd416 230:416 0 1M 0 disk
zd432 230:432 0 1M 0 disk
zd448 230:448 0 1M 0 disk
zd464 230:464 0 1M 0 disk
zd480 230:480 0 1M 0 disk
zd496 230:496 0 127G 0 disk
zd512 230:512 0 127G 0 disk
├─zd512p1 230:513 0 450M 0 part
├─zd512p2 230:514 0 99M 0 part
├─zd512p3 230:515 0 16M 0 part
├─zd512p4 230:516 0 126G 0 part
└─zd512p5 230:517 0 500M 0 part
zd528 230:528 0 127G 0 disk
├─zd528p1 230:529 0 450M 0 part
├─zd528p2 230:530 0 99M 0 part
├─zd528p3 230:531 0 16M 0 part
├─zd528p4 230:532 0 126G 0 part
└─zd528p5 230:533 0 500M 0 part
zd544 230:544 0 127G 0 disk
├─zd544p1 230:545 0 450M 0 part
├─zd544p2 230:546 0 99M 0 part
├─zd544p3 230:547 0 16M 0 part
├─zd544p4 230:548 0 126G 0 part
└─zd544p5 230:549 0 500M 0 part
zd560 230:560 0 127G 0 disk
├─zd560p1 230:561 0 450M 0 part
├─zd560p2 230:562 0 99M 0 part
├─zd560p3 230:563 0 16M 0 part
├─zd560p4 230:564 0 126G 0 part
└─zd560p5 230:565 0 500M 0 part
zd576 230:576 0 127G 0 disk
├─zd576p1 230:577 0 450M 0 part
├─zd576p2 230:578 0 99M 0 part
├─zd576p3 230:579 0 16M 0 part
├─zd576p4 230:580 0 126G 0 part
└─zd576p5 230:581 0 500M 0 part
zd592 230:592 0 127G 0 disk
├─zd592p1 230:593 0 450M 0 part
├─zd592p2 230:594 0 99M 0 part
├─zd592p3 230:595 0 16M 0 part
├─zd592p4 230:596 0 126G 0 part
└─zd592p5 230:597 0 503M 0 part
zd608 230:608 0 127G 0 disk
├─zd608p1 230:609 0 450M 0 part
├─zd608p2 230:610 0 99M 0 part
├─zd608p3 230:611 0 16M 0 part
├─zd608p4 230:612 0 126G 0 part
└─zd608p5 230:613 0 502M 0 part
zd624 230:624 0 127G 0 disk
├─zd624p1 230:625 0 450M 0 part
├─zd624p2 230:626 0 99M 0 part
├─zd624p3 230:627 0 16M 0 part
├─zd624p4 230:628 0 126G 0 part
└─zd624p5 230:629 0 500M 0 part
zd640 230:640 0 127G 0 disk
├─zd640p1 230:641 0 450M 0 part
├─zd640p2 230:642 0 99M 0 part
├─zd640p3 230:643 0 16M 0 part
├─zd640p4 230:644 0 126G 0 part
└─zd640p5 230:645 0 500M 0 part

Filesystem Size Used Avail Use% Mounted on
udev 189G 0 189G 0% /dev
tmpfs 38G 2.7M 38G 1% /run
rpool/ROOT/pve-1 539G 198G 342G 37% /
tmpfs 189G 79M 189G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
rpool 342G 128K 342G 1% /rpool
rpool/ROOT 342G 128K 342G 1% /rpool/ROOT
rpool/data 342G 128K 342G 1% /rpool/data
storage 486G 256K 486G 1% /storage
/dev/fuse 128M 32K 128M 1% /etc/pve
tmpfs 38G 0 38G 0% /run/user/0
 
It's usually either snapshots, padding overhead from using raidz1/2/3 with too low a volblocksize, or discard/TRIM not being set up correctly.
The output of zfs list -o space would also be useful to see whether snapshots or missing discard is the problem, as well as zpool get ashift and zfs get volblocksize for the padding overhead.
And please put your output in CODE tags... it makes the tables much easier to read.
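For example, something along these lines run on the node would cover all three checks (311 is just one of your VMIDs picked as an example; for TRIM to actually free space, the disk needs discard=on plus trimming inside the guest, e.g. fstrim):

# snapshot / refreservation usage per dataset
zfs list -o space
# pool sector size and zvol block size (relevant for raidz padding)
zpool get ashift
zfs get volblocksize
# check whether the virtual disks have discard=on set (example VMID)
qm config 311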
 
It's usually either snapshots, padding overhead from using raidz1/2/3 with too low a volblocksize, or discard/TRIM not being set up correctly.
Just to be precise: that's not an exclusive either/or. It could also be all three of them, and that is my guess here too.
 
root@LPVE001:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:15:38 with 0 errors on Sun Aug 14 00:39:40 2022
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            scsi-3500003969833832d-part3  ONLINE       0     0     0
            scsi-35000039698338425-part3  ONLINE       0     0     0

errors: No known data errors

  pool: storage
 state: ONLINE
  scan: scrub repaired 0B in 00:28:47 with 0 errors on Sun Aug 14 00:52:50 2022
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     ONLINE       0     0     0
          raidz3-0                  ONLINE       0     0     0
            scsi-35000cca03117f3bc  ONLINE       0     0     0
            scsi-35000cca03117dfdc  ONLINE       0     0     0
            scsi-35000cca03117f1a8  ONLINE       0     0     0
            scsi-35000cca03117e04c  ONLINE       0     0     0
            scsi-35000cca03117decc  ONLINE       0     0     0
            scsi-35000cca03117df64  ONLINE       0     0     0

errors: No known data errors
root@LPVE001:~# zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
rpool 374G 165G 0B 104K 0B 165G
rpool/ROOT 374G 164G 0B 96K 0B 164G
rpool/ROOT/pve-1 374G 164G 0B 164G 0B 0B
rpool/data 374G 96K 0B 96K 0B 0B
storage 486G 4.00T 0B 180K 0B 4.00T
storage/vm-311-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-311-disk-1 628G 180G 0B 37.4G 143G 0B
storage/vm-312-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-312-disk-1 628G 175G 0B 32.4G 143G 0B
storage/vm-313-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-313-disk-1 628G 176G 0B 33.3G 143G 0B
storage/vm-314-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-314-disk-1 628G 175G 0B 32.7G 143G 0B
storage/vm-315-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-315-disk-1 628G 176G 0B 33.1G 143G 0B
storage/vm-316-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-316-disk-1 628G 177G 0B 33.9G 143G 0B
storage/vm-317-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-317-disk-1 628G 184G 0B 41.7G 143G 0B
storage/vm-318-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-318-disk-1 628G 185G 0B 41.9G 143G 0B
storage/vm-319-disk-0 486G 3.59M 0B 180K 3.41M 0B
storage/vm-319-disk-1 628G 212G 0B 69.5G 143G 0B
storage/vm-320-disk-0 486G 3.58M 0B 172K 3.41M 0B
storage/vm-320-disk-1 628G 190G 0B 47.3G 143G 0B
storage/vm-321-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-321-disk-1 667G 207G 179M 25.7G 181G 0B
storage/vm-322-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-322-disk-1 667G 181G 0B 105K 181G 0B
storage/vm-322-disk-2 667G 209G 131M 27.7G 181G 0B
storage/vm-323-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-323-disk-1 667G 208G 190M 27.1G 181G 0B
storage/vm-324-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-324-disk-1 667G 208G 136M 26.4G 181G 0B
storage/vm-325-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-325-disk-1 667G 209G 122M 27.7G 181G 0B
storage/vm-326-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-326-disk-1 667G 207G 114M 26.0G 181G 0B
storage/vm-327-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-327-disk-1 667G 207G 124M 26.2G 181G 0B
storage/vm-328-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-328-disk-1 667G 208G 112M 27.1G 181G 0B
storage/vm-329-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-329-disk-1 667G 211G 122M 29.6G 181G 0B
storage/vm-330-disk-0 486G 3.57M 0B 158K 3.41M 0B
storage/vm-330-disk-1 667G 209G 109M 27.6G 181G 0B
root@LPVE001:~# zpool get ashift
NAME PROPERTY VALUE SOURCE
rpool ashift 12 local
storage ashift 12 local
root@LPVE001:~# zfs get volblocksize
NAME PROPERTY VALUE SOURCE
rpool volblocksize - -
rpool/ROOT volblocksize - -
rpool/ROOT/pve-1 volblocksize - -
rpool/data volblocksize - -
storage volblocksize - -
storage/vm-311-disk-0 volblocksize 16K -
storage/vm-311-disk-0@__replicate_311-0_1660667403__ volblocksize - -
storage/vm-311-disk-1 volblocksize 16K -
storage/vm-311-disk-1@__replicate_311-0_1660667403__ volblocksize - -
storage/vm-312-disk-0 volblocksize 16K -
storage/vm-312-disk-0@__replicate_312-0_1660667418__ volblocksize - -
storage/vm-312-disk-1 volblocksize 16K -
storage/vm-312-disk-1@__replicate_312-0_1660667418__ volblocksize - -
storage/vm-313-disk-0 volblocksize 16K -
storage/vm-313-disk-0@__replicate_313-0_1660667430__ volblocksize - -
storage/vm-313-disk-1 volblocksize 16K -
storage/vm-313-disk-1@__replicate_313-0_1660667430__ volblocksize - -
storage/vm-314-disk-0 volblocksize 16K -
storage/vm-314-disk-0@__replicate_314-0_1660667445__ volblocksize - -
storage/vm-314-disk-1 volblocksize 16K -
storage/vm-314-disk-1@__replicate_314-0_1660667445__ volblocksize - -
storage/vm-315-disk-0 volblocksize 16K -
storage/vm-315-disk-0@__replicate_315-0_1660667459__ volblocksize - -
storage/vm-315-disk-1 volblocksize 16K -
storage/vm-315-disk-1@__replicate_315-0_1660667459__ volblocksize - -
storage/vm-316-disk-0 volblocksize 16K -
storage/vm-316-disk-0@__replicate_316-0_1660667474__ volblocksize - -
storage/vm-316-disk-1 volblocksize 16K -
storage/vm-316-disk-1@__replicate_316-0_1660667474__ volblocksize - -
storage/vm-317-disk-0 volblocksize 16K -
storage/vm-317-disk-0@__replicate_317-0_1660667485__ volblocksize - -
storage/vm-317-disk-1 volblocksize 16K -
storage/vm-317-disk-1@__replicate_317-0_1660667485__ volblocksize - -
storage/vm-318-disk-0 volblocksize 16K -
storage/vm-318-disk-0@__replicate_318-0_1660667500__ volblocksize - -
storage/vm-318-disk-1 volblocksize 16K -
storage/vm-318-disk-1@__replicate_318-0_1660667500__ volblocksize - -
storage/vm-319-disk-0 volblocksize 16K -
storage/vm-319-disk-0@__replicate_319-0_1660667515__ volblocksize - -
storage/vm-319-disk-1 volblocksize 16K -
storage/vm-319-disk-1@__replicate_319-0_1660667515__ volblocksize - -
storage/vm-320-disk-0 volblocksize 16K -
storage/vm-320-disk-0@__replicate_320-0_1660667531__ volblocksize - -
storage/vm-320-disk-1 volblocksize 16K -
storage/vm-320-disk-1@__replicate_320-0_1660667531__ volblocksize - -
storage/vm-321-disk-0 volblocksize 16K -
storage/vm-321-disk-0@__replicate_321-0_1660667402__ volblocksize - -
storage/vm-321-disk-1 volblocksize 16K -
storage/vm-321-disk-1@__replicate_321-0_1660667402__ volblocksize - -
storage/vm-322-disk-0 volblocksize 16K -
storage/vm-322-disk-0@__replicate_322-0_1660667420__ volblocksize - -
storage/vm-322-disk-1 volblocksize 16K -
storage/vm-322-disk-2 volblocksize 16K -
storage/vm-322-disk-2@__replicate_322-0_1660667420__ volblocksize - -
storage/vm-323-disk-0 volblocksize 16K -
storage/vm-323-disk-0@__replicate_323-0_1660667439__ volblocksize - -
storage/vm-323-disk-1 volblocksize 16K -
storage/vm-323-disk-1@__replicate_323-0_1660667439__ volblocksize - -
storage/vm-324-disk-0 volblocksize 16K -
storage/vm-324-disk-0@__replicate_324-0_1660667458__ volblocksize - -
storage/vm-324-disk-1 volblocksize 16K -
storage/vm-324-disk-1@__replicate_324-0_1660667458__ volblocksize - -
storage/vm-325-disk-0 volblocksize 16K -
storage/vm-325-disk-0@__replicate_325-0_1660667476__ volblocksize - -
storage/vm-325-disk-1 volblocksize 16K -
storage/vm-325-disk-1@__replicate_325-0_1660667476__ volblocksize - -
storage/vm-326-disk-0 volblocksize 16K -
storage/vm-326-disk-0@__replicate_326-0_1660667495__ volblocksize - -
storage/vm-326-disk-1 volblocksize 16K -
storage/vm-326-disk-1@__replicate_326-0_1660667495__ volblocksize - -
storage/vm-327-disk-0 volblocksize 16K -
storage/vm-327-disk-0@__replicate_327-0_1660667512__ volblocksize - -
storage/vm-327-disk-1 volblocksize 16K -
storage/vm-327-disk-1@__replicate_327-0_1660667512__ volblocksize - -
storage/vm-328-disk-0 volblocksize 16K -
storage/vm-328-disk-0@__replicate_328-0_1660667528__ volblocksize - -
storage/vm-328-disk-1 volblocksize 16K -
storage/vm-328-disk-1@__replicate_328-0_1660667528__ volblocksize - -
storage/vm-329-disk-0 volblocksize 16K -
storage/vm-329-disk-0@__replicate_329-0_1660667544__ volblocksize - -
storage/vm-329-disk-1 volblocksize 16K -
storage/vm-329-disk-1@__replicate_329-0_1660667544__ volblocksize - -
storage/vm-330-disk-0 volblocksize 16K -
storage/vm-330-disk-0@__replicate_330-0_1660667561__ volblocksize - -
storage/vm-330-disk-1 volblocksize 16K -
storage/vm-330-disk-1@__replicate_330-0_1660667561__ volblocksize - -
root@LPVE001:~#
 
So snapshots and missing discard aren't the problem; padding overhead is. With your ashift of 12 and a volblocksize of 16K on a 6-disk raidz3, only about 33% of the raw storage is usable (and of that, 20% should be kept free, so effectively only about 26% of the raw capacity is usable for virtual disks).
It is therefore expected that every disk shows up at roughly 150% of its size: for every 1 GB of data, around 500 MB of padding blocks are stored on top.
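A rough worked example of where that overhead comes from (just a sketch of the usual raidz allocation math, assuming 4K sectors because of ashift=12):

# a 16K volblock at ashift=12 = 4 data sectors of 4K
# each raidz3 stripe row holds at most 3 data + 3 parity sectors, and the
# allocation gets padded up to a multiple of (parity + 1) = 4 sectors:
#   4 data + 6 parity = 10 sectors -> padded to 12 sectors = 48K on disk
# 16K of usable data out of 48K raw = ~33% efficiency
# ZFS reports zvol usage after dividing by the parity ratio it assumes for
# large blocks, which is why the 100G/127G disks above end up with
# ~143G/~181G refreservations (roughly 1.4-1.5x their logical size)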
If you don't want to lose so much capacity, you would need to increase your volblocksize to something like 128K or even higher. But that also has its downsides... I wouldn't run databases on that storage, for example, as the performance and wear caused by all the small reads/writes being amplified to 128K blocks would be terrible.
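If you do increase it, keep in mind that volblocksize can only be set when a zvol is created. One way (a sketch; only the blocksize line is the actual change, the rest of the entry has to match your existing storage definition) is to raise the block size on the Proxmox storage and then recreate the existing disks, e.g. with Move Disk to another storage and back, or via backup/restore:

# /etc/pve/storage.cfg -- new zvols on this storage will use 128K,
# existing disks keep their old 16K until they are recreated
zfspool: storage
        pool storage
        content images,rootdir
        blocksize 128k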

If your workload doesn't allow you to increase the volblocksize that much, it might be better to use a striped mirror instead. A striped three-way mirror (two mirrors of 3 disks each, striped together, see the sketch below) might be an option: you get the same 33% of usable capacity as with your current raidz3, but much better performance, faster resilvering, and it is still very reliable. Any two disks can fail, and up to four if the right ones fail, without losing data. Not as reliable as raidz3, where any three disks can fail, but at least resilvering is faster, so the pool spends less time in a vulnerable state.
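Creating that layout would look roughly like this (just a sketch: the existing pool would have to be destroyed first, so back up or migrate the VMs beforehand, and the DISK1..DISK6 paths are placeholders for your real /dev/disk/by-id/scsi-... names):

# two 3-way mirrors striped together, ~33% usable raw capacity
zpool create -o ashift=12 storage \
    mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 \
    mirror /dev/disk/by-id/DISK4 /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6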
 
