Unable to create image with full disk space.

Jan 24, 2023
Hi!
Apologies if this should be relatively simple to solve.
I am trying to use "almost" all of the disk space for a single image (I want the full HDD space available to that VM; happy to hear other solutions).

I have created a RAID6 array using mdadm. In total there are 5 HDDs of 8TB each, so with double redundancy there is 24TB of usable space.
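For reference, an array like this is typically built with a single mdadm command along these lines (just a sketch; the device names match the --detail output below, but the exact chunk size and bitmap options may have differed):
Bash:
# RAID6 over five 8TB partitions: usable capacity = (5 - 2) x 8TB = 24TB
mdadm --create /dev/md1 --level=6 --raid-devices=5 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
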
Bash:
root@pve:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 11 12:35:49 2023
        Raid Level : raid6
        Array Size : 23441679360 (21.83 TiB 24.00 TB)
     Used Dev Size : 7813893120 (7.28 TiB 8.00 TB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Jul 26 16:43:17 2024
             State : active
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : pve:1  (local to host pve)
              UUID : 592b0087:cfab0caf:5ca8c3cc:7c5f4fc4
            Events : 77829

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1

I have it mounted at /mnt/md1 and added to Proxmox as a directory storage. As shown here, less than 600GB is used.
Bash:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   30G     0   30G   0% /dev
tmpfs                 5.9G  1.6M  5.9G   1% /run
/dev/mapper/pve-root   94G   42G   48G  48% /
tmpfs                  30G   46M   30G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
efivarfs              128K   34K   90K  28% /sys/firmware/efi/efivars
/dev/nvme1n1p2        511M  336K  511M   1% /boot/efi
/dev/md1               22T  527G   21T   3% /mnt/md1
/dev/fuse             128M   24K  128M   1% /etc/pve
tmpfs                 5.9G     0  5.9G   0% /run/user/0

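For completeness, a directory storage like this can be registered either via the GUI or with pvesm; a minimal sketch, where the storage ID "md1-dir" is just an example:
Bash:
# register /mnt/md1 as a directory storage that can hold VM disk images
pvesm add dir md1-dir --path /mnt/md1 --content images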


Unfortunately, the maximum image size I am able to create is 16TB, no matter what; I have tried both raw and qcow2.

Trying to create a 17TB image:
Bash:
TASK ERROR: unable to create image: qemu-img: /mnt/md1/images/115/vm-115-disk-0.qcow2: The image size is too large for file format 'qcow2' (try using a larger cluster size)

and trying to resize it by +1TB after creating a 16TB image:

Bash:
qemu-img: Could not resize file: File too large
TASK ERROR: command '/usr/bin/qemu-img resize -f raw /mnt/md1/images/115/vm-115-disk-0.raw 17716740096000' failed: exit code 1



I am very curious whether I am doing something wrong somewhere, or whether there is some limitation I do not know about.
The only workaround I have considered so far is having two 12TB images and either assembling them as RAID0 inside the VM, or mounting them both as separate drives and moving files between them to balance things out.

Thank you for reading, I hope someone knows more than I do!
 
From what I've gathered, 16TB might be the maximum file size for your ext4 filesystem (depending on the block size). You could do what @leesteken suggested, or maybe consider using ZFS?
 
As the error message says, qcow2 does not support more than 16TB at the default cluster size. Why not pass the whole /dev/md1 through to your VM instead: https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM) ? Or did you intend to make backups or snapshots of the virtual disk? Or maybe create an LVM-thin on /dev/md1 instead: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_lvmthin ?
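A minimal sketch of the passthrough variant from that wiki page, assuming VM ID 115 and a free scsi1 slot (for single physical disks the wiki recommends stable /dev/disk/by-id paths; for an md array the assembled device node works):
Bash:
# attach the whole md array to VM 115 as an additional SCSI disk
qm set 115 -scsi1 /dev/md1
# check that the disk shows up in the VM configuration
qm config 115 | grep scsi1
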
I get this error on both qcow2 and raw.
Bash:
failed to update VM 115: unable to create image: qemu-img: /mnt/md1/images/115/vm-115-disk-0.raw: The image size is too large for file format 'raw' (500)

I'm very curious why that would be the limit for my filesystem. As far as I understood it, an ext4 filesystem can seemingly be as large as you want.
Is there a way I can check this, or change this limit?

@KevinS
The main reason I did it this way is so that I can have RAID6. I did not want to use ZFS as the performance impact is higher and the RAM usage is also high. This is not a large server, it's 64GB RAM with a standard desktop CPU, so I do not have a need for ZFS.
 
Why would you think there is no limit? Of course there is. Things like "how many bits in a block index" will limit the size. Making things "infinite" is not possible without killing performance or wasting a lot of space.

Anyhow, ext4 has a max file size of 16 TiB with the default 4k block size. I guess in theory you can use larger block sizes to have larger files?
https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout
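To answer the "is there a way I can check this" question above: the block size and the enabled features are visible in the ext4 superblock (assuming /dev/md1 carries the filesystem):
Bash:
# dump the ext4 superblock and pick out block size, feature flags and block counts
tune2fs -l /dev/md1 | grep -Ei 'block size|features|block count'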

For your use case LVM-thin on top of MD-raid would make sense. LVM2 has a maximum volume size but it is in the Petabytes IIRC. Or just pass the disks into the VM and do the RAID there.
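A rough sketch of that LVM-thin route, assuming the existing ext4 filesystem on /dev/md1 can be wiped (this is destructive, so move the data off and unmount /mnt/md1 first; vg_md1, data and md1-thin are example names):
Bash:
# use the md array as an LVM physical volume / volume group
pvcreate /dev/md1
vgcreate vg_md1 /dev/md1
# create a large LV (leave headroom for thin-pool metadata) and convert it to a thin pool
lvcreate -l 99%FREE -n data vg_md1
lvconvert --type thin-pool vg_md1/data
# register the pool as a Proxmox storage for VM disks
pvesm add lvmthin md1-thin --vgname vg_md1 --thinpool data --content images,rootdir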
 
Thanks, I'll keep that in mind. I got the idea that 16TB was not the limit because of this:
Code:
By default a filesystem can contain 2^32 blocks; if the '64bit' feature is enabled, then a filesystem can have 2^64 blocks.

I knew it wasn't infinite, but I assumed it was at least large enough that it would never be a personal worry. I had thought the 64bit feature raised this limit, but it seems that quote is about the total filesystem size in blocks, not the size of a single file, which stays at 16 TiB with 4K blocks.

I'll keep it simple and do an LVM-thin. Thank you everyone for the help!
 
The simplest solution would be ZFS ... why bother with an unsupported setup like mdadm and jump through so many hoops.

I did not want to use ZFS as the performance impact is higher and the RAM usage is also high. This is not a large server, it's 64GB RAM with a standard desktop CPU, so I do not have a need for ZFS.
 
Why create a storage (LVM-thin or otherwise) and put only a single virtual disk on it? Just disk-passthrough your md1, or pass through all drives separately and set up md inside the VM (so it's nicely self-contained).
 
