Hi!
Apologies if this should be relatively simple to solve.
I am trying to use "almost" all of the disk space for a single image (I want the full HDD capacity available to that VM; happy to hear other solutions).
I have created a RAID6 array using mdadm from 5 HDDs of 8TB each. With double redundancy that leaves 24TB of usable space.
Bash:
root@pve:~# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Apr 11 12:35:49 2023
Raid Level : raid6
Array Size : 23441679360 (21.83 TiB 24.00 TB)
Used Dev Size : 7813893120 (7.28 TiB 8.00 TB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Jul 26 16:43:17 2024
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : pve:1 (local to host pve)
UUID : 592b0087:cfab0caf:5ca8c3cc:7c5f4fc4
Events : 77829
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
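For reference, an array matching the detail output above would have been created with something along these lines (a sketch from memory, not the exact command I ran):
Bash:
# 5 member partitions, double parity (RAID6), 512K chunk as shown above
mdadm --create /dev/md1 --level=6 --raid-devices=5 --chunk=512 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1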
I have it mounted at /mnt/md1 and added to Proxmox as a directory storage. As shown below, less than 600GB is used.
Bash:
root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 30G 0 30G 0% /dev
tmpfs 5.9G 1.6M 5.9G 1% /run
/dev/mapper/pve-root 94G 42G 48G 48% /
tmpfs 30G 46M 30G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 128K 34K 90K 28% /sys/firmware/efi/efivars
/dev/nvme1n1p2 511M 336K 511M 1% /boot/efi
/dev/md1 22T 527G 21T 3% /mnt/md1
/dev/fuse 128M 24K 128M 1% /etc/pve
tmpfs 5.9G 0 5.9G 0% /run/user/0
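The directory storage itself was added with something along these lines (the storage ID is illustrative, not necessarily what I named it):
Bash:
# register the mount point as a directory storage for VM images
pvesm add dir md1 --path /mnt/md1 --content images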
Unfortunately, the largest image I am able to create is 16TB. I have tried both raw and qcow2; neither lets me go above 16TB.
Trying to create a 17TB image:
Bash:
TASK ERROR: unable to create image: qemu-img: /mnt/md1/images/115/vm-115-disk-0.qcow2: The image size is too large for file format 'qcow2' (try using a larger cluster size)
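As far as I can tell, the task above corresponds roughly to this manual invocation (the preallocation option Proxmox passes is an assumption on my part):
Bash:
qemu-img create -f qcow2 -o preallocation=metadata /mnt/md1/images/115/vm-115-disk-0.qcow2 17T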
Trying to resize an existing 16TB raw image by +1TB also fails:
Bash:
qemu-img: Could not resize file: File too large
TASK ERROR: command '/usr/bin/qemu-img resize -f raw /mnt/md1/images/115/vm-115-disk-0.raw 17716740096000' failed: exit code 1
I am very curious whether I am doing something wrong somewhere or whether there is some limitation I am not aware of.
The only workaround I have considered so far is having two 12TB images and either assembling them as RAID0 inside the VM, or mounting them as two separate drives and moving files between them to keep the usage balanced.
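For the RAID0 variant, I would expect the setup inside the guest to look roughly like this (the virtio device names and mount point are just an example):
Bash:
# inside the VM: stripe the two virtual disks together
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/data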
Thank you for reading, I hope someone here knows more than I do!