How big of a virtual disk image is too big?

wahmed

We need to create a VM with a big virtual disk image. To start it needs 4TB of disk space, and it will grow to 10TB over 2 years. Is that too big for a VM? Has anybody got a big VM running on Proxmox without issues? Our storage backend is Ceph. I am thinking of 4x 1TB virtual disks with LVM set up inside the VM. I am still going to run tests, and a test VM is being set up as I write this. Just wanted to see what the Proxmox community thinks.
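
Roughly what I have in mind, in case it helps the discussion (VM ID, storage ID and device names below are just placeholders, I still have to test this):

Code:
# On the Proxmox host: add four 1TB virtual disks from the Ceph storage to the VM
qm set 101 --scsi1 ceph-storage:1024 --scsi2 ceph-storage:1024 --scsi3 ceph-storage:1024 --scsi4 ceph-storage:1024

# Inside the VM: combine the four disks into one big LVM volume
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate datavg /dev/sdb /dev/sdc /dev/sdd /dev/sde
lvcreate -n datalv -l 100%FREE datavg
mkfs.ext4 /dev/datavg/datalv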
 
You could also try direct-attached iSCSI or FC from a NAS if the disks get too big for Ceph/KVM. Less flexible storage, but at least the server is still running virtualized. High-end QNAP and Synology units can be equipped with FC.

Another solution could be InfiniBand using iSER to a ZFS storage box.
See the video from Mellanox demonstrating 4x the throughput and 4x lower access time compared to iSCSI:
https://community.mellanox.com/docs/DOC-1412
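
If you want to test the direct-attached iSCSI route, it is basically just open-iscsi on the Proxmox host (the portal IP and target IQN below are made up, check your NAS documentation):

Code:
# Discover and log in to the NAS target from the Proxmox host
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2014-01.com.example:storage.lun1 -p 192.168.1.50 --login
# The LUN then shows up as a local block device (/dev/sdX) that can be handed to the VM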
 
I'm running a VM as a fileserver which has a large host disk device exported as "scsi0: /dev/sdd1", currently 18.5 TiB
(6x 4TB RAID-5). I wouldn't worry about 'too' big ... You should of course be running a filesystem that can handle devices that big, though. (I'm currently running XFS on the 18.5 TiB device without problems.)
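
For anyone wanting to do the same, passing a whole host device through looks roughly like this (VM ID and device path are placeholders; using /dev/disk/by-id/ is safer than /dev/sdX):

Code:
# On the Proxmox host: pass the RAID partition through to the VM as scsi0
qm set 102 --scsi0 /dev/disk/by-id/scsi-EXAMPLE-RAID-VOLUME-part1
# Inside the VM: create an XFS filesystem on it (XFS handles multi-TB devices fine)
mkfs.xfs /dev/sdb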
 

Hi Wasim,
I have one VM which is an NFS server for archive data. If necessary I add 16TB slices to the LVM storage.

At the moment I use 4x 16TB of Ceph storage (from an EC pool) in this VM.

On our production server I use 4TB slices as the maximum size (16TB now).

Udo

PS: If you use ext4 inside the VM (which I prefer, because big XFS partitions cannot be fsck'd - they need too much RAM for that), check which version of the tools you use. Debian in particular ships mkfs tools that are too old, so they can't go over 16TB! My archive NFS server runs on Jessie ;-)
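
Extending the archive this way is straightforward - a rough sketch (VG/LV names are just examples; for ext4 beyond 16TB you also need reasonably new e2fsprogs and the 64bit feature):

Code:
# Inside the VM, after a new slice has been added (shows up as e.g. /dev/sdf)
pvcreate /dev/sdf
vgextend archivevg /dev/sdf
lvextend -l +100%FREE /dev/archivevg/archivelv
resize2fs /dev/archivevg/archivelv

# For a fresh filesystem that has to grow past 16TB, create it 64bit from the start
mkfs.ext4 -O 64bit /dev/archivevg/archivelv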
 
Good to see large VMs are being used in Proxmox. :)

I was thinking of going with 2TB disk images with LVM so that each VM can be expanded up to 20TB. Does a bigger size with fewer images vs. smaller sizes with a higher image count make any performance difference? ext4 is what I would use.
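
Growth would not necessarily need new images every time either - if I understand it right, an existing image can also be grown in place, roughly like this (VM ID and names are just an example, and I have not tested it on Ceph yet):

Code:
# On the Proxmox host: grow an existing 2TB image by another 2TB
qm resize 101 scsi1 +2048G

# Inside the VM (assuming the LVM PV sits directly on the disk, no partition table)
pvresize /dev/sdb
lvextend -l +100%FREE /dev/datavg/datalv
resize2fs /dev/datavg/datalv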


mir, I didn't know about iSER. Thanks for the info. Our primary goal is to keep everything in Ceph, so the direct-attached iSCSI probably would not work for us. But I came across iSCSI on Ceph recently. Haven't tried it yet, but it looks promising.
 
Does a bigger size with fewer images vs. smaller sizes with a higher image count make any performance difference?
Hi,
not at this time, but Spirit is working on patches where each disk gets its own IO thread (right now the KVM process has only one IO thread).
After that it should make a big difference - if the data being accessed is spread over different VM disks.
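
Once those patches land, I would expect this to become a per-disk option in the VM config - something like the line below, assuming it ends up exposed as an iothread flag on the disk together with the virtio-scsi-single controller (just a guess at the syntax for now; volume name is a placeholder):

Code:
# Hypothetical example - one IO thread per disk
qm set 101 --scsihw virtio-scsi-single --scsi1 ceph-storage:vm-101-disk-1,iothread=1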

Udo
 
That would be awesome! I can imagine the performance difference it will make in Ceph storage.
 
