Creating volumes for a VM

mrjava

New Member
Jan 31, 2025
I have installed Proxmox VE and made a virtual machine with Debian 12 on it. There I installed Nextcloud and Gitea, but I have second thoughts about my drive configuration.
What I did:
1. Created a ZFS pool in Proxmox on a dedicated SSD.
2. Created a disk in this pool and attached it to the VM.
3. Inside the Debian VM, created a partition and an ext4 filesystem (mkfs.ext4) on this disk.
4. Mounted it and used it as file storage (commands sketched below).
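
A rough sketch of what those four steps look like from the shell, assuming the pool is called "tank", the disk shows up in the VM as /dev/sdb, and the data lives under /srv/files (all of these names are made up for illustration):

Code:
# 1. On the Proxmox host: create the ZFS pool on the dedicated SSD (replace the placeholder with the real device id)
zpool create tank /dev/disk/by-id/<ssd-id>

# 2. In the Proxmox GUI: VM -> Hardware -> Add -> Hard Disk, with the ZFS pool selected as storage

# 3. Inside the Debian VM: partition the new disk and create an ext4 filesystem on it
parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdb1

# 4. Mount it and make the mount persistent via /etc/fstab (UUID is more robust than /dev/sdb1)
mkdir -p /srv/files
mount /dev/sdb1 /srv/files
echo "UUID=$(blkid -s UUID -o value /dev/sdb1) /srv/files ext4 defaults 0 2" >> /etc/fstab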

But now I have something like zfs pool -> vm disk -> ext4 partition. Is it okay? It sounds wrong, but I am not really into filesystems, so I am not sure.
Will I be able to restore those disks easily from a backup? Or should I change something in my configuration?

I did think about creating an LXC container instead of a VM, but since my services run in Docker, that did not sound right either.
 
I don't think that is optimal. Why did you feel the need to partition the VM disk rather than just adding another ZFS-backed disk in your VM's hardware configuration screen? You can add as many as you want; you just need to edit /etc/fstab to mount the new disk(s) at boot.
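
A minimal sketch of that multiple-disk approach, assuming two extra virtual disks that show up in the guest as /dev/sdb and /dev/sdc, with hypothetical mount points for the Nextcloud and Gitea data:

Code:
# Inside the VM: every disk added in the hardware screen shows up as another block device
lsblk

# Format each one with a label, then mount by label so the fstab entries survive device reordering
mkfs.ext4 -L nextcloud-data /dev/sdb
mkfs.ext4 -L gitea-data /dev/sdc

mkdir -p /srv/nextcloud /srv/gitea
cat >> /etc/fstab <<'EOF'
LABEL=nextcloud-data  /srv/nextcloud  ext4  defaults  0  2
LABEL=gitea-data      /srv/gitea      ext4  defaults  0  2
EOF
mount -a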

Taking it one step further, I try very hard NOT to store any data on my VM disks. If I install Nextcloud on a VM, I will mount an NFS share and, during the install, point the data directory at the NFS share, which I have running on TrueNAS (or sometimes on Synology). I can do my data management/protection inside of TrueNAS or Synology with snapshots, backups, etc. Yes, I could do this on the VM drive too, but I like to use NVMe drives for VM storage and slower drives for data storage. In the case of my TrueNAS machine it is all enterprise SSDs in mirrored pools that are capable of saturating my 10 GbE network connection. Synology has spinning rust.

Keeping as much of my data on TrueNAS or Synology as possible has several benefits for me. First, backing up my VMs is very quick without that extra data. Second, if I need to reinstall Proxmox for some reason, it can be done in under 30 minutes. I also back up my VMs to a different share on Synology, so a complete reinstall takes like 10-15 minutes from a fresh ISO image. All of my IP addresses are done as reservations in pfSense, so the Proxmox unit gets the same IP automatically, since the MAC addresses on physical hardware don't change. And when I restore my VMs they are restored with the same MAC addresses they had at backup time, meaning they also come back with all the same IP addresses. fstab on my Nextcloud VM will also come back and instantly connect to the TrueNAS share, no problem.

So long story short, my advice is: don't store any files on your VM drives. Use SMB, NFS or iSCSI... whichever you prefer.
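
For the NFS option, a minimal sketch of the guest side, assuming a hypothetical NAS at 192.168.1.50 exporting /mnt/tank/nextcloud (the share itself lives on the NAS, not in Proxmox):

Code:
# Inside the Nextcloud VM: install the NFS client and mount the export that will hold the data directory
apt install nfs-common
mkdir -p /mnt/nextcloud-data
mount -t nfs 192.168.1.50:/mnt/tank/nextcloud /mnt/nextcloud-data

# /etc/fstab entry so the share is mounted again at boot
# 192.168.1.50:/mnt/tank/nextcloud  /mnt/nextcloud-data  nfs  defaults,_netdev  0  0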
 
Thank you for your reply!
Why did you feel the need to partition the VM disk rather than just adding another ZFS-backed disk in your VM's hardware configuration screen? You can add as many as you want; you just need to edit /etc/fstab to mount the new disk(s) at boot.
Well, that was my intention at first. I added a disk in the configuration screen and pointed it to the ZFS storage. Then in the VM I saw /dev/sdb without any partitions, so I had to create one first and then add /dev/sdb1 with ext4 format to fstab. Or could I just write (/dev/sdb ... zfs ...) to fstab?
I am sorry for those questions, this is my first time with both ZFS and Proxmox :)

So long story short, my advice is: don't store any files on your VM drives. Use SMB, NFS or iSCSI... whichever you prefer.
I need to have an NFS server installed somewhere else (like a Synology), not in Proxmox, am I right? If so, I currently do not have the option to do that.
 
But now I have something like zfs pool -> vm disk -> ext4 partition. Is it okay?
Yes.

As long as the "zfs pool" part is a zvol-based block device - which it is, as long as you follow the manual. For the VM this results in a "filesystem on a block device".

The "wrong" way would be to establish a "Directory Storage" on a dataset on ZFS. This would then create a file on ZFS and deliver it to the VM. In this case you get a "filesystem on a virtual block device in a file on a filesystem" - which is not recommended.

Technically, both do work.
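
To make the layering visible, a quick sketch on the Proxmox host side (VM ID 100 and the storage names are assumptions, not from this thread): a zvol-backed disk is listed as a volume of the pool, while a directory storage would instead hold an image file.

Code:
# ZVOL-backed VM disks appear as volumes of the pool
zfs list -t volume

# The VM configuration references that zvol through the storage, e.g. "local-zfs:vm-100-disk-0"
qm config 100 | grep -E 'scsi|virtio|sata'

# A "Directory Storage" on a ZFS dataset would instead keep an image file such as
#   <directory>/images/100/vm-100-disk-0.raw
# which is the "filesystem on a virtual block device in a file on a filesystem" case described above

# Inside the VM both variants just look like a block device carrying the ext4 filesystem,
# which is why the guest's fstab entry uses ext4 (not zfs)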

Disclaimer: this post includes my opinion ;-)
 
I have installed Proxmox VE and made a virtual machine with Debian 12 on it. There I installed Nextcloud and Gitea, but I have second thoughts about my drive configuration.
What I did:
1. Created a ZFS pool in Proxmox on a dedicated SSD.
2. Created a disk in this pool and attached it to the VM.
3. Inside the Debian VM, created a partition and an ext4 filesystem (mkfs.ext4) on this disk.
4. Mounted it and used it as file storage.

But now I have something like zfs pool -> vm disk -> ext4 partition. Is it okay? It sounds wrong, but I am not really into filesystems, so I am not sure.
Will I be able to restore those disks easily from a backup? Or should I change something in my configuration?

I did think about creating an LXC container instead of a VM, but since my services run in Docker, that did not sound right either.
Like @UdoB said, it is OK. I do almost what you did for a NextCloud VM.
@louie1961 what is not optimal for you could be optimal for someone else, for example for simplicity, robustness, or integration. Personally I like to keep the Proxmox Virtual Environment dumb and take a complete backup, knowing that all my data is present in that backup (KISS principle). But this is for a simple infrastructure.
 
Thank you for your reply!

Well, that was my intention at first. I added a disk in the configuration screen and pointed it to the ZFS storage. Then in the VM I saw /dev/sdb without any partitions, so I had to create one first and then add /dev/sdb1 with ext4 format to fstab. Or could I just write (/dev/sdb ... zfs ...) to fstab?
I am sorry for those questions, this is my first time with both ZFS and Proxmox :)
I misunderstood; I thought you only had the one QEMU disk that you installed to and were then partitioning it.
I need to have an NFS server installed somewhere else (like a Synology), not in Proxmox, am I right? If so, I currently do not have the option to do that.
I have both. I have a separate Synology box, and on my main server I have TrueNAS SCALE running in a VM. I use the motherboard SATA ports for the Proxmox drives, and I have one of these installed that I pass through via PCI to the TrueNAS VM. Works flawlessly: https://www.newegg.com/p/17Z-0061-000D5?Item=9SIARE9K9G2395

I ditched the heavy SATA cables in favor of these however: https://www.amazon.com/gp/product/B0C669RZYT

Because it is a PCI passthrough, Proxmox doesn't see this device and TrueNAS controls it directly. All of the drives show up with SMART reporting, the ability to sleep, etc., as if this were a bare-metal installation, which is nice. At $39 for that adapter, it's hard to go wrong. Speeds are very good as well. I have six enterprise SSD drives in 3 groups of mirrored pairs, and it has no problem saturating the 10 GbE network card on sequential reads and writes.
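
For reference, a rough sketch of that kind of controller passthrough on the Proxmox host (the PCI address and VM ID are made up; IOMMU has to be enabled in the BIOS and kernel first):

Code:
# Find the PCI address of the SATA/HBA controller to hand over
lspci -nn | grep -i sata

# Attach the whole controller to the TrueNAS VM (VM ID 105 here) as a passthrough device
qm set 105 -hostpci0 0000:03:00.0

# From then on the host no longer touches the controller; the attached drives,
# SMART data included, are visible directly inside the TrueNAS VM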
 
@louie1961 what is not optimal for you could be optimal for someone else, for example for simplicity, robustness, or integration. Personally I like to keep the Proxmox Virtual Environment dumb and take a complete backup, knowing that all my data is present in that backup (KISS principle). But this is for a simple infrastructure.
To each his own. I made it clear it was just my preference. Yet somehow this feels like you are chastising me.