Directory storage as actual directory

generalproxuser

Instead of passing an entire disk to a VM: is there a way to specify a directory wherever I want, and then specify that the files stored there would be "raw"?

When I create a VM and go to add another hard drive, it gives me the option to choose my storage location, BUT I only get the format options raw, qcow2, or VMware/ESXi disk (vmdk). They are all virtual disk files.

If I wanted to get at the data with the VM powered off, I would have to manipulate the image files.

What I want is to create a VM with two "hard disks". Disk 1 is the boot disk; disk 2 is the storage disk. Inside the VM this second disk appears as a device (/dev/sdX), but it is actually backed by a folder on the Proxmox host. If I create a text file on the storage disk from inside the VM, the text file appears in that folder on the Proxmox host.

Is this possible?
 
Hi,
that's not directly possible, because a VM needs to have its own disks; at least most guest OSes expect this to be the case ;). There are things like virtiofs, but it's not officially supported by Proxmox VE. Why not use a container instead? EDIT: You can also use a network storage and mount it on both host and guest.
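As a rough sketch of the network-storage idea (the server address and paths below are only placeholders), the VM would mount the share like any ordinary NFS client:

apt install nfs-common
mount -t nfs 192.168.1.10:/srv/shared /mnt/shared
# or persistently via /etc/fstab:
# 192.168.1.10:/srv/shared  /mnt/shared  nfs  defaults  0  0

The same export can be mounted on the host (or simply live on the host), so the files stay visible there even while the VM is off.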
 
@Fabian_E

Thanks for the tips. I have questions on those though.

I would use a container if I could, but some of the guest setups I want to use/test are not advised to run inside containers. OMV and Nextcloud are the two biggest candidates for that right now.

Using network storage would work, though I am trying to consolidate/centralize the hardware. This is for a home net/lab infrastructure, so I am trying to shrink my physical footprint as much as possible.

On the subject of containers, how would I go about storing data on the Proxmox host's directory structure from within the container, and still have that data accessible in case the container is unavailable? So far I have treated containers as systems that handle network traffic more than data files, i.e. routers, Pi-hole, home automation, etc.

In the end I am more concerned with being able to back up the data than whole VMs/containers.
 
I would use a container if I could, but some of the guest setups I want to use/test are not advised to run inside containers. OMV and Nextcloud are the two biggest candidates for that right now.
Of course some things are better done as VMs. But that also brings the downsides of VMs with it.

Using network storage would work, though I am trying to consolidate/centralize the hardware. This is for a home net/lab infrastructure, so I am trying to shrink my physical footprint as much as possible.
You don't need a different machine for this. Guest or host can export the network storage via a virtual network that both can access.

On the subject of containers, how would I go about storing data on the Proxmox host's directory structure from within the container, and still have that data accessible in case the container is unavailable? So far I have treated containers as systems that handle network traffic more than data files, i.e. routers, Pi-hole, home automation, etc.
You would just have to mount the container's filesystem on the host, e.g. using pct mount <ID>. There are also bind mounts, which make the data accessible on the host and in the container at the same time.
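For example (the container ID and paths here are made up), a bind mount can be added with:

pct set 101 -mp0 /srv/hostdata,mp=/mnt/hostdata

which makes the host directory /srv/hostdata appear at /mnt/hostdata inside container 101; the files remain directly accessible on the host even while the container is stopped.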

In the end I am more concerned with being able to back up the data than whole VMs/containers.
For backups, Proxmox VE is designed to treat a VM or container as a unit. You can exclude single disks from backups, but there's not really an integrated way to back up single disks. Shameless advertisement: with Proxmox Backup Server you could use proxmox-backup-client for file-level backups, but it's not tightly integrated either, because it needs to be set up inside the VM.
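A rough sketch of such a file-level backup run from inside a guest (the user, host and datastore names are placeholders):

proxmox-backup-client backup appdata.pxar:/srv/appdata --repository backupuser@pbs@192.168.1.20:mydatastore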
 
You don't need a different machine for this. Guest or host can export the network storage via a virtual network that both can access. <-- This sounds like an answer. I have not found any documentation (yet) to help me get this set up.


You would just have to mount the container's filesystem on the host, e.g. using pct mount <ID>. There are also bind mounts, which make the data accessible on the host and in the container at the same time. <-- Not sure how I missed this (bind mounts). Gonna give this a try.
I am reviewing the bind mounts doc page and going to try to figure that one out.

When you stated "export the network storage via a virtual network", I am trying to work out what you mean. If you have any documentation link, that would be great.

Right now I have my OpenWrt router container working, and any CTs/VMs I create are automatically assigned IP addresses via its DHCP server. CT->VM, VM->CT, and CT/VM->Proxmox host network communication all work; I can access VMs/CTs from other VMs/CTs. Unless you are referring to some other virtual network, I am all ears.

Many thanks so far.
 
You've got your vmbr0 bridge. That is an internal network that can handle 10Gbit or more (depending on how powerful your CPU is). It's like a virtual switch, and each VM/LXC you attach to that bridge is part of that virtual network. So you can manually configure your host to act as a NAS and share folders of the host via NFS/SMB to your guests. Or you could create an LXC/VM that acts as a NAS, like TrueNAS/OpenMediaVault, sharing folders to other VMs.
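As a rough example of the host-as-NAS option (the subnet and path are just placeholders), a plain NFS export on the host would look like:

apt install nfs-kernel-server
echo '/srv/shared 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

Every guest attached to vmbr0 on that subnet can then mount the export, and the data lives in /srv/shared on the host whether the guests are running or not.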
 
@Dunuin

Thanks. I was leaning toward creating a container whose purpose was to be just a server for nfs, tftp etc. Looks like I will still have to use bind mounts with that setup though.
 
Yes, if you really want to share your host's folders via SMB/NFS from a NAS LXC, then bind-mounting the folders into that LXC would be the way to go. But if you use unprivileged LXCs, bind-mounting isn't easy, because you need to manually edit the user/group remapping for each user, or you won't be able to access those folders from inside the guest because of missing user privileges. How to do that is described here.
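A rough sketch of what such a remapping can look like (the container ID and uid/gid 1000 are only examples): in /etc/pve/lxc/<ID>.conf you add the bind mount plus an idmap that passes uid/gid 1000 straight through,

mp0: /srv/shared,mp=/mnt/shared
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

and allow root to map that ID by adding "root:1000:1" to /etc/subuid and /etc/subgid. Files owned by uid 1000 in /srv/shared on the host are then writable by uid 1000 inside the container.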

If you just want a shared folder that all your VMs can access but you don't care where that folder is actually stored, you could also just create a VM, attach a big virtual disk, and share a folder on that virtual disk using NFS/SMB.
But I would prefer the LXC option.
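For example (the VM ID, storage name and size are placeholders), an extra disk could be attached with:

qm set 200 --scsi1 local-lvm:100

which allocates a new 100 GiB volume on the local-lvm storage for VM 200; the share itself would then be configured from inside that VM.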
 
@Dunuin

Sharing a folder from inside a VM's disk image is what I want to avoid. Basically I want to be able to access the data whether the VM/CT is powered on or not, especially if it is powered off.

I would like to avoid data files being stored inside the virtual disk files if at all possible. :)
 
