Physical partition in VM - how? where?

jerzzz

Member
Dec 25, 2020
Hello, I apologize in advance if this has already been answered somewhere. I have been looking for a solution for several days, including on this forum, and it is probably a simple problem, but I cannot find a solution anywhere... sorry about that.

I have Proxmox VE 6.3 installed on Debian 10.7, and I have an additional partition, sda4, mounted on the Debian host:

Code:
root@prox6node01:~# lsblk
├─sda1   8:1    0   512M  0 part /boot/efi
├─sda2   8:2    0 490.8G  0 part /
├─sda4   8:4    0   378G  0 part /media/DATA
└─sda5   8:5    0    25G  0 part [SWAP]

I would like to have it available in the Debian VM as well, but I can't manage to get it there at all.

Code:
root@debian:~# lsblk
sda      8:0    0    32G  0 disk
├─sda1   8:1    0    31G  0 part /
├─sda2   8:2    0     1K  0 part
└─sda5   8:5    0   975M  0 part [SWAP]
sr0     11:0    1   694M  0 rom

I'm talking about this partition:
Code:
sda4   8:4    0   378G  0 part /media/DATA
Where in the Proxmox GUI (or anywhere else) can I mount this missing partition into the VM?

Best regards to everyone!
 
You can't mount a partition from the Proxmox host into a VM. If you really want to do that, you need to use an LXC, where that is possible, or create an SMB/NFS server on your host and mount that partition in your VM using SMB/NFS shares.
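For example, a minimal NFS setup on the host could look roughly like this (an untested sketch; the network range 192.168.1.0/24, the host IP 192.168.1.10, and the mount point are just placeholders for your setup):

Code:
# on the Proxmox host: install an NFS server and export the partition
apt install nfs-kernel-server
echo '/media/DATA 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# inside the Debian VM: mount the share (replace 192.168.1.10 with your host's IP)
apt install nfs-common
mount -t nfs 192.168.1.10:/media/DATA /mnt/DATA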
 
Will there be any noticeable decrease in performance when sharing the disk via SMB/NFS compared to direct SATA/SCSI/etc. access?
 
Sure, but that is the concept of a VM: it should be fully isolated, so you can't share hardware between the host and the guest.
If you want full physical hardware access inside a VM without a speed decrease, you need to buy a PCIe HBA and another drive, attach that drive to the HBA, and pass the HBA through using PCI passthrough. That way the drive is exclusively accessible by the one VM you passed it through to.
Or you create a virtual HDD stored on your drive and attach that to your VM, but that way it is virtualized and you get overhead too. And you can't share a virtual HDD between different VMs, or between the host and a VM.

If you want to share stuff, you need containers like LXCs, which are only half isolated, or you need to use network shares.
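If you go the HBA route, the passthrough itself is a single command once IOMMU is enabled (the PCI address 01:00.0 and the VM ID 100 below are only examples; look up your own values first):

Code:
# on the host: find the HBA's PCI address
lspci | grep -i -e sas -e sata
# pass the whole controller through to VM 100
qm set 100 -hostpci0 01:00.0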
 
Well, there is another possibility: passing physical devices (disks) into a Proxmox VM. But that device still isn't shared with the host.
Not sure if that helps.
 

Ok, thank you very much. I think I understand, and in any case I now know where to look for solutions.


Hmm, I'm not even sure how to imagine that disk passthrough, since Proxmox itself is running on the Debian host?
 
Disk passthrough is also somewhat virtualized, like a virtual HDD, because it uses, for example, the virtual QEMU SCSI controller.
The KVM hypervisor is like a man in the middle: it uses virtual SCSI controllers so a VM can use a disk connected to your host, and KVM manages everything in between. So it is still not direct physical access.
 
Ok, so why can't the same partition be exposed (or maybe it can?) as a virtual SATA device and mounted in a VM? After all, USB, NICs, and other devices or peripherals can be passed through, and a partition is, from the OS's point of view, like a separate disk...
Of course, I understand the advantages of separating machines, hardware, resources, and so on, but it is strange, because it would be nice if such a POSSIBILITY existed for users who really know what they are doing.
 
Most of the time you can pass a device through to a VM, like USB for example, but that device can't be shared either. If you attach a USB stick, a webcam, or something else to a VM, that device becomes exclusive to that VM; no other VM, nor the host, can use it. A NIC is a special case: if you physically pass a NIC through to a VM, no other VM nor the host can use that NIC. You can use virtualized NICs so that multiple VMs share the same physical NIC, but all of that is virtualized and costs performance because of the overhead.

Why program a virtual hard drive, with a lot of overhead, capable of managing access from different machines, if NFS shares already exist and do the same thing?
I think if you wanted to virtualize a block device like an HDD/SSD so it could be shared between hosts or guests, that would require a lot of complicated machinery to keep everything in sync, and the performance would drop as much as with NFS. I'm not sure it would even be possible: to virtualize a block device you need to emulate a SCSI, AHCI, or IDE interface, and there is no real HDD/SSD that you can connect to multiple computers at the same time, so I would think the SCSI, AHCI, and IDE protocols aren't specified for that task and it can't be done.
And stuff like NFS works on the file level, not on the block level.
Using LXCs you can share that partition, because everything runs on the same host. If you have 10 LXCs, you don't have 11 different computers, as you would with 10 VMs; 1 host with 10 LXCs is basically just 1 computer using 1 kernel, with some of the programs isolated.

If you really care about the performance drop of NFS, using LXCs is an option, as long as you don't need to run Windows/Unix. And if you really do need Unix/Windows, passing through a dedicated drive is always an option.
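For completeness, sharing a host partition with an LXC is just a bind mount point (the container ID 101 and the target path /mnt/DATA below are only examples):

Code:
# bind-mount /media/DATA from the host into container 101
pct set 101 -mp0 /media/DATA,mp=/mnt/DATA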
 
Thank you very much for the answer. What you wrote makes sense to me, especially the example with hard drives. I still have to sort everything out for myself, and I don't promise I won't ask more questions :)
On the other hand, isn't creating virtual directories just something like virtual disks? I haven't grasped all aspects of such solutions yet, but is this perhaps some alternative to CIFS/NFS/SMB?

[Attachment: Screenshot_2020-12-28 prox6node01 - Proxmox Virtual Environment.png]
 
If you create a "Directory" in Proxmox, you are just creating a folder on an existing filesystem:
physical block device <-- physical filesystem <-- folder

If you create a virtual HDD, you are creating a virtualized block device:
physical block device <-- physical filesystem <-- emulated SCSI controller <-- virtual block device <-- virtual filesystem

Each step adds overhead, so you want to keep it as simple as possible.
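As a concrete illustration, an existing mount point can be registered as a "Directory" storage from the CLI (the storage name "data" and the content types below are just examples):

Code:
# on the Proxmox host: add /media/DATA as a Directory storage
pvesm add dir data --path /media/DATA --content images,iso,backup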
 
Not sure if someone is still looking for a better solution: I find the partition using lsblk and then mount it via virtio in the VM config file. Add something like the line below and it will show up in the VM too. The challenge I have is that this works on all my Linux-based systems but doesn't work with Windows. I am still trying to figure out why.

Code:
virtio2: /dev/sdb1

Ideally NFS should work for most cases, but my case was unique: if I am connected to my VPN, my NFS drive will not work. For me, that's a deal breaker. I know people use VPN split tunneling to work around the NFS drive issue, but not all providers support it, and it requires manual changes to the routing.
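Side note on the config line above: names like /dev/sdb1 can change between reboots, so the stable /dev/disk/by-id path is safer (the ID below is a made-up placeholder; list yours with ls -l /dev/disk/by-id/):

Code:
virtio2: /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL123-part1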
 
Windows doesn't like disk passthrough of partitions like your "/dev/sdb1". What should still work is passing through a whole disk, like "/dev/sdb", and then formatting it from within the Windows guest.
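For reference, the whole-disk variant can be attached from the CLI roughly like this (the VM ID 100 and the device ID are placeholders):

Code:
# attach the whole disk (not a partition) to VM 100
qm set 100 -virtio2 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL123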
 
Yeah, that's a bummer. I have data on that drive, so I cannot format it. USB passthrough works, but I want to be able to connect the drive to different VMs (Windows and Linux), so I cannot use USB passthrough.
 
Mounting the same disk in two VMs (or on the PVE host and in a VM) at the same time will corrupt the data on it. If you need to access that disk from different VMs, mount it in a single NAS VM and let the other VMs access it via NFS/SMB.
 
