[P] File system Passthrough - or are there other alternatives?

cmonty14

Well-Known Member
Mar 4, 2014
343
5
58
Hello!

I have configured multi-disk spanning BTRFS partition including subvolumes.
The subvolumes are used to store typical media files, e.g. music, video.

On the host, any subvolume is mounted to a file system:
Code:
Subvolume     Mount
music         /mnt/music
video         /mnt/video
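For reference, mounts like these are typically made persistent via /etc/fstab using the subvol mount option; a minimal sketch (the UUID is a placeholder for the pool's actual filesystem UUID):

```
UUID=<pool-uuid>  /mnt/music  btrfs  subvol=music,noatime  0 0
UUID=<pool-uuid>  /mnt/video  btrfs  subvol=video,noatime  0 0
```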

Now I intend to use a NAS software running in KVM: OpenMediaVault

Question:
What is the best / efficient approach to utilize the host file system with the guest "OpenMediaVault"?

I found this blog explaining the configuration "File System Pass-Through in KVM/Qemu/libvirt".
Would this be a reasonable approach?

THX
 
1- libvirt isn't used by PVE.

2- Comparing disk access speed between a local and a remote disk, the local disk will always be faster. For example, a RAID controller has 32 Gb/s of bandwidth just on its bus (a PCIe-type connection), and I'd guess that any modern hard disk controller has a similar speed.
You can check the technical specifications of your RAID controller, or of your chipset if you don't use a RAID controller.

And finally, compare that with the network speed between your PVE host and your NAS (OpenMediaVault, Synology, SAN or whatever).

Which one do you think will be faster in terms of data transmission? ....
 
Hello!

I found this blog explaining the configuration "File System Pass-Through in KVM/Qemu/libvirt".
Would this be a reasonable approach?

THX

Yes, this is the only way to access the host filesystem from a guest; it's called virtio 9p.

But it's not implemented in Proxmox.

See the doc here:
http://www.linux-kvm.org/page/9p_virtio


You can try to add the parameters with "args: ....." in vmid.conf.
 
With PVE 3.4 you won't be that lucky, because 9p virtio is disabled in the Red Hat kernel we use there.

On the PVE 4 kernel it should be enabled, and you can follow the directions at: http://wiki.qemu.org/Documentation/9psetup#Starting_the_Guest_directly
The arguments you need to pass to qemu can be added with qm set 101 -args "... your args"
This adds an args: ... line to your VM conf and passes it to qemu on startup.

I haven't tested whether it works, but it should be worth a try if you have a PVE 4 beta testing installation lying around somewhere.
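As an illustration (the share path and mount tag below are made up for this example), the resulting args: line in /etc/pve/qemu-server/101.conf could look something like:

```
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/srv/share -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare
```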
 
hi c.monty.
I know you posted on one of my older threads as well, but I will post my reply here.

You can certainly pass the drives to the VM; it is not even difficult
(at least on Proxmox v4). Here are a couple of links in addition to what has already been provided in this thread.
http://pve.proxmox.com/wiki/Physical_disk_to_kvm
http://forum.proxmox.com/threads/17856-Proxmox-and-HDD-Passthrough

In my case I had to pass several disks in, as my data drives are a RAID1 setup on btrfs, so obviously I had to pass in all the drives in the set (pool).
It works very nicely: as long as all the drives are present in the VM, the pool is assembled and usable.

As said above, I run a Proxmox 4 beta setup on Debian 8. I did it this way to make sure that the host (the Proxmox host) supports btrfs, as I thought of using NFS shares on the host to be accessed by all guest VMs and real PCs on the network.

If you are going the OMV way you do not need that, and can simply set up Proxmox the normal way, via ISO, and just pass the drives through to the OMV VM.
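Per the wiki link above, such disk passthrough boils down to attaching the block device to the VM; a hedged sketch (the VM id and the by-id path are placeholders, not from this thread):

```shell
# attach a whole physical disk to VM 104 as an additional virtio disk
# (use a stable /dev/disk/by-id/... path rather than /dev/sdX in practice)
qm set 104 -virtio2 /dev/disk/by-id/ata-EXAMPLE-DISK-SERIAL
```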

I run OMV v2.1.8 (Stone Burner), fully updated,
with a Linux 3.16 kernel, to support btrfs.
To do that you would set up OMV as usual, do a full update (apt-get update; apt-get upgrade),
then install the "omv-extras plugin":
http://omv-extras.org/simple/index.php?id=how-to-install-omv-extras-plugin

and follow these directions, the first post up to step 5:
http://forums.openmediavault.org/index.php/Thread/7918-btrfs-install/
Everything else is just regular btrfs usage and is up to you.

That will make your OMV setup support btrfs.


I am still in the process of configuring my server, and am thinking of possibly doing an NFS share on the main host to test the speed, as the OMV passthrough is not overly fast on write; it's OK, but not too fast. I want to see whether an NFS share on the main host is faster, or whether the bottleneck is somewhere in my network.
I also came across an OMV plugin that allows you to mount remote shares in OMV and manage them as local storage, so maybe I can NFS-share my data drive on the main host and still manage everything from OMV. We'll see.


FYI >> I would very, very highly recommend that you NOT use btrfs pools on RAW devices. Prepare each drive first by creating a partition table and a primary partition on every drive you want to use in the btrfs pool, then create the filesystem on that partition
with mkfs.btrfs.

I just lost about 2TB of data due to a drive failure that I cannot recover.

I think there is a bug somewhere in how btrfs handles a filesystem on RAW devices.
I can fairly reliably reconstruct the failure scenario when using a raid1 pool on RAW devices, as opposed to using devices with a partition table and partitions.

When I assemble the pool from RAW devices, only the first device in the list acts as expected on device failure.
I.e. if I have sda and sdb in a pool and sdb fails, sda can still be mounted in degraded mode and the data is accessible. If, however, sda fails, sdb is not accessible; it appears empty and unused.

Now, if you assemble the same pool using pre-partitioned devices,
like mkfs.btrfs /dev/sda1 /dev/sdb1 (note the use of the partition, not the device, notation), and make it a raid1 pool,

then everything acts as expected: if any device fails, the other(s) are mountable and usable and all data is accessible. Tested 3 times in several variations of setup and failure conditions. So be careful with btrfs and RAW devices.
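The advice above can be sketched as follows (device names /dev/sdb and /dev/sdc are placeholders; these commands destroy any existing data on the disks):

```shell
# create a GPT label and one full-size partition on each member disk
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
parted -s /dev/sdc mklabel gpt mkpart primary 1MiB 100%

# build the raid1 pool on the partitions, not on the raw devices
mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1

# mount via either member; all members must be present for a normal mount
mount /dev/sdb1 /mnt/pool
```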
 

Hi,

can I upgrade from PVE 3.4 to PVE 4.0 by adapting the repository?

THX
 
I forgot to mention that PVE 4 is in a beta stage and not ready for production yet. Testing and maybe some light home use should be fine, though.

An upgrade guide will be published when we release the stable version.
 
Hi, I would like to share a folder on the host with the guest.

I followed this: http://www.linux-kvm.org/page/9p_virtio

HOST:

qm set 104 -args "if=virtio -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare"

cat /etc/pve/qemu-server/104.conf:

args: if=virtio -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare
balloon: 3000
boot: cdn
bootdisk: virtio0
cores: 4
hotplug: disk,network,usb,memory,cpu
ide2: local:iso/ubuntu-16.04.1-server-amd64.iso,media=cdrom,size=667
.....

In the guest:

modprobe:

9p
9pnet
9pnet_virtio

But when I launch the guest VM, I get this error:

kvm: -k es: Could not open 'if=virtio': No such file or directory
start failed: command '/usr/bin/kvm .....

Any ideas??
 
qm set 104 -args "if=virtio -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare"
you are missing something

the correct first part is:
Code:
-drive file=/images/f15.img,if=virtio

instead of
Code:
if=virtio

edit:
after revisiting, you don't need the first part at all,
so keep only the string beginning with -fsdev
 
Thanks.

I use LVM-thin for the KVM VM's disk. How do I do this?

vm-104-disk-1

Thanks again.
 
What exactly is the problem?
if it is a different one, please open a new thread

thanks
 
How do I specify the VM's LVM-thin disk drive? With "-drive file=vm-104-disk-1,if=virtio"? Do you understand me?

qm list:

VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
104 homeserver.sytes.net running 5120 32.00 18089

lvs:

....

vm-104-disk-1 pve Vwi-aotz-- 32.00g data 83.72
 
qm set 104 -args "-drive file=/dev/pve/vm-104-disk-1,if=virtio -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare"


ipcc_send_rec failed: File too large
WARNING: Image format was not specified for '/dev/pve/vm-104-disk-1' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
kvm: -device virtio-9p: Could not open 'pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare': No such file or directory
start failed: command '/usr/bin/kvm -id 104 -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -daemonize -smbios 'typ
e=1,uuid=54dc868d-9b2a-4d11-9c30-4e141cd78cd2' -name homeserver.sytes.net -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.j
pg' -vga cirrus -vnc unix:/var/run/qemu-server/104.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 'size=1024,slots=255,maxmem=4194304M' -object 'memory-backend-ram,id=ram-node0
,size=1024M' -numa 'node,nodeid=0,cpus=0-3,memdev=ram-node0' -object 'memory-backend-ram,id=mem-dimm0,size=512M' -device 'pc-dimm,id=dimm0,memdev=mem-dimm0,node=0' -object 'memory-backend-ram,id=mem-dimm1,size=
512M' -device 'pc-dimm,id=dimm1,memdev=mem-dimm1,node=0' -object 'memory-backend-ram,id=mem-dimm2,size=512M' -device 'pc-dimm,id=dimm2,memdev=mem-dimm2,node=0' -object 'memory-backend-ram,id=mem-dimm3,size=512M
' -device 'pc-dimm,id=dimm3,memdev=mem-dimm3,node=0' -object 'memory-backend-ram,id=mem-dimm4,size=512M' -device 'pc-dimm,id=dimm4,memdev=mem-dimm4,node=0' -object 'memory-backend-ram,id=mem-dimm5,size=512M' -d
evice 'pc-dimm,id=dimm5,memdev=mem-dimm5,node=0' -object 'memory-backend-ram,id=mem-dimm6,size=512M' -device 'pc-dimm,id=dimm6,memdev=mem-dimm6,node=0' -object 'memory-backend-ram,id=mem-dimm7,size=512M' -devic
e 'pc-dimm,id=dimm7,memdev=mem-dimm7,node=0' -k es -drive 'file=/dev/pve/vm-104-disk-1,if=virtio' -fsdev 'local,security_model=passthrough,id=fsdev0,path=/tmp/share' -device virtio-9p 'pci,id=fs0,fsdev=fsdev0,m
ount_tag=hostshare' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device
'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1118884d7bb4' -drive 'file=/var/lib/vz/template/iso/ubuntu-1
6.04.1-server-amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/pve/vm-104-disk-1,if=none,id=drive-virtio0,forma
t=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-b
ridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=9E:59:AE:78:FA:3B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
ipcc_send_rec failed: File too large
 
Why this error? Can anybody help me, please?

 
qm set 104 -args "-drive file=/dev/pve/vm-104-disk-1,if=virtio -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare"
you don't need the -drive argument

and the error for the filesystem passthrough is:
kvm: -device virtio-9p: Could not open 'pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare': No such file or directory

so I guess there is no directory /tmp/share (you simply have to create it, I guess)
 
Thanks for your response, but it doesn't work either...

root@proxmox:/etc/pve/qemu-server# qm set 104 -args "file=/dev/pve/vm-104-disk-1,if=virtio -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare"
update VM 104: -args file=/dev/pve/vm-104-disk-1,if=virtio -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare
root@proxmox:/etc/pve/qemu-server# ls -ld /tmp/share/
drwxr-xr-x 2 root root 4096 Dec 8 20:11 /tmp/share/
root@proxmox:/etc/pve/qemu-server# qm start 104
kvm: -k es: Could not open 'file=/dev/pve/vm-104-disk-1,if=virtio': No such file or directory
start failed: command '/usr/bin/kvm -id 104 -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -daemonize -smbios 'type=1,uuid=54dc868d-9b2a-4d11-9c30-4e141cd78cd2' -name homeserver.sytes.net -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga cirrus -vnc unix:/var/run/qemu-server/104.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 'size=1024,slots=255,maxmem=4194304M' -object 'memory-backend-ram,id=ram-node0,size=1024M' -numa 'node,nodeid=0,cpus=0-3,memdev=ram-node0' -object 'memory-backend-ram,id=mem-dimm0,size=512M' -device 'pc-dimm,id=dimm0,memdev=mem-dimm0,node=0' -object 'memory-backend-ram,id=mem-dimm1,size=512M' -device 'pc-dimm,id=dimm1,memdev=mem-dimm1,node=0' -object 'memory-backend-ram,id=mem-dimm2,size=512M' -device 'pc-dimm,id=dimm2,memdev=mem-dimm2,node=0' -object 'memory-backend-ram,id=mem-dimm3,size=512M' -device 'pc-dimm,id=dimm3,memdev=mem-dimm3,node=0' -object 'memory-backend-ram,id=mem-dimm4,size=512M' -device 'pc-dimm,id=dimm4,memdev=mem-dimm4,node=0' -object 'memory-backend-ram,id=mem-dimm5,size=512M' -device 'pc-dimm,id=dimm5,memdev=mem-dimm5,node=0' -object 'memory-backend-ram,id=mem-dimm6,size=512M' -device 'pc-dimm,id=dimm6,memdev=mem-dimm6,node=0' -object 'memory-backend-ram,id=mem-dimm7,size=512M' -device 'pc-dimm,id=dimm7,memdev=mem-dimm7,node=0' -k es 'file=/dev/pve/vm-104-disk-1,if=virtio' -fsdev 'local,security_model=passthrough,id=fsdev0,path=/tmp/share' -device virtio-9p 'pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 
'initiator-name=iqn.1993-08.org.debian:01:1118884d7bb4' -drive 'file=/var/lib/vz/template/iso/ubuntu-16.04.1-server-amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/pve/vm-104-disk-1,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=9E:59:AE:78:FA:3B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1


I also tried without the file argument, but it doesn't work...

Thanks for your response.
 
I was able to get this working, thanks to the corrections provided by @dcsapak and @MoxProxxer. Using your example, it would be:
qm set 104 -args "-fsdev local,security_model=passthrough,id=fsdev0,path=/tmp/share -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare"

This produces the following line in the configuration file (/etc/pve/qemu-server/104.conf):
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/backup -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=backupshare

I am using this mechanism for automounting different backup drives to the same /path and not having to constantly modify VM settings for which drive is passed through to the VM. Not sure if there's a better way to do this, but this should work.
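For completeness, once the VM starts with a working virtio-9p-pci device, the share still has to be mounted inside the guest. A minimal sketch (the mount point /mnt/hostshare is made up; the 9p tag must match the mount_tag from the args line):

```shell
# inside the guest: load the 9p modules and mount the tagged share
modprobe 9pnet_virtio
modprobe 9p
mkdir -p /mnt/hostshare
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare
```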
 
