Add physical disks to a Linux guest (KVM)

jeebustrain

Renowned Member
Jan 14, 2010
St Louis
I'm trying to add 3 SATA disks to a KVM guest (running Openfiler) so I can have a virtualized NAS. I found this in the FAQ, but I'm a bit confused. The first step says "add it first in the web interface", and that's where I'm stuck. I've got 3x 2TB drives and I cannot figure out any way to either add them to the main storage section or attach them directly to the VM. I tried both unpartitioned and partitioned disks, and it doesn't appear to change anything.

What am I missing here?
 

Yes, this wiki page is wrong. Just add the drives to the VMID.conf file, e.g. with:

Code:
qm set <vmid> -ide# /dev/sdb

And as always when you change the hardware of a KVM guest, do a power off and a start (a reboot is not enough).
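
For example, assuming the three drives show up on the host as /dev/sdb, /dev/sdc and /dev/sdd (hypothetical names - check yours with "fdisk -l" on the host) and the guest has VMID 101, that would be:

Code:
qm set 101 -ide1 /dev/sdb
qm set 101 -ide2 /dev/sdc
qm set 101 -ide3 /dev/sdd

(ide0 is left alone here on the assumption that it is the guest's boot disk.)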
 
Awesome - this worked great.

One question though: does this method allow you to create virtual SCSI disks? The only reason I'm asking is that I currently have 3x SATA disks used inside an Openfiler VM in a RAID 5. Depending on how things come out, I'd like the capacity to be able to add more than 4 disks. I was playing around and noticed that it only did ide0-3.

On another note - I am extremely impressed with this product. After banging my head against the wall for 3 weeks playing with the free commercial alternatives (ESXi, Hyper-V, XenServer) and Xen, this is miles above everything else (for what I need to do). We use a combination of ESX and Hyper-V clusters at work and I understand their need for that (support, mainly - I'm pretty much the only Linux guy there, and I'm just the DBA), but for a guy like me, this is perfect. I almost want to build out a couple of extra boxes just so I could try out the cluster functionality.
 
Better would be using virtio, but as far as I can see the Openfiler kernel does not support this yet, see https://project.openfiler.com/tracker/ticket/900
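
(As a sketch: the same qm syntax accepts other bus types, so a guest whose kernel supports them could get the disk as e.g.

Code:
qm set <vmid> -scsi0 /dev/sdb
qm set <vmid> -virtio0 /dev/sdb

- the virtio line only makes sense once the guest kernel has virtio-blk support.)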

Oops!... beep!... wrong! :confused:
The solution is explained in the very ticket you are referring to. Just follow the instructions and Openfiler will run nicely with virtio net and block drivers. I've been running it for weeks without problems now.
...virtio-blk performance is about double compared to adding a physical disk to the VM using SCSI.
...virtio-net is not a big advantage compared to the e1000 driver on my box.

<edit>
...to be more precise:
The Openfiler kernel already supports virtio (run "cat /proc/partitions" inside the Openfiler VM when virtio is activated for this VM), or check the hardware config displayed in the System tab of the OF admin interface.
Note that virtio disks use a different naming scheme (vda instead of sda for SCSI).
The solution described in the ticket enables the Openfiler admin interface to "see" the vd disks in addition to the conventional sd disks.
</edit>
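
(For illustration only - with one virtio disk attached, "cat /proc/partitions" inside the VM shows vd* entries roughly like this; the sizes here are made up:

Code:
major minor  #blocks  name
 254     0   2097152  vda
 254     1   2097044  vda1

If you only see sd* entries, the VM is not using virtio-blk.)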

regards,
P3X-749
 

I would love to see virtio on the default Openfiler installation. Really interesting that you got double performance - how do you measure this?
 
really interesting that you got double performance, how do you measure this?

...simply by copying some large videos back and forth (at the same time) between a local disk of another computer and the OF VM, then redoing the tests with permutations of the block (virtio|scsi) and net (virtio|e1000) driver settings for the VM.
Using a direct GBit connection to the switch, I got avg. 28 MB/sec (CIFS) and 35 MB/sec (NFS/FTP) when transferring files to/from the share with the virtio block driver, and avg. 15 MB/sec with the SCSI driver.
Using virtio-net vs. the e1000 driver for the network did not make any difference.

I must admit that this is not a rigorous benchmark but rather a real-world scenario, and your mileage may vary because performance will depend on other parts of your infrastructure setup, but for me it identified the config that works best. Also, compared to a bare-metal OF install, performance of OF in a VM is about 50-75 percent, but still sufficient for my purposes.
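
(If someone wants a rough but repeatable number to complement the file-copy test, a simple dd run inside the VM is one option - a sketch, assuming the attached test disk shows up as /dev/vdb and contains no data you care about:

Code:
# sequential write, ~1 GB, flushed to disk before dd reports the rate
dd if=/dev/zero of=/dev/vdb bs=1M count=1024 conv=fdatasync
# sequential read of the same 1 GB
dd if=/dev/vdb of=/dev/null bs=1M count=1024

Repeating this with the disk attached via scsi vs. virtio compares the two block drivers in isolation from the network.)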
 
p3x-749,

Did you, or could you, try bandwidth performance using iperf between your guest and Openfiler? iperf comes pre-installed on Openfiler.
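
For anyone reproducing this: start the listener first on the box under test, then run the client from the other side (the -r and -d flags in the results below do the reverse and bidirectional runs):

Code:
iperf -s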
 
OK, this is for TCP:

client to OF (with virtio-net)
Code:
[root@lc4eb8056658533 ~]# iperf -c 192.168.0.20 -fM -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.20, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.100 port 46286 connected with 192.168.0.20 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 315 MBytes 31.4 MBytes/sec
[ 5] local 192.168.0.100 port 5001 connected with 192.168.0.20 port 34944
[ 5] 0.0-10.0 sec 458 MBytes 45.8 MBytes/sec

Code:
[root@lc4eb8056658533 ~]# iperf -c 192.168.0.20 -fM -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.20, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 5] local 192.168.0.100 port 46293 connected with 192.168.0.20 port 5001
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.20 port 34949
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 178 MBytes 17.8 MBytes/sec
[ 4] 0.0-10.0 sec 210 MBytes 21.0 MBytes/sec
...and client to PVE host:
Code:
[root@lc4eb8056658533 ~]# iperf -c 192.168.0.10 -fM -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.10, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 5] local 192.168.0.100 port 39260 connected with 192.168.0.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 1114 MBytes 111 MBytes/sec
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.10 port 38771
[ 4] 0.0-10.0 sec 1121 MBytes 112 MBytes/sec
[root@lc4eb8056658533 ~]# iperf -c 192.168.0.10 -fM -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.10, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.100 port 39269 connected with 192.168.0.10 port 5001
[ 5] local 192.168.0.100 port 5001 connected with 192.168.0.10 port 38772
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 929 MBytes 92.8 MBytes/sec
[ 5] 0.0-10.0 sec 955 MBytes 95.4 MBytes/sec
 
...oops... forgot... this is for OF with the e1000 NIC:

Code:
[root@lc4eb8056658533 ~]# iperf -c 192.168.0.20 -fM -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.20, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 5] local 192.168.0.100 port 34998 connected with 192.168.0.20 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 382 MBytes 38.2 MBytes/sec
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.20 port 49852
[ 4] 0.0-10.0 sec 205 MBytes 20.5 MBytes/sec

[root@lc4eb8056658533 ~]# iperf -c 192.168.0.20 -fM -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.20, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.100 port 35019 connected with 192.168.0.20 port 5001
[ 5] local 192.168.0.100 port 5001 connected with 192.168.0.20 port 49861
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 93.1 MBytes 9.29 MBytes/sec
[ 4] 0.0-10.0 sec 205 MBytes 20.4 MBytes/sec
 
