Dual ethernet on VM

chrisalavoine

Hi there,

I'm planning to migrate our physical Windows 2003 server to a VM.

I want to create a small front end VM which will then use an iSCSI initiator from inside the VM to connect to my SAN, where I will have a large LUN for data. The SAN is on a different subnet, of course.

I can't seem to figure out a way to do this in either a Linux (Ubuntu) or Windows VM.

Any pointers most welcome.

Chris.
 
Hi,
I think you have two options:
First - use a separate bridge (like vmbr1, connected to eth1) and attach it as a second network interface on your VM (see the example below).
Second - connect the iSCSI disk to the Proxmox node (e.g. as sdb) and attach the disk to the VM (ide1: /dev/sdb).
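
For the first option, the node's /etc/network/interfaces could look roughly like this (assuming eth1 is the port cabled to the SAN network; the address is only an example):

auto vmbr1
iface vmbr1 inet static
        address 192.168.20.70
        netmask 255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

Then give the VM a second NIC on vmbr1 and set the SAN-subnet address inside the guest.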

But check the performance of file services in a VM! For good IO with a Windows guest you should only use one guest CPU.

Udo
 
Hi Udo,

Thanks for the quick reply.
I like the idea of your first suggestion. This is my current ifconfig on the Proxmox master:

eth0 Link encap:Ethernet HWaddr a4:ba:db:3d:00:6d
inet6 addr: fe80::a6ba:dbff:fe3d:6d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:178748 errors:0 dropped:0 overruns:0 frame:0
TX packets:126824 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:45361455 (43.2 MiB) TX bytes:47528962 (45.3 MiB)
Interrupt:36 Memory:d6000000-d6012800

eth2 Link encap:Ethernet HWaddr 00:10:18:63:0d:58
inet addr:192.168.20.64 Bcast:192.168.20.255 Mask:255.255.255.0
inet6 addr: fe80::210:18ff:fe63:d58/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1021148 errors:0 dropped:0 overruns:0 frame:0
TX packets:566415 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1484720136 (1.3 GiB) TX bytes:94638152 (90.2 MiB)
Interrupt:38 Memory:da000000-da012800

eth3 Link encap:Ethernet HWaddr 00:10:18:63:0d:5a
inet addr:192.168.20.65 Bcast:192.168.20.255 Mask:255.255.255.0
inet6 addr: fe80::210:18ff:fe63:d5a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:129 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8256 (8.0 KiB) TX bytes:492 (492.0 B)
Interrupt:45 Memory:dc000000-dc012800

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:186959 errors:0 dropped:0 overruns:0 frame:0
TX packets:186959 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:78573952 (74.9 MiB) TX bytes:78573952 (74.9 MiB)

vmbr0 Link encap:Ethernet HWaddr a4:ba:db:3d:00:6d
inet addr:192.168.16.252 Bcast:192.168.16.255 Mask:255.255.255.0
inet6 addr: fe80::a6ba:dbff:fe3d:6d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:174030 errors:0 dropped:0 overruns:0 frame:0
TX packets:113988 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:41483668 (39.5 MiB) TX bytes:46158990 (44.0 MiB)

vmtab101i0 Link encap:Ethernet HWaddr fe:b5:73:09:36:29
inet6 addr: fe80::fcb5:73ff:fe09:3629/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:219 errors:0 dropped:0 overruns:0 frame:0
TX packets:24371 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:28636 (27.9 KiB) TX bytes:3073176 (2.9 MiB)

vmtab103i0 Link encap:Ethernet HWaddr de:a0:25:ac:6f:0d
inet6 addr: fe80::dca0:25ff:feac:6f0d/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:13 errors:0 dropped:0 overruns:0 frame:0
TX packets:24241 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:1022 (1022.0 B) TX bytes:3058125 (2.9 MiB)

As you can see, I currently have eth0 slaved to vmbr0, while eth2 and eth3 are multipathed to two Cisco 3560 switches on the SAN network. I guess I'd have to lose the multipath or have another interface installed to create vmbr1, unless you can see another way of doing it?

Kind regards,
Chris.
 
Scrap that, I've gone for option 2.

Connected directly to the iSCSI volume from the Proxmox node, partitioned and formatted the disk, then attached it to the VM using: ide1: /dev/sdf
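
Roughly the steps on the node, for reference (the portal address, target IQN and filesystem type here are placeholders, not the real ones):

iscsiadm -m discovery -t sendtargets -p 192.168.20.100
iscsiadm -m node -T iqn.2001-04.com.example:storage.lun0 -p 192.168.20.100 --login
fdisk /dev/sdf
mkfs.ext3 /dev/sdf1

then the ide1: /dev/sdf line goes in the VM's config.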

The VM is CentOS and it mounted the volume fine.

I'm getting IO of around 150 MB/s, which is a little less than I was getting before. Are there any further steps I can take to improve IO? This is going to be an Oracle database server.

c:)
 
Hi,
Your VM is now CentOS? In your first post you wrote about Windows 2003.
For a Linux VM you can use more than one CPU in the guest without hurting IO performance.

If you use virtio instead of IDE you may see a performance gain.
You can also try the option cache=none.
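
For example, the disk line in the VM config could then look like one of these (the guest needs the virtio drivers for the second variant; recent Linux kernels include them):

ide1: /dev/sdf,cache=none
virtio0: /dev/sdf,cache=none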

150 MB/s doesn't sound too bad - for a VM.

Udo
 
Hi Udo,

Apologies for the confusion. I'm actually creating two new VMs - one CentOS (Oracle DB server) and one Windows 2003 Storage Server.

You mention using multiple CPUs with Linux but not with Windows. How about multiple cores, or does the same rule apply?

Thanks for all your help btw.

c:)
 
Hi,
my tests show no difference between sockets and cores (it's the same; the split is only useful for licensing reasons). If you use more than one guest CPU with Windows, the IO performance drops noticeably. The CPU performance with SMP is good, so whether the overall performance gains or not depends on your usage.
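
E.g. (assuming the sockets/cores options of the KVM config format) these two should behave the same apart from licensing:

sockets: 2
cores: 1

sockets: 1
cores: 2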

Udo
 
Hi Udo,

Last night my new fileserver VM died a horrible death.

I added it to my snapshot backup routine and it tried to back up both the VM front end (10 GB) and the manually added iSCSI volume (/dev/sdf - 1 TB!). This made things grind to a nasty halt, and I eventually had to rebuild the front end from scratch.

Does anyone know of a way to back up just the ide0 drive and not ide1?
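
(One thing I might try, though I don't know whether the option is supported here, is flagging the disk in the VM config with something like ide1: /dev/sdf,backup=0 so vzdump skips it - treat that as a guess.)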

Regards,
Chris.