Bye bye ESXi - I am coming home to proxmox :)

yatesco

Renowned Member
Sep 25, 2009
Hi all,

Long time no see ;)

Anyway, I have dabbled with ESXi and it really is very nice once you get used to the weird placement of menu options. But I really cannot swallow the lack of basic features compared to proxmox, so I am considering coming back.

First though, a few questions.

I have 3 hosts (HP DL380 G6 with 24GB RAM), each with a single flash disk booting ESXi. VM storage is via an HP SAN. The 4GB flash disk probably isn't big enough to install proxmox on (bear in mind the VMs will live on the SAN), so I was going to stick a single (cheapo) SATA disk in there. Am I really going to see any performance improvement by upgrading to a SAS disk? Again, the VMs will live on the SAN. It is only the host that will live on the local disks. (If I was clever I would get the hosts booting off the SAN...)

Secondly, I have a number of VLANs defined at the switch that all the machines connect to. This means that the VMs on any of the hosts can communicate with each other across the VLANs. For example, we have an infrastructureVLAN and a productionVLAN. The production VMs are on one host and they can communicate with the infrastructureVMs (via the infrastructureVLAN) on another host. I assume this is still possible, as VLANs are a standard networking protocol, right? If so, er, how would I do this? The VLANs have numerical IDs from 1 to 7.

Thirdly, each physical machine has two 1Gb NICs which go into different routers, for failover only. Any hints on how to set this up?

Finally - I have been hearing scary things about stability when running block based virtio on Windows machines - is that still a problem? I will be virtualising DBs and would really like to get the most performance out of the VM.

Many thanks, and I look forward to looking in the /backups directory for the VM backups (as opposed to spending thousands of pounds on some terrible third-party backup software which never seems to work :)).

Col
 
If you install on a 4 GB disk (install Lenny and then Proxmox VE) you will have no or only minimal swap (maybe you can live with this) and no space for LVM snapshots (needed for vzdump) - unless you find a way to mount a SAN disk for this.
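
For example, if the SAN can export a small extra LUN to each host, a rough sketch of turning it into backup space could look like this - the device name /dev/sdb, the volume group name and the sizes are just placeholders, adjust them to your setup:

Code:
# assuming the extra SAN LUN shows up as /dev/sdb
pvcreate /dev/sdb
vgcreate sanvg /dev/sdb
# leave free space in the VG if you also want room for LVM snapshots
lvcreate -L 50G -n backup sanvg
mkfs.ext3 /dev/sanvg/backup
mkdir -p /backups
mount /dev/sanvg/backup /backups
# add it to /etc/fstab so it survives a reboot
echo "/dev/sanvg/backup /backups ext3 defaults 0 2" >> /etc/fstab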

Virtio on Windows? Why not. If you are unsure, just go for IDE. The performance gap is not that big, and some users report that IDE is faster anyway - it depends on your settings (see also postings from Udo, pointing out that you should only use one CPU for maximum IO). In any case, do a lot of testing.
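
If you want to test both, the bus type is just the disk line in the VM config under /etc/qemu-server/ (or change it via the web interface) - the storage name 'san', the VMID 101 and the volume name below are only examples, and the # lines are annotations for this post, not file content:

Code:
# /etc/qemu-server/101.conf (excerpt)
# disk attached via IDE - the safe default for Windows guests
ide0: san:vm-101-disk-1
# the same volume attached via virtio instead (install the virtio drivers in the guest first)
virtio0: san:vm-101-disk-1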
 
Secondly, I have a number of VLANs defined at the switch that all the machines connect to. This means that the VMs on any of the hosts can communicate with each other across the VLANs. For example, we have an infrastructureVLAN and a productionVLAN. The production VMs are on one host and they can communicate with the infrastructureVMs (via the infrastructureVLAN) on another host. I assume this is still possible, as VLANs are a standard networking protocol, right? If so, er, how would I do this? The VLANs have numerical IDs from 1 to 7.
Hi,
simply use a configuration like this in /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address  0.0.0.0
        netmask  0.0.0.0

auto eth0.1
iface eth0.1 inet static
        address  0.0.0.0
        netmask  0.0.0.0

auto eth0.2
iface eth0.2 inet static
        address  0.0.0.0
        netmask  0.0.0.0

auto eth0.3
iface eth0.3 inet static
        address  0.0.0.0
        netmask  0.0.0.0

# and so on, up to eth0.7
#

auto vmbr0
iface vmbr0 inet static
        address  10.20.30.40
        netmask  255.255.255.0
        gateway  10.20.30.14
        bridge_ports eth0.1
        bridge_stp off
        bridge_fd 0

# vmbr0 is necessary - due to the VLAN numbering, vmbr1 is skipped
#

auto vmbr2
iface vmbr2 inet manual
        bridge_ports eth0.2
        bridge_stp off
        bridge_fd 0

auto vmbr3
iface vmbr3 inet manual
        bridge_ports eth0.3
        bridge_stp off
        bridge_fd 0

# and so on, up to vmbr7
#
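
One general Debian note: the eth0.X interfaces need 802.1q VLAN support, so make sure the vlan package is installed and the kernel module is available before rebooting into this configuration:

Code:
apt-get install vlan
modprobe 8021q
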
Thirdly, each physical machine has two 1Gb NICs which go into different routers, for failover only. Any hints on how to set this up?
It's possible, but I don't have any experience with that.
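
Something along these lines in /etc/network/interfaces should give you an active-backup bond (untested here - the option names vary a little between ifenslave versions, and the eth0.X VLAN interfaces from above would then move to bond0.X):

Code:
# apt-get install ifenslave first, and remove the separate eth0 / eth0.X stanzas
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode active-backup
        bond_miimon 100

# VLAN 1 now rides on the bond instead of on eth0
auto bond0.1
iface bond0.1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  10.20.30.40
        netmask  255.255.255.0
        gateway  10.20.30.14
        bridge_ports bond0.1
        bridge_stp off
        bridge_fd 0
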
Finally - I have been hearing scary things about stability when running block based virtio on Windows machines - is that still a problem? I will be virtualising DBs and would really like to get the most performance out of the VM.
Virtio for disks works well for me, but I had some trouble with the virtio NIC...

Udo
 
Thanks all - the VLANs are working great. Simply create them through the web console and make sure each vmbrX binds to eth0.X, where X is the ID of the VLAN. Restart and then off you go.
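
For anyone else following this, a quick way to double-check the result on the host:

Code:
brctl show                  # each vmbrX should list the matching eth0.X as a bridge port
cat /proc/net/vlan/config   # shows which VLAN ID sits behind each eth0.X interface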