Hi Folks,
I'm setting up a (cloud) hosting environment and I'm focused on voice right now. I was thinking of using a Proxmox VE cluster to host PBX containers based on CentOS 5.
I've purchased new Intel-based Supermicro gear for the HNs. Maybe my problem is hardware related? It's quite modern hardware, but PVE seems to install fine on it.
I tried PVE 2.2, and now 2.3 with the same results.
Using the standard CentOS 5 (or 6) template downloaded via the web interface, I can create a CT, assign it a static IP address (venet), and networking seems to work on the first boot.
After a reboot of the CT: No hostname (it assumes the HN hostname); no venet0:0 interface (just not there any more); no resolv.conf (not there either).
The network configuration files all seem to look fine. It looks like it should just work, but if I restart networking inside the CT with '/etc/init.d/network restart', it reports "SIOCGIFFLAGS: No such device" where the venet0:0 should be.
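For reference, here's what I understand a working venet setup inside a CentOS CT is supposed to look like after OpenVZ writes its config (the IP and hostname below are placeholders from my setup, so adjust accordingly). After the reboot, the ifcfg-venet0:0 file is the piece that's missing for me:

```
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=pbx1.example.com

# /etc/sysconfig/network-scripts/ifcfg-venet0
DEVICE=venet0
BOOTPROTO=static
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-venet0:0  (gone after reboot in my case)
DEVICE=venet0:0
ONBOOT=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.255

# /etc/resolv.conf  (also gone after reboot)
nameserver 8.8.8.8
```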
I am using an NFS share on a fast, new ZFS storage server over a 1GbE connection.
Using the web interface while the problem CT is running, if I remove then re-add the venet IP, it comes alive. If I also 'echo "nameserver 8.8.8.8" > /etc/resolv.conf' name resolution also works.
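In case it helps anyone reproduce, the equivalent workaround from the HN shell would look roughly like this. 101 is a placeholder CTID and 192.168.1.50 a placeholder IP; these are the standard vzctl options, and I assume the PVE web interface is doing something equivalent under the hood:

```
# On the HN: re-apply the venet IP, DNS, and hostname and persist
# them into the CT's config file with --save
vzctl set 101 --ipdel all --save
vzctl set 101 --ipadd 192.168.1.50 --save
vzctl set 101 --nameserver 8.8.8.8 --save
vzctl set 101 --hostname pbx1.example.com --save
```

My expectation was that --save makes these survive a CT reboot, which is exactly what isn't happening for me.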
I'm pretty familiar with virtualization with other hypervisors, but not with OpenVZ. Maybe I just don't understand how it's supposed to work...
Questions
========
Am I going down the right road for the ultimate goal (hosting PBXs)?
Should I be using 'veth' instead of 'venet'? My Googling seems to indicate 'venet' is better for this task...
Am I making some obvious error in subnetting or topology? So far I'm just using a single interface on the HN and 192.168.1.x addresses on both the HN and the CT (I'll add the second NIC dedicated to storage after I get it working).
Is a fast NFS share the best plan for shared storage? I can use 10GbE or iSCSI or both if necessary or desirable, but I doubt the load on the storage will warrant either.
Am I just not 'finishing the job' of configuring the CentOS CT? Are additional steps required?
Are there any BIOS settings that might affect any of this? Like I mentioned, it's very new hardware.
I don't remember having any of these issues when I last used PVE for my home network (using KVM and CT guests), and I'm reaching the point of giving up and going back to Xen... That was the heretic in me talking. ;-)
Thanks in advance,
G
p.s. I really like the PVE concept. My gratitude to the developers. Thanks!