Let me attempt to redeem myself.
So I set up the interfaces on the NODES before creating the VE?
For example, if I make a new bridge device, vmbr1, I should specify eth0.5 as the bridged interface? Correct?
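For reference, a per-VLAN bridge on the node might look like this in /etc/network/interfaces (a sketch, assuming VLAN 5 on eth0 and that the vlan package is installed so the tagged interface gets created; repeat with eth0.6/vmbr2 for the next VLAN):

```
# /etc/network/interfaces (sketch): one bridge per VLAN off a trunk port.
# VMs attached to vmbr1 land on VLAN 5, vmbr2 on VLAN 6, etc.
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth0.5
        bridge_stp off
        bridge_fd 0
```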
Did I fall off my rocker, or shouldn't I be able to specify a VLAN per VE/KVM?
For example, if my network is set up in a way that my cluster nodes are plugged into TRUNK ports, how would VE100 be on VLAN 5 and VE101 be on VLAN 6?
Am I expecting too much, or just not understanding this?
Hehe, my rc.local script is starting to look more appealing now.. It actually seems like much less work.
Perhaps as ProxMox starts going down the shared storage roadmap, this functionality will be automatic, so I suppose I won't spend too much more time on it.
OK, I got it semi-working.. No more rc.local script, but the boot order still isn't right. I added S19networking to rc2.d right before iscsi; the problem is that in my /etc/network/if-up.d/ there's a mountnfs script that runs as soon as the networking is online... which means NFS tries to mount...
Interestingly enough, the only place a networking script is linked in any of the /etc/rcX.d's is in rc6.. nowhere else... how does THAT happen?
I'll try to re-add it.. but all 3 of my nodes have the same thing.
For now I did it the rc.local (dirty) way.
mount /dev/sdc1 /var/lib/vz -t ext3 -o _netdev
mount XXXXXXX:/mnt/ofsan1/store1/proxmox-images/ /var/lib/vz/images -t nfs -o rsize=8k,wsize=8k,noatime,hard,_netdev
mount XXXXXXX:/mnt/ofsan1/store1/proxmox-templates/ /var/lib/vz/template -t nfs -o...
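The first two of those mounts could also live in /etc/fstab instead of rc.local; _netdev marks them as network-dependent so they get deferred until networking is up (a sketch, reusing the devices and options from the commands above, NFS server name elided as in the original):

```
# /etc/fstab (sketch): _netdev defers these mounts until the network is up.
/dev/sdc1                                    /var/lib/vz         ext3  _netdev                               0  0
XXXXXXX:/mnt/ofsan1/store1/proxmox-images/   /var/lib/vz/images  nfs   rsize=8k,wsize=8k,noatime,hard,_netdev  0  0
```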
Having some issues setting up an iSCSI target for /var/lib/vz.
The problem is, open-iscsi tries to start before vmbr0 is up and running, therefore fails, and leaves the node stuck at a maintenance prompt.
Unfortunately, I'm not well enough versed in Debian to figure out how to get...
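In case it helps anyone else with the same boot-order problem: the Debian way to push open-iscsi later in the boot sequence is update-rc.d (a sketch; the sequence numbers here are assumptions, pick a start number higher than your networking script's):

```
# Drop the existing rcX.d links for open-iscsi, then re-add them so it
# starts after networking (here: S25 in runlevels 2-5, K80 on shutdown).
update-rc.d -f open-iscsi remove
update-rc.d open-iscsi start 25 2 3 4 5 . stop 80 0 1 6 .
```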
Yes, you've got the idea there.. Sounds all too much like my VMware cluster (which I love, and hate $$$).
However, from experience OpenVZ VE's do not play nice on NFS (quotas), so I think as of right now the only shared storage would be for the QEMU images using NFS (as you suggested).
I wonder if...
I was speaking of the QEMU config that tells the QEMU server to wait X minutes before killing all active KVMs.
However, my Win2k3 KVM also didn't shutdown, so it looks like something else might be up with QEMU (obviously nothing wrong with PVE, though).
I'll see if there are other QEMU users that...
Is that "kill vm" variable set somewhere? I watched my machine this morning for over 5 minutes, it was indeed waiting forever. And I also tested a CentOS VM (Trixbox) on it and it was responding normally for the entire 5 minutes I waited, so the ACPI shutdown seemed to be ineffective.
I witnessed the same behavior on my cluster.. While I don't think it's a problem with PVE, it's probably something specific to how PVE configures QEMU.
I hope someone can find a fix for that, because right now my only workaround is a remote reboot strip... which is less than acceptable for a...
What iSCSI level are you looking for? iSCSI within the VE/VM, or storing the NODES' data and root on iSCSI?
I set up one of my nodes to use iSCSI (open-iscsi) so all the VE/VMs are stored on my NAS, which just gives me the benefit of not relying on cheap hard drives in my NODES..
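For node-level iSCSI like that, the open-iscsi flow is roughly this (a sketch; the portal address and target IQN are placeholders for your NAS):

```
# Discover targets on the NAS, then log in; the session persists and the
# LUN appears as a /dev/sdX block device you can mount at /var/lib/vz.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node --targetname iqn.2008-01.com.example:store1 --portal 192.168.1.10 --login
```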
I do hope to...
OK, disregard the migration of the KVMs, because I just did a simple scp between two nodes and that did the trick.. I thought there was more to it, but wow, that was simple...
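For anyone else doing this by hand, the "simple scp" amounts to copying the VM config plus its disk directory (a sketch; VMID 101 and the target node name are placeholders, and the paths are the PVE defaults of that era):

```
# Copy the KVM config and disk images to the other node.
# The VM should be stopped first so the disk image is consistent.
scp /etc/qemu-server/101.conf root@node2:/etc/qemu-server/
scp -r /var/lib/vz/images/101 root@node2:/var/lib/vz/images/
```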
So.. in KVM land, using NFS to store the QEMU disks would be OK.. the only problem I foresee is that each node would...
We currently use a large production VMware ESX cluster, and just recently I started toying around with ProxMox VE, and I have a few questions..
Any roadmaps heading toward auto-failover support, where nodes can share the same filesystems and take over processing for VMs or VEs if the current...