Search results

  1. Z

    vlan howto

    Let me attempt to redeem myself. So I set up the interfaces on the NODES before creating the VE? For example, if I make a new bridge device, vmbr1, I should specify eth0.5 as the bridged interface? Correct?
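
    A minimal /etc/network/interfaces sketch along those lines (assuming the vlan and bridge-utils packages are installed; the address is only an example and can be replaced with "inet manual" if the host itself needs no IP on that VLAN):

      auto vmbr1
      iface vmbr1 inet static
              address 10.0.5.2
              netmask 255.255.255.0
              bridge_ports eth0.5
              bridge_stp off
              bridge_fd 0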
  2. Z

    vlan howto

    Did I fall off my rocker, or shouldn't I be able to specify a VLAN per VE/KVM? For example, if my network is set up so that my cluster nodes are plugged into TRUNK ports, how would VE100 be on VLAN 5 and VE101 be on VLAN 6? Am I expecting too much, or just not understanding this?
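
    One common way to get there (a sketch, not the only option) is one bridge per VLAN on the trunked NIC, then attaching each VE/KVM to the matching bridge:

      # /etc/network/interfaces fragment: one bridge per VLAN on the trunk port
      auto vmbr5
      iface vmbr5 inet manual
              bridge_ports eth0.5
              bridge_stp off
              bridge_fd 0

      auto vmbr6
      iface vmbr6 inet manual
              bridge_ports eth0.6
              bridge_stp off
              bridge_fd 0

    VE100 would then be attached to vmbr5 and VE101 to vmbr6.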
  3. Z

    open-iscsi automount, with a NFS on top

    Hehe, my rc.local script is starting to look more appealing now. It actually seems like much less work. Perhaps as Proxmox goes further down the shared-storage roadmap, this functionality will become automatic, so I suppose I won't spend too much more time on it.
  4. Z

    open-iscsi automount, with a NFS on top

    OK, I got it semi-working. No more rc.local script, but the boot order still isn't right. I added S19networking to rc2.d right before iscsi; the problem is that in my /etc/network/if-up.d/ there's a mountnfs script that runs as soon as networking is online... which means NFS tries to mount...
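
    To see what actually fires and in which order, it can help to list the relevant init symlinks and if-up hooks (a quick inspection sketch; exact script names vary by Debian release):

      ls -l /etc/rcS.d/ /etc/rc2.d/ | grep -Ei 'networking|iscsi|nfs'
      ls /etc/network/if-up.d/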
  5. Z

    open-iscsi automount, with a NFS on top

    Interestingly enough, the only place a networking script is linked in any of the /etc/rcX.d's is rc6... nowhere else... how does THAT happen? I'll try to re-add it, but all 3 of my nodes have the same thing.
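
    On a stock Debian install the networking init script is normally linked from rcS.d (e.g. S40networking), so re-adding it should amount to recreating that symlink, roughly as below (the sequence number is the usual default; worth comparing against a plain Debian box first):

      # recreate the boot-time symlink by hand
      ln -s ../init.d/networking /etc/rcS.d/S40networking
      # update-rc.d can do it too, but it refuses to touch a script that already
      # has links somewhere, so the stray rc6 link may need removing first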
  6. Z

    open-iscsi automount, with a NFS on top

    For now I did it the rc.local (dirty) way:
      mount /dev/sdc1 /var/lib/vz -t ext3 -o _netdev
      mount XXXXXXX:/mnt/ofsan1/store1/proxmox-images/ /var/lib/vz/images -t nfs -o rsize=8k,wsize=8k,noatime,hard,_netdev
      mount XXXXXXX:/mnt/ofsan1/store1/proxmox-templates/ /var/lib/vz/template -t nfs -o...
  7. Z

    Proxmox VE 0.9beta2 released!

    So, in which release can we expect VZ live migration to work? I noticed it's completely disabled now.
  8. Z

    open-iscsi automount, with a NFS on top

    Having some issues setting up an iSCSI target for /var/lib/vz. The problem is that open-iscsi tries to start before vmbr0 is up and running, therefore fails, and leaves the node stuck at a maintenance prompt. Unfortunately, I'm not well enough versed in Debian to figure out how to get...
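
    One detail that tends to matter here: network-backed filesystems in /etc/fstab should be marked _netdev, so the early boot-time mount pass can skip them instead of dropping to the maintenance shell, and Debian's open-iscsi init script can mount them once the session is up (at least that is how the scripts are meant to handle it). A hedged fstab sketch, with example device names and exports:

      # /etc/fstab fragment
      /dev/sdc1            /var/lib/vz         ext3  _netdev,defaults                            0  0
      nas:/export/images   /var/lib/vz/images  nfs   _netdev,rsize=8192,wsize=8192,hard,noatime  0  0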
  9. Z

    server doesn't shut down

    Interesting, perhaps I'll dist-upgrade one of my nodes to test this. Ideally the KVM would shut down via ACPI, but I can deal with a 3-minute killall for now if this does the trick.
  10. Z

    Beta Testing

    Yes, you've got the idea there. Sounds a lot like my VMware cluster (which I love, and hate $$$). However, in my experience OpenVZ VEs do not play nice on NFS (quotas), so I think as of right now the only shared storage would be for the QEMU images using NFS (as you suggested). I wonder if...
  11. Z

    server doesn't shut down

    I was speaking of the QEMU config that tells the QEMU server to wait X minutes before killing all active KVMs. However, my Win2k3 KVM also didn't shut down, so it looks like something else might be up with QEMU (obviously nothing wrong with PVE, though). I'll see if there are other QEMU users who...
  12. Z

    server doesn't shut down

    Is that "kill vm" variable set somewhere? I watched my machine this morning for over 5 minutes, and it was indeed waiting forever. I also tested a CentOS VM (Trixbox) on it, and it responded normally for the entire 5 minutes I waited, so the ACPI shutdown seems to have been ineffective.
  13. Z

    server doesn't shut down

    I witnessed the same behavior on my cluster. While I don't think it's a problem with PVE, it's probably something specific to how PVE configures QEMU. I hope someone can find a fix for that, because right now my only workaround is a remote reboot strip... which is less than acceptable for a...
  14. Z

    Beta Testing

    What iSCSI level are you looking for? iSCSI within the VE/VM, or storing the NODES' data and root on iSCSI? I set up one of my nodes to use iSCSI (open-iscsi) so all the VEs/VMs are stored on my NAS, which just gives me the benefit of not relying on cheap hard drives in my NODES. I do hope to...
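
    For anyone trying the same thing, the usual open-iscsi steps on the node look roughly like this (the portal IP and target IQN are placeholders):

      # discover targets offered by the NAS portal
      iscsiadm -m discovery -t sendtargets -p 192.168.1.50
      # log in now, and make the login automatic on future boots
      iscsiadm -m node -T iqn.2008-01.com.example:store1 -p 192.168.1.50 --login
      iscsiadm -m node -T iqn.2008-01.com.example:store1 -p 192.168.1.50 --op update -n node.startup -v automatic
      # the resulting block device (e.g. /dev/sdc) can then be formatted and mounted under /var/lib/vz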
  15. Z

    Perfect setup?

    I agree. I used OpenVZ prior to Proxmox, so I am familiar with it and its shortcomings over NFS, and I am looking forward to seeing what 1.0 is going to offer.
  16. Z

    Perfect setup?

    OK, disregard the migration of the KVMs, because I just did a simple scp between two nodes and that did the trick. I thought there was more to it, but wow, that was simple... So, in KVM land, using NFS to store the QEMU disks would be OK. The only problem I foresee is that each node would...
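
    For the record, the "simple scp" for a KVM guest on local storage essentially means copying the disk image plus its config (the VMID, image filename, and node name below are only examples; check where your PVE version actually keeps them):

      ssh root@node2 mkdir -p /var/lib/vz/images/101
      scp /var/lib/vz/images/101/vm-101-disk.qcow2 root@node2:/var/lib/vz/images/101/
      scp /etc/qemu-server/101.conf root@node2:/etc/qemu-server/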
  17. Z

    Perfect setup?

    We currently use a large production VMware ESX cluster, and just recently I started toying around with Proxmox VE, and I have a few questions. Is there a roadmap toward auto-failover support, where nodes can share the same filesystems and take over processing for VMs or VEs if the current...
