
Things that need to change



selund
05-18-2010, 08:09 PM
Hello, I'm using PVE on a couple of my own servers, but I'm about to put PVE into production at work. There are a couple of things that I would like to be different.

1) If you have a cluster with shared storage, it should be possible to start a "missing" guest on another host if one of the hosts goes down.

2) It would help a lot to be able to add a VLAN (and a bridge) without having to restart a host (yes, I know it can be done manually, but that's not a nice way to do it). I can understand that removing a VLAN can be a bit more tricky.

3) For our deployment of hosts we use FAI, and the installation is done in an isolated VLAN. One nice thing would be to be able to change (again, via the GUI) which vmbr/VLAN a NIC on a virtual server is connected to.

4) The last thing I would like to see is the ability to allocate a logical volume to an OpenVZ guest. As far as I can understand, this isn't a PVE restriction, but rather an OpenVZ problem.


I know this isn't something that's easy to do, nor do I expect it to be done. These are just my thoughts.

tom
05-18-2010, 09:23 PM
Hello, I'm using PVE on a couple of my own servers, but I'm about to put PVE into production at work. There are a couple of things that I would like to be different.

1) If you have a cluster with shared storage, it should be possible to start a "missing" guest on another host if one of the hosts goes down.

see http://pve.proxmox.com/wiki/Roadmap


2) It would help a lot to be able to add a VLAN (and a bridge) without having to restart a host (yes, I know it can be done manually, but that's not a nice way to do it). I can understand that removing a VLAN can be a bit more tricky.

It's already been requested by others; we'll take a look after 2.x is released.
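
In the meantime, the manual route is only a handful of commands on the host; a rough sketch, assuming the physical NIC is eth0 and using VLAN 404 / bridge vmbr404 purely as example names:

  # load the 802.1q module if it is not already loaded
  modprobe 8021q
  # create the tagged interface eth0.404 and bring it up
  vconfig add eth0 404
  ip link set eth0.404 up
  # create the bridge, attach the VLAN interface to it, bring it up
  brctl addbr vmbr404
  brctl addif vmbr404 eth0.404
  ip link set vmbr404 up

To survive a reboot the same bridge/VLAN still has to be added to /etc/network/interfaces, but nothing above requires restarting the host.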


3) For our deployment of hosts we use FAI, and the installation is done in an isolated VLAN. One nice thing would be to be able to change (again, via the GUI) which vmbr/VLAN a NIC on a virtual server is connected to.

This works, see http://pve.proxmox.com/wiki/Network_Model
What's the issue on your side?


4) The last thing I would like to see is the ability to allocate a logical volume to an OpenVZ guest. As far as I can understand, this isn't a PVE restriction, but rather an OpenVZ problem.

Yes, OpenVZ does not support multiple storage types. We expect a working container solution in the mainline kernel, and then we can use multiple storage types here as well.
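
Until then, one workaround that is sometimes used is to mount the logical volume on the host and bind-mount it into the container from a per-container mount script. This is only a sketch, assuming container ID 101 and that the LV is already mounted on the host at /mnt/lv-data; the target directory must already exist inside the container:

  #!/bin/bash
  # /etc/vz/conf/101.mount - executed by vzctl whenever container 101 is mounted
  source /etc/vz/vz.conf        # global settings, e.g. the VE_ROOT template
  source ${VE_CONFFILE}         # this container's own configuration
  # bind-mount the host directory backed by the LV into the container root
  mount --bind /mnt/lv-data ${VE_ROOT}/srv/data

It's not real per-container storage, but it does get LVM-backed data inside an OpenVZ guest today.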


I know this isn't something that's easy to do, nor do I expect it to be done. These are just my thoughts.

Thanks for the feedback!

selund
05-18-2010, 09:55 PM
This works, see http://pve.proxmox.com/wiki/Network_Model
What's the issue on your side?


The issue is that I can't change which vmbr a NIC is connected to. I have to delete the old NIC, then add a new one on the new vmbr.

For instance, FAI is on vmbr129, and I want the server I've just installed to be in VLAN 404, which is on vmbr404. So I have to delete the interface connected to vmbr129 and add a new NIC connected to vmbr404.
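
Until the GUI can do it, one way around the delete-and-re-add dance might be to edit the VM's config file by hand while the guest is stopped. This is only a guess at the 1.x config layout, assuming the NIC entry is keyed by the bridge number (so a "vlan129:" line means "attached to vmbr129") and using a made-up VMID of 104:

  # with the guest stopped, repoint the NIC from vmbr129 to vmbr404;
  # the MAC address on the line stays the same
  sed -i 's/^vlan129:/vlan404:/' /etc/qemu-server/104.conf
  qm start 104

If the config format differs, the same idea applies: change the bridge reference in /etc/qemu-server/<vmid>.conf rather than removing and re-adding the NIC.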

selund
05-18-2010, 10:49 PM
Found a way to start "missing" guests manually: keep a copy of /etc/qemu-server/ on the other server, and copy the files into /etc/qemu-server/ if one of the hosts becomes unavailable. I'm aware that I can't delete the VM instances from the web interface when the failed host returns, since that would also delete the storage for the guest.
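
To keep that copy from going stale, a cron'd rsync on each host works; a rough sketch, assuming the hosts can reach each other as root over SSH, and with the hostname pve1 and the backup path as example names only:

  # run from cron on the second host: pull pve1's KVM configs every few minutes
  # so they can be copied into /etc/qemu-server/ if pve1 ever goes down
  rsync -a --delete root@pve1:/etc/qemu-server/ /root/qemu-server.pve1/

Once the failed host returns, removing the copied .conf file by hand should avoid touching the shared storage, unlike deleting the VM through the web interface.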