I wonder if Proxmox supports shared SAS storage? I would like to use storage like a Dell MD3200, which can connect up to 4 hosts directly using SAS HBAs in a multipath setup (up to eight without multipath). It works great as shared storage with solutions like VMware and Citrix XenServer instead of...
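For anyone experimenting with this, a minimal sketch of what a multipath configuration for such a SAS array might look like on the host (the WWID and alias below are made up; replace them with your own values):

```
# /etc/multipath.conf -- illustrative sketch only, values are hypothetical
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  3600a0b80001234560000abcd12345678   # example WWID, use your LUN's real one
        alias md3200_lun0
    }
}
```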
How can that be inflexible when you still have the option to "break" the global data center config if you like and configure the network on specific hosts?
Just an idea for the future.
One of the very nice features of Virtual Iron was that you ran the hypervisors completely diskless. They had a management server which would act as a DHCP/PXE server over a managed network. So when you wanted a new node to run VMs, you simply PXE booted a physical...
Yes, that is the basic concept, but when I have 4-6 servers in a cluster with identical hardware, it would be very nice to have a network configuration at the data center level that makes sure all the nodes in the cluster share the same network settings. So if I set up a bridge or a VLAN I...
I would like to see a change in the naming of VMs so that the actual VM name is displayed first and then the VM number in parentheses. That way the naming scheme will sort all my VMs correctly for me, and it is much easier to manage when you have lots of VMs. I usually use a naming scheme...
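To illustrate the point, a small sketch (the VM names and IDs are made up) showing how name-first labels group related VMs together, while number-first labels only sort by ID:

```python
# Hypothetical VM inventory: (vmid, name) pairs
vms = [(104, "web-02"), (101, "db-01"), (103, "web-01"), (102, "db-02")]

# Number-first labels, as the UI does today
by_id = sorted(f"{vmid} ({name})" for vmid, name in vms)

# Name-first labels, as proposed
by_name = sorted(f"{name} ({vmid})" for vmid, name in vms)

print(by_name)
# name-first sorting keeps db-01/db-02 and web-01/web-02 adjacent,
# regardless of the order the VMIDs were allocated in
```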
Got the same problem here running Ubuntu 11.10 and Kubuntu 11.10. Also, the character encoding is totally wrong in the VNC console, so it is very hard to use. And yes, I have set the keyboard under Datacenter > Options > Keyboard (I have to use Finnish since there is no Swedish). Or maybe it is just that the...
Will this behavior be addressed in the final release?
If running
#pvecm e 1
were to solve the problem, then the final node should be able to run this by itself when doing a clean shutdown. Surely the host must be aware that it is the last running node in the cluster and thereby also know that it...
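For reference, the idea above as a manual step before a clean shutdown (sketch only; only run this when the node really is the last one left in the cluster):

```
# Run on the last remaining cluster node before shutting it down.
# Tells the cluster manager that a single vote is enough for quorum.
pvecm expected 1
```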
Just a simple way to configure this should be enough. It seems there is no "golden setting" here, but you should be able to easily change these settings so you can tune for your cluster/setup, along with a short explanation of why. I am sure that many others would also prefer if their VM...
Yes, just as we have proposed earlier...
As e100 suggested earlier: add it to the manager, where you could easily have some options to change this yourself.
Options such as:
1. If you are using a secure management/storage network, then maybe rsync (with the possibility to set your own options)
2...
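To make option 1 concrete, a sketch of what such an rsync call might look like on a trusted network (the paths, hostname, and flag choices are hypothetical):

```
# Sketch only -- source path, target host, and flags are illustrative
rsync -a --whole-file --inplace /var/lib/vz/images/ backupnode:/var/lib/vz/images/
```

--whole-file skips rsync's delta-transfer algorithm entirely, which is usually faster on a fast local network where bandwidth is not the bottleneck.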
As I mentioned, for insecure networks this might be a good option to sacrifice performance for security. But why use it in a secure environment where you really need all the performance you can get? And by using the older MD4 protocol (which rsync used prior to v3), you do get the benefit of...
Ok, now I get it. The template/ISO directory structure is on the actual NFS server. Makes sense when you think about it, since you can now also specify a storage to host all kinds of files: images, ISOs, etc...
Thanks for clarifying this. :)
When shutting down the whole cluster (all nodes), the last node always hangs at the line
Deactivating VG ::
on the console of that node/host.
Hope that this is something that can easily be fixed in later builds.
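For anyone wanting to dig into this, that console message comes from LVM volume group deactivation during shutdown, which is roughly equivalent to running (the VG name here is hypothetical):

```
# Roughly what the shutdown sequence does; "pve" is an example VG name
vgchange -a n pve
```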
Even after manually mounting the NFS ISO share at /mnt/pve/template/iso, the ISO files do not show up in the web UI. Nor does editing /etc/pve/storage.cfg to mount in the locations help.
Ok, so how do I do this from the web GUI? I can't find a way to manually set the mount point to /mnt/pve/template/iso
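Until there is a way in the GUI, the entry can be written by hand; a sketch of what the /etc/pve/storage.cfg definition might look like (the storage ID, server address, and export path are hypothetical):

```
# Hypothetical NFS storage definition in /etc/pve/storage.cfg
nfs: nfs-iso
        path /mnt/pve/nfs-iso
        server 192.168.0.10
        export /srv/proxmox
        content iso,vztmpl
```

Proxmox mounts NFS storages under /mnt/pve/&lt;storage-id&gt; automatically, so the path line follows from the storage ID.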
I guess and hope that this will be fixed in the next released build of the v2 ISO.
Not secure enough? But why would you need that extra security when transferring data on a secure management/storage network? None of the MDx algorithms are that secure anyway, so should that really matter when the goal should be to optimize for this specific solution/product?
If you were to rsync over...