Thanks for your reply.
I do understand your point. Still, I'm sure there must be a way to eventually sponsor some feature development. That said, the question remains whether you are open to the development of this kind of feature, or stick strictly to the laid-out roadmap. But I guess there are...
We think the design concept of Proxmox is very nice. The clustered management is a key distinguishing feature compared to other solutions. It's open source, enabling us to quickly identify issues and work around / fix blocking situations.
We are evaluating Proxmox as an enterprise virtualisation...
Alright, thanks for the info.
Currently, when logging in to a cluster node using the GUI, one node is 'marked' green and the other red.
Selecting the green node allows us to manage the node settings.
Selecting the red node pops up the login dialog, and credentials are not accepted. A 'data load...
Can someone provide some pointers on how to set up a quorum disk?
We have a SAN and two connected PVE 2.0 beta installs, but get:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
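In case it helps anyone with the same two-node setup: below is a minimal sketch of how a quorum disk is typically added with the cman stack that PVE 2.0 uses. The device name /dev/mapper/qdisk and the label pvequorum are assumptions for illustration; use the actual LUN your SAN exposes to both nodes, and check the qdisk man page for values that fit your environment.

```shell
# Label a small (~10 MB) shared LUN as a quorum disk (run once, on one node).
# /dev/mapper/qdisk is a hypothetical multipath device name.
mkqdisk -c /dev/mapper/qdisk -l pvequorum

# Then reference the label in /etc/cluster/cluster.conf, e.g. (fragment):
#   <cman expected_votes="3"/>
#   <quorumd interval="1" tko="10" votes="1" label="pvequorum"/>
# and restart the cluster services so qdiskd picks it up.
```

This is a config fragment, not a tested recipe; the interval/tko/votes values in particular depend on your fencing and timeout requirements.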
I'm encountering the same after a clean install, when initializing the cluster:
root@yamu:~# pvecm create pvetestcluster
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
As I mentioned before, the module doesn't compile against the kernel headers (several assembler header files are referenced in the C sources but aren't included in the sources, causing compilation to fail).
As I'm faced with regular problems with the e1000 driver for my 82541PI Gigabit Ethernet Controller, I need to use a recent driver from the e1000 dev team to be able to pass some extra parameters to the module when it loads.
I'm using 2.6.32-1-pve kernel. The included e1000 module is...
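For reference, once a module does build, the usual way to pass parameters at load time is an options line in /etc/modprobe.d. A minimal sketch, assuming you want to tune interrupt throttling (the parameter value here is only an example; consult the e1000 driver's README for what your 82541PI actually needs):

```shell
# /etc/modprobe.d/e1000.conf -- parameters applied whenever e1000 loads
options e1000 InterruptThrottleRate=3000

# Reload the module (or reboot) so the options take effect:
#   rmmod e1000 && modprobe e1000
```

This is a config fragment; it obviously only helps once the compile problem against the pve kernel headers is solved.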
Re: [solved] import existing virtual disks, lvm
for the sake of completeness, this is what I did to solve the problem:
lvchange --addtag pve-vm-102 /dev/SAS/vm-102-disk-1
lvchange --addtag pve-vm-102 /dev/SAS/vm-102-disk-2
lvchange --addtag pve-vm-102...
I did define the volume groups:
server1:/etc/pve# cat storage.cfg
I have installed proxmox on a system where I used to run libvirt + kvm.
server1:/etc# pveversion -v
pve-manager: 1.5-1 (pve-manager/1.5/4561)
running kernel: 2.6.32-1-pve
Is Proxmox already capable of dealing with different volume groups? As I have SAS and SATA RAID containers on my current system, I'd like to be flexible enough to put parts of the guest systems (like swap, var and system filesystems) on SAS while keeping mass file storage on SATA. I'm...
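If it helps to make the question concrete: what I'd hope for is two separate LVM storage entries in /etc/pve/storage.cfg, one per volume group. A sketch of what I mean, assuming VG names "SAS" and "SATA" (the storage IDs are made up):

```shell
# /etc/pve/storage.cfg -- hypothetical fragment, one LVM storage per VG
# lvm: sas
#         vgname SAS
#         content images
#
# lvm: sata
#         vgname SATA
#         content images
```

Then each guest disk could be created on either storage, so swap/system could go on the SAS VG and bulk data on the SATA VG. No idea whether the 1.5 GUI exposes this; corrections welcome.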