partitioning planning suggestions needed

bnapalm (Guest)
Hello! First of all, let me say thank you for such a great product!

I have to install Proxmox VE on a number of servers, and when I first tried the install I got the error "unable to create data volume". After a bit of searching, I found out this was because the RAID volume was too big (over 2TB), meaning I have to change the RAID configuration. That got me thinking about the best way to lay things out, so I would really appreciate some suggestions, since I'm pretty new to virtualization.

Here is some information that I need to consider:

  • There are 12 Dell PowerEdge servers in total, all with two CPU sockets, each CPU quad-core, and 24 GB of RAM per server;
  • Each server except one has internal hard disks, ranging from 2x2TB HDDs up to 6x2TB HDDs. The exception has no internal HDDs, but has an external 24TB HDD array wired to it, so it basically acts as internal storage;
  • We plan to run KVM VMs only;
  • About half of the servers currently run vSphere (free edition) and are planned to migrate to Proxmox someday; the others are not yet used, but are planned to get Proxmox;
  • We use RAID5 when possible.
I understand that for virtualization it would be better to have shared storage, but unfortunately that is not possible right now, for financial and other reasons. We plan to bypass some of Proxmox's storage management and expose the VM data partition(s) as local NFS storage, for better options to migrate them elsewhere if needed.
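For context, the rough shape we have in mind for registering such an export on a node would be something like this (a sketch only; the storage ID, server address and export path are placeholders, not our real setup):

    # Sketch only: storage ID, server address and export path are placeholders.
    # Registers an NFS export as Proxmox storage for VM disk images and ISOs.
    pvesm add nfs vmdata-nfs \
        --path /mnt/pve/vmdata-nfs \
        --server 192.168.1.50 \
        --export /srv/vmdata \
        --content images,iso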


Any thoughts on initial partitioning or any other suggestions would be more than welcome!

Thank you in advance.

edit: one thing I would like to know is how much space Proxmox VE needs for itself, considering we only plan to run KVM. I guess it would make sense to install Proxmox on a partition slightly larger than needed and dedicate the rest to VMs and data.
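For anyone who wants to measure this on an existing install, standard tools already show what the base system occupies (nothing Proxmox-specific is assumed here):

    # Show what the base installation actually uses (standard tools only).
    df -h /                    # overall root filesystem usage
    du -sh /usr /var/lib/vz    # system files and the default local VM/ISO store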
 
Hello again!

Unfortunately, no one has replied yet, so I configured the RAID so that one virtual drive is 100 GB and the rest (2TB+) is another virtual drive. Can anyone tell me whether this configuration has any shortcomings when using KVM exclusively?
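In case the details help: on the big (2TB+) virtual drive the plan is to create an LVM volume group for the VM disks, roughly like this (a sketch; /dev/sdb and the names "vmdata" and "containers" are placeholders for the actual device and storage):

    # Sketch: /dev/sdb and the names "vmdata"/"containers" are placeholders.
    pvcreate /dev/sdb            # mark the second RAID virtual disk for LVM
    vgcreate vmdata /dev/sdb     # volume group that will hold the VM disks
    pvesm add lvm containers --vgname vmdata --content images   # register it in PVE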

Thank you in advance!
 
Hi,
100 GB is enough for a PVE installation (and leaves you enough space for ISO images too).

Regarding RAID-5: I/O is often a bottleneck with virtualization, so perhaps you should try RAID-10.

Virtualization is fun with shared storage (and live migration). Perhaps you should try DRBD? (AFAIK it is only fast with 10Gb Ethernet or InfiniBand, though.)
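To get a feeling for it, a minimal two-node DRBD resource definition looks roughly like this (a sketch; the hostnames, IPs and backing devices are placeholders):

    # Sketch of /etc/drbd.d/r0.res - hostnames, IPs and disks are placeholders.
    resource r0 {
        protocol C;                  # synchronous replication
        on nodeA {
            device    /dev/drbd0;
            disk      /dev/sdb1;     # local backing partition
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on nodeB {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }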

Udo
 
Hi Udo, thanks for your answer!

Yes, RAID-10 would be good, but unfortunately I am not sure it's possible, since it effectively halves the available storage, turning a 6x2TB array into only a 6TB array. We might actually need more space than that on one server.

I had taken a look at DRBD, but it has the same problem as RAID-10: it turns two servers into one, which we cannot afford.


To tell the truth, I am still not quite sure which advantages I gain and which I lose by running several Proxmox servers in a cluster with local (not shared) storage.
If I understand correctly, I can still migrate VMs from one machine to another, but I guess I lose live migration (I will have to power off the VM I want to move). What I also don't understand: will the whole VM image be moved across the machines when migrating? Or will the VM run on one server's CPU/RAM while still being stored over the network on the original storage? (I'm guessing not, but I want to make sure.)

Thanks!
 
Hi,
in short: a cluster without shared storage is "only" for easier administration (one interface shows all VMs). OK, a little bit more (synced ISO images...), and you can move VMs offline (the data is rsynced).
With shared storage you can move KVM VMs online, which is much faster. OpenVZ containers will still be rsynced because their storage is local. A KVM VM only transfers its memory and processor state, which takes just seconds or minutes (depending on RAM usage, how busy the VM is, and how fast the connection/CPU is).
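As a concrete illustration (the VM ID and node name below are made up):

    # Offline migration: the VM must be stopped; its disk data is copied over.
    qm migrate 101 node2
    # Online migration: needs shared storage; only RAM and CPU state move.
    qm migrate 101 node2 --online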

The best thing is to play with a test system to see the advantages.


Udo
 
Thanks again for the reply, Udo - it really helps me understand how Proxmox works.

I have run into another problem, though. I defined local LVM storage on each node and tried to offline-migrate a VM. Unfortunately, it doesn't work and just prints this:

Feb 20 13:55:15 Failed to sync data - can't migrate 'containers:vm-101-disk-1' - storage type 'lvm' not supported

Does this mean that offline migration doesn't work with LVM volume groups? Both hosts have an identically named LVM volume group, although the two are of different sizes (which, as I understand it, shouldn't matter). Is there any workaround for migrating VMs that use LVM, or should I choose a different storage model?
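In case someone finds this later, here is one manual workaround I am considering (a sketch, not a tested recipe; the VM ID, VG name and node names are placeholders, and it assumes the clustered /etc/pve filesystem of PVE 2.x):

    # Sketch, not a tested recipe: IDs, names and paths are placeholders.
    # 1. Copy the raw LV into an image file on local directory storage:
    qemu-img convert -O qcow2 /dev/vmdata/vm-101-disk-1 \
        /var/lib/vz/images/101/vm-101-disk-1.qcow2
    # 2. Copy the image to the same path on the target node:
    rsync -av /var/lib/vz/images/101/ node2:/var/lib/vz/images/101/
    # 3. Move the VM config to the target node and edit the disk line so it
    #    points at the new qcow2 file instead of the LVM volume:
    mv /etc/pve/nodes/node1/qemu-server/101.conf /etc/pve/nodes/node2/qemu-server/101.conf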


Thanks in advance!