Yep, 120GB is enough if you are using it only for the journals. Well, I don't have any Windows VMs on the cluster.
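For the sizing, the rule of thumb from the Ceph docs is osd journal size = 2 * (expected throughput * filestore max sync interval), so 120GB is plenty even with several journals on the drive. In ceph.conf that would look something like this (the numbers are just illustrative):

    [osd]
        # ~100 MB/s per spinner * 5 s default sync interval * 2 = ~1000 MB,
        # so 10 GB per journal already leaves generous headroom
        osd journal size = 10240    # size in MB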
If your cache pool and storage pool OSDs are mixed on the same hosts, managing your crushmap will be a nightmare, so it's better to separate them at the host level (well, if you are...
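To sketch what I mean by host-level separation (host, pool and rule names below are made up, and on recent Ceph releases you can do it from the CLI instead of hand-editing the crushmap):

    # put the cache-tier hosts under their own root
    ceph osd crush add-bucket ssd root
    ceph osd crush move node-ssd1 root=ssd
    ceph osd crush move node-ssd2 root=ssd
    # create a rule that only picks OSDs from that root and assign it to the cache pool
    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd pool set cache-pool crush_ruleset 1    # use the rule id you got back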
Well, you should test the 4-spinner / 1-SSD-journal setup; it might work with a decent datacenter SSD. I've used Samsung 830 and 843 series, but they wouldn't last half a year under our load, so I've switched over to the new 843 Datacenter series and have high hopes for it. Right now the 3 spinner 1...
As the Ceph documentation says (and my experience confirms), you should use only 3 spinners with 1 SSD for the journal. During recovery, more spinners will literally kill that one SSD, and that would be a huge performance hit for your cluster. Also, if one SSD goes down, all the spinners belonging to it...
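To illustrate the 3-spinner/1-SSD layout, here is a minimal ceph.conf sketch, assuming the journal SSD is /dev/sdd split into three partitions (device names and OSD ids are just examples):

    [osd.0]
        osd journal = /dev/sdd1    # journal for the spinner behind osd.0
    [osd.1]
        osd journal = /dev/sdd2
    [osd.2]
        osd journal = /dev/sdd3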
Nope, all of our servers are Intel based. Anyway, I've tried qemu64 as the CPU type without any success so far. I've also tried different versions of SeaBIOS (1.7, 1.7.2, 1.7.3, 1.7.4) without any luck either.
Do you mean the guests should be 32-bit versions?
UPDATE: yes, it works with a 32-bit guest. That's not an option for me, though: I'm migrating already-running guests over to the new Proxmox cluster and they are 64-bit systems. Reinstalling them is not an option. :(
Hi,
since upgrading our Proxmox cluster to the latest version (see info below), I'm unable to boot FreeBSD guest VMs (8.4, 9.2 and 10). The VM hangs while trying to load the kernel after the boot prompt. OpenBSD and NetBSD guests run fine. I've found some info related to a SeaBIOS bug so...
Changing the CPU type from host to kvm64 seems to solve the problem, though I have an identical infrastructure where migration was working around version 3.0. Anyway, I'm glad it is stable again. :)
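For anyone hitting the same thing, the change is a one-liner (VM ID 101 is just an example):

    qm set 101 -cpu kvm64
    # or edit /etc/pve/qemu-server/101.conf and set:  cpu: kvm64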
I'm experiencing the issue with guests using kernel 3.11.3. I've made 15 migrations in a row with an OpenBSD 5.3 guest and it went just fine, except that I lost network connectivity 15 minutes later when I left the VM as is. I had to down/up the interfaces to get the network back up.
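By down/up I mean the usual from inside the guest (eth0 assumed here):

    ifdown eth0 && ifup eth0
    # or, on systems without ifupdown:
    ip link set eth0 down && ip link set eth0 up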
Actually my 5-node cluster is identical in hardware, and when the issue occurs the related KVM process's CPU utilisation looks normal. With further investigation I was able to observe the issue more clearly. When it happens I can ssh into the VM, though the login process is a bit slow. I can...
I've done some further investigation. So far it seems to be related to the guest kernel. With kernel 3.8 or 3.10 in the guest, the issue usually appears right after the first migration, though once I managed 7 migrations in a row and it only happened on the 8th. Using kernel 3.2 I've done 15 migrations...
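For the record, a back-and-forth test like this can be scripted roughly as follows (node names and VM ID are hypothetical):

    # ping-pong VM 101 between pve1 and pve2, pausing between rounds
    for i in $(seq 1 15); do
        ssh pve1 qm migrate 101 pve2 -online && sleep 60
        ssh pve2 qm migrate 101 pve1 -online && sleep 60
    done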
Dear Community,
I'm running 3.1-14 on 5 nodes and using Ceph 0.67.3 as the storage backend for VMs. Doing live migration with an Ubuntu 10.04 (kernel 3.0.0 backport), Ubuntu 12.04 (kernel 3.8.0 backport) or Debian 7 (kernel 3.10 backport) guest, I'm experiencing VM freezes or service...
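For context, the VM disks live on RBD; the storage definition in /etc/pve/storage.cfg looks roughly like this (storage name, pool and monitor addresses are examples):

    rbd: ceph-vm
        monhost 10.0.0.1;10.0.0.2;10.0.0.3
        pool rbd
        username admin
        content images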