Hello Alwin,
Please pay attention to my post, I use an external ceph cluster which is up 2 date, and the same ceph cluster which I used (but on different pool) on pve 4.x branch before the 5.x upgrade.
The ceph version on the cluster is 10.2.10-0ubuntu0.16.04.1.
Regarding the pve' the system...
Hi Guys,
I just reinstalled a server after some disk changes, with proxmox, updated to latest version, rejoined the cluster and try to migrate some disk's VM from an external ceph cluster to the new server which is also configured to share a raid volume over nfs to all other servers.
I have...
1. 1 socket and 10 cores - that is the correct way
2. enable numa
3. I am talking about the hypervisor settings and not VMs.
4. be sure to have latest updates of qemu-kvm versions from pve repository
Hi,
I always run on virtio drivers because this is the kind of driver that delivers the max iops on the virtual system, at least if this is what you're hunting for.
My guess is to go for the 4.2.6-1 kernel branch, I think I just recently upgrade to latest 33 release of the 6 minor version and...
dietmar, let me give you an example of use case for such requirements.
Consider the following case - 5 proxmox nodes HA cluster setup, each with 1 x SSD Kingston V300 @ 120G for journal and 1 x Intel 750 400G NVMe drives.
The cluster is still under performance benchmark and testing and testing...
Might I make a comment on the vm.dirty_background_ratio and vm.dirty_ratio ?
I would see these value not being very appropriate for a hypervisor host, but more like vm.dirty_background_ratio = 10 and vm.dirty_ratio =5 as data should be flushed much faster to disk source and avoiding the...
Solution can be found at the following thread post: http://forum.proxmox.com/threads/20372-Linux-guest-problems-on-new-Haswell-EP-processors/page2?
This is for anybody having the nerves to read all this.
@ e100 - tested every possible setting, even ISCSI, hot plug disabled, all same results. But while some are still frying the fish, other stubborn people prefer to have a solid solution to this bug and use virtio (the best performance) without any issue. :D
As a tested solution to this issue...
@ Spirit:
I can't change nor test this on to this systems because are production envs and my main concern is not to have sleepless nights due to this stupid bug, which I already had plenty.
Separately, I have a different env running on Core i7 socket 2011 v1 CPUs and never had encountered...
As said, I returned with more info on the topic in order to shed some light on the gathered research I've manage to do so far.
Considering my last post, I have focused on the way the internal disk scheduler is set, from default values towards changing it to deadline on all VMs. This has...
Yes, All up to date, bios, firmwares, HW raid firmwares.
How can I send a PM message to you on this forum? I can't locate the PM button to contact you on a quick chat/call on private.
I wouldn't post into this forum like chat message, to spam it, and return to the thread once I get some...
Thank you for sharing this very useful information !
My servers are running H730 Mini (MegaRAID SAS-3 3108 [Invader] (rev 02)), which is based, if not mistaking, on the 93xx LSI branch cards.
Although, it doesn't apply. As I described I am running 2 setup, different locations, one with local...
Still, that doesn't mean the issue doesn't exist ;)
There is a sure difference between my 730xd's (running also on local and remote storage) and yours r630's in terms of chipset and used CPUs, but I am having a hard time believing that the CPUs instruction set on my 2630's v3 is so new that it...
Where in the world did you find backports pve kernel ?
Besides the rest, I've got these 2 in source.list:
deb http://download.proxmox.com/debian wheezy pve-no-subscription
deb http://http.debian.net/debian wheezy-backports main
Are you running on the bare metal server directly the stock...
@ spirit - Yes, it is the default value set, hotplug enabled (Disk Network and USB), are you suggesting something ? :D
@ robhost - I said I am not using OpenVZ, as pointed out in that separate link I provided, fully describing the issue. But as on the other hand I am not running only debian to...
I can confirm this, it still happens with exactly the same synoptic that e100 described.
I have 2 sites running dell r730xd servers with 2 x E5-2630 v3 processors and this issue still manifests on high loaded VMs. Numa is enabled, drive and network is set to Virtio and SCSI controller type to...
+1 to Observium !
if you think of expanding your knowledge over the time by not only passing only through "quick install guide" of it and actually really spending time to configure and understand all it's components as well as getting the agent monitoring apps or even write custom you'll get to...
Hi guys,
This is my first post so go easy on me please.
Great work around here, but grab yourself a cup of coffee because this is not going to be a short post...
I'm heavily using Prox (prod&dev) for quite a few years after dumping the complicated and beamy Ovirt project and I've been...
This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
By continuing to use this site, you are consenting to our use of cookies.