I would like to retrieve Ceph read/write statistics per RBD image on my Proxmox cluster.
I am trying to configure the admin socket for client.libvirt as described at http://docs.ceph.com/docs/luminous/rbd/libvirt/#configuring-ceph,
but I never manage to get the admin sockets created.
I have a Proxmox cluster version 5.0.32 running Ceph Luminous 12.2.
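For reference, this is roughly what I added to /etc/ceph/ceph.conf on the Proxmox nodes, taken from the linked documentation (the paths are the ones from that example; I am not sure whether the Proxmox guests actually run as client.libvirt at all, or as client.admin):

    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/ceph/qemu-guest-$pid.log

Once a socket actually appears I would expect to read the per-image counters with something like ceph --admin-daemon /var/run/ceph/<socket>.asok perf dump, but no socket is ever created.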
Now I have a problem with apt dependencies.
When I try apt-get install pve-qemu-kvm I get the error:
pve-qemu-kvm : Depends: libsnappy1v5 but it is not going to be installed
Okay, but when I try to install...
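My plan for narrowing it down is just the usual apt checks, something like this (nothing Proxmox-specific):

    apt-get update
    apt-cache policy libsnappy1v5     # which version/repository apt would pick
    apt-get install libsnappy1v5      # try the dependency on its own to see the real conflict
    apt-get -f install                # let apt attempt to repair broken dependencies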
Since I updated my Proxmox from 4.4 to 5.0.30, the apply/commit performance counters in the Ceph GUI under OSD always show 0 0 instead of the real values.
The command-line equivalent, ceph osd perf, shows realistic values.
I have cleared the browser cache.
I am using the original Ceph Luminous...
I am running Ceph Luminous on a Proxmox 4.4 cluster and I am very satisfied with the original 12.2 Ceph packages from ceph.com.
Now we are planning to upgrade to Proxmox 5.0. The Proxmox upgrade guide explicitly says:
Replace ceph.com repositories with proxmox.com ceph...
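If I read that correctly, it would mean replacing my current ceph.list on every node with something like this (the exact repository line is only my assumption from the guide):

    echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" > /etc/apt/sources.list.d/ceph.list
    apt-get update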
The latency of Ceph I/O is quite high under KVM.
The people on the Ceph mailing list say latency can be improved by preloading jemalloc into the KVM binary.
Is there a nice and elegant way to prepend something like LD_PRELOAD=$LD_PRELOAD:/usr/lib64/libjemalloc.so.1 to the kvm call in Proxmox...?
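The only workaround I have come up with so far is diverting the kvm binary and putting a small wrapper in its place. This is just a sketch (the jemalloc path is the one from the mailing list, and the diversion would have to be redone after every pve-qemu-kvm update, so I would prefer something cleaner). First divert the real binary:

    dpkg-divert --add --rename --divert /usr/bin/kvm.real /usr/bin/kvm

then install this as the new /usr/bin/kvm (mode 0755):

    #!/bin/sh
    # preload jemalloc, then hand over to the real kvm binary
    export LD_PRELOAD="$LD_PRELOAD:/usr/lib64/libjemalloc.so.1"
    exec /usr/bin/kvm.real "$@"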
We have the newest PVE packages from pve-no-subscription installed and the newest kernels booted.
Since the last update of the Proxmox packages a few days ago we notice that the pve-cluster daemon stops working on different machines in our cluster.
In the syslogs we then find the following...
I am using Proxmox and I would like to implement a fully automatic installation of Debian KVM VMs.
I just want to define a hostname, an IP address and a disk size, and the machine should be provisioned automatically.
The first part is easy: find a free VM number, call qm create and then boot up...
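That first part currently looks roughly like this (simplified; the storage name and bridge are specific to my setup):

    #!/bin/bash
    # provision-vm.sh <hostname> <ip> <disksize-in-GB>
    HOSTNAME="$1"; IP="$2"; DISKSIZE="$3"

    # ask the cluster for the next free VM id
    VMID=$(pvesh get /cluster/nextid)

    # create the VM and allocate an empty disk on our Ceph storage
    # NOTE: $IP is not used yet here; it is meant for the automated installer step
    qm create "$VMID" --name "$HOSTNAME" --memory 2048 --net0 virtio,bridge=vmbr0
    qm set "$VMID" --scsi0 ceph-storage:"$DISKSIZE"
    qm start "$VMID"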
I am using Ceph storage with an MTU of 9000, together with VLANs and bonds.
So my Ethernet cards have MTU 9000 and the bonds also have MTU 9000.
On the different VLANs I have different MTUs ranging from 1500 to 9000.
My vmbr0 has a fixed MTU of 1500 defined in /etc/network/interfaces.
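For reference, the relevant part of my /etc/network/interfaces looks roughly like this (interface names, the bond mode, VLAN tags and addresses are simplified examples, not my real ones):

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        mtu 9000

    auto bond0.100
    iface bond0.100 inet static
        address 10.0.100.11
        netmask 255.255.255.0
        mtu 9000

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        mtu 1500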
Now, when I stop and...
We have issues with the Proxmox kernel and Ceph which lead to kernel panics and high load. The problem is described here: https://firstname.lastname@example.org/msg30347.html
Is the problem already known, and is there a patch?
I am running the most current Proxmox release, 4.1.34, with the new GUI.
From some screenshots in this forum I can see that the CPU stats graph should also show IO delay statistics.
My VMs show no IO delay statistics.
Do I have to enable these statistics somehow? Or should I maybe recreate the RRDs...
We updated yesterday to pve-kernel-4.2.8-1-pve_4.2.8-41_amd64.deb and rebooted.
Now we cannot start rate-limited VMs where the network speed is limited with the tc tool.
The reason is that the kernel module sch_htb cannot be loaded.
Dmesg says: sch_htb: disagrees about version of symbol module_layout...
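My next step would be to compare the module on disk with the running kernel, something like:

    uname -r                          # running kernel version
    modprobe -v sch_htb               # shows which .ko file modprobe tries to load
    modinfo sch_htb | grep vermagic   # kernel version the module was built against
    dpkg -l 'pve-kernel*' | grep ^ii  # installed pve-kernel packages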
I have a Proxmox cluster with 10 machines, all running the same software: Proxmox 4.1-2, the newest release from the pve-no-subscription repository.
3 of the 10 nodes are shown red in the Proxmox web interface instead of green. When I restart pvestatd on these machines they become green for some minutes...
I am running Proxmox 4.0 in a cluster with kernel 4.2.2-1-pve.
My virtual machines use traffic shaping like: net0: virtio=96:89:34:1C:AC:C6,bridge=vmbr0,rate=12
The network is quite fast (10 Gigabit).
I see the strange effect that live migrations sometimes fail and sometimes work...
I am using a cluster with Proxmox 3.4.9.
The kernel on the Proxmox hosts is 2.6.32-37-pve.
I store the VM images in a Ceph Hammer cluster.
Recently I changed some Linux machines from the pure virtio driver to the virtio-scsi driver so I can use the fstrim feature on the VMs from time to time.
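The change per VM was essentially this in the VM config (the storage and disk names are just examples from my setup):

    scsihw: virtio-scsi-pci
    scsi0: ceph-hammer:vm-101-disk-1,discard=on

and inside the guests I run fstrim -v / from time to time.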
We recently switched to pve-kernel-3.10.0-5-pve from the pve-no-subscription repository because the Ceph documentation states that more recent kernels are better for Ceph performance.
Unfortunately live migration is not reliable when migrating VMs between Proxmox hosts running...
We still have big Solaris 10 performance problems after upgrading the Proxmox KVM version.
On older KVM binaries the machines perform much better.
Is there an elegant way or workaround to run some VMs under another KVM hypervisor version/binary than the rest in Proxmox?
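The only idea I have so far is to pin one node to the older pve-qemu-kvm package and keep the Solaris guests on that node, with something like this in /etc/apt/preferences.d/pve-qemu-kvm (the version string is only a placeholder for whatever old version still performed well):

    Package: pve-qemu-kvm
    Pin: version 1.7*
    Pin-Priority: 1001

But that pins the whole node rather than individual VMs, so I am hoping there is something more elegant.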
We have been running some Solaris hosts since Proxmox 3.0 with OK performance for more than a year.
We never rebooted them.
Yesterday we restarted the Solaris boxes under Proxmox 3.3 and since then we have very bad performance. Booting takes 20 minutes.
What changed in the meantime might be the KVM...
I want to write a script that stops and starts a VM in a Proxmox cluster.
When I use qm start I have to know on which cluster node the VM xx is defined and running.
Is there a kind of cluster-wide "qm" command that lets me stop and start a VM when I do not know on which cluster member it is...
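Something like this would work, I think (it relies on the fact that every node's VM configs are visible cluster-wide under /etc/pve/nodes, and on the root SSH access between nodes that a Proxmox cluster sets up anyway), but it feels like a hack:

    #!/bin/bash
    # cluster-qm.sh <vmid> <start|stop>
    VMID="$1"; ACTION="$2"

    # the owning node is encoded in the config path /etc/pve/nodes/<node>/qemu-server/<vmid>.conf
    CONF=$(ls /etc/pve/nodes/*/qemu-server/"$VMID".conf 2>/dev/null | head -n 1)
    if [ -z "$CONF" ]; then
        echo "VM $VMID not found in the cluster" >&2
        exit 1
    fi
    NODE=$(echo "$CONF" | cut -d/ -f5)

    # run qm locally if we own the VM, otherwise via ssh on the owning node
    if [ "$NODE" = "$(hostname)" ]; then
        qm "$ACTION" "$VMID"
    else
        ssh "$NODE" qm "$ACTION" "$VMID"
    fi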