I would like to retrieve Ceph read/write statistics per RBD image on my Proxmox cluster.
I am trying to configure the admin socket for client.libvirt as described at http://docs.ceph.com/docs/luminous/rbd/libvirt/#configuring-ceph,
but I never manage to get the admin sockets created.
Yes, I am using the repos from ceph.com, and for now I am not willing to downgrade the original, up-to-date Ceph packages to the older versions compiled by the Proxmox staff.
I also do not see a good reason to ship a custom Ceph build with Proxmox. This made sense before Ceph released the official...
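For reference, the configuration I am trying to get working is roughly the following sketch, based on the linked documentation (I have not verified these exact paths on Proxmox):

```ini
; /etc/ceph/ceph.conf on the hypervisor (sketch, following the linked docs)
[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
```

If the socket were created, the per-image counters should then be readable with ceph --admin-daemon /var/run/ceph/<socket>.asok perf dump, but no .asok file ever appears.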
I have a Proxmox cluster, version 5.0.32, running Ceph Luminous 12.2.
Now I have a problem with apt dependencies.
When I try apt-get install pve-qemu-kvm, I get the error:
pve-qemu-kvm : Depends: libsnappy1v5 but it is not going to be installed
Okay, but when I try to install...
Since I updated my Proxmox from 4.4 to 5.0.30, the apply/commit performance counters in the Ceph GUI under OSD always show 0 0 instead of the true values.
The command-line equivalent, ceph osd perf, shows realistic values.
I have cleared the browser cache.
I am using the original Ceph Luminous...
I am running Ceph Luminous on a Proxmox 4.4 cluster, and I am very satisfied with the original 12.2 Ceph packages from ceph.com.
Now we are planning to upgrade to Proxmox 5.0. The Proxmox upgrade guide explicitly says:
Replace ceph.com repositories with proxmox.com ceph...
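For context, our current ceph.com repository configuration looks like this (using stretch as the suite is an assumption based on Proxmox 5 being Debian 9 based):

```
# /etc/apt/sources.list.d/ceph.list (our current setup, not the Proxmox mirror)
deb https://download.ceph.com/debian-luminous/ stretch main
```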
The latency of Ceph I/O is quite high under KVM.
The folks on the Ceph mailing list say latency can be improved by preloading jemalloc into the kvm binary.
Is there a nice and elegant way to prepend something like LD_PRELOAD=$LD_PRELOAD:/usr/lib64/libjemalloc.so.1 to the kvm call in Proxmox ...
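One way this is often done is a small wrapper script that exports LD_PRELOAD and then execs the real binary. The sketch below only demonstrates the mechanism, using a stand-in binary in a temporary directory; on a real Proxmox host the wrapper would have to replace /usr/bin/kvm (e.g. via dpkg-divert), and the jemalloc path from above is an assumption that depends on the distribution:

```shell
# Demonstration of the wrapper idea with a stand-in "kvm" binary.
tmp=$(mktemp -d)

# Stand-in for the real kvm binary: it just reports its environment.
cat > "$tmp/kvm.real" <<'EOF'
#!/bin/sh
echo "LD_PRELOAD=$LD_PRELOAD"
EOF
chmod +x "$tmp/kvm.real"

# The wrapper: prepend jemalloc to LD_PRELOAD, then exec the real binary.
# (/usr/lib64/libjemalloc.so.1 is the path from the mailing list; adjust it.
# If the library is absent the loader only prints a warning on stderr.)
cat > "$tmp/kvm" <<EOF
#!/bin/sh
LD_PRELOAD="\${LD_PRELOAD:+\$LD_PRELOAD:}/usr/lib64/libjemalloc.so.1"
export LD_PRELOAD
exec "$tmp/kvm.real" "\$@"
EOF
chmod +x "$tmp/kvm"

out=$("$tmp/kvm")
echo "$out"   # LD_PRELOAD=/usr/lib64/libjemalloc.so.1
rm -rf "$tmp"
```

On a real host one would divert /usr/bin/kvm to kvm.real and install the wrapper in its place, but I do not know whether the Proxmox tooling tolerates that across package updates.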
Okay, now I was able to capture a trace of the pmxcfs process when it dies.
I think the reason for the crash is a corrupted cluster database?
I do not remember us doing anything bad. We did not add or remove hosts from the cluster. The problem has existed since the update from Proxmox 4.2 to Proxmox 4.3...
We still have the issue.
The daemon pmxcfs stops running randomly on our machines.
We found the following additional debug information:
pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled)
Active: failed (Result: signal)...
We have the newest PVE packages from pve-no-subscription installed and the newest kernels booted.
Since the last update of the Proxmox packages a few days ago, we notice that the pve-cluster daemon stops working on different machines in our cluster.
In the syslogs we then find the following...
I am using Proxmox, and I would like to implement a fully automatic installation of Debian KVM VMs.
I just want to define a hostname, an IP address, and a disk size, and the machine should be provisioned automatically.
The first part is easy: find a free VM number, call qm create, and then boot up...
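The "find a free VM number" step can be sketched as a tiny shell helper. The helper itself is plain text processing; the commented qm usage line is an assumption about how the in-use IDs would be fed to it on a real node:

```shell
# Print the lowest free VMID >= 100, given the used IDs on stdin (one per line).
next_free_vmid() {
  sort -n | awk 'BEGIN { id = 100 } $1 == id { id++ } END { print id }'
}

# On a Proxmox node the used IDs would come from the qm CLI, e.g.:
#   qm list | awk 'NR > 1 { print $1 }' | next_free_vmid
# Stand-alone demonstration:
printf '100\n101\n103\n' | next_free_vmid   # prints 102
```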
I am using Ceph storage with an MTU of 9000, VLANs, and bonds.
So my Ethernet cards have MTU 9000, and the bonds also have MTU 9000.
On the different VLANs I have different MTUs ranging from 1500 to 9000.
My vmbr0 has a fixed MTU of 1500 defined in /etc/network/interfaces.
Now, when I stop and...
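For reference, the relevant fragment of my /etc/network/interfaces looks roughly like this; interface names, addresses, and VLAN IDs are placeholders, and the bond-slaves/bridge_ports spellings assume classic ifupdown:

```
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        mtu 9000

# VLAN on top of the bond, also jumbo frames
auto bond0.100
iface bond0.100 inet manual
        mtu 9000

# The bridge for the VMs stays at the default MTU
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0.200
        bridge_stp off
        bridge_fd 0
        mtu 1500
```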
We have issues with the Proxmox kernel and Ceph which lead to kernel panics and high load. The problem is described here: https://email@example.com/msg30347.html
Is the problem already known, and is there a patch?