Hi Mira,
One thing I've noticed, which I may have missed last night, is:
kvm: warning: TSC frequency mismatch between VM (2399997 kHz) and host (2099998 kHz), and TSC scaling unavailable on the destination node.
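Since the warning complains that TSC scaling is unavailable on the destination node, one quick check is whether the destination CPU advertises the feature at all (the flag names below are my assumption for AMD vs. recent Intel kernels):

```
# AMD hosts expose SVM TSC-ratio support as "tsc_scale" in the CPU flags;
# Intel hosts (kernel 5.2+) list it as "tsc_scaling" on the "vmx flags" line
grep -m1 -oE 'tsc_scale|tsc_scaling' /proc/cpuinfo || echo "no TSC scaling flag reported"
```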
pveversion -v
cat /etc/pve/ha/resources.cfg
qm config
task logs
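For reference, roughly how those items can be collected on the node (VMID 100 is just a placeholder for the affected VM):

```
pveversion -v                      # package versions
cat /etc/pve/ha/resources.cfg      # HA resource configuration
qm config 100                      # placeholder VMID for the affected VM
ls -lt /var/log/pve/tasks/         # task logs are kept per node under here
```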
There are no special options set on the VM, though there may be some other issue with it. It's an upgraded RADIUS server; I've pulled the configs as developed on it and will redeploy them on a new VM.
Output of pveversion -v:
proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve)
pve-manager: 6.4-13...
Hi Fabian,
I'm having multiple issues, some of which are covered in other posts.
I have one VM running on this particular host which takes 3-5 seconds to complete a write operation (that said, when I migrated it to another host the write issues did not improve). I also have issues migrating VMs to this...
Certain VMs fail to migrate to a particular host in my cluster.
The migration managed by HA appears to be successful, with a Start on the destination, but the following message is recorded in syslog and the VM is migrated away again (not always to the original source):
Jan 25 23:41:52 HaPVEamax4...
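If it helps narrow it down, the equivalent manual test outside of HA would be something like this (VMID 100 is a placeholder; HaPVEamax4 is the destination node from the log above):

```
ha-manager status                    # current state of the HA resources
qm migrate 100 HaPVEamax4 --online   # repeat the same live migration outside of HA
```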
I'm having issues with VMs on one of my cluster nodes, and one thing I am unsure of is that the LVM physical volume holding the pve VG is 93% allocated:
root@HaPVEamax4:~# pvs
PV         VG  Fmt  Attr PSize    PFree
/dev/sdg3  pve lvm2 a--  <223.07g <16.00g
root@HaPVEamax4:~# vgs
VG #PV #LV #SN Attr...
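To see where the allocated space inside the pve VG has gone, something like this breaks it down per logical volume (assuming the default root/swap/data thin-pool layout):

```
# break down the pve VG by logical volume, including thin-pool usage
lvs pve -o lv_name,lv_size,data_percent,metadata_percent
```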
Hi all,
I've had an HA cluster running for a few years now and I'm looking for pointers, since I'm sure a lot of things have changed and there's probably better practice than what I used when I built it originally. The original configuration dates from around 2012; the current nodes were slotted in to...
Apologies for re-opening an old thread, but I am trying to follow https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous and I find that the ceph-luminous Jessie repository no longer contains the PVE Ceph binaries.
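For reference, this is the kind of entry I mean in /etc/apt/sources.list.d/ceph.list; the suite name is my reading of the wiki, so treat it as an assumption:

```
deb http://download.proxmox.com/debian/ceph-luminous jessie main
```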
I've just noticed that my OS drive has hit 83% usage.
Most of the space seems to be consumed by directories such as
`/var/lib/ceph/osd/ceph-0/current/1.3e_head/DIR_E/DIR_3/DIR_0`
I'm unsure what these are since I should have my journals on another SSD.
Any help appreciated
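In case it's useful, comparing what Ceph reports for the OSD against what is actually on disk under that directory would look something like this (ceph-0 taken from the path above):

```
ceph osd df                               # per-OSD utilisation as Ceph accounts for it
du -sh /var/lib/ceph/osd/ceph-0/current   # on-disk size of this OSD's FileStore data
```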
I know this is a little old, but just to confirm from my experience:
"pvecm delnode *nameofnode*" removed the node from corosync.conf and from the GUI.
I had been trying "pvecm delnode *idofnode*" which wasn't doing anything for me.
I had already done "pvecm expected 3" to correct the quorum...
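To spell out the sequence that worked for me (replace nameofnode with the name shown by pvecm status, and run it from a node that remains in the cluster):

```
pvecm status                    # check membership and current quorum first
pvecm expected 3                # adjust expected votes so the cluster stays quorate
pvecm delnode nameofnode        # remove by node name, not by numeric node ID
```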