Search results

  1. Migration to another host fails

    Ah! I missed that setting for the longest time... Obviously it was HA migrations which were the common factor. Thanks for the pointer :) Barry
  2. Migration to another host fails

    Hi Mira, One thing I notice which I may have missed last night is: kvm: warning: TSC frequency mismatch between VM (2399997 kHz) and host (2099998 kHz), and TSC scaling unavailable on the destination node. pveversion -v, cat /etc/pve/ha/resources.cfg, qm config, task logs
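A mismatch like the one above can be anticipated before migrating by checking the CPU's TSC flags on the destination host. A minimal sketch (the relevant flag names vary by CPU vendor, and this assumes a Linux host):

```shell
# Look for an invariant TSC and hardware TSC scaling in the CPU flags;
# without a scaling feature (e.g. tsc_scale on AMD), KVM cannot rescale
# the guest TSC when source and destination clock rates differ.
if [ -r /proc/cpuinfo ]; then
    grep -m1 -o -E 'constant_tsc|nonstop_tsc|tsc_scale' /proc/cpuinfo \
        || echo "no invariant-TSC flags found"
else
    echo "/proc/cpuinfo not available on this system"
fi
```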
  3. LVM size

    There are no special options set on the VM, though there may be some other issue with it. It's an upgraded RADIUS server; I've pulled the configs as developed on it and will redeploy them on a new VM. Output of pveversion -v:
    proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve)
    pve-manager: 6.4-13...
  4. LVM size

    Hi Fabian, I'm having multiple issues, some of which are in other posts. I have one VM running on this particular host, which takes 3-5 seconds to complete a write operation (that said, when I migrated it to another host the write issues did not improve). I also have issues migrating VMs to this...
  5. Migration to another host fails

    Certain VMs fail to migrate to a particular host in my cluster. The migration managed by HA appears to be successful, with a Start on the destination, but the following message is recorded in syslog and the VM is migrated away again (not always to the original source): Jan 25 23:41:52 HaPVEamax4...
  6. LVM size

    I'm having issues with VMs on one of my cluster nodes and one thing I am unsure of is that the LVM containing the PVE VG is 93% full:
    root@HaPVEamax4:~# pvs
      PV         VG  Fmt  Attr PSize    PFree
      /dev/sdg3  pve lvm2 a--  <223.07g <16.00g
    root@HaPVEamax4:~# vgs
      VG  #PV #LV #SN Attr...
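To see where the space in that VG has gone, listing the logical volumes alongside the VG summary is the usual first step. A sketch, guarded so it degrades on machines without LVM tools (the VG name pve is taken from the output above):

```shell
# Show the VG summary and per-LV sizes; on a default Proxmox install
# most of the VG is allocated to the data thin pool up front, so a
# mostly-allocated VG is not by itself a sign of a problem.
if command -v vgs >/dev/null 2>&1; then
    vgs --units g pve
    lvs -o lv_name,lv_size,data_percent pve
else
    echo "LVM tools not installed on this host"
fi
```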
  7. Cluster configuration / resource starvation issues

    Hi all, I've had an HA cluster running for a few years now and I'm looking for pointers, since I'm sure a lot of things have changed and there's probably better practice than what I used when I built it originally. The original configuration is from around 2012; the current nodes were slotted in to...
  8. Trouble with ceph on PVE 5

    Apologies for re-opening an old thread but I am trying to follow and I find that the ceph-luminous Jessie repository no longer contains the PVE ceph binaries.
  9. Proxmox disk filling up

    I've just noticed that my OS drive has hit 83%. Most of the space seems to be consumed by files such as `/var/lib/ceph/osd/ceph-0/current/1.3e_head/DIR_E/DIR_3/DIR_0`. I'm unsure what these are, since I should have my journals on another SSD. Any help appreciated.
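A quick way to confirm what is consuming the space is to size the directories under the OSD data path. A sketch, with the path taken from the post and guarded in case it does not exist on the machine running it:

```shell
# Size the top-level directories under the OSD data path; on filestore
# OSDs the current/ tree holds the actual object data, so if this path
# lives on the OS drive, the OSD data (not just the journal) is there.
osd_dir=/var/lib/ceph/osd/ceph-0
if [ -d "$osd_dir" ]; then
    du -xh --max-depth=1 "$osd_dir" | sort -h | tail -n 5
else
    echo "no OSD data directory at $osd_dir"
fi
```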
  10. Manually delete old nodes from cluster?

    I know this is a little old but just to confirm with my experience: "pvecm delnode *nameofnode*" removed the node from corosync.conf and from the GUI. I had been trying "pvecm delnode *idofnode*" which wasn't doing anything for me. I had already done "pvecm expected 3" to correct the quorum...
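The sequence described above (quorum override first, then removal by node name rather than numeric id) would look roughly like this on a surviving cluster member. A sketch only: pvecm exists only on Proxmox VE hosts, and nameofnode is the poster's placeholder:

```shell
# On a remaining node: lower the expected vote count so the cluster is
# quorate again, then remove the dead node by NAME (not numeric id),
# which updates corosync.conf and the GUI as described above.
if command -v pvecm >/dev/null 2>&1; then
    pvecm expected 3
    pvecm delnode nameofnode
    pvecm status
else
    echo "pvecm not found: run this on a Proxmox VE cluster node"
fi
```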

