Recent content by lightnet-barry

  1.

    Hyperconverged cluster VMs become inaccessible when one node is down

    Thanks for the input everyone, I should just read my own documentation! I've somehow mangled "noout nobackfill norecover" into "nodown noout" in my node reboot process! (A sketch of the corrected flag handling is at the end of this list.)
  2.

    Hyperconverged cluster VMs become inaccessible when one node is down

    I think the issue is to do with the status of the OSDs when I power down one of my nodes. If I plan the reboot (set nodown and noout, mark the OSDs out, then power down), my OSDs still show as up. root@xxxx-proxmox1:~# ceph osd status ID HOST USED AVAIL WR OPS WR DATA RD OPS RD DATA STATE 0...
  3.

    Hyperconverged cluster VMs become inaccessible when one node is down

    This is my crush map. I'd appreciate any opinions, though I'm not sure this is the issue (see next comment; a reference replicated rule is sketched after this list). # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable chooseleaf_vary_r 1...
  4.

    Hyperconverged cluster VMs become inaccessible when one node is down

    This is from when node C was down: # pvecm status Cluster information ------------------- Name: xxx-proxmox Config Version: 3 Transport: knet Secure auth: on Quorum information ------------------ Date: Thu May 18 07:58:20 2023 Quorum provider...
  5.

    Hyperconverged cluster VMs become inaccessible when one node is down

    It happens with each node in turn as I power them down, A, B & C (3-node cluster). pvecm showed the cluster as quorate with 2 out of 3 (expected).
  6.

    Hyperconverged cluster VMs become inaccessible when one node is down

    I have had a strange situation occur with a new cluster I have built. In order to update the BMC firmware I needed to cold reset each node in turn. I migrated all VMs from node A to node B, checked they were all running, confirmed services on VMs were available as expected and then powered down...
  7.

    pvedaemon rbd monitor list out of date

    I am trying to find the configuration file which pvedaemon or rbdmap use to create the list of monitors used for Ceph RBD storage. I originally set up a PVE cluster with Ceph RBD storage in 2016; this had 2 hybrid compute/storage nodes (172.x.x.101 & 172.x.x.102) and a temporary (172.x.x.199)... (A sample storage.cfg entry is sketched after this list.)
  8.

    Migration to another host fails

    Ah! I missed that setting for the longest time... Obviously it was HA migrations which were the common factor. Thanks for the pointer :) Barry
  9.

    Migration to another host fails

    Hi Mira, one thing I notice which I may have missed last night is: kvm: warning: TSC frequency mismatch between VM (2399997 kHz) and host (2099998 kHz), and TSC scaling unavailable on the destination node. pveversion -v, cat /etc/pve/ha/resources.cfg, qm config, task logs
  10.

    LVM size

    There are no special options set on the VM, though there may be some other issue with it. It's an upgraded RADIUS server; I've pulled the configs as developed on it and will redeploy on a new VM. Output of pveversion -v: proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve) pve-manager: 6.4-13...
  11.

    LVM size

    Hi Fabian, I'm having multiple issues, some of which are in other posts. I have one VM running on this particular host which takes 3-5 seconds to complete a write operation (that said, when I migrated it to another host the write issues did not improve). I also have issues migrating VMs to this...
  12.

    Migration to another host fails

    Certain VMs fail to migrate to a particular host in my cluster. The migration managed by HA appears to be successful, with a Start on the destination, but the following message is recorded in syslog and the VM is migrated away again (not always to the original source): Jan 25 23:41:52 HaPVEamax4...
  13.

    LVM size

    I'm having issues with VMs on one of my cluster nodes, and one thing I am unsure of is that the LVM PV holding the pve VG is 93% full (see the lvs note after this list): root@HaPVEamax4:~# pvs PV VG Fmt Attr PSize PFree /dev/sdg3 pve lvm2 a-- <223.07g <16.00g root@HaPVEamax4:~# vgs VG #PV #LV #SN Attr...
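
A note on items 1 and 2 above: the posts only mention the planned-reboot procedure in passing, so the following is a minimal sketch of the corrected flag handling, assuming the flags are set cluster-wide from any node of the PVE-managed Ceph cluster.

    # before powering a node down: prevent its OSDs being marked out and stop backfill/recovery
    ceph osd set noout
    ceph osd set nobackfill
    ceph osd set norecover

    # ... reboot / firmware update on the node ...

    # once the node is back and its OSDs have rejoined, clear the flags again
    ceph osd unset norecover
    ceph osd unset nobackfill
    ceph osd unset noout

    # confirm no maintenance flags are left set
    ceph osd dump | grep flags

The nodown flag mentioned in item 1 is the one to avoid here: it stops the monitors from marking a powered-off node's OSDs down, so clients keep trying to reach them, which would explain the inaccessible-VM symptom in the thread.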
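
Item 3 quotes the cluster's CRUSH map, but the excerpt is truncated. For comparison only, a stock replicated rule with a host failure domain, which is what a 3-node hyperconverged cluster would normally use, looks roughly like this (rule name and id are illustrative, not taken from the original map):

    rule replicated_rule {
            id 0
            type replicated
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }

With pools at size 3 / min_size 2 and a rule like this, each placement group keeps one replica per host, so a single node going down should leave every PG active; if the rule instead selected type osd, all replicas of a PG could end up on the powered-down node.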
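
On item 7 (where the RBD monitor list comes from): on PVE the storage definitions live in /etc/pve/storage.cfg, and for an externally managed Ceph cluster the monitor addresses are given there with the monhost option. The entry below is purely illustrative (storage name, pool and addresses are made up); on a hyperconverged cluster where PVE manages Ceph itself, monhost is normally omitted and the monitors are read from /etc/pve/ceph.conf instead.

    # /etc/pve/storage.cfg -- illustrative rbd entry only
    rbd: ceph-vm
            pool rbd
            monhost 172.x.x.101 172.x.x.102
            content images
            username admin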
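
On item 13: the pvs/vgs figures show the pve VG almost fully allocated, which on a default PVE install is not unusual by itself, since the installer hands most of the VG to the root, swap and data logical volumes. A quick way to see where the space went and how full the thin pool actually is (assuming the default layout; LV names may differ):

    # list the logical volumes in the pve VG; for a thin pool such as 'data',
    # the Data% column shows real usage, which matters more than VG free space
    lvs pve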
