I see, no worries, pacemaker is now uninstalled:
By the way, I've managed to log in to the webUI by disabling the firewall. Probably some misconfiguration.
root@sofx1010pve3302.home.lan:~# dpkg -l |grep pacemaker
root@sofx1010pve3302.home.lan:~# systemctl -a |grep pacemaker...
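For completeness, this is roughly how I checked and stopped the firewall to reach the WebUI (a generic sketch; the prompt and the port check are illustrative, not from my actual session):
root@node:~# pve-firewall status    # is the PVE firewall enabled and running?
root@node:~# pve-firewall stop      # temporarily stop it for testing
root@node:~# ss -tlnp | grep 8006   # confirm pveproxy is listening on the WebUI port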
I'm not sure what you mean by "please remove this before continuing with anything else."
I have checked pacemaker and it complains that it cannot connect to corosync. I think this is normal because the cluster config is uninitialized.
root@sofx1010pve3302.home.lan:~# pcs status corosync...
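For anyone else landing here: pcs belongs to the pacemaker stack and isn't used by Proxmox, so corosync is better inspected directly (a sketch; on a node without a cluster these will just report corosync as inactive):
root@node:~# systemctl status corosync   # inactive/dead is expected without a cluster config
root@node:~# journalctl -u corosync -b   # corosync messages from the current boot
root@node:~# corosync-cfgtool -s         # link status, only meaningful once corosync runs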
Here are the outputs, taken only on the nodes that are failing to start the UI:
root@sofx1010pve3302.home.lan:~# systemctl status pveproxy.service pvedaemon.service
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled)...
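To dig into why the services fail, the journal usually says more than the status summary (a generic sketch):
root@node:~# journalctl -b -u pveproxy.service -u pvedaemon.service   # both units, current boot
root@node:~# systemctl restart pveproxy.service pvedaemon.service     # retry once the cause is fixed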
Thanks for the fast reply, here is all the info you require
(because the post went so long I also have it in pastebin: https://pastebin.com/eaM4jxND)
root@sofx1010pve3302.home.lan:~# pvecm status
Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster...
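That error just means there is no corosync config, i.e. the node is standalone. For reference, this is roughly how a new cluster is initialized (the cluster name is hypothetical):
root@node:~# systemctl status pve-cluster   # pmxcfs must be running for /etc/pve to be mounted
root@node:~# pvecm create mycluster         # create a fresh cluster on this node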
The more interesting part is that although I have re-created the cluster in the WebUI of the "master" node, I still see the old nodes with their VMs:
root@sofx1010pve3307:~# pvecm nodes
Membership information
----------------------
Nodeid Votes Name
1 1 sofx1010pve3307...
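In case someone hits the same leftovers: as far as I understand, a dead node is removed from the membership with pvecm delnode, and its stale entry disappears from the UI once its directory under /etc/pve/nodes is deleted (the node name below is a placeholder; double-check before removing anything):
root@node:~# pvecm delnode oldnode          # drop the node from the cluster membership
root@node:~# rm -r /etc/pve/nodes/oldnode   # remove its leftover config shown in the UI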
I tried to rename two of my nodes in my lab and, as expected, bricked the cluster configuration.
Then I removed the cluster configs and created a new one; unfortunately, I can't join it. What could be the reason for that?
root@sofx1010pve3302.home.lan:~# pvecm add 192.168.30.7 -use_ssh...
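For the record, when I wiped the cluster config I followed roughly the "separate a node without reinstalling" sequence from the Proxmox docs (destructive, adapt with care):
root@node:~# systemctl stop pve-cluster corosync   # stop the cluster stack
root@node:~# pmxcfs -l                             # run pmxcfs in local mode so /etc/pve is writable
root@node:~# rm /etc/pve/corosync.conf             # remove the cluster-wide corosync config
root@node:~# rm -rf /etc/corosync/*                # remove the local corosync config
root@node:~# killall pmxcfs                        # stop the local-mode instance
root@node:~# systemctl start pve-cluster           # start again as a standalone node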
I found it: I have to disable booting from the SATA/SAS controller.
It is mentioned here: https://www.reddit.com/r/homelab/comments/6l2mgf/kvm_hba_passthrough_will_not_boot_form_virtual/
and here...
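Concretely, the fix maps to hiding the controller's option ROM from the VM firmware so it can't try to boot from the passed-through HBA (VMID and PCI address below are hypothetical):
root@node:~# qm set 100 --hostpci0 01:00.0,rombar=0   # pass the HBA through without its boot ROM
Alternatively, restricting the boot order to the virtual disk in the VM's Options tab should have the same effect.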
Hi there,
I'm doing something quite trivial. I have this controller with 4 HDDs attached to it:
root@sofx1010pve3307:~# dmesg |grep 2008
[ 0.256490] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap d2008c40660462 ecap f050da
[ 3.309755] mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00)...
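(For anyone reproducing this, the controller's PCI address and driver binding can be checked before passing it through; the address below is hypothetical:)
root@node:~# lspci -nn | grep -i lsi   # locate the SAS2008 HBA and note its address
root@node:~# lspci -nnk -s 01:00.0     # confirm which driver (mpt2sas/mpt3sas) is bound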
I'm doing ZFS replication between the nodes. All good; this is not HA but failover.
A few weeks ago I deployed a Kubernetes cluster. As you may know, ZFS is a local filesystem; my VMs use it and perform very well, but how should I deal with Kubernetes?
I mean there is no sense of clustered...
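For context, the replication I mentioned is the built-in per-guest ZFS replication, set up via pvesr; a minimal sketch (guest ID, target node, and schedule are illustrative):
root@node:~# pvesr create-local-job 100-0 othernode --schedule "*/15"   # replicate guest 100 every 15 min
root@node:~# pvesr status                                               # current state of all jobs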
quadcube, sorry, can you describe what you've removed? I have almost the same issue (no connection at all); I'm doing passthrough from a TrueNAS Scale system to a VM running Proxmox.
Hi there, I came across this message today after the latest update:
[ 0.062391] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.15.104-1-pve root=/dev/mapper/Root--VG-Root--LV ro net.ifnames=0 systemd.unified_cgroup_hierarchy=0
[ 0.062455] Unknown kernel command line parameters...
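As far as I can tell the message is informational: parameters the kernel itself doesn't know (like systemd.unified_cgroup_hierarchy=0) are simply passed on to user space, where systemd consumes them. If you want to change them anyway, they live in the boot loader config (a sketch; which path applies depends on whether the host boots via GRUB or systemd-boot):
root@node:~# nano /etc/default/grub      # GRUB: edit GRUB_CMDLINE_LINUX_DEFAULT
root@node:~# update-grub                 # regenerate the GRUB config
root@node:~# nano /etc/kernel/cmdline    # systemd-boot: single line with all parameters
root@node:~# proxmox-boot-tool refresh   # write the change out to the ESPs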
Oops, forgot to mention that I see this only on the latest kernel versions, like the ones below:
5.15.35-2-pve
5.15.35-3-pve
I have switched to 5.13.19-6-pve - no issues at all.
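In case it's useful to someone, recent proxmox-boot-tool versions can pin a known-good kernel so it stays the default across updates (a sketch; check that your version has the pin subcommand):
root@node:~# proxmox-boot-tool kernel list                # show installed kernels
root@node:~# proxmox-boot-tool kernel pin 5.13.19-6-pve   # keep booting this version by default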
Ah, sorry, no issue at all. Just wondering if this is the right way to do things.
I'm 100% sure when using a VM, because it is a completely separate/emulated process from the host OS, but what about CTs? It seems to work.
Just a matter of discussion. If you want, I can close the topic in...