And I found the error, as part of the discussion here. I had to run pvecm updatecerts on all of the PVE nodes, and then everything worked flawlessly. Maybe that should be added to the documentation/wiki regarding people getting "Host key verification failed." errors (there seems to be more than...
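For anyone else hitting this: the fix amounted to running that command on every node in the cluster. A minimal sketch (node names are examples from my setup):

```shell
# Regenerate and redistribute the cluster-wide SSH known_hosts /
# certificates on each PVE node (run as root from any node):
for node in node1 node2; do
    ssh root@"$node" pvecm updatecerts
done
```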
I use two Proxmox-nodes (node1 + node2)
I use a Raspberry Pi as the third device (ext1)
All three nodes run Buster
I can SSH without issues from node1 to node2 using IP, FQDN and just hostname. I get root shell and no SSH client warnings.
I can SSH without issues from node2 to node1 using IP...
No other ideas?
What logic lies behind pvecm qdevice setup? Are there any manual steps that can be done to set it up, rather than relying on that command?
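For what it's worth, my understanding (a rough sketch, not authoritative; hostnames and the cluster name below are placeholders from my setup) is that the command automates roughly these steps:

```shell
# Sketch of what `pvecm qdevice setup <host>` automates:

# 1. Copy root's SSH key to the external qnetd host:
ssh-copy-id root@ext1

# 2. Generate and exchange the TLS certificates between the
#    qnetd daemon and both cluster nodes:
corosync-qdevice-net-certutil -Q -n pve-cluster1 ext1 node1 node2

# 3. Add a "device" section to the quorum block of
#    /etc/pve/corosync.conf, along the lines of:
#      device {
#        model: net
#        votes: 1
#        net {
#          host: ext1
#          algorithm: ffsplit
#          tls: on
#        }
#      }

# 4. Restart the qdevice daemon on both PVE nodes:
systemctl restart corosync-qdevice
```

If the automated command fails partway, knowing which of these steps it died on would at least narrow down where to look.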
Tried that, but still the same:
root@gridlock:~# pvecm qdevice setup 2000:123:123:123::210
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed...
Yes, there are multiple keys there (for other logins). Not sure how that is relevant? The SSH key is present, and SSH-key-based login from the two PVE nodes works just fine...
Yes, that works just fine. If you look at the logs in my first post, you can see that ssh-copy-id "complains" that the SSH key already exists on the target system (implying that a) it can log in, and b) the SSH key is already present). Doing a manual login using a password also still works.
Hi,
No QDevice is configured:
root@gridlock:~# pvecm status
Cluster information
-------------------
Name: pve-cluster1
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Wed May 5 13:14:43 2021
Quorum provider...
Hi,
Trying to set up QDevice for a 2-node PVE cluster.
I've installed corosync-qnetd on a Raspberry Pi, and corosync-qdevice on both PVE nodes.
When trying to run the configuration from one of the PVE nodes, it fails:
root@gridlock:~# pvecm qdevice setup 2001:123:123:123::123...
Did you encounter the compile errors caused by actual errors in the code (see below)? This is Proxmox 5.3 with the 3.11 driver from Chelsio.
##################################################
# Building Network Drivers #
##################################################...
Nuking the MBRs/whatnot didn't help. After doing that, followed by a new reinstall, we still got the "wait 30-40 minutes before GRUB" scenario.
In all cases (both initially, and now during "testing"), rpool was a raidz2 vdev consisting of 6 drives. The server used to have other drives, but they...
So, we did a fresh/clean reinstall of the latest Proxmox (using the same disks). After rebooting, it's stuck in the same "wait 30-40 minutes before GRUB" scenario.
We're going to nuke the MBRs/whatnot, and try again, as we suspect that's what's causing the issue (which would explain why...
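By "nuking" I mean something along these lines (device names are examples for our six-drive raidz2; obviously destructive, so double-check the targets):

```shell
# Clear old partition tables / boot code from each pool disk
# before reinstalling (example device names; DESTRUCTIVE):
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    wipefs --all "$d"                          # remove filesystem/RAID signatures
    dd if=/dev/zero of="$d" bs=512 count=2048  # zero the MBR and the first ~1 MiB
done
```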
I can understand that. The long boot is one thing (which may or may not be directly related to whatever was done going from pve4 to pve5), but the "unknown filesystem" error should be a somewhat more "trivial" thing to figure out the cause of?
After BIOS, before GRUB menu. We just see the...
The linked traces were done with 2.02-pve4 (which we downgraded to). This is the version where "unknown filesystem" happens. This was done since others had success with downgrading to 2.02-pve4 (in order to resolve the long boot time), but since we encountered the "unknown filesystem" when doing a...
Couldn't paste the output of 'grub-probe -vv --device /dev/sdb2' here on the forums (the text was too long), so I made it available here: http://files.jocke.no/b/20170406-kakko_grub-probe.debug.txt
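For reference, the trace was captured roughly like this (grub-probe writes its debug output to stderr, so that is what gets redirected to the file):

```shell
# Quick check: which filesystem does GRUB itself detect on the device?
grub-probe --device /dev/sdb2 --target=fs

# Full verbose trace, saved to a file instead of flooding the terminal:
grub-probe -vv --device /dev/sdb2 2> grub-probe.debug.txt
```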
I looked at the lines after 'grub-core/kern/fs.c:56: Detecting zfs...', and thought maybe all the...