Search results

  1. B

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    For future human purposes: I created a list of commands containing all the nodes in it, and executed this on every node via CLI. THIS IS NOT THE IDEAL WAY TO DO THIS AND I KNOW IT IS A BAD PRACTICE BUT FOR NOW THIS IS GOOD ENOUGH: ssh -o StrictHostKeyChecking=accept-new -t HostName1 'exit' ssh...
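
    A minimal sketch of the loop being described, reusing the post's placeholder HostName1 and adding hypothetical HostName2/HostName3; run from one node, it only pre-accepts host keys and does nothing else:

        # accept each node's SSH host key into this node's known_hosts (hostnames are placeholders)
        for node in HostName1 HostName2 HostName3; do
            ssh -o StrictHostKeyChecking=accept-new -t "$node" 'exit'
        done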
  2. B

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    Okay I actually need to ssh FROM every node in the cluster TO every node in the cluster to generate the known_hosts trust. guh this is a cluster-truck.
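
    A sketch of that all-to-all pass, assuming passwordless root SSH already works between the nodes and using the same placeholder hostnames; run from any one admin host:

        # have every node SSH to every other node once so each pair records the host key
        NODES="HostName1 HostName2 HostName3"
        for src in $NODES; do
            for dst in $NODES; do
                ssh -t "$src" "ssh -o StrictHostKeyChecking=accept-new $dst exit"
            done
        done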
  3. B

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    Feel free to move along and not even post if you don't see value in participating. I was simply asking for the proper process to do something that wasn't documented. And you're here to do... what exactly? Lecture me because I didn't say exactly what you wanted to hear? Why are you even bothering...
  4. B

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    Are you just married to trying to lecture me? I'm going to ignore you because you seem more fixated on me fitting into your box of "help" than on actually getting anything accomplished. Go away, troll; I have actual work to do, not your pedantic ego to stroke.
  5. B

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    omfg and that method doesn't even actually solve the problem, this is just such a shit show... I just wanted to add two new PVE Nodes to this cluster and I'm burning way too much time on this SSH BS that should've been solved months ago >:|
  6. B

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    So I have two new PVE Nodes that were recently provisioned with PVE v8.2, but were later updated to the latest as of today (v8.2.4?). Anyways, I'm now hitting the silly SSH key management junk that's been going around with the PVE v8.1/v8.2 changes. The other cluster nodes are on older versions of...
  7. B

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    Alright, well, the issues I've been facing were MTU related (but not in my realm, as the path between nodes, which I'm not responsible for, looks to have been messing with MTUs). It would still be nice to have documentation outlining SSH key cycling for PVE... Thanks anyway, folks.
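
    On the MTU side, a quick path check is a do-not-fragment ping; a sketch assuming a jumbo-frame (MTU 9000) path and a placeholder peer name:

        # 8972 bytes of payload = 9000 MTU minus 28 bytes of IP/ICMP headers; fails if the path clamps MTU
        ping -M do -s 8972 -c 3 HostName2
        # for a standard 1500 MTU path, 1472 is the largest payload that should pass unfragmented
        ping -M do -s 1472 -c 3 HostName2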
  8. B

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    Look, I have a lot of different errors here that I'm working through, and I really would appreciate just having the instructions needed to properly cycle the SSH keys for a single node to the rest of the cluster. I'm asking because I could not find this sufficiently outlined in the Proxmox VE...
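
    The snippets never show a resolution, but one built-in worth noting is pvecm updatecerts, which refreshes the cluster-wide SSH known_hosts and the node certificates; a minimal sketch, to be run on the affected node (and possibly on the others too):

        # refresh this node's entries in the shared known_hosts and regenerate its certificates
        pvecm updatecerts
        # restart the web proxy so the refreshed certificate is picked up
        systemctl restart pveproxy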
  9. B

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    /etc/hostname was updated to the correct hostname when I renamed the node.
    /etc/hosts was updated to the correct hostname when I renamed the node, and was not automatically populated with the other PVE Nodes when it joined the cluster (I guess that's not a thing though).
    /etc/network/interfaces...
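
    A small sketch for sanity-checking those files on the renamed node (NewName stands in for the corrected hostname):

        # these should all agree: kernel hostname, /etc/hostname, and the node's own /etc/hosts entry
        hostname
        cat /etc/hostname
        grep -i "NewName" /etc/hosts
        # the name should resolve to the node's real cluster IP, not a 127.x loopback entry
        hostname --ip-address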
  10. B

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    So I have a PVE server that I've been working on and just joined to an existing PVE Cluster. Another person did the initial install of the PVE OS on the system, and I followed the Proxmox VE Documentation for renaming said node ( https://pve.proxmox.com/wiki/Renaming_a_PVE_node ) to the...
  11. B

    Best way to convert OS storage from single disk -> RAID1?

    Alright, well, yeah, it sure looks like I'll need to rebuild these nodes. I'm not seeing any realistic way to extend the existing OS storage to a RAID1/equivalent configuration. I'll probably go with ZFS Mirror. Thanks for your time, everyone! I'm a bit grumpy Proxmox VE doesn't have a webGUI way to...
  12. B

    Best way to convert OS storage from single disk -> RAID1?

    1. All the VM/LXC storage is _not_ on the OS storage device. It's on Ceph or other forms of shared storage.
    2. Backups are already healthy, lots of them, and are stored elsewhere (not on the PVE Cluster itself).
    3. Why is wiping each node and reinstalling preferable to just converting the...
  13. B

    Best way to convert OS storage from single disk -> RAID1?

    For LVM, is there a Proxmox VE webGUI section that is capable of converting the LVM to a mirror? If so, please point me to it, as I can't find one yet. I'm not scared of the CLI; I'm more trying to drink the kool-aid where I can, so to say ;P I'm quite confident the OS storage is not on ZFS in this...
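
    There doesn't appear to be a webGUI path for this; a rough CLI sketch of the LVM route, where /dev/sdb and the default pve/root names are assumptions, and the EFI/boot partitions and any thin pool would need separate handling:

        # add the second disk to the volume group (hypothetical second disk /dev/sdb)
        pvcreate /dev/sdb
        vgextend pve /dev/sdb
        # convert the root logical volume into a RAID1 mirror across the two physical volumes
        lvconvert --type raid1 -m 1 pve/root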
  14. B

    Best way to convert OS storage from single disk -> RAID1?

    I'm still collecting info, but for one of the environments I am responsible for there are a lot of Proxmox VE Nodes with the OS installed on a single disk. I want to help convert the OS storage on each node to a logical RAID1 configuration. I don't yet know which nodes are using LVM for the OS...
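
    A quick inventory sketch for telling LVM and ZFS installs apart, run per node (nothing here is specific to the environment in the thread):

        # what the root filesystem actually sits on
        findmnt -no SOURCE,FSTYPE /
        # LVM view: physical volumes, volume groups, logical volumes
        pvs; vgs; lvs
        # ZFS view: any imported pools (rpool is the installer's default name)
        zpool list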
  15. B

    Redundant Ring Protocol/equivalent for _some_ PVE Nodes in a cluster and not others

    Alright, so this is more me asking ahead about future-me problems... I've recently been digging into options for certain forms of fail-over and bonding, most specifically methods that span switches. I found documentation (old and new) mentioning that each PVE Node in a cluster can have additional...
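
    For reference, a sketch of how extra corosync links can be supplied when creating or joining a cluster; the addresses are placeholders and the exact pvecm options are worth double-checking against the man page:

        # create a cluster with two corosync links
        pvecm create mycluster --link0 10.0.0.21 --link1 10.1.0.21
        # join a node, again giving it two links
        pvecm add 10.0.0.21 --link0 10.0.0.22 --link1 10.1.0.22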
  16. B

    Test LACP, 1x TrueNAS + 2x Proxmox VE nodes = LACP y u no brrrtt?

    1. The last octets of the relevant nodes are almost sequential (not the actual numbers, but they are along the lines of .111 and .114, adjusted for security reasons), and they are on the same CIDR, so the first three octets are the same.
    2. I'm of the understanding NFS v4.x multipathing doesn't do...
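
    On the hashing point, a sketch for checking what the Linux side of the bond is actually doing (bond0 is a placeholder for the bond interface):

        # shows the bonding mode and the transmit hash policy (layer2, layer2+3, layer3+4, ...)
        grep -E 'Bonding Mode|Transmit Hash Policy' /proc/net/bonding/bond0
        # in /etc/network/interfaces the matching option is bond-xmit-hash-policy (e.g. layer3+4)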
  17. B

    Test LACP, 1x TrueNAS + 2x Proxmox VE nodes = LACP y u no brrrtt?

    1. I'm not sure if my switch supports layer3+4 LACP; it's pretty crusty and the documentation for it is unclear (Avaya ERS4000, it's what I "got" for now).
    2. Changing the LACP config on the two Proxmox nodes is less than ideal right now.
  18. B

    Test LACP, 1x TrueNAS + 2x Proxmox VE nodes = LACP y u no brrrtt?

    I measure the inbound and outbound traffic in bits (Mbps) as reported by TrueNAS at its level. As in, I watch the metrics TrueNAS reports as the VMs spin up. So far it has not been able to exceed 1 Gbps in either direction (inbound/outbound).
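
    The snippets don't mention it, but one way to separate the bond question from the NFS question is to generate several parallel TCP streams with iperf3 (the server address is a placeholder):

        # on the TrueNAS box (or any endpoint acting as the server)
        iperf3 -s
        # from a PVE node: 4 parallel streams, so the LACP hash has multiple flows to spread
        iperf3 -c 10.0.0.5 -P 4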
  19. B

    Test LACP, 1x TrueNAS + 2x Proxmox VE nodes = LACP y u no brrrtt?

    But there are 2x PVE Nodes (physically separate computers running Proxmox VE on them), not 1x. And I was spinning up VMs on both nodes at the same time from the same storage endpoint (NFS). From what I read, that should give at least two connections.
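
    A way to check how many distinct TCP flows each PVE node actually has open to the NFS server (2049 assumed as the standard NFS port); note that with layer2 or layer2+3 hashing, even two flows can land on the same member link:

        # list established TCP connections to the NFS port; one flow per line
        ss -tn state established '( dport = :2049 )'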