So I have two new PVE nodes that were recently provisioned with PVE v8.2, but have since been updated to the latest version as of today (v8.2.4?).
Anyways, I'm now hitting the silly SSH key management headaches that have been going around since the PVE v8.1/v8.2 changes. The other cluster nodes are on older versions of...
Alright, well, the issues I've been facing turned out to be MTU related (though not in my realm; the path between nodes, which I'm not responsible for, looks to have been mangling MTUs). It would still be nice to have documentation outlining SSH key cycling for PVE... Thanks anyways, folks.
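In case it helps anyone else chasing a similar gremlin, a don't-fragment ping is a quick way to sanity-check the path MTU between nodes (the address below is a placeholder):

```
# 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers; use 1472 for a 1500 MTU path
ping -M do -s 8972 192.0.2.114
```

If that fails while a smaller payload succeeds, something along the path is clamping or fragmenting.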
Look, I have a lot of different errors here that I'm working through, and I would really appreciate just having the instructions needed to properly cycle the SSH keys for a single node to the rest of the cluster. I'm asking because I could not find this sufficiently outlined in the Proxmox VE...
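For context, this is roughly the sequence I've pieced together so far from scattered forum threads; I have no idea if it's the sanctioned way, and the hostname/IP below are placeholders:

```
# 1. Drop any stale entries for the node from the cluster-wide known_hosts
ssh-keygen -R pve-node1 -f /etc/pve/priv/known_hosts
ssh-keygen -R 192.0.2.111 -f /etc/pve/priv/known_hosts
# 2. Regenerate the node's own SSH host keys (only if they actually need cycling)
rm /etc/ssh/ssh_host_*key*
dpkg-reconfigure openssh-server
# 3. Push updated certs/known_hosts back out across the cluster
pvecm updatecerts
# 4. From each other node, connect once to accept the new key
ssh root@pve-node1 true
```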
/etc/hostname was updated to the correct hostname when I renamed the node
/etc/hosts was updated to the correct hostname when I renamed the node, and it was not automatically populated with the other PVE nodes when it joined the cluster (I guess that's not a thing, though)
/etc/network/interfaces...
So I have a PVE server that I've been working on and have just joined to an existing PVE cluster. Another person did the initial install of the PVE OS on the system, and I followed the Proxmox VE documentation for renaming said node ( https://pve.proxmox.com/wiki/Renaming_a_PVE_node ) to the...
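For completeness, the rename itself (done before the join, as I understood the wiki) boiled down to roughly this; names and addresses are placeholders:

```
# set the new hostname
hostnamectl set-hostname pve-node1
# make sure the node's own entry in /etc/hosts matches, e.g.:
#   192.0.2.111 pve-node1.example.internal pve-node1
# then reboot and confirm the node directory picked up the new name
ls /etc/pve/nodes/
```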
Alright, well, it sure looks like I'll need to rebuild these nodes. I'm not seeing any realistic way to extend the existing OS storage into a RAID1/equivalent configuration. I'll probably go with a ZFS mirror. Thanks for your time, everyone! I'm a bit grumpy Proxmox VE doesn't have a webGUI way to...
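(For anyone landing here later: on a node that does already boot from a single-disk ZFS rpool, my understanding is the CLI route for adding a mirror leg looks roughly like the following, assuming the installer's default partition layout where part2 is the ESP and part3 is the pool. Disk names are placeholders and I haven't run this on a production node.)

```
sgdisk /dev/disk/by-id/OLD-DISK -R /dev/disk/by-id/NEW-DISK   # copy the partition table to the new disk
sgdisk -G /dev/disk/by-id/NEW-DISK                            # give the copy new GUIDs
proxmox-boot-tool format /dev/disk/by-id/NEW-DISK-part2       # prepare and register the new ESP
proxmox-boot-tool init /dev/disk/by-id/NEW-DISK-part2
zpool attach rpool /dev/disk/by-id/OLD-DISK-part3 /dev/disk/by-id/NEW-DISK-part3
zpool status rpool                                            # watch the resilver finish
```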
1. All the VM/LXC storage is _not_ on the OS storage device. It's on Ceph or other forms of shared storage.
2. Backups are already healthy, lots of them, and are stored elsewhere (not on the PVE Cluster itself).
3. Why is wiping each node and reinstalling preferable to just converting the...
For LVM, is there a Proxmox VE webGUI section that is capable of converting the LVM to a mirror? If so, please point me to it, as I can't find one yet.
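The CLI route I've seen suggested for the LVM case (definitely not a GUI one, and I haven't tried it in anger) is roughly this, assuming the default pve VG, a placeholder second disk /dev/sdb, and noting that it does nothing for the EFI/boot partition or the thin pool:

```
pvcreate /dev/sdb                          # second disk as a new PV (placeholder device)
vgextend pve /dev/sdb                      # add it to the pve VG
lvconvert --type raid1 -m1 pve/root        # mirror the root LV; repeat for other LVs as needed
lvs -a -o name,copy_percent,devices pve    # watch the sync progress
```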
I'm not scared of the CLI; I'm more trying to drink the Kool-Aid where I can, so to say ;P I'm quite confident the OS storage is not on ZFS in this...
I'm still collecting info, but for one of the environments I am responsible for, there are a lot of Proxmox VE nodes with the OS installed on a single disk. I want to help convert the OS storage on each node to a logical RAID1 configuration. I don't yet know which nodes are using LVM for the OS...
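To take stock, I'll probably just run a quick inventory on each node to see whether the installer used LVM or ZFS for the OS disk; something along these lines:

```
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # overall disk/partition layout
pvs; vgs; lvs                               # populated if the OS is on LVM
zpool status rpool                          # succeeds if the OS is on a ZFS rpool
```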
Alright, so this is more asking ahead about future-me problems...
I've recently been digging into options for certain forms of failover and bonding, most specifically methods that span switches.
I found documentation (old and new) mentioning that each PVE Node in a cluster can have additional...
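For reference, the kind of thing I'm reading about is a second corosync link per node, added by (carefully) editing /etc/pve/corosync.conf; names and addresses below are placeholders and the rest of the file is omitted:

```
nodelist {
  node {
    name: pve-node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.0.2.111     # existing cluster link
    ring1_addr: 198.51.100.111  # additional link on a separate network
  }
}
totem {
  config_version: 4             # must be bumped on every edit
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}
```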
1. The last octets of the relevant nodes are almost sequential; not the actual numbers, but they are along the lines of .111 and .114 (adjusted for security reasons). They are in the same CIDR, so the first three octets are the same.
2. I'm of the understanding NFS v4.x multipathing doesn't do...
1. I'm not sure if my switch supports layer3+4 LACP; it's pretty crusty and the documentation for it is unclear (Avaya ERS4000, it's what I "got" for now). I'm double-checking what the bond side reports with the commands below.
2. Changing the LACP config on the two Proxmox nodes is less than ideal right now.
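(For my own sanity, this is how I'm confirming what the bond is actually negotiating and hashing on each PVE node; bond0 is the name on my side.)

```
grep -iE 'mode|hash|aggregator' /proc/net/bonding/bond0   # negotiated mode and transmit hash policy
grep -A6 '^iface bond0' /etc/network/interfaces           # the configured bond stanza
```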
I measure the reported inbound and outbound traffic in bits (Mbps) at the TrueNAS level. As in, I watch the metrics TrueNAS reports as the VMs spin up. So far it has not been able to exceed 1 Gbps in either direction (inbound/outbound).
But there are 2x PVE nodes (physically separate computers running Proxmox VE on them), not 1x. And I was spinning up VMs on both nodes at the same time from the same storage endpoint (NFS). From what I read, that should be at least two connections at a minimum.
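To take NFS out of the picture, I may also run a parallel iperf3 test from both nodes at once (assuming I can run an iperf3 server on the TrueNAS box; the address is a placeholder). If that also tops out at ~1 Gbps total, it points at the hashing/switch rather than NFS:

```
# on the TrueNAS side, start a server:  iperf3 -s
# then from each PVE node at the same time:
iperf3 -c 192.0.2.50 -P 4 -t 30    # four parallel streams for 30 seconds
```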
Putting aside the silliness of the title, I do not really see why the LACP bonding in the scenario I'm about to describe does _not_ result in >1 gig of combined throughput.
In this scenario, my Proxmox VE cluster is 2x physical nodes, each node having a 2x1gig LACP bond (layer2+3), and the ports on the...
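To save some back and forth, the bond/bridge config on each node looks roughly like this (interface names and addresses are placeholders):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.111/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```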
At times I need to hunt down which systems are on certain SDNs within Proxmox VE, whether it's a VM or LXC. And I'm not seeing a way to list all the "things" that are currently configured to use a specific SDN.
Can anyone point me to a method to do this? Or is this something that should be a...
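The closest workaround I can think of is grepping the guest configs under /etc/pve (which is cluster-wide) for the vnet name; "myvnet" below is a placeholder:

```
# matching filenames are the VMIDs/CTIDs that reference the vnet
grep -r "bridge=myvnet" /etc/pve/nodes/*/qemu-server/ /etc/pve/nodes/*/lxc/
```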
Yeah, I can only really speak to the statistics I "mostly" know ;P I have no real idea of the specific demographics for Proxmox VE at all, hence the touch of nuance.
I really see a basic starter feature set here being very useful, and maybe even building on it over time to fill in the achievable gaps...
Well, what about it being a "beta, only turn on if you know what you're doing" kind of feature? Whereby it is written with a very limited set of features (en-US + en-GB, US104/QWERTY, for example) that are explicitly spelled out, so the aspects you speak to are "accounted" for by it being an...