After some initial switch issues, I was able to get one NIC working. I then installed the other NIC in a separate machine and followed the same steps, but only the first NIC is working correctly. Details below. Anyone know what the issue could be?
Proxmox 7.4 on both machines
I'm able to assign an...
I take a slightly different approach but I think it will result in something similar to what you're looking for.
I use LXCs instead of VMs, and this is so I can mount ZFS datasets on the host directly in the LXC. The result is that you get nearly native performance of the underlying storage...
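As a rough sketch of the bind-mount approach (the container ID 101 and the dataset path /tank/media are hypothetical examples, not from the post):

```shell
# Bind-mount a host ZFS dataset into an LXC as mount point 0.
# 101 and /tank/media are example values; adjust to your setup.
pct set 101 -mp0 /tank/media,mp=/mnt/media

# Restart the container so the mount point takes effect.
pct stop 101
pct start 101
```

Because this is a bind mount rather than a network filesystem, there is no protocol overhead, which is where the near-native performance comes from. Note that unprivileged containers may need UID/GID mapping for file ownership to line up.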
Just recording in case anyone else has this happen.
I got the same error in the UI, then I ran this: lxc-start -n 178 -F -lDEBUG -o lxc-178.log
Inspected the log and saw a "disk quota exceeded" message, aka the provisioned disk was full. I expanded the attached root storage and it started right up.
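For anyone hitting the same thing, the debug-and-fix flow looked roughly like this (container ID 178 from above; the +8G growth amount is just an example):

```shell
# Run the container in the foreground with debug logging.
lxc-start -n 178 -F -lDEBUG -o lxc-178.log

# Look for the failure in the log.
grep -i "quota" lxc-178.log

# Grow the container's root disk (size increment is an example).
pct resize 178 rootfs +8G
```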
I got it - had to make sure the old host key was gone from all of the known host files on all of the nodes, which i thought i had done.
ran ssh-keygen -f "/etc/pve/priv/known_hosts" -R "starhawk" and ssh-keygen -f "/root/.ssh/known_hosts" -R "starhawk" on each node, then connected manually with...
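Putting the cleanup together, the per-node steps were roughly (hostname "starhawk" from above; the pvecm updatecerts step is my addition, a Proxmox helper that refreshes cluster certificates and known_hosts):

```shell
# Remove the stale host key from both known_hosts files.
ssh-keygen -f /etc/pve/priv/known_hosts -R starhawk
ssh-keygen -f /root/.ssh/known_hosts -R starhawk

# Reconnect once so the new host key is accepted and stored.
ssh root@starhawk exit

# Optionally refresh cluster certs/known_hosts (assumed helpful here).
pvecm updatecerts
```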
The error only seems to occur when I'm initiating a transfer of a VM or LXC from one server to another.
I get this message:
2022-05-13 00:14:00 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
2022-05-13 00:14:00 @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @...
Alternatively - I'm trying to figure out if I should switch to UEFI boot. I'm not totally sure why this system uses legacy BIOS boot, as it's a newer board whose firmware is UEFI. Maybe because the boot drives are SATA and not NVMe?
I have not seen that - thanks for sending.
Going through the guide - I'm pretty sure I started this server on 6.4, so proxmox-boot-tool is set up and looks to be configured (at least the purchase dates line up to indicate that 6.4 was available before I purchased the hardware).
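To double-check, something like this should show whether proxmox-boot-tool is managing the ESPs and whether the system actually booted via UEFI or legacy BIOS:

```shell
# List the partitions proxmox-boot-tool manages and their boot mode.
proxmox-boot-tool status

# This directory only exists when the system booted via UEFI.
ls /sys/firmware/efi >/dev/null 2>&1 \
  && echo "booted via UEFI" \
  || echo "booted via legacy BIOS"
```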
I haven't run...
I've upgraded 2 of the 4 nodes in my cluster already without issues, but on upgrading this current node, GRUB failed to update.
Here are the relevant output lines from when the failure occurred.
Setting up pve-docs (7.1-2) ...
Setting up libpython2.7-stdlib:amd64 (2.7.18-8) ...
Setting up...
I should have added more info to the post. The server isn't completely new - it's some hardware I had laying around that I got a "new" (also old) motherboard for. i7 7700K, 32GB of 2400MHz memory, 4 Kingston SSDs in RAID10, a 10G NIC, and an ASRock Rack board. It's for some services I wanted to...
Same thing here. I thought my new server crashed, until I logged into another node and saw the IO delay spike.
Are individuals able to contribute to the Proxmox source code? I'd love to fix things, add features :)
Yep - was just about to follow up on that. Networking was the issue. All of the nodes are on a 10G switch, so on that switch I disabled IGMP snooping, and on the nodes I ran echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
and then on the existing node ran:
service pve-cluster restart...
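In summary, the per-node fix was roughly this (bridge name vmbr0 from above; persisting the setting via /etc/network/interfaces and restarting corosync are assumptions on my part, not from the post):

```shell
# Disable multicast snooping on the bridge (runtime only, lost on reboot).
echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping

# To persist, add to the vmbr0 stanza in /etc/network/interfaces
# (assumed approach):
#   post-up echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping

# Restart cluster services on the affected node.
service pve-cluster restart
service corosync restart   # assumed follow-up step
```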
There are definitely a bunch of errors in the syslog of the machine I'm having trouble with, but I'm not sure which one is causing the problem. I've tried searching for a few of them, but I haven't found anything that's helped yet. The issue with /etc/pve/local/pve-ssl.key seems circular to me...
Ah, sorry. Here is the failing node.
root@phoenix:~# pvecm status
Cluster information
-------------------
Name: rebel
Config Version: 32
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Tue May 25 05:19:35 2021
Quorum provider...