I'm not sure if or how this was working for me, but the 'like' I put above at some point in the past indicates I've clearly been here before! :-/
I was using a key on my cephfs but I just ripped out ceph a month or so ago and forgot about the key! Cert update failures began today, lol.
I...
FWIW I have the same LM218 (or whatever it is) chipset in my Dell Precision T5810 Xeon tower. I was shocked and amazed to discover that. I'll hopefully be replacing it with a PCIe NIC with SR-IOV support soon, though, and then just turning it off entirely. Unreal that Intel would let an issue...
I fixed this back in ~December by doing the following -
systemctl mask ifupdown2-pre
and I said to myself "I should do that on all my other boxes right now before I forget what I did!"
I guess I'm glad I didn't as it seems from looking at this thread that it's only my PN50s that should suffer...
Hey sorry for the delay!
Let me see ... I create a loopback bridge with no physical interface on each Proxmox box -
/etc/network/interfaces
auto vmbr1
iface vmbr1 inet static
    address 10.50.250.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
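(Generic sketch only - the addresses below are made up, and this isn't necessarily exactly what I run - but a guest attached to that bridge just takes a static address in the same subnet:)

```
auto eth0
iface eth0 inet static
    address 10.50.250.10/24
    # gateway 10.50.250.1   # only if the host routes/NATs for the bridge
```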
then on the VM I...
Hello all,
I've spent several days reading everything I could find online and trying everything I could think of, and I'm finally at the end of my rope.
When I first built my PVE cluster way back when, I could live migrate here, there and everywhere with no issues. I've somewhat recently...
I had a similar issue and had to remove 'export' from the front of the variables specified in the acme.sh docs ...
e.g.
NSUPDATE_KEY=/mnt/pve/cephfs/nsupdate.key
instead of -
export NSUPDATE_KEY=/mnt/pve/cephfs/nsupdate.key
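So the settings end up looking something like this (sketch, assuming the dns_nsupdate hook - the server name is a placeholder; note there's no leading 'export'):

```shell
# acme.sh dns_nsupdate settings - plain assignments, no 'export':
NSUPDATE_SERVER="ns1.example.com"
NSUPDATE_KEY="/mnt/pve/cephfs/nsupdate.key"
```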
I changed and tried a few things in a short period (all suggested...
No, I did not mean the patch ... patch is awesome, thanks - everything is working great for me now.
I meant changing the link MTU to 1450 on the Proxmox box just to fix this reporting issue ... that is the "that" that I was saying could likely have additional (and very difficult and confusing...
Won't that make the box drop any packets that come in between 1450 and 1500 bytes? Which could be important cluster traffic, or important VM stuff (NFS, iSCSI), etc.?
I think unless one is willing to make their entire network 1450, probably best to wait if that is the only fix, no?
Edit: I just...
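For anyone wanting to check this on their own network, here's a quick sketch (the target address is a placeholder) for testing whether frames in that 1450-1500 range actually get through, using don't-fragment pings:

```shell
# An ICMP echo that exactly fills a given link MTU needs a payload of
# MTU - 20 (IPv4 header) - 8 (ICMP header) bytes.
mtu=1450
payload=$((mtu - 20 - 8))
echo "$payload"   # prints 1422

# Then from another host on the segment (address is a placeholder):
#   ping -M do -s 1422 10.50.250.1   # fits a 1450 MTU
#   ping -M do -s 1472 10.50.250.1   # fills 1500; dropped if the path is clamped to 1450
```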
My setup matches almost exactly, that's funny (and maybe the root of our problem!), and yes, same deal with the MTUs in the container, interesting -
bash-5.0# ip a | grep -i mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
8863: eth1@if8864...
root@NUC10i3FNH-2:~# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP mode...
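Tangent, but for eyeballing MTUs across boxes, something like this works (sketch - assumes the usual `ip -o link` field layout, where field 4 is the literal word "mtu"):

```shell
# Print "interface: mtu" pairs. Demo input mimics real `ip -o link` output;
# on a live box, pipe `ip -o link` into the awk instead of the printf.
printf '%s\n' \
  '1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN' \
  '2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP' \
  | awk '$4 == "mtu" { print $2, $5 }'
# prints:
# lo: 65536
# eno1: 9000
```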
Re-bump.
Spent two days rebuilding boxes to get all this supported and mounting. Everything is *mounting* fine, but the directory appears empty on all my Alpine Linux and K3OS (also Alpine Linux based I believe) VMs! I do the exact same thing on an Ubuntu VM and I can see the contents of the...
I'll let you guess which one does not have a bond, lol -
I'm going to try disabling the untagged VLAN 1 on the member ports of one of the other units and then try another reboot tonight to see if that fixes it.
Alright, just had to reboot the router due to flaky internet, so I took the opportunity to test this theory and sadly, all four boxes again have 1 minute of uptime when I get back online :-/
One of them has a bond but only one link even (for easy switch to dual links when I get around to...
I'm not having a good time with my USB-C GigE adapters. I'm not sure if it's the adapters (probably) or the NUCs, but the adapters keep disappearing entirely. I've got them paired with the onboard in a LACP LAG (not a supported configuration from what I read, but it works) as that's about all...
Asus RT-AC5300 w/ Merlin WRT 384.19
Yeah, but that shouldn't make Debian/Proxmox crash and burn, should it?
Oooo, this sounds like a very good test. I'll give it a try as soon as I can and report back!
Ok, so I'm getting a flood of this same error, and then a Proxmox reboot, on all three of my nodes when I reboot my router?!??
I tried ifupdown2 and the problem continued. I added the sleeps to ifenslave (which helped my bonds pick up the interfaces that were being left out) - and I've...
I just picked up some USB-C to GigE adapters for my NUCs. Trying everything mentioned in this thread didn't seem to be resolving it for me. I then ended up leaving both links up in a LACP LAG (not a supported configuration, but I've had no issues) with my VLAN interfaces bound to the bond, so...
I know this post is kinda old, so it's my own damn fault for just taking something from the internet and adding it to my server to try to shut up a warning message - but I would NOT advise making these changes. Doing so left me with a very broken cluster that took me more than three...