@Dark Angel [gEb] @openaspace
Just had the same issue and resolved it using hints from above. Use Super Grub2 Disk to boot:
https://www.reddit.com/r/Proxmox/comments/vy33ho/stuck_at_grub_rescue_after_an_update_and_reboot/
- boot from a Super Grub2 Disk rescue ISO -> https://www.supergrubdisk.org/...
Okay, so I just figured out my issue and thought process: you need to specify a bridge to specify a VLAN... but I don't want traffic to exit through a physical interface/bridge, since I want the VLAN to be internal only. Do I just create a bridge with no physical interfaces, then, and make it VLAN aware?
thanks! That's what I assumed but it wasn't clear.
Do the bridges impact throughput much vs. doing passthrough? This is "just" gigabit, not 10 GbE or anything.
I have a couple of VMs I want to communicate by themselves on a single LAN segment, using a new VLAN ID.
Will the PVE host act like a VLAN aware switch without having any other external networking for this VLAN?
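For anyone finding this later, here's a rough sketch of the kind of config I mean: a VLAN-aware bridge with no physical ports, so traffic stays on the host. The bridge name vmbr9 is just a placeholder.

```
# /etc/network/interfaces (excerpt) -- sketch only, vmbr9 is an example name
auto vmbr9
iface vmbr9 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

Each VM NIC then attaches to vmbr9 with the VLAN tag set on the NIC, and the host switches the tagged frames between guests internally.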
I'm trying to wrap my head around the CPU instruction sets exposed by QEMU/KVM. The default CPU type of kvm64 seems like a poor choice for any remotely modern CPU, and it now causes issues with RHEL 9 guests:
https://www.qemu.org/docs/master/system/i386/cpu.html...
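For what it's worth, the fix I'm looking at is setting a newer CPU type on the VM. This is a sketch only: x86-64-v2-AES is available on newer QEMU/PVE releases and matches the x86-64-v2 baseline that RHEL 9 requires, unlike the old kvm64 default.

```
# /etc/pve/qemu-server/<vmid>.conf (excerpt) -- sketch; check your QEMU
# version supports this model before using it
cpu: x86-64-v2-AES
```

The same thing can be set from the GUI under the VM's Hardware > Processors dialog.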
Yes it definitely is. I just tried the management IPSET and it worked, using an alias for my management PC IPv6 address. That same alias is used for the node firewall on port 22.
Hi, thanks for the reply. Here it is. I wasn't sure if that was base64 for a hash or something, so I removed it. I also hid my IPv6 prefix, but it's the /64 that I manage from. I do have a firewall added to the node (and it was there when I dumped this).
root@pve:~# pve-firewall compile | grep...
I was reading here when I was deciding to configure IPv6 on my pve server:
https://pve.proxmox.com/wiki/Firewall
And there is this: "If you enable the firewall, traffic to all hosts is blocked by default. Only exceptions is WebGUI(8006) and ssh(22) from your local network.". However, it seems...
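For reference, this is roughly the shape of what I ended up with in /etc/pve/firewall/cluster.fw. The alias name and the IPv6 address are placeholders, not my real values:

```
# /etc/pve/firewall/cluster.fw (excerpt) -- sketch only; names are examples
[ALIASES]
mgmtpc fd00:1234:5678:1::10/128   # my management PC

[IPSET management]
mgmtpc   # the built-in "management" ipset controls GUI/SSH access

[RULES]
IN SSH(ACCEPT) -source mgmtpc -log nolog
```

With the alias in the management ipset, the same address is allowed through for both the WebGUI and the node-level SSH rule.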
Thanks! Yes, I tried doing "modprobe sit" and it loaded right up, and the sit interface commands worked inside the container. I was just a bit nervous using modprobe, as I've done very little poking at the kernel, even for things as simple as this.
For reference, this is the recommended "script" from...
I've been trying to configure a sit tunnel to HE.net in an AlmaLinux 9.1 container. I get the following message:
modprobe: FATAL: Module ipv6 not found in directory /lib/modules/5.15.83-1-pve
add tunnel "sit0" failed: No such device
will loading with "modprobe sit" load the appropriate...
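In case it helps anyone else: since the container shares the host kernel, the module has to be loaded on the PVE host, not in the container. A sketch of what worked for me; the file name under modules-load.d is just a convention:

```
# Run on the PVE host, not in the container
modprobe sit                        # loads the 6in4/sit tunnel module now

# Make it persistent across host reboots (file name is an example):
echo sit > /etc/modules-load.d/sit.conf
```

After that, the sit0 device shows up inside the container and the tunnel commands work.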
Oh... I definitely didn't change it. Yes, that fixes it. I assume this isn't automatic because there could be multiple network interfaces that don't correspond to the management LAN/interface?
I changed my node IP address yesterday, and while poking around today I noticed the datacenter and node network config displays differ. The old address was 192.168.10.180, and the new is 192.168.10.40. I did not reboot yet and I suspect that this will resolve with a reboot. I haven't attempted...
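For anyone else hitting this: the stale address seems to come from /etc/hosts, which isn't touched when you change the interface config. A sketch of the entry I had to update by hand (hostname is an example):

```
# /etc/hosts (excerpt) -- update the node's own entry to the new address
192.168.10.40   pve.localdomain pve
```

Once that line matches the new interface address, the datacenter and node views agree again.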
Just got an email today from the server: if you are using this modification, a recent patch reverts the changes for this fix. If you aren't using the example script to check and restore the fix, you might need to check your server.
oh really? See any documentation I could refer to on this? not doubting, I just want to understand any conditions of it. For my VMs, yes I do have QEMU Guest Agent.
What about LXC containers? I'm pretty new to containers, but I guess they already exist in a consistent state on the host machine...
I'm having trouble with maintaining two backup routines. My original goal was to have two plans: one backs up all VMs/containers nightly using "snapshot" mode so that any users are not directly impacted on a daily basis. I also wanted to run a plan weekly using "stop" mode so that I know I have...
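To make the two plans concrete, this is roughly the shape of what the GUI writes to /etc/pve/jobs.cfg. Job ids, schedules, and the storage name here are examples only:

```
# /etc/pve/jobs.cfg (excerpt) -- sketch; ids/schedules/storage are examples
vzdump: backup-nightly
        schedule 02:00
        all 1
        mode snapshot
        storage local
        enabled 1

vzdump: backup-weekly
        schedule sun 03:00
        all 1
        mode stop
        storage local
        enabled 1
```

The nightly job uses snapshot mode so guests keep running; the weekly one uses stop mode for a known-consistent copy.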