I think I have the solution.
I just capped the MTU on the VM at 1500 and, lo and behold, I can reach all the PVE/PBS hosts with every protocol. So the question remains: what changed half a year ago?
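For whoever finds this thread later, here is a minimal sketch of how I'd double-check the path MTU ceiling from the VM. It assumes Linux iputils ping and uses a placeholder hostname, so adjust it for your own hosts; the -M do flag forbids fragmentation, so anything above the real path MTU fails:

```python
#!/usr/bin/env python3
# Rough path-MTU probe: send non-fragmentable ICMP echoes of decreasing size.
# Assumes Linux iputils ping; "pve01.example.lan" is a placeholder target.
import subprocess

TARGET = "pve01.example.lan"  # placeholder, replace with your own PVE/PBS host

def frame_passes(payload: int) -> bool:
    """True if a single ping with DF set and `payload` data bytes gets a reply."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(payload), TARGET],
        capture_output=True,
    )
    return result.returncode == 0

for mtu in (1500, 1492, 1450, 1400):
    payload = mtu - 28  # 20 bytes IPv4 header + 8 bytes ICMP header
    status = "ok" if frame_passes(payload) else "blocked/too big"
    print(f"MTU {mtu} (payload {payload}): {status}")
```

If 1500 passes but larger frames do not, then something on the path simply never carried more than standard Ethernet frames.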
@garfield2008,
The tcpdump gives it away: the TCP handshake completes cleanly (SYN → SYN-ACK → ACK), but no data transfer ever gets going afterwards; the server sends a FIN after 5 seconds.
That is a classic MTU problem. Take a look at...
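To spell out why the handshake succeeds while the transfer stalls: the handshake packets are tiny, but data segments are MSS-sized and only they hit the too-small hop. A rough sketch of the header arithmetic, using standard IPv4/TCP header sizes with no options; the jumbo-frame value is just an assumed example, not something from your dump:

```python
# Why the handshake passes but bulk data does not:
# handshake packets are far below any plausible MTU, full data segments are not.
IP_HDR, TCP_HDR = 20, 20          # IPv4 and TCP headers without options

def mss(mtu: int) -> int:
    """Largest TCP payload that fits into one frame of the given MTU."""
    return mtu - IP_HDR - TCP_HDR

print(mss(1500))   # 1460 bytes: a full-sized data segment on a 1500 MTU link
print(mss(9000))   # 8960 bytes: what a jumbo-frame sender will try to push
# A bare SYN is roughly 60 bytes on the wire, so it passes a 1500-byte hop
# even when larger data frames are silently dropped.
```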
Installing the test repository provides access to pve-firmware version 3.18-1, built on firmware-linux 20260110, along with the newer proxmox-kernel-6.17.13-1-pve, which may meet your needs.
Hello,
I recently did an in-place upgrade to PVE 9. Everything seems to be working fine, but now the network card seems to be slow. I run pfSense in a VM with a couple of network interfaces passed through. I have not made any changes to the VM...
Strix appears to require a more recent kernel. One user reported that my Proxmox 6.19.1 PVE test kernel works well for them; this version is available for testing at...
The Proxmox kernel is based on Ubuntu's kernel. Since Ubuntu's 6.19 kernel ships with CONFIG_RUST enabled by default, CONFIG_RUST should remain enabled unless the Proxmox team explicitly disables it in their next major release. As a...
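If anyone wants to verify this on the kernel they are actually booting, here is a minimal sketch; it assumes the usual Debian-style /boot/config-&lt;version&gt; file that kernel packages install:

```python
# Check whether the currently booted kernel was built with CONFIG_RUST.
# Assumes a Debian-style /boot/config-$(uname -r) file exists.
import platform
from pathlib import Path

config = Path(f"/boot/config-{platform.release()}")
if not config.exists():
    print(f"{config} not found on this system")
else:
    rust_lines = [line for line in config.read_text().splitlines()
                  if line.startswith("CONFIG_RUST")]
    print("\n".join(rust_lines) or "no CONFIG_RUST options in this config")
```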
I'm glad I could help! Every few days, I check for a newer kernel at the Ubuntu kernel repository on Launchpad, then pull and compile it with the Proxmox magic. If it works, I update my GitHub.
It looks like you have the latest version — so far...
If you're using EC (6,2), that's about 75% storage efficiency. If you are using a replicated rule with the same eight nodes, and assuming each node contributes 1 OSD, then you have only about 12.5% storage efficiency.
12.5% < 75% no?
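To make the arithmetic explicit, here is a small sketch using the usual formulas: the usable fraction of raw capacity is k/(k+m) for an EC profile and 1/size for a replicated pool, so the 12.5% figure corresponds to keeping a copy on all eight nodes:

```python
# Usable fraction of raw capacity for the two pool types discussed above.
def ec_efficiency(k: int, m: int) -> float:
    """Erasure coding: k data chunks out of k+m total chunks are payload."""
    return k / (k + m)

def replica_efficiency(size: int) -> float:
    """Replication: one payload copy out of `size` total copies."""
    return 1 / size

print(f"EC 6+2:        {ec_efficiency(6, 2):.0%}")    # 75%
print(f"EC 2+1:        {ec_efficiency(2, 1):.0%}")    # ~67%
print(f"8x replicated: {replica_efficiency(8):.1%}")  # 12.5%
print(f"3x replicated: {replica_efficiency(3):.1%}")  # ~33.3%, the common default
```

Raw efficiency is only part of the picture, of course; how many failures each layout tolerates is the other half.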
Can't.
Said...
Just wanted to say thanks. I did install the latest 6.19.1 on my Proxmox running AMD Strix Halo (Framework Desktop), and that kernel actually fixed the problem in my Immich LXC container running ONNX models. Before, I was running 6.17 from Proxmox...
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
Suit yourself. This is not a recommended deployment. You are far better served by just having two SEPARATE VMs, each serving all those functions, without any Ceph at all...
But doesn't replication yield lower storage efficiency?
I am currently using EC (2,1) for my "simple" three node Proxmox HA cluster which serves my DNS, Windows AD DC, and AdGuardHome and where the LXC/VM disks reside on the distributed EC Ceph...
OK, OK... talking to myself again, it seems. Well, should anybody be searching for this in the future, maybe they'll find this post helpful (hopefully).
Here's a quick rundown of my testing...
FIRST, and I know this is a slight pain, but check your downloaded...
If you are talking about the VXLAN tunnels themselves or the BGP peers, they simply use the host's routes to reach the remote peer IPs.
So you can add simple routes on your host if needed.
Or do you want PBR specifically for the VXLAN UDP port...
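In case a concrete example of the "simple routes on your host" part helps, here is a minimal sketch, essentially the Python equivalent of a one-off ip route add. It assumes the pyroute2 library and root privileges, and the subnet and gateway are made-up placeholders, not values from this thread:

```python
# Add a static route so the host can reach the remote VXLAN/BGP peer IPs.
# Requires the pyroute2 package and root; all addresses are placeholders.
from pyroute2 import IPRoute

PEER_SUBNET = "10.10.10.0/24"   # placeholder: network holding the remote peer IPs
NEXT_HOP = "192.168.1.254"      # placeholder: gateway that can reach that network

with IPRoute() as ipr:
    ipr.route("add", dst=PEER_SUBNET, gateway=NEXT_HOP)
    print(f"route added: {PEER_SUBNET} via {NEXT_HOP}")
```

On a real host you would normally make the equivalent route persistent (for example in /etc/network/interfaces) rather than adding it from a script.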
AMD microcode is installed.
I guess I can find a keyboard/monitor to go poke at the BIOS, but if those things are set wrong, why doesn't 6.14.8-2 have a problem with it?
Found something else while googling, someone having similar issues in...
The stuff about setting a slightly higher voltage and/or lower frequency in BIOS seems relevant though. It might also pay to look at what c-states are enabled and whether you have the AMD microcode installed.
Other than that I got nuthin'.
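On the microcode point, here is a minimal sketch for checking which revision the CPU is actually running; it just reads /proc/cpuinfo, so it reports what the kernel loaded, not whether the amd64-microcode package is installed:

```python
# Report the microcode revision the kernel loaded, straight from /proc/cpuinfo.
from pathlib import Path

fields = {}
for line in Path("/proc/cpuinfo").read_text().splitlines():
    if ":" not in line:
        continue
    key, value = (part.strip() for part in line.split(":", 1))
    if key in ("model name", "microcode") and key not in fields:
        fields[key] = value   # the first CPU entry is enough

print(fields.get("model name", "unknown CPU"))
print("microcode revision:", fields.get("microcode", "not reported"))
```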
I didn't find that much, but I did discover that the message is cut off. It should be "software wrote 0xE to reset control register 0xCF9".
When you google that, yes, it starts to get more interesting, but most of what I'm finding so far is about...