Unfortunately, that usually doesn't happen with CTs. At least not before completion.
Please don't. Snapshots are no more a substitute for a proper backup than RAID is. I always recommend both. Also take a look at cv4pve-autosnap
Also have a look at the...
I think I have the solution.
I just capped the VM's MTU at 1500 and, lo and behold, I can reach all PVE/PBS hosts with every protocol. So the question remains: what changed half a year ago?
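For anyone hitting the same symptom, here is a rough diagnostic sketch. The host names and the interface name are placeholders, not from this thread; the probe size is the only part that runs as-is:

```shell
# An MTU of 1500 leaves 1472 bytes of ICMP payload:
# 1500 - 20 (IP header) - 8 (ICMP header).
payload=$(( 1500 - 20 - 8 ))
echo "probe size: $payload"

# Probe the path with the don't-fragment bit set (host is a placeholder):
#   ping -c 3 -M do -s "$payload" pve01.example.lan
# If the full-size probe is dropped but a smaller one (e.g. -s 1400)
# goes through, something along the path mishandles full-size frames.

# Workaround: cap the guest NIC at 1500 (interface name assumed):
#   ip link set dev eth0 mtu 1500
```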
Hi there,
we are running a Proxmox PVE Ceph cluster. The current configuration is:
3 nodes
2 x 10 Gbit/s LACP per node to 4 switches (Ceph only)
2 x Intel Xeon E5-2640, 2.6 GHz
192 GB RAM
5 Crucial SATA SSDs (CT2000MX500SSD1) per node, on an HBA
But currently...
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
Suit yourself; this is not a recommended deployment. You are far better served by just having two SEPARATE VMs, each serving all those functions, without any Ceph at all...
@garfield2008,
The tcpdump gives it away: the TCP handshake completes cleanly (SYN → SYN-ACK → ACK), but no data transfer follows; after 5 seconds the server sends a FIN.
That is a classic MTU problem. Take a look at...
Installing the test repository provides access to pve-firmware version 3.18-1, built on firmware-linux 20260110, along with the newer proxmox-kernel-6.17.13-1-pve, which may meet your needs.
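A sketch of enabling that repository, assuming the classic one-line sources format on a PVE 9 / Debian trixie install (the component name `pvetest` and the `proxmox-kernel-6.17` meta-package name are assumptions; verify against the official repository docs):

```shell
# Add the Proxmox test repository (no-subscription-style one-liner):
echo "deb http://download.proxmox.com/debian/pve trixie pvetest" \
  > /etc/apt/sources.list.d/pvetest.list

# Pull the newer firmware and opt-in kernel from it:
apt update
apt install pve-firmware proxmox-kernel-6.17
```

Remember that test-repository packages are not meant for production use.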
Hello,
I recently did an in-place upgrade to PVE 9. Everything seems to be working fine, but now the network card seems slow. I run pfSense in a VM with a couple of network interfaces passed through. I have not made any changes to the VM...
Strix appears to require a more recent kernel. One user reported that my Proxmox 6.19.1 PVE test kernel works well for them; this version is available for testing at...
The Proxmox kernel is based on Ubuntu's kernel. Since Ubuntu enables CONFIG_RUST by default starting with kernel 6.19, CONFIG_RUST should remain enabled unless the Proxmox team explicitly disables it in their next major release. As a...
I'm glad I could help! Every few days, I check for a newer kernel at the Ubuntu kernel repository on Launchpad, then pull and compile it with the Proxmox magic. If it works, I update my GitHub.
It looks like you have the latest version — so far...
If you're using EC (6,2), that's about 75% storage efficiency. If you use a replicated rule with the same eight nodes, each contributing 1 OSD and the pool replicated across all of them, then you get only about 12.5% storage efficiency.
12.5% < 75% no?
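The figures above follow from the usual first-order formulas: EC(k,m) leaves k/(k+m) of raw capacity usable, while an n-way replicated pool leaves 1/n (real clusters lose more to full-ratio headroom and rebalancing). A quick sketch:

```shell
# EC(k,m): k data chunks out of k+m total chunks are payload.
k=6; m=2
echo "EC($k,$m): $(( 100 * k / (k + m) ))% usable"    # 75% usable

# n-way replication: one payload copy out of n stored copies.
n=8
awk -v n="$n" 'BEGIN { printf "%d-way replica: %.1f%% usable\n", n, 100 / n }'
```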
Can't.
Said...
Just wanted to say thanks. I installed the latest 6.19.1 on my Proxmox box running AMD Strix Halo (Framework Desktop), and that kernel actually fixed the problem in my Immich LXC container running ONNX models. Before, I was running 6.17 from Proxmox...
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
suit yourself. this is not a recommended deployment. You are far better served by just having two SEPERATE VMs each serving all those functions without any ceph at all-...
But doesn't replication yield lower storage efficiency?
I am currently using EC (2,1) for my "simple" three-node Proxmox HA cluster, which serves my DNS, Windows AD DC, and AdGuard Home, and where the LXC/VM disks reside on the distributed EC Ceph...