Thanks for the heads-up! I've noticed that myself in the meantime.
It might be interesting to understand why I didn't see any speed difference in my tests:
This was probably due to sshuttle, which in this configuration (standard buffer or no...
I've recently started testing the upgrade of our v8.4 cluster to v9.1 (patched using the nosub repository today). While troubleshooting issues with our EVPN configuration, I found that the previous method for disabling reverse path filtering...
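For reference, the usual way to disable reverse path filtering is via sysctl; a minimal sketch, assuming the standard rp_filter knobs (filename and values are illustrative, not necessarily what the poster had):

```
# /etc/sysctl.d/99-evpn-rpfilter.conf  (hypothetical filename)
# Disable strict reverse path filtering, which can drop
# asymmetrically routed EVPN traffic.
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
```

Apply with `sysctl --system` or a reboot; whether this approach still works after the v9.1 upgrade is exactly what the post is investigating.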
And one more follow-up - and probably the solution:
I removed the "--no-latency-control" option from the sshuttle command again and restarted the sshuttle connection; since then the job has been running at a speed of roughly 40 Mbit/s (or...
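For context, a sketch of the corrected invocation; the user, host, and subnet below are placeholders, not the poster's actual values:

```
# sshuttle with its default latency control enabled
# (i.e. WITHOUT --no-latency-control), which keeps buffers
# small and restored throughput in this report.
sshuttle -r backupuser@pbs.example.com 10.0.0.0/24
```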
So it's not just me with this issue. Well that's good to know I guess?
I thought the single hdd in my server was dying because it'd randomly disappear and crash any containers that used the NFS mount for it when there was heavy read/write to...
Follow-up:
Using rsync from the same PBS machine (which I also use for offsite file backups), I can cleanly throttle the bandwidth up and down - so it is not caused by any other network restrictions. Rsync and sshuttle...
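The rsync throttling test described above can be sketched like this; the paths and host are assumptions for illustration:

```
# Cap rsync at 5000 KiB/s (~40 Mbit/s) to compare
# against the throughput seen over sshuttle.
rsync --bwlimit=5000 -a /backup/ backupuser@offsite.example.com:/backup/
```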
My solution was to define a new CPU model based on x86-64-v2-AES that also exposes AVX.
New models can be defined cluster-wide here:
/etc/pve/virtual-guest/cpu-models.conf
Add this definition to the /etc/pve/virtual-guest/cpu-models.conf file...
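A sketch of what such a definition might look like; the model name and exact flag list here are assumptions, following the custom CPU model syntax used in /etc/pve/virtual-guest/cpu-models.conf:

```
cpu-model: x86-64-v2-AES-AVX
    flags +avx;+avx2
    reported-model x86-64-v2-AES
```

VMs then reference the custom model with a "custom-" prefix, e.g. cpu: custom-x86-64-v2-AES-AVX.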
A few weeks ago we completed migrating around 20 VMs (mostly Windows, a few Linux) from ESX/Veeam to Proxmox PVE/PBS in an environment similar to yours (2 HPE ProLiant hosts and an MSA 2060 FC as shared storage). Everything works perfectly. You...
What puzzles me is why HA would reboot the hosts here. If 2 remaining hosts are active and one is missing, then the 2 remaining ones should notice that they are still active and that one has died.
Mar 06 20:04:28 pve01 corosync[1858]...
To explain the problem here, now that it has been found (400 Bad Request on Raspberry Pi): the Raspberry Pi 5 ships by default with a kernel using a 16k page size. This probably causes a problem when writing or creating the fixed index, which leads to...
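A quick way to confirm which page size a kernel uses, to verify the 16k case described above:

```shell
# Print the kernel's memory page size in bytes.
# The default Raspberry Pi 5 kernel reports 16384;
# most distro kernels report 4096.
getconf PAGESIZE
```

On Raspberry Pi OS, switching to the 4k-page kernel (commonly reported: kernel=kernel8.img in /boot/firmware/config.txt) is a frequently mentioned workaround.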
One of the smaller VMs is off, so I ran rbd sparsify and it worked. The syntax I found was slightly different, rbd sparsify --pool poolname diskname, but it immediately reduced the rbd du usage from 11 GB to 8.8 GB and only took a few seconds...
I have a node with a hardware failure; its resurrection has been deferred repeatedly (almost a year now). Its HA votes are set to 0, and quorum and capacity are OK with the remaining odd number of nodes. The rest of the nodes have been receiving updates as...
Solved!
1) Visit the target PVE.
2) Select Datacenter.
3) Select Permission->API Token.
4) You will then find the API Token that PDM created. Delete it.
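The same removal can also be done from the PVE shell via the API; the user and token id below are placeholders, not the actual ids PDM generated:

```
# Delete the API token that PDM created (hypothetical ids).
pvesh delete /access/users/root@pam/token/pdm-admin
```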
I have 2 new releases. One is proxmox-kernel 6.19.2 (unofficial) — Linux 6.19.2 + OpenZFS 2.4.1, which is based on Ubuntu 6.19.0-6.6. I made it an official release, as The Resolute Raccoon has it as a stable release. I have updated it to have...
Present the advantages of Berliner vs. Pfannkuchen vs. Krapfen
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
True, if both VMs share the same MAC the whole ARP issue disappears, good move. Then forget the arping tip.
One thing though: if, due to a bug or race condition, both VMs do briefly run at the same time (start/shutdown is...
Please check this plan thoroughly. As I wrote before, I'm more at home on the successors (3PAR), but there are the following pitfalls:
The disks use a special formatting (520 instead of 512 bytes/sector)
The disks need special...
The "1 MiB cluster = root cause" story doesn't hold up. Cluster overhead is real, but it amounts to a few percent, not 2+ TiB. The problem remains the controller, which does not act on the UNMAPs; that is independent of the filesystem.
What @Johannes S said about the...