Really? I see this fundamentally differently. Your Raid6 is actually missing some interesting features. Even if the focus there is on "small systems"...
Thanks for the heads-up! I've noticed that myself in the meantime.
It might be interesting to understand why I didn't see any speed difference in my tests:
This was probably due to sshuttle, which in this configuration (standard buffer or no...
I was having the same problem. After editing the IP address from 192.168.1.8 to 192.168.1.10, I was getting an error where I couldn't log in to the web interface, or I could get in but none of the GUI was working. I previously had another PVE server...
Hi all, long time lurker, first time poster.
Please bear with me a bit, I am seeking assistance to regain access to a Proxmox instance - I am a noob as far as Linux and Proxmox go, just playing around with a home lab.
I have 2 old PCs...
I wanted to migrate my OpenClaw LXC to a VM and asked Perplexity for advice. The first method that came up was your script. It took only a few minutes, and the VM was ready and running without any issues. Well done!
A robust Bash toolkit that converts Proxmox LXC containers into fully bootable QEMU/KVM virtual machines — directly on your Proxmox VE host. Includes a companion disk-shrink script and handles disk creation, filesystem copy, kernel/GRUB...
I've recently started testing the upgrade of our v8.4 cluster to v9.1 (patched using the nosub repository today). While troubleshooting issues with our EVPN configuration, I found that the previous method for disabling reverse path filtering...
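The post doesn't show either the old or the new method, but for context, reverse path filtering is normally controlled via sysctl. A minimal sketch of a persistent drop-in (the file name is an assumption; 0 = off, 1 = strict, 2 = loose):

```
# /etc/sysctl.d/90-evpn-rp-filter.conf (illustrative file name)
# Strict rp_filter can drop the asymmetric traffic EVPN setups produce.
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
```

Whether per-interface keys are also needed depends on the specific EVPN configuration.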
And one more addendum - and probably the solution.
I removed the "--no-latency-control" option from the sshuttle command again and restarted the sshuttle connection; since then the job has been running at a speed of about 40 Mbit/s (or...
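For anyone reproducing this, the only change was dropping sshuttle's `--no-latency-control` flag, i.e. re-enabling sshuttle's latency control and its smaller send buffers. A sketch with made-up hostname and subnet (only the flag change comes from the post):

```shell
# Hypothetical remote host and subnet; substitute your own.
# Before (slow in this setup):
#   sshuttle --no-latency-control -r backup@pbs.example.com 10.0.0.0/24
# After (about 40 Mbit/s according to the post):
sshuttle -r backup@pbs.example.com 10.0.0.0/24
```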
So it's not just me with this issue. Well, that's good to know, I guess?
I thought the single hdd in my server was dying because it'd randomly disappear and crash any containers that used the NFS mount for it when there was heavy read/write to...
Addendum:
Via rsync from the same PBS machine (which I also use to back up files offsite), I can cleanly throttle the bandwidth up and down - so it is not caused by any other network restrictions. Rsync and sshuttle...
My solution was to define a new CPU model based on x86-64-v2-AES that also exposes AVX.
New models can be defined cluster-wide here:
/etc/pve/virtual-guest/cpu-models.conf
Add this definition to the /etc/pve/virtual-guest/cpu-models.conf file...
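The poster's actual definition is truncated above, but for illustration, a custom model section in that file might look something like this (model name, flag list, and reported model are all assumptions, not the poster's exact config):

```
# /etc/pve/virtual-guest/cpu-models.conf (sketch only)
cpu-model: x86-64-v2-AES-AVX
    flags +aes;+avx;+avx2
    reported-model kvm64
```

The custom model is then selected in the VM's CPU options as `custom-x86-64-v2-AES-AVX`.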
A few weeks ago we completed the migration of 20-ish VMs (mostly Windows, a few Linux) from ESX/Veeam to Proxmox PVE/PBS using an environment similar to yours (2 HPE ProLiant hosts and an MSA 2060 FC as shared storage). Everything works perfectly. You...
What puzzles me is why HA would reboot the hosts here. If 2 remaining hosts are active and one is missing, then surely the 2 remaining ones must notice that they are active and that one has died.
Mar 06 20:04:28 pve01 corosync[1858]...
To explain the problem here, now that it has been found (400 Bad Request on Raspberry Pi): the Raspberry Pi 5 ships by default with a kernel using a 16k page size, which probably leads to a problem when writing or creating the fixed index, which leads to...
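A quick way to check whether a host is running a 16k-page kernel:

```shell
# Print the kernel's page size in bytes:
# 4096 on most systems, 16384 on the Pi 5's default 16k kernel.
getconf PAGESIZE
```

On a Pi 5 this prints 16384; a common workaround is booting the 4k-page kernel instead (`kernel=kernel8.img` in `config.txt`).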
One of the smaller VMs is off, so I ran rbd sparsify and it worked. The syntax I found was a bit different (rbd sparsify --pool poolname diskname), but it immediately reduced the rbd du usage from 11 GB to 8.8 GB and only took a few seconds...
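For reference, the invocation described above, bracketed by the usage check (pool and image names are placeholders and need a reachable Ceph cluster):

```shell
# Placeholders: replace poolname/diskname with your pool and RBD image.
rbd du --pool poolname diskname        # usage before
rbd sparsify --pool poolname diskname  # deallocate all-zero extents
rbd du --pool poolname diskname        # usage after
```

Running it only while the VM is off, as the poster did, avoids sparsifying an image that is in active use.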