I just need to show up with a workable path forward.
They don't want to replace the SAN, but they are willing to add disks to it.
We have some on the shelf, from other servers; I still need to check those.
So I'm thinking of some kind of...
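Before reusing disks that have been sitting on a shelf, a quick health check may be worth the time; a minimal sketch, assuming a candidate disk shows up as /dev/sdb (hypothetical device name):

# full SMART report: check reallocated sectors, pending sectors, power-on hours
smartctl -a /dev/sdb
# optionally run an extended self-test before trusting the disk
smartctl -t long /dev/sdb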
Thanks a lot for your help, I got it running again.
Please don't ask where the error was, I don't know.
But I'm very glad it's running again.
Thanks again for your support.
Hi @all,
I get a blank screen when visiting the WebUI (user, not admin). The WebUI is running behind an nginx reverse proxy.
ii pmg-api 9.0.6 all Proxmox Mailgateway API Server...
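In case a comparison helps: a minimal nginx reverse-proxy sketch, assuming the PMG web UI listens on its default port 8006 on the same host (host, port, and surrounding server block are assumptions to adapt). A missing proxy_http_version/Upgrade header pair is a common cause of blank pages behind a proxy:

location / {
    proxy_pass https://127.0.0.1:8006;
    proxy_http_version 1.1;                   # needed for websocket upgrades
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}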
Really? I see that fundamentally differently. Your RAID6 is in fact missing a few interesting features. Even if the focus there is on "small systems"...
I still have issues with SATA links since the upgrade to 6.17.
I already replaced SATA cables and tried different disks. Same issue.
Could this be a kernel issue?
https://bugzilla.kernel.org/show_bug.cgi?id=220693
[ 4744.506222] ata1.00...
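If the regression tracked in that report is the culprit, one common stopgap (assuming the previous kernel series is still installed; the version below is only an example) is to boot and pin the last known-good kernel until a fix lands:

# list installed kernels, then pin the last one that behaved
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.14.8-2-pve
reboot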
If you are still struggling with the direct import wizard or encountering "failed to stat" errors, you might want to try a more traditional but highly robust method.
The most reliable approach, especially for very large VMs or when the network...
Before committing, I would first run:
qm importovf <new_vmid> <path_to_ovf_file> <target_storage> --dryrun
to check that the OVF manifest is correctly populated. This is advisable as different systems produce different manifests.
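If the wizard keeps failing, the manual route can look roughly like this; file names, VM ID, and storage are placeholders, and this assumes an OVA export plus a target VM that already exists:

# an OVA is just a tar archive of the OVF descriptor, manifest, and disk images
tar -xvf appliance.ova
# attach the extracted disk to VM 120; it then appears as an unused disk in the Hardware tab
qm importdisk 120 appliance-disk1.vmdk local-lvm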
I got it to work, thank you. I switched from vmbr1 to vmbr2. They both have the same configuration, even on the switch side. Not sure why vmbr1 didn't work, though.
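For anyone wanting to compare two seemingly identical bridges before swapping them, a quick diagnostic sketch (bridge names as in the post; differences in enslaved ports, VLAN filtering, or carrier state are the usual suspects):

# detailed state of each bridge, including VLAN-aware settings
ip -d link show vmbr1
ip -d link show vmbr2
# enslaved ports and per-port VLAN membership
bridge link show
bridge vlan show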
So basically, in this scenario you always need an odd number of votes, i.e. 3, 5, 7 and so on, and one of them (a qdevice is resource-friendly here and can run more or less anywhere, including in the cloud) absolutely has to...
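For reference, the usual two-step qdevice setup, assuming a Debian-based external host reachable at 192.0.2.10 (example address):

# on the external qdevice host
apt install corosync-qnetd
# on every cluster node
apt install corosync-qdevice
# then, from one cluster node
pvecm qdevice setup 192.0.2.10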
My issue was that I had upgraded the driver on the host, which worked perfectly fine, but when creating the LXC I was installing an old driver, which created a driver mismatch. What I did was not install the driver in the LXC, and then the...
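The post doesn't name the driver, but if it is, for example, the NVIDIA stack, the usual pattern in a container is to install only the user-space libraries, since the container shares the host kernel and must not load its own module (installer file name and version are hypothetical):

# inside the container: user-space libraries only, no kernel module build
./NVIDIA-Linux-x86_64-580.65.06.run --no-kernel-module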
@TheMat556 Were you able to resolve your issue?
I suspect this might be related to a problem with the 6.17 kernel:
https://bugzilla.kernel.org/show_bug.cgi?id=220693
https://bbs.archlinux.org/viewtopic.php?id=310008
Perhaps the Proxmox team...
577 PGs.
PVE Datacenter "Ceph" view usage shows: 4.48 TiB of 18.38 TiB
Each node's storage entry shows: Usage: 26.41% (1.63 TB of 6.18 TB)
(which is 1/3 of the Datacenter view)
# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW...
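One plausible reading, assuming a replicated pool with size=3: the Datacenter "Ceph" view reports raw cluster totals, while each node's storage entry reports the usable, post-replication view, i.e. roughly raw divided by the replica count:

4.48 TiB raw used / 3 replicas ≈ 1.49 TiB ≈ 1.64 TB   # matches the 1.63 TB per-node figure

The capacity figure divides less cleanly because the usable total is derived from Ceph's MAX AVAIL, which also accounts for full ratios and OSD fill imbalance.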
That is exactly what I see.
1 running backup per node in the cluster.
I have 3 nodes, so I get 3 simultaneous backups.
If you have only 1 node, you are stuck at 1 backup at a time, though.
That's a very fair point. While adding a full PVE node might offer the minor convenience of seeing the arbiter's status directly in the WebUI, the administrative overhead and potential complexities you mentioned—especially regarding storage...
Another thing to consider: as soon as you add the qdevice, the cluster members can log in as root on the qdevice via SSH without additional authentication. So you really shouldn't use the qdevice VM for anything else. This is especially important...
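A quick way to inspect that trust relationship on the qdevice host, assuming the standard setup placed the cluster nodes' keys there:

# on the qdevice host: the cluster nodes' root keys end up here
cat /root/.ssh/authorized_keys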
Thank you so much for sharing your experience—this is truly valuable. It helps me better understand the lower limits of what a Q-device requires to function effectively.
Your insights further confirm how flexible the Q-device setup can be...