Hi, somehow I still haven't really understood all of this. How do I then reach the server from the outside? I have just set the server up again and am starting from zero. The default configuration is as shown in the picture.
I...
Hi,
a storage ID must be unique and can only have one entry. For node-specific configurations/mappings there are initial patches:
https://bugzilla.proxmox.com/show_bug.cgi?id=7200...
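In the meantime, the existing nodes property in storage.cfg can at least restrict a single storage entry to specific nodes (a sketch only; the storage ID, path, and node names below are made up, and this is not the same as true per-node mappings):

```
dir: backup-local
	path /mnt/backup
	content backup
	nodes pve1,pve2
```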
I agree that adding the route of the listening interface to mynetworks is a sane default. But I think there should be an option to disable this default, since it is completely reasonable to want to configure something else, which is currently...
Hi,
not entirely sure if noserverino also helps in your cause, but I'd give it a try:
https://forum.proxmox.com/threads/smb-cifs-mount-point-stale-file-handle.124105/
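For reference, noserverino is passed as a regular CIFS mount option; a hypothetical example (the server name, share, and username are made up):

```
# Mount the share with noserverino so the client generates its own
# inode numbers instead of trusting the server-provided ones, which
# can help with "stale file handle" errors on some servers.
mount -t cifs //nas.example.com/backups /mnt/backups \
    -o username=backupuser,noserverino,vers=3.0
```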
Yeah, it's not that bad; it is indeed the route that drives it to think it has a /8 network. The code is not wrong, but it fails to select the correct (more specific) route, though that may be by design. In src/PMG/Utils.pm it does a proper route check...
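The classful fallback described here can be illustrated with a small sketch (this is not the actual PMG code; classful_prefix and the addresses are made up for illustration):

```python
import ipaddress

def classful_prefix(addr: str) -> int:
    """Legacy classful guess: derive the prefix length from the first octet."""
    first = int(addr.split(".")[0])
    if first < 128:
        return 8    # class A
    if first < 192:
        return 16   # class B
    return 24       # class C

addr = "10.10.0.5"
# What a classful guesser assumes the local network is:
guessed = ipaddress.ip_network(f"{addr}/{classful_prefix(addr)}", strict=False)
# What is actually configured on the interface:
configured = ipaddress.ip_network("10.10.0.0/16")

print(guessed)                        # 10.0.0.0/8
print(configured.subnet_of(guessed))  # True: the guess contains the real net,
                                      # so it is "not wrong", just far too broad
```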
Hi,
it is a race condition that is fixed in qemu-server >= 9.1.3 with:
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=b82c2578e7a452dd5119ca31b8847f75f22fe842...
Good morning Gilou,
Please find the info below.
storage.cfg:

nfs: NAS_NFS
	export /mnt/sphere/Backups/proxmox
	path /mnt/pve/NAS_NFS
	server 192.168.13.22
	content snippets,rootdir,iso,images,vztmpl,backup,import...
This seems like a different issue than the original one in this thread. In your case the storage types are compatible and the disk migration progresses for a while. Does it always fail at the same offset? Could you share the system logs/journal from both...
Hi,
Very interested in this because SQL Server is used a lot.
If I understand your problem correctly: you have an old VMware server that is faster than your newer server running PVE 9.1?
Can you provide specs for both servers: CPU...
Hi,
in the past we used ha-manager add $VMID --group $groupname to assign a VM to what was then a "rule" (HA Groups). With PVE 9, HA Groups have been removed in favor of affinity rules, which do offer more functionality - but I am...
How is your interface configured still?
EDIT:
I'm guessing it is configured properly, i.e. as a /16 or /24. I can reproduce it instantly whatever the size of the configured network: PMG guesses "class A! What else could it be!" when using 10.x subnets. I'd say...
An enhancement by @Robert Obkircher to warn about this landed in pve-container=6.0.19:
pve-container (6.0.19) trixie; urgency=medium
...
* fix #6897: warn that enabling cgroup nesting may be required for systemd.
While the nesting is greyed...
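If you hit that warning, nesting can be enabled per container via pct (the VMID 101 below is a made-up example; the container typically needs a restart afterwards):

```
# Enable cgroup nesting for container 101 so systemd inside the
# guest can manage its own cgroup hierarchy.
pct set 101 --features nesting=1
```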
I need to check... it may be a "bug" in the way it determines its own net. It's not totally wrong, because 10/8 certainly includes 10.10/16 or whatnot, but it's not exactly clever. I don't remember how this is computed, but I think that is the issue...
We have had this same issue since replacing our Intel-based nodes with AMD ones. Lately we have had unexpected reboots at least weekly on one or more nodes.
For us this always happens during the backup window (lucky?) and we see high IO delay right...