I want to make the default storage layout of Proxmox VE a little clearer to you. It depends on how you configured your hard drive setup during the installation, but by default Proxmox VE would create the following on your NVMe drive:
- a ~1 MB BIOS boot partition,
- a 512 to 1024 MB EFI system partition (ESP)...
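To see how the installer actually laid out a given disk, you could inspect the partition table with lsblk, for example (the device name /dev/nvme0n1 is an assumption here; adjust it to your boot disk):

```shell
# Show the partitions, their sizes, and filesystems on the boot disk;
# /dev/nvme0n1 is a placeholder -- adjust to your actual device.
# Falls back to listing all block devices if that device is absent.
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/nvme0n1 2>/dev/null \
    || lsblk -o NAME,SIZE,TYPE,FSTYPE
```

The small BIOS boot partition and the ESP should show up as the first two entries, followed by the partition holding your root filesystem or LVM/ZFS pool.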
Thank you for the information on the issue. Are you currently experiencing the issue? I couldn't tell from the logs, since your cluster is quorate.
The only thing that stands out is the "Temporary failure in name resolution", which probably occurs while trying to reach your PBS server. That is...
It seems like you are experiencing connectivity issues between your hosts, and it would be interesting to know whether you have set up a separate cluster network for corosync. Could you check the status of your cluster with pvecm status when this happens and post the syslog from journalctl -b -u pvestatd -u...
In general, Proxmox is geared more toward desktops and servers, but you could try to find a BIOS setting that only turns off the screen when the laptop lid is closed, without putting the laptop itself into sleep/hibernate. Another option...
This seems like there are issues with the network card driver. It would be helpful to know what kernel modules and what NIC you have (e.g. with lspci -nnk | grep -A2 Ethernet) and what kernel version you are running (uname -r).
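Put together, the two checks mentioned above could look like this (a sketch; lspci comes from the pciutils package and may need to be installed first):

```shell
# Kernel version currently running on the host
uname -r

# NIC model, PCI IDs, and the kernel driver bound to it
# (requires the pciutils package; guarded in case it is missing)
command -v lspci >/dev/null \
    && lspci -nnk | grep -A2 Ethernet \
    || echo "lspci not available -- install pciutils"
```

The "Kernel driver in use" line in the lspci output tells you which module is actually bound to the NIC, which you can then compare against known-problematic driver/kernel combinations.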
Welcome to the Proxmox forum, Martin!
I assume that you have connected your notebook via Ethernet to the same router through which you later want to access it over WLAN from another PC. The two network cards (NICs) you listed are each for...
That seems strange indeed. Have you checked that the VirtIO drivers and the guest tools have been installed correctly? Does it say anywhere in your VM summary that "guest agent not running" (usually at the IP addresses section)? There's also a Windows 11/2022 best practices section in the...
You can check whether the directory /sys/firmware/efi exists on the system to determine whether it was booted with UEFI or in legacy mode. cat /sys/firmware/efi/fw_platform_size should output 64 in the console on the PVE host. According to your mainboard's manual, the options to change that would be in...
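As a one-liner on the PVE host, the check could be sketched like this:

```shell
# Report the firmware boot mode: the efi directory only exists
# when the kernel was started via UEFI.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI ($(cat /sys/firmware/efi/fw_platform_size)-bit firmware)"
else
    echo "Legacy BIOS"
fi
```

On a 64-bit UEFI boot this prints "UEFI (64-bit firmware)"; on a legacy/CSM boot the directory is absent and it prints "Legacy BIOS".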
No, the restore functionality will recreate the VM from the backup you provided, either reusing the existing VM or creating a new one, depending on whether you changed the VMID when restoring. That is, it will not do any checks about the differences between the current state of a VM and a backup of it, as the...
Hey there!
To answer your first question:
Have you installed the QEMU guest agent on the Windows VMs and enabled memory ballooning? If not, neither Proxmox VE nor QEMU has any knowledge of how much RAM the guest is actually using, so the guest agent is needed for proper...
Could you provide us with more information if there are any issues? Does pveceph status/ceph -s report any warnings or errors? Otherwise this log message could have a variety of causes, e.g. network issues, firewall settings, mismatched kernel/Ceph versions across nodes, etc.
Edit: You could also look at...
Are there any hardware differences between those servers? Which mainboard do you use on the testing server? Have you set up your BIOS to use UEFI instead of legacy mode? Otherwise, you could try some known workarounds for this issue from here [1].
[1]...
Another idea that just crossed my mind: have you tried restarting the ESXi hosts, if that's possible? If not, have you tried restarting the management agents on the ESXi hosts as described in [1]?
[1]...
Thank you for your thorough description of your problem. Unfortunately I could not reproduce it (with respect to the number of VMs on ESXi) on a local setup. I would be interested whether there is anything of interest in your syslog for the ESXi FUSE mount point (or PVE storage and...
When a guest uses GPU passthrough (or any PCI passthrough, for that matter), its main memory is pinned for the IOMMU group that was assigned to it, so that the virtual machine can communicate directly with the device as if it were physically connected. But since you use a P4-1Q vGPU profile it...
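To see which devices share an IOMMU group on the host (assuming the IOMMU is enabled in the BIOS/UEFI and on the kernel command line), a commonly used check is:

```shell
# List every device symlink per IOMMU group. An empty or missing
# directory usually means the IOMMU is disabled in BIOS/UEFI or the
# intel_iommu=on / amd_iommu=on kernel parameter is missing.
if [ -d /sys/kernel/iommu_groups ]; then
    find /sys/kernel/iommu_groups/ -type l | sort
else
    echo "No IOMMU groups found (IOMMU disabled?)"
fi
```

Devices that sit in the same group can only be passed through together, which is why the group layout matters for passthrough setups.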
Do you have physical access to the server itself? It would also be helpful to know more about the network configuration, as it seems the network was misconfigured from the start, given that Server 2 could not reach Server 4.
ifup and ifdown (de)configure network interfaces, which are defined in /etc/network/interfaces, only temporarily. If vmbr1 is the network interface that carries your SSH connection, then the connection was cut as soon as the interface was deactivated by the ifdown vmbr1 command. It's advisable to do changes...
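For a persistent change, the bridge would instead be defined in /etc/network/interfaces; a minimal sketch of such a stanza (the bridge port enp1s0 and the addresses are placeholders, adjust them to your setup):

```text
# /etc/network/interfaces -- example vmbr1 stanza (names/addresses are placeholders)
auto vmbr1
iface vmbr1 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
```

With ifupdown2, which is the default on recent Proxmox VE releases, running ifreload -a afterwards applies such changes without tearing down unrelated interfaces; the "Apply Configuration" button in the GUI does the same.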
Hello Esa!
If I have understood you correctly, you have created two backups of a virtual machine with Proxmox Backup Server and then restored said VM to the first backup, is that correct? If so, then the behavior is as expected, as restoring a VM means that you want to get back a specific state...
That is correct; as you already recognized, the problem lies in compiling the network card driver during the installation of the new Proxmox kernel package. I gathered from your previous post that at some point you installed the r8168-dkms package, which...