I was trying to figure out the cause of high IO Delay on my server and found the fio tool for testing the drive. It worked great: I found the cause of the IO delay was that the drive was only managing 6 IOPS. After a quick reboot, my VM storage was missing. Apparently, fio wiped the drive and...
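For anyone else benchmarking a drive that's already in use, fio can be run in a non-destructive, read-only mode. This is just a sketch; the device name /dev/sdX and the numbers are placeholders to adjust for your setup:

# read-only random-read benchmark; --readonly refuses to issue any writes
fio --readonly --name=iops-test --filename=/dev/sdX \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --direct=1 \
    --runtime=30 --time_based --group_reporting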
Could you also enlighten me on the connection between snapshots and a Windows VM's TPM storage location? For example, I have an NFS share for VM disks. If the TPM storage is located on the share, I can't take a snapshot, but if I move the TPM storage to a local disk, snapshots become available.
I'm looking for recommendations on how you guys have your drives set up. What configuration would I use to get redundant drives for VMs that allow snapshots and have fast transfer rates?
I currently have 4 SSDs in RAIDZ2 for my VMs, and then a couple of other random SSDs for ISOs and data. Some...
My Hero!
This was the culprit. I used apt to update the system. I guess that is not recommended?
Luckily, there was a cached version of ifupdown on the system, so a quick reinstall got me back up and running.
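For reference, a rough sketch of the update and recovery steps involved (assuming the cached package is still under /var/cache/apt/archives; the exact filename will differ):

# on Proxmox, use dist-upgrade / full-upgrade rather than plain "apt upgrade"
apt update
apt dist-upgrade

# reinstall ifupdown from the local apt cache if it was removed by the upgrade
dpkg -i /var/cache/apt/archives/ifupdown_*.deb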
I appreciate the reply. Hopefully you see something in here, because it's not apparent to me.
pveversion -v
proxmox-ve: 7.3-1 (running kernel: 6.1.10-1-pve)
pve-manager: not correctly installed (running version: 7.3-6/723bb6ec)
pve-kernel-6.1: 7.3-4
pve-kernel-helper: 7.3-4
pve-kernel-5.15: 7.3-2...
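If it helps, here's a hedged sketch of how a half-configured pve-manager can usually be repaired once the repositories are reachable (generic apt/dpkg commands, nothing specific to my setup):

# finish any interrupted package configuration
dpkg --configure -a
# pull in missing dependencies and complete the partial upgrade
apt --fix-broken install
# then reinstall the package that reports "not correctly installed"
apt install --reinstall pve-manager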
My network seems to be jammed up after I did an update and rebooted.
ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever...
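In case it points somewhere, a few checks I'd run from the console (just a sketch; vmbr0 is a guess at the bridge name):

# is ifupdown/ifupdown2 still installed after the update?
dpkg -l | grep ifupdown
# does the bridge definition still exist?
cat /etc/network/interfaces
# try bringing the bridge up by hand and check the service
ifup vmbr0
systemctl status networking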
You were right. Originally, the system was booting with GRUB and then started using systemd-boot after the update.
Systemd-boot
The kernel command line needs to be placed as one line in /etc/kernel/cmdline. To apply your changes, run proxmox-boot-tool refresh, which sets it as the option line for...
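As a concrete sketch of that procedure (the root= value below is just an example taken from a ZFS-root install; adjust to your own):

# check which bootloader proxmox-boot-tool is managing
proxmox-boot-tool status
# put the whole kernel command line on a single line in /etc/kernel/cmdline, e.g.:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs
# then write it out to the ESP boot entries
proxmox-boot-tool refresh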
Does systemd-boot need to be installed separately?
I don't seem to have a current .conf file in the /boot/loader/entries/ directory.
EDIT:
It's installed, but I don't have a loader folder within /boot.
systemd is already the newest version (247.3-7+deb11u1)
Thanks for the reply!
VT-d is on.
cat /proc/cmdline says:
initrd=\EFI\proxmox\5.19.7-1-pve\initrd.img-5.19.7-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
I did add it to /etc/default/grub though.
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`...
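Since my /proc/cmdline starts with initrd=\EFI\proxmox\..., the box is apparently booting through proxmox-boot-tool/systemd-boot, so the edit in /etc/default/grub would never take effect. A sketch of where the parameter would go instead (assuming the parameter in question is intel_iommu=on):

# append the IOMMU parameter to the single line in /etc/kernel/cmdline, e.g.:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
# then write the updated entries and reboot
proxmox-boot-tool refresh
# after the reboot, verify it took effect:
cat /proc/cmdline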
I've had GPU passthrough working on this system with PVE 7.2.3 + the 5.13 kernel. When I updated to the 5.15 kernel, I couldn't get passthrough working without a script posted in the forum.
I just updated to PVE 5.2.11 and GPU passthrough stopped working altogether. When I try to start the VM, I get...
As the title says, I'm unable to reach the PVE GUI after I add a PCI card to my system. I believe it has to do with my NIC changing PCI addresses, but I'm unsure what to do about it.
When I run lspci -v without the PCI card, I get:
03:00.0 Ethernet controller: Intel Corporation Ethernet Controller...
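If the new card shifts the NIC to a different PCI address, the predictable interface name changes with it, and the bridge in /etc/network/interfaces ends up pointing at a name that no longer exists. A rough sketch of the fix from the local console (the names enp3s0/enp4s0 and vmbr0 are placeholders):

# see what the NIC is called now
ip link
# update the bridge definition to match, e.g. in /etc/network/interfaces:
#   auto vmbr0
#   iface vmbr0 inet static
#       bridge-ports enp4s0
# then reload the network configuration
ifreload -a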
I just did a test with XCP-ng and I was able to successfully virtualize this card in Windows 10. Would that point to a BIOS setting, or maybe a memory configuration issue in the Proxmox VM?