I'm facing a similar, if not identical, problem here, except with newer versions. I figured I'd not necro the old thread, but if the moderation team prefers, please merge as you see fit.
Here's my pveversion output:
# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)...
I've followed the tutorial on the wiki and I'm still getting nothing but Code 43.
I've tried OVMF with both PCI and PCIe passthrough; neither worked.
I've also tried SeaBIOS with both PCI and PCIe passthrough; neither worked.
For each of the above...
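For reference, a minimal sketch of the passthrough settings the wiki describes, assuming a hypothetical VMID 100 and a GPU at PCI address 01:00.0 (both placeholders — adjust to your setup):

```shell
# Hedged sketch: configure GPU passthrough for a hypothetical VM 100.
# pcie=1 requires the q35 machine type; OVMF also needs an EFI disk.
qm set 100 -machine q35
qm set 100 -bios ovmf
qm set 100 -hostpci0 01:00.0,pcie=1,x-vga=1
```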
I've recently upgraded from 5.3 to 5.4 using:
sudo apt-get update
sudo apt-get upgrade
After rebooting, the PVE manager isn't coming back online, though the PVE daemon seems to be running.
# service pvedaemon status
● pvedaemon.service - PVE API Daemon
Loaded: loaded...
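One likely culprit, per the Proxmox upgrade guidance: plain `apt-get upgrade` can hold back packages and leave a half-upgraded system. A sketch of the sequence the wiki recommends instead (run as root):

```shell
# On Proxmox, use dist-upgrade between versions; plain "upgrade"
# may refuse to install packages with changed dependencies.
apt-get update
apt-get dist-upgrade
```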
I tried to run:
apt-get upgrade
It fails on these packages:
Errors were encountered while processing:
pve-firewall
qemu-server
pve-manager
proxmox-ve
pve-ha-manager
pve-container
When I try to run the debug command, I get this:
$ journalctl -xn
-- Logs begin at Tue 2015-12-08...
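A common recovery path for half-configured packages, sketched here as a sequence to try as root (whether it applies depends on what the logs actually show):

```shell
# Finish any interrupted package configuration, then repair
# broken dependencies before retrying the upgrade.
dpkg --configure -a
apt-get -f install
apt-get dist-upgrade
# Inspect the service logs if pvedaemon still misbehaves:
journalctl -xe -u pvedaemon
```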
I have a web application that is about to see some traffic. Here are a few approaches I've thought of, but I don't know which one is best:
Multiple VMs each with own files, with a Varnish front-end load balancing them
Multiple VMs with a shared NFS on separate VM, with a Varnish front-end load...
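For the Varnish front-end in either layout, the load balancing itself is a small amount of VCL. A sketch, assuming two hypothetical backend VMs at 10.0.0.11 and 10.0.0.12:

```vcl
vcl 4.0;
import directors;

# Hypothetical backend VMs; adjust hosts/ports to your setup.
backend web1 { .host = "10.0.0.11"; .port = "80"; }
backend web2 { .host = "10.0.0.12"; .port = "80"; }

sub vcl_init {
    # Round-robin director spreading requests across both VMs.
    new lb = directors.round_robin();
    lb.add_backend(web1);
    lb.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = lb.backend();
}
```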
Whenever I run fdisk -l, I get that message on the host node. Here are the outputs; I've bolded the parts I'm not sure about:
root@hostnode:/var/lib/vz/images# fdisk -l
Disk /dev/sda: 584.7 GB, 584652423168 bytes
255 heads, 63 sectors/track, 71079 cylinders
Units = cylinders of 16065 * 512 =...
I use DRAC's remote console (vKVM) to manage my server, as it is thousands of kilometres away in a datacenter. Before installing Proxmox, the console resolution was handled by the OS's X11 configuration. After installing Proxmox, the window is too tall for my monitor (MacBook, 1280x800, but top...
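Since the Proxmox host console is a kernel framebuffer rather than X11, one way to cap its resolution is via GRUB. A sketch of /etc/default/grub settings (the 1024x768 mode is an assumption chosen to fit a 1280x800 display):

```
# /etc/default/grub — sketch, not a verified fix
GRUB_GFXMODE=1024x768
GRUB_GFXPAYLOAD_LINUX=keep
```

After editing, run `update-grub` and reboot for the change to take effect.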
After tanking my way through the physical-to-virtual conversion, I'm down to the final stretch of getting the VMs connected properly. However, banging my head against it for almost an entire day has made me realize how utterly useless I am at virtual networking without the pretty GUIs...
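For the GUI-less case, a typical Proxmox bridge setup lives in /etc/network/interfaces. A sketch, where the NIC name and all addresses are assumptions to adapt:

```
# /etc/network/interfaces — hypothetical bridged setup
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0
```

VMs attached to vmbr0 then sit on the same L2 segment as the host's physical NIC.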
I've been fighting with this for almost 12 hours now, and the server is still not accessible. Much of the time was lost dd'ing from the backup disk to a .raw image (even with bs=16M, I was only getting 32 MB/s on a 150G device), as well as converting that .raw image to a compressed qcow2...
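One way to avoid paying for both the dd and the conversion pass: qemu-img can read a block device directly, so the intermediate .raw file can be skipped entirely. A sketch (the device node and target path are assumptions):

```shell
# Convert straight from the backup disk to a compressed qcow2,
# skipping the intermediate raw image (-p shows progress).
qemu-img convert -p -c -O qcow2 /dev/sdb /var/lib/vz/images/100/vm-100-disk-0.qcow2
```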
First post, not even sure where to start :confused:
I have 5 drives in hardware RAID 6 right now. I ordered a 6th drive of the same capacity today, with the intention of doing a P2V conversion. Here are the steps I'm planning for the process; can you please point things out if I'm messing...