Alright, meeting somewhere in the middle here. My Proxmox install was on a separate drive from the one where I store all of my VM disks, so I have copied all of my configuration files off my old install drive and am currently copying them to the new drive. Hopefully this will allow me to easily get...
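Something along these lines should do it (a sketch only; the old root partition /dev/sdb3 and the mount point are examples, and paths assume a stock Proxmox install):
```
mkdir -p /mnt/oldroot
mount /dev/sdb3 /mnt/oldroot

# The VM/storage configs under /etc/pve live in the pmxcfs database on the root
# filesystem, not as plain files, so it is the database that has to come across:
systemctl stop pve-cluster
cp /var/lib/pve-cluster/config.db /var/lib/pve-cluster/config.db.bak
cp /mnt/oldroot/var/lib/pve-cluster/config.db /var/lib/pve-cluster/
systemctl start pve-cluster

# Plain files (network config, etc.) can be copied directly:
cp /mnt/oldroot/etc/network/interfaces /etc/network/interfaces
```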
Ok, so setting this up from scratch is proving to be an even bigger nightmare than I imagined. I had a decent number of virtual machines (including my PBS server) that run critical infrastructure (routers).
I think I would rather try to fix the old installation. Any idea where to get a copy...
I have started this process. I have a spare boot drive, so I took out my old one and put in the new one.
I installed Proxmox 8.x on it and it boots fine. I've noticed that it is running a different kernel than my upgraded install. Whatever, moving on.
One question:
I don't have any notes on...
I have rebooted quite a few times now and it seems to be relatively random where the boot process stops.
I can boot into recovery mode, if that means anything. The biggest difference I can see is that IOMMU is disabled in recovery mode.
After removing quiet from the Linux kernel arguments, these are the last two lines of output:
mounted sys-kernel-config.mount - kernel configuration file system
vfio_pci: add [8086:150e[ffffffff:ffffffff]] class 0x000000/00000000
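Since recovery mode boots and the main difference there is IOMMU being disabled, one way to narrow this down is to boot once with IOMMU/vfio turned off and see whether the hang goes away. Rough sketch, assuming GRUB and the usual intel_iommu=on setup (this is a test, not a fix):
```
# One-off test: at the GRUB menu press 'e' on the normal entry and append
# to the 'linux' line:  intel_iommu=off module_blacklist=vfio_pci
# If that boots cleanly, the hang is somewhere in the IOMMU/vfio setup.

# To make it stick while digging further (GRUB-booted installs):
nano /etc/default/grub     # drop intel_iommu=on / iommu=pt from GRUB_CMDLINE_LINUX_DEFAULT
update-grub
# or, on systemd-boot / ZFS installs:
proxmox-boot-tool refresh
```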
I have spent a couple hours on this already and haven't made it too far.
I ran through the upgrade procedure and ultimately ended up with an apt error. I figured I would reboot anyway (I knew it was a bad idea). My server would not boot.
There wasn't an entry in GRUB for the new 6.8.8-2-pve...
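If anyone else lands here: a rough sketch of what should finish the interrupted upgrade and get the new kernel into the boot menu, run from a rescue shell or a chroot off the installer ISO (assumes a GRUB-booted install; the kernel metapackage name may differ on your version):
```
dpkg --configure -a             # finish any half-configured packages
apt -f install                  # let apt resolve the broken state
apt install proxmox-kernel-6.8  # kernel metapackage on PVE 8.2 -- adjust if different
update-grub                     # regenerate the GRUB menu entries
update-initramfs -u -k all
# If the system boots via proxmox-boot-tool (ZFS / systemd-boot) instead:
proxmox-boot-tool refresh
```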
My Proxmox server has been backing up numerous virtual machines for a substantial amount of time without issue. About a month ago one of my VMs started to fail backing up. As of right now I don't have much to go on. I have run fsck on the disk partitions that it would work on and nothing came...
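For what it's worth, these are the kinds of checks that can be run against the VM disk itself with the VM stopped (VMID 100 and the paths below are examples, not my actual setup):
```
qemu-img check /var/lib/vz/images/100/vm-100-disk-0.qcow2   # integrity check of a qcow2 image
fsck -n /dev/pve/vm-100-disk-0   # read-only fsck; only works when the volume holds a bare
                                 # filesystem rather than a partition table, hence "the ones it would work on"
```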
I happened to build another i9-12900K machine in the past month. I found the time over the past few days to do another Ubuntu setup on it, this time on bare metal.
If you have a PCIe graphics card installed, the i915 drivers will not load. It took me a while to muster up the desire to crack...
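A quick way to see whether the iGPU actually got the i915 driver (sketch; 00:02.0 is the usual iGPU address, adjust if yours differs):
```
lspci -nnk -s 00:02.0    # should show "Kernel driver in use: i915" once it binds
dmesg | grep -i i915     # probe errors show up here when it refuses to load
ls /dev/dri              # card0 / renderD128 need to exist for VAAPI transcoding
```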
Just checking in to show the success I am having with this setup. I have restarted the server only a couple of times since I set it up, and I have intentionally tried not to restart it. The Plex server now has a solid 30 days of uptime. This server gets used daily by at least 2 users if not more...
I don't blame you guys for doubting. When I had this setup originally it looked as though it was working, and then days later I would be upset by my server crashing. Look back at my posts in this thread and you can see the ups and downs I had.
That being said, my server has yet to crash with this...
I just started 6 streams locally while some other users were already streaming: a few 4K HEVC streams transcoded down to 720p and 1080p, and one 1080p H.264 stream transcoded down to 720p. The remote streams were direct playing at the time. I let these play for 10-20 minutes and saw no issues. I also attached...
If you can return the NUC, get your money back.
I’ll do your stress test to see what’s going on from my end.
Did you compile and install the three drivers I mentioned? They happen to relate to exactly what you think is wrong, VAAPI being one of them.
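Once they are built and installed, this is an easy way to confirm VAAPI is actually usable inside the VM before blaming Plex (package names here are for Ubuntu and may differ on other distros):
```
sudo apt install vainfo intel-gpu-tools
vainfo              # should list the iHD driver and a set of decode/encode profiles
sudo intel_gpu_top  # watch the Video engine light up while a transcode runs
```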
It looks like you have the i915 drivers enabled on your host via your kernel cmdline arguments. The host cannot have these drivers enabled, otherwise you will see issues. There are two GRUB configurations, one for your host and one for your guest. Make sure you have them set up according to my...
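For reference, the host side looks roughly like this (a sketch, not my exact files; the PCI ID 8086:4692 is only an example, use the one lspci -nn reports for your iGPU). The guest keeps its normal i915 setup:
```
# HOST /etc/default/grub -- keep i915 away from the iGPU here, not in the guest:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt module_blacklist=i915"
echo "blacklist i915" > /etc/modprobe.d/blacklist-i915.conf
echo "options vfio-pci ids=8086:4692" > /etc/modprobe.d/vfio.conf   # example ID
update-grub
update-initramfs -u -k all
```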
Both of those setups are different from mine: one is a 13th-gen Raptor Lake, and the other is using a bleeding-edge kernel version (5.19xxx).
No error messages on my VM; running sudo dmesg | grep HUNG yields no results. I also manually searched for any error message at all and only came across an...
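In case it helps, a slightly wider net than just HUNG (sketch; the second command assumes a persistent journal):
```
sudo dmesg -T | grep -iE "hung|error|fail|reset|i915|gpu"
sudo journalctl -k -b -1 -p err   # kernel errors from the previous boot
```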
It’s really not working for you.
I just checked the uptime on my Plex server and it has been running for 18 days with the setup I described above, and hardware transcoding has been functional. Whereas before my updates, with hardware transcoding enabled, the server might run for 15 minutes...
The only special configuration options were passing through the correct PCIe device, OVMF BIOS, Q35 machine type, and display set to none.
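In qm terms that comes out to something like the following (VMID 100, the storage name, and the PCI address are examples; substitute your own):
```
qm set 100 --machine q35
qm set 100 --bios ovmf --efidisk0 local-lvm:1,efitype=4m   # OVMF needs an EFI disk
qm set 100 --hostpci0 0000:00:02.0,pcie=1                  # the passed-through PCIe device
qm set 100 --vga none                                      # display: none
```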
Edit: Follow up, this virtual machine has been running without a reboot since I made the post about it working. It is very stable and hardware transcoding is...
Coming back to report the success I had last night. I let this sit in my bucket of "wait for developers to fix" for quite some time. Last night I mustered up the strength to try again, and it seems as though I have gotten it to work.
Proxmox Host Setup:
I am not sure how much of this is...