When you had Plex running inside of the OMV VM, Plex was effectively accessing the local drive(s).
Now, to access those drives from outside the VM, you will have to share the disks holding the library via SMB or NFS, so you'll need to...
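If it helps, here's a minimal sketch of the NFS route, assuming the library lives at /srv/media inside the OMV VM and the Plex host sits on 192.168.1.0/24 (both placeholders; OMV also has its own UI for this):

```
# Inside the OMV VM: export the library directory over NFS
apt install nfs-kernel-server
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On the machine now running Plex: mount the export
apt install nfs-common
mkdir -p /mnt/media
mount -t nfs 192.168.1.50:/srv/media /mnt/media
```

Once it works, add the mount to /etc/fstab so it survives reboots.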
This seems to be a general error.
I had PDM working with my cluster until the last update, then I got the same API error.
Strangely enough, the issue does not affect the two PBS instances which the PDM is also monitoring.
It's an Aoostar WTR MAX NAS system. It's been running for 4 months now; getting the GPU passthrough working was a pain, but there have been no other problems.
Sorry, I can't advise on that, I have 7 nodes in my Proxmox cluster, so it's difficult to separate the power...
You don't need to pass through the NIC on Proxmox; you can set up a bridge NIC which the TrueNAS VM can use, and, if required, a second bridge using the onboard NIC for the other traffic.
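As a rough sketch, the bridges live in /etc/network/interfaces on the Proxmox host (enp1s0 / eno1 and the addresses are placeholders for your actual NICs):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

# optional second bridge on the onboard NIC for the other traffic
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

The TrueNAS VM then just gets a virtio NIC attached to vmbr0 (and a second one on vmbr1 if needed).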
I have a similar setup as one of the nodes in my Proxmox home lab. Mine has an AMD Ryzen 7 PRO 8845HS w/ Radeon 780M Graphics & 96GB RAM
Currently it runs a Windows 11 VM with the iGPU passed through, really only used for Handbrake transcoding...
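Not my whole conf, but the passthrough-relevant lines look roughly like this (the PCI address is a placeholder; find yours with lspci):

```
# excerpt from /etc/pve/qemu-server/<vmid>.conf
bios: ovmf
machine: q35
cpu: host
hostpci0: 0000:0c:00.0,pcie=1
```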
Most of the cluster requirements are there for a reason. However, you might be interested in PDM [1] if you want a common management plane for multiple PVE instances without clustering them.
[1]...
Maybe you could try running the 'Pulse' monitoring application; it can be run in an LXC container and provides an at-a-glance wealth of information about the Proxmox environment, including Docker, Proxmox Backup, etc.
The installation script is...
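If you'd rather set the container up by hand instead, any small Debian LXC will host it; a rough sketch (VMID, template name and storage are placeholders):

```
pct create 120 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname pulse --memory 512 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:4
pct start 120
```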
The mediated iGPU is virtualized. It can be used for rendering and graphics acceleration, but as far as I am aware it won't be able to drive the physical HDMI or DisplayPort outputs.
What are you seeing when a monitor is connected...
As I understand it, mediated iGPU doesn't provide any video output.
But I could be wrong about that as I've never got the mediated GPU to function correctly.
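For what it's worth, on Intel parts the mediated (GVT-g) slice is attached via an mdev entry in the VM conf; purely illustrative, as the address and type name vary by iGPU generation:

```
# list the mdev types your iGPU actually offers
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

# then in /etc/pve/qemu-server/<vmid>.conf
hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4
```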
PBS can run in an x86 VM on QNAP; I have installed it on a TS-453 Pro unit before.
The problem is that running it on HDDs will make the system extremely sluggish (PBS is very I/O intensive).
If you can upgrade the RAM to 4GB and install an SSD for...
I think I see the default picture. Basically, the original setup I had by default had the Proxmox root drive on the local 'partition', and any VMs or containers could have been set up on the local-thin 'partition'. Then I could have set up my...
I'm no expert, so maybe others will chip in with comments.
I have a 7-node cluster, and all of the nodes use a split local / lvm-thin layout on the boot SSD. The issue of running VMs/CTs on the boot drive is not so important in a home lab. One of my...
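For reference, the stock split shows up in /etc/pve/storage.cfg roughly like this (content lines can differ slightly between versions):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```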
You seem to be suffering from the QEMU 10.0.2 issue (part of the Proxmox 9 upgrade) which is affecting a number of device passthrough setups.
See this thread for more details; it discusses GPU passthrough, but it's likely the same problem. In...
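The QEMU side ships as the pve-qemu-kvm package, so a quick check (assuming a fixed build has since been packaged) would be:

```
# see which QEMU build this node is running
pveversion -v | grep pve-qemu

# check whether a newer package is available
apt update
apt list --upgradable | grep pve-qemu
```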
The description of your problem is not at all clear.
You don't explain what the 1TB drive is (HDD or SSD), or how you connected it to Proxmox.
Can you see 2 individual drives under the 'Proxmox node' -> 'Disks' menu?
What can be stored on the...
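Output from the node's shell would also help; for example:

```
# list the block devices the node can see
lsblk -o NAME,SIZE,TYPE,MODEL

# show the configured Proxmox storages and their usage
pvesm status
```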
This is a place where assistance is provided on a voluntary basis. I have provided what advice I can, but I cannot help you with something I have no experience with.
I hope that you'll bear that in mind and adjust your attitude.
"Could you please share the conf file for the VM?"
My conf file is specific to a Windows VM. If you follow it, you are likely to confuse yourself and go around in circles.
"Did you use that hook script? If yes did it work for linux VM...
@DerekG - Thank you for your suggestions! It seems that the original .enc file was lost with the host where that VM was initially backed up. The VM was moved to another PVE host in the cluster and the former PVE host was removed...
@GioTr,
Happy to hear that you eventually got the passthrough working there.
Please mark this thread as 'Solved' so that others can find the solution in these threads.