Hi there,
I have a 3-node NUC10 cluster. I've had a hardware failure on the first node, and my plan is to replace it. All CTs and VMs have been moved onto the other nodes using backups, but I do not have a backup of the failed Proxmox node itself. My thinking was to just re-install Proxmox on...
I followed the steps I outlined in Post#5 of this thread. I believe the error shown when you run "vainfo" on the Proxmox host means vainfo cannot work with Quick Sync on these CPUs; Proxmox is still able to pass the Quick Sync capabilities through to your LXC or VM. I have followed those steps on...
All sorted. Easier than I expected. I found the command below failed because each node's pool has only a single disk, so the ZFS volume (called zfs-core) was no longer available on the cluster node in question. So I did the following:
1. Shutdown and replace the failed SSD/HDD. Start the Cluster node...
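The disk-replacement steps above can be sketched roughly as below. The pool name zfs-core comes from the original setup; the /dev/sdb device path is an assumption and should be verified before running anything destructive:

```shell
# Identify the replacement disk first (/dev/sdb below is an assumption)
lsblk -o NAME,SIZE,MODEL

# Re-create the single-disk pool under the same name the cluster storage expects
zpool create -f zfs-core /dev/sdb

# Confirm the pool is online; the "zfs-core" storage should reappear on the node
zpool status zfs-core
```

With a single-disk pool there is nothing to resilver, so re-creating the pool (rather than `zpool replace`) is the only option; the VM/CT data then has to come back from backups.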
Hi there,
I'm running Proxmox 7 across a bunch of Intel NUCs, each with 2 drives installed. The first drive is an NVMe hosting the Proxmox install itself. The second is a SATA SSD configured as a ZFS volume called zfs-core. This is the ZFS volume configuration across all nodes...
ok - it looks like the NUC10 GPU is not supported by "vainfo". I had stopped work at this point, believing it meant nothing else was going to work. @n1nj4888 - following your point, I pushed on anyway (using the steps below), and to my surprise - I'm getting GPU passthrough! So it...
Hi @Alwin. I do intend to run graphics intensive applications (LXC). I need to process H.265 video streams which can be offloaded to the Intel GPU. I'm not running an X11 desktop, but I do need to be able to offload some processes to the GPU.
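Once the GPU is visible inside the container, the H.265 offload can be exercised with an ffmpeg VAAPI decode; the input file name here is a placeholder, and /dev/dri/renderD128 is the usual (but not guaranteed) render node:

```shell
# Decode an H.265 stream on the Intel GPU via VAAPI and discard the output
# (input.mkv is a placeholder; check ls /dev/dri for the actual render node)
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -i input.mkv -f null -
```

If this runs without falling back to software decode, the Quick Sync path is working regardless of what vainfo reports.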
I'm running Proxmox VE 6.3-3. Both this link...
Hi folks,
I've one last niggle with my Proxmox setup, and I'm hoping someone here can guide me to a solution. I have a cluster of 3 x NUC10 boxes; these are Frost Canyon CPUs with Intel UHD Graphics. I want to pass the Intel GPU down to one of my LXC containers. Proxmox is running the latest...
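For anyone hitting the same wall: passing the Intel GPU to an LXC container usually comes down to handing /dev/dri through in the container config. A minimal sketch, assuming container ID 100 (an assumption - substitute your own) and the usual DRM device major number (verify with `ls -l /dev/dri`):

```
# /etc/pve/lxc/100.conf -- container ID 100 is an assumption
# Allow the container to use the DRM character devices (major 226)
lxc.cgroup2.devices.allow: c 226:* rwm
# Bind-mount the host's /dev/dri nodes into the container
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

After a container restart, /dev/dri should be visible inside the guest.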
Hi folks. Is anyone running a NUC10 who can verify they have vainfo running OK? I've a clean Proxmox install, and have run the following two commands:
apt-get install i965-va-driver
apt-get install vainfo
Once I run vainfo to verify the GPU support is enabled on the host, below is what I get...
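Independent of what vainfo reports, it's worth confirming the kernel exposes the GPU at all; the i915 driver and the /dev/dri nodes must be present on the host before any container passthrough can work:

```shell
# Check the Intel i915 kernel driver is loaded on the host
lsmod | grep -i i915

# List the DRM nodes - these are what get passed down to a container
ls -l /dev/dri
```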
I take that back. Whilst it looks like everything is OK, it's still not. On the 'old' primary node I have an OSD which is orphaned, and I can't find a way to remove it. On the other nodes, within /var/lib/ceph/osd/ I see each of the nodes listed. Whereas on the 'old' primary node, it only shows...
ok - just to follow up on this. I have managed to bring ceph back to a fully working state without a re-install. As simple as it sounds, I just needed to re-create the two folders below on the node with the issue. Adding Manager and Monitors via the CLI or UI then created the sub-folders...
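Since the folder names didn't make it into the quote above, here is a sketch assuming the standard Ceph directory layout, where the missing parents would be the mon and mgr directories under /var/lib/ceph - treat the exact paths as an assumption:

```shell
# Assumed paths: the standard parent directories Ceph expects to exist
# before a monitor or manager can be created on the node
mkdir -p /var/lib/ceph/mon /var/lib/ceph/mgr
chown ceph:ceph /var/lib/ceph/mon /var/lib/ceph/mgr

# Re-creating the Monitor and Manager (CLI equivalent of the UI steps)
# then populates the sub-folders automatically
pveceph mon create
pveceph mgr create
```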
I'm struggling with, I think, the same issue now. I have torn down my Ceph storage configuration, with a view to then rebuilding it so I get to know the process. Everything looked to have been removed OK. I then ran the Ceph setup again, and I was able to configure two out of the three nodes. But the old...
Thanks @Alwin ,
I appreciate the help (and links which I've taken a look through). I've tested the configuration by cutting off a node from the 3-node cluster. I'm really impressed with Ceph. It responded to the node failure and brought up services on one of the remaining nodes. But at the same...