Recent content by chrisj2020

  1. SOLVED - 3-node cluster, node failed

    Hi there, I have a 3-node NUC10 cluster. I've had a hardware failure on the first node, and my plan is to replace it. All CTs and VMs have been moved onto the other nodes using backups, but I do not have a backup of the failed Proxmox node itself. My thinking was to just re-install Proxmox on...
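
    A minimal sketch of that approach, assuming the failed node is first removed from the cluster (the node name and IP below are placeholders, not taken from the thread):

        # On one of the surviving nodes, drop the dead member:
        pvecm delnode pve1
        # After re-installing Proxmox on the replacement hardware,
        # join it to the cluster from the new node, pointing at a survivor:
        pvecm add 192.168.1.12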
  2. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    I followed the steps I outlined in Post #5 of this thread. I believe the error shown when you run "vainfo" from the Proxmox host simply means vainfo cannot work with QuickSync CPUs. Proxmox is still able to pass the QuickSync capabilities to your LXC or VM. I have followed those steps on...
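
    The referenced steps are truncated above; for reference, a common way to hand an Intel iGPU to an LXC container on PVE 6.x looks like the sketch below (container ID 101 and the device minor numbers are assumptions, check your own /dev/dri):

        # /etc/pve/lxc/101.conf (excerpt) - allow the DRM devices and bind-mount them
        lxc.cgroup.devices.allow: c 226:0 rwm
        lxc.cgroup.devices.allow: c 226:128 rwm
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir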
  3. [SOLVED] - ZFS Single Disk Failure

    All sorted. Easier than I expected. I found the command below failed because, as each node only has a single disk, the ZFS pool (called zfs-core) was no longer available on the cluster node in question. So I did the following: 1. Shut down and replace the failed SSD/HDD. Start the cluster node...
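
    Since each node's pool lives on a single disk, the recovery amounts to recreating the pool on the new disk under the same name. One plausible sequence, with the device path as a placeholder (the thread's own steps are truncated above):

        # On the affected node, after fitting the replacement SSD/HDD:
        zpool create -f zfs-core /dev/disk/by-id/<new-disk>
        # The cluster-level storage definition then finds the pool again:
        zpool status zfs-core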
  4. [SOLVED] - ZFS Single Disk Failure

    I have not configured it so far. The same ZFS volume is defined at the cluster level, but I don't have any VMs being replicated yet.
  5. [SOLVED] - ZFS Single Disk Failure

    Hi there, I'm running Proxmox 7 across a bunch of Intel NUCs, each with 2 drives installed. The first drive is an NVMe, hosting the Proxmox install itself. The second is a SATA SSD configured as a ZFS volume called zfs-core. This is the ZFS volume configuration across all nodes...
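
    For context, a cluster-wide ZFS storage definition of this sort lives in /etc/pve/storage.cfg; a sketch of what the zfs-core entry might look like (the content types and sparse setting here are assumptions, not quoted from the thread):

        zfspool: zfs-core
                pool zfs-core
                content images,rootdir
                sparse 1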
  6. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    OK - it looks like the NUC10 GPU is not supported by "vainfo". I had stopped work at this point, believing that meant nothing else was going to work. @n1nj4888 - following your point, I pushed on anyway (using the steps below), and to my surprise I'm getting GPU passthrough! So it...
  7. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    Hi @Alwin. I do intend to run graphics-intensive applications (LXC). I need to process H.265 video streams, which can be offloaded to the Intel GPU. I'm not running an X11 desktop, but I do need to be able to offload some processes to the GPU. I'm running Proxmox VE 6.3-3. Both this link...
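
    As an illustration of that workload, H.265 decoding can be offloaded to the Intel GPU via VA-API, e.g. with ffmpeg inside the container (the file names are placeholders, and the render node path assumes a single GPU):

        ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
               -hwaccel_output_format vaapi \
               -i input-h265.mkv -c:v h264_vaapi output.mkv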
  8. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    Hi folks, I've one last niggle with my Proxmox setup, and I'm hoping someone here can guide me to a solution. I have a cluster of 3 x NUC10 boxes; these are Frost Canyon units with Intel UHD Graphics. I want to pass the Intel GPU down to one of my LXC containers. Proxmox is running the latest...
  9. Coffee lake Xeon E-2176 and P630 Support

    Hi folks. Is anyone running a NUC10 who can verify they have vainfo running OK? I've a clean Proxmox install, and have run the following two commands:

        apt-get install i965-va-driver
        apt-get install vainfo

    Once I run vainfo to verify that GPU support is enabled on the host, below is what I get...
  10. Ceph reinstallation issues

    I take that back. Whilst it looks like everything is OK, it's still not. On the 'old' primary node I have an OSD which is orphaned, and I can't find a way to remove it. On the other nodes, within /var/lib/ceph/osd/ I see each of the nodes listed, whereas on the 'old' primary node it only shows...
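
    For anyone hitting the same problem, the usual sequence for purging a dead OSD by hand looks like the following (the OSD id 0 is a placeholder for whichever one is orphaned):

        ceph osd out osd.0            # stop data being mapped to it
        ceph osd crush remove osd.0   # remove it from the CRUSH map
        ceph auth del osd.0           # delete its authentication key
        ceph osd rm osd.0             # remove it from the cluster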
  11. Ceph reinstallation issues

    OK - just to follow up on this. I have managed to bring Ceph back to a fully working state without a re-install. As simple as it sounds, I just needed to re-create the two folders below on the node with the issue. Adding the Manager and Monitors via the CLI or UI then created the sub-folders...
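
    The snippet truncates the actual folder names; on a standard install the per-daemon state sits under /var/lib/ceph, so the two re-created directories were most likely the monitor and manager ones (an assumption, not confirmed by the truncated post):

        # Hypothetical reconstruction - verify against a healthy node first:
        mkdir -p /var/lib/ceph/mon /var/lib/ceph/mgr
        chown ceph:ceph /var/lib/ceph/mon /var/lib/ceph/mgr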
  12. Ceph reinstallation issues

    I think I'm struggling with the same issue now. I have torn down my Ceph storage configuration, with a view to rebuilding it so I get to know the process. Everything looked to be removed OK. I then ran the Ceph setup again, and was able to configure two of the three nodes. But the old...
  13. Ceph Pool and CephFS setup question [NUC homelab]

    Thanks @Alwin, I appreciate the help (and the links, which I've taken a look through). I've tested the configuration by cutting off a node from the 3-node cluster. I'm really impressed with Ceph: it responded to the node failure and brought up services on one of the remaining nodes. But at the same...
