Search results

  1.

    VirGL hardware accelerated h264/h265

    I'm gonna bump this up because I would love to see this work for Plex and Blue Iris!
  2.

    Help Moving Storage to ZFS // Docker not working!

    I have the same issue; is there any resolution to this?
  3.

    Raidz0 configuration

    This is exactly what I needed, thank you for the help!
  4.

    Raidz0 configuration

    Hey all, hopefully a quick answer. I am looking to get 3 SSDs into raidz0; this will be a VM pool to reduce the ridiculous IO delay I have now. I do have a solid redundancy strategy and backup system already in place. Would someone be able to point me to the CLI commands or GUI setup to make...
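A minimal CLI sketch for what this question asks for. Note that ZFS has no "raidz0" vdev type: a three-disk stripe is simply a pool with three top-level vdevs. The pool name, storage ID, and device paths below are placeholders, not values from the thread.

```shell
# Sketch only: a 3-SSD stripe is a plain pool with three top-level vdevs.
# Substitute your own devices; /dev/disk/by-id paths survive reordering
# better than /dev/sdX.
zpool create vmpool \
  /dev/disk/by-id/ata-SSD1 \
  /dev/disk/by-id/ata-SSD2 \
  /dev/disk/by-id/ata-SSD3

# Optionally register the pool as VM storage in Proxmox
# (storage ID "vmstore-ssd" is an example name):
pvesm add zfspool vmstore-ssd --pool vmpool --content images,rootdir
```

In the GUI the equivalent is Datacenter → host → Disks → ZFS → Create: ZFS, with RAID level "Single Disk"/stripe. Remember a stripe has zero redundancy, so the backup strategy mentioned above is doing all the work.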
  5.

    PCI nuc

    So I was thinking something like this: https://www.fsplifestyle.com/en/product/TwinsPRO500W.html.
  6.

    PCI nuc

    So, I've been thinking about this too. The one solution I found was https://www.fsplifestyle.com/en/product/TwinsPRO500W.html . The reality is that this is the most redundancy that I need. If anything else goes, it can be replaced more economically than keeping more machines going.
  7.

    PCI nuc

    Maybe it would clarify if I explained what I'm working with now, to give a scaling idea. I have two 4U Rosewill cases. System 1 is a Xeon server with a disk array. System 2 is a Ryzen server with a 3080. System 1 handles all website, application, and service needs, and system 2 is the RDP and...
  8.

    PCI nuc

    By small low-power boxes, do you mean additions within the existing servers or another rack unit?
  9.

    PCI nuc

    Great reference! Wake-on-PCI could potentially enable the CU. Is there any reason NUCs couldn't be used as a “prod” server or Proxmox Backup Server?
  10.

    PCI nuc

    Isn't 5V needed for x16, or is that a signal line?
  11.

    PCI nuc

    Oh, I see what you mean. From a functional point of view, though: if I'm adding just one node and I have 5V and 3.3V on an empty slot, it might be worth doing that over creating and replacing an entire server.
  12.

    PCI nuc

    So we are sorta back to burning PCI slots for power, which works if you need multiple nodes. I wonder if a board could be created specifically for this idea, where you could isolate PCI slots to compute nodes and have graphics cards or whatever linked.
  13.

    PCI nuc

    But how would communication over PCI work? Could you mix NICs, GPUs, and CUs? Not to split them, but to dedicate them between different CUs?
  14.

    PCI nuc

    Do you think there is a better way to do something similar? The idea is to save rack space, increase density, and keep costs down.
  15.

    PCI nuc

    Funny you say that, I found this video: https://youtu.be/pB-zBSExMS4, which shows taping off some pins to make it work in a PCI slot. It essentially throws away a slot, but it increases the amount of compute possible within the chassis itself.
  16.

    PCI nuc

    Hey all, so I recently became aware of the Intel NUC PCI Compute Element. If these function like I hope, this would add a lot of compute density to my 4U server. Everything I'm thinking of is in Proxmox, but I do not want to have to virtualize this within a copy of Proxmox; instead it should act as if it were...
  17.

    PCI passthrough machines (error code 1)

    I appreciate all the help so far! As for the board, it does have integrated graphics; I'm pretty sure it is an ASPEED 2000. For vfio-pci, do you mean dropping that into /etc/modules, or somewhere else?
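For context on the vfio-pci question, a sketch of the usual Proxmox setup, assuming the standard file locations; the vendor:device IDs below are placeholders, not values from this thread.

```shell
# Sketch: load the VFIO modules at boot by appending them to /etc/modules.
# (On recent kernels vfio_virqfd is built in; older guides also list it.)
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# Bind the GPU to vfio-pci by vendor:device ID. The IDs here are examples;
# find yours with `lspci -nn`.
echo "options vfio-pci ids=10de:1234,10de:5678" > /etc/modprobe.d/vfio.conf

# Rebuild the initramfs so the changes take effect at the next boot.
update-initramfs -u -k all
```

So yes, the module names go into /etc/modules, while the device-binding options live in a file under /etc/modprobe.d/.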
  18.

    PCI passthrough machines (error code 1)

    All good! Here is the information for the down machine that did work until driver updates (Ubuntu 20.04): agent: 1 balloon: 0 bios: ovmf boot: order=scsi0;net0 cores: 3 efidisk0: vmstore:113/vm-113-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K hostpci1: 0000:03:00,pcie=1,x-vga=1 machine...
  19.

    PCI passthrough machines (error code 1)

    It is an Intel system, just to clarify. I have been able to get an Ubuntu instance running now with PCI passthrough, but the graphics card does not populate as 03:00.0 or 03:00.1 by name. When I try hwinfo in the terminal, it sees a VGA adapter. RDP worked until I tried to download the nvidia...
  20.

    PCI passthrough machines (error code 1)

    Hey all, back for some more help! I have two VMs I am trying to spin up with PCI passthrough. On both, I am getting QEMU failed with exit code 1. VM-win: agent: 1 args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off,kernel_irqchip=on' balloon: 0 bios: ovmf boot...
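A short diagnostic sketch for a passthrough VM that dies with exit code 1. The PCI address 03:00 is taken from the config above; the VMID 113 is borrowed from the efidisk path in the earlier snippet and may not match this VM.

```shell
# Confirm the IOMMU actually came up (look for "DMAR: IOMMU enabled"
# or "AMD-Vi" lines):
dmesg | grep -e DMAR -e IOMMU

# Which kernel driver currently owns the GPU? It should say vfio-pci,
# not nouveau/nvidia, before the VM starts.
lspci -nnk -s 03:00

# Inspect the IOMMU group layout; every device in the GPU's group
# must be passed through or unused.
find /sys/kernel/iommu_groups/ -type l

# Re-run the VM from the CLI to see QEMU's full error message,
# or print the generated QEMU command line for inspection:
qm start 113
qm showcmd 113 --pretty
```

Exit code 1 is generic; the `qm start` output on the console usually names the real cause (device held by another driver, broken IOMMU group, or a bad hostpci line).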