Hey all, hopefully a quick answer. I am looking to get 3 SSDs into a striped (RAID0-style) ZFS pool; this will be a VM pool to reduce the ridiculous IO delay I have now. I do have a solid redundancy strategy and backup system already in place. Would someone be able to point me to the CLI commands or GUI setup to make...
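To make it concrete, here's roughly what I'm picturing on the CLI (a sketch, not tested — "vmpool", "vmstore-ssd", and the by-id paths are placeholders for my actual disks, and a plain stripe has zero redundancy, which is fine given my backups):

# create a 3-disk striped pool; ashift=12 suits 4K-sector SSDs
zpool create -o ashift=12 vmpool /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B /dev/disk/by-id/ata-SSD_C
# register it with Proxmox as VM disk storage
pvesm add zfspool vmstore-ssd --pool vmpool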
So, I've been thinking about this too. The one solution I found was https://www.fsplifestyle.com/en/product/TwinsPRO500W.html . Realistically, that's the most redundancy I need; if anything else goes, it can be replaced more economically than keeping more machines going.
Maybe it would help if I explained what I'm working with now, to give an idea of scale. I have two 4U Rosewill cases: System 1 is a Xeon server with a disk array, and System 2 is a Ryzen server with a 3080. System 1 handles all website, application, and service needs, and System 2 is the RDP and...
Great reference! Wake-on-PCI could potentially enable the compute unit. Is there any reason NUCs couldn't be used as a "prod" server or a Proxmox Backup Server?
Oh, I see what you mean. From a functional point of view though, if I'm adding just one node and I have 5V and 3.3V on an empty slot, it might be worth doing that over building and replacing an entire server.
So we're sort of back to burning PCIe slots for power, which works, I guess, if you need multiple nodes. I wonder if a board could be made specifically for this idea, where you could isolate PCIe slots to compute nodes and have graphics cards or whatever linked.
Funny you say that, I found this video - https://youtu.be/pB-zBSExMS4 - which shows taping off some pins so a card can work in a PCIe slot. You essentially throw away a slot, but it increases the amount of compute possible within the chassis itself.
Hey all, so I recently became aware of the Intel NUC Compute Element, the PCIe-card form factor. If these function like I hope, this would add a lot of compute density to my 4U server. Everything I'm thinking of is in Proxmox, but I do not want to have to virtualize this within a copy of Proxmox but instead act as if it were...
I appreciate all the help so far! As for the board, it does have integrated graphics; I'm pretty sure it is an ASPEED AST2000. For vfio-pci, do you mean dropping that into /etc/modules, or somewhere else?
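In case it helps, here's the sort of thing I'd be dropping in if that's what you meant (a sketch — the PCI IDs in the modprobe line are placeholders, not my real card; on kernels 6.2+ the old vfio_virqfd module is built into vfio and can be skipped):

# /etc/modules — load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/vfio.conf — bind the GPU to vfio-pci (IDs are placeholders, pull yours from lspci -nn)
options vfio-pci ids=10de:aaaa,10de:bbbb

# then rebuild the initramfs and reboot
update-initramfs -u -k all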
All good! Here is the config for the down machine, which did work until the driver updates.
Ubuntu 20.04:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;net0
cores: 3
efidisk0: vmstore:113/vm-113-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
hostpci1: 0000:03:00,pcie=1,x-vga=1
machine...
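(For anyone following along: that hostpci1 line corresponds to something like the command below on the host — a sketch, using VMID 113 from the efidisk line; 0000:03:00 without a function suffix passes all functions of the device.)

qm set 113 -hostpci1 0000:03:00,pcie=1,x-vga=1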
It is an Intel system, just to clarify. I have been able to get an Ubuntu instance running now with PCI passthrough, but the graphics card does not populate at 03:00.0 or 03:00.1 by name. When I run hwinfo in the terminal, it sees a VGA adapter. RDP worked until I tried to download the NVIDIA...
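A quick check from inside the guest that might narrow this down (a sketch — the guest address will likely differ from the host's 03:00, so go by whatever it prints):

# list display devices with vendor/device IDs and which driver is bound to each
lspci -nnk | grep -A3 -i 'vga\|3d'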
Hey all, back for some more help!
I have two VMs I am trying to spin up with PCI passthrough. On both, I am getting "QEMU failed with exit code 1".
VM-win:
agent: 1
args: -cpu 'host, +kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off,kernel_irqchip=on'
balloon: 0
bios: ovmf
boot...
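One way to dig past the generic "exit code 1" (a sketch — replace 100 with the actual VMID): have Proxmox print the full QEMU command it generates, then run that command by hand in a shell so QEMU's real complaint lands in your terminal instead of the task log:

qm showcmd 100 --pretty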