PCIe NUC Compute Element

dabaers

Member
Oct 9, 2021
35
0
6
28
Hey all, so I recently became aware of the Intel NUC PCIe Compute Element. If these function like I hope, this would add a lot of compute density to my 4U server. Everything I'm thinking of runs on Proxmox, but I don't want to virtualize this within a copy of Proxmox; instead it should act as if it were two servers on the same rack. I have some big ideas but a lot of questions. First, can these be dropped into x16 slots on my server? If so, can they act as an independent compute unit for HA, or as a backup server, or whatever else we can think of? Can PCIe devices on that same motherboard be shared with them, either exclusively or shared? Is there any reason this is a bad idea? Just to make sure we are talking about the same thing, here is the link!

https://www.intel.ca/content/www/ca/en/products/details/nuc/elements/compute.html

Thank you all!
 

Dunuin

Famous Member
Jun 30, 2020
6,031
1,399
144
Germany
If I remember right, I saw an LTT review video of the NUCs using these compute units, and I think they said these compute modules are only for NUC boards and are not compatible with the PCIe slots of normal mainboards.

So you might want to check that first.
 

dabaers

Funny you say that. I found this video - https://youtu.be/pB-zBSExMS4 - which shows taping off some pins so it can work in a PCIe slot. You're essentially throwing away a slot, but it increases the amount of compute possible within the chassis itself.
 

Dunuin

Yup, then that x16 PCIe slot is basically just used for supplying 3.3V power. Still a bit too hacky for me to use such a combination in production.
 

dabaers

Do you think there is a better way to do something similar? The idea is to save rack space, increase density, and keep costs down.
 

Dunuin

Something like a GPU mining rack, but with NUC CUs instead of GPUs, and with the PCIe risers not connected to anything except power, so you don't waste any PCIe slots (external power for these risers usually comes from a power cable, not from PCIe)? There you could put a lot of NUC CUs in the front of the case, plus optionally a normal mainboard in the back.
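To get a feel for how many modules one PSU could feed in such a rack, here is a back-of-envelope sketch. The per-module wattage is my guess, not a datasheet number, so check the actual Compute Element spec before sizing anything:

```python
# Back-of-envelope PSU sizing for a rack of NUC Compute Elements.
# ASSUMPTION: ~65 W peak draw per module -- a placeholder, not a datasheet value.
psu_watts = 850
headroom = 0.8           # keep the PSU at or below ~80% load
per_module_watts = 65

max_modules = int(psu_watts * headroom // per_module_watts)
print(max_modules)  # 10 modules on an 850 W PSU at this estimate
```

Swap in real numbers for your PSU and the module's measured draw; the headroom factor is just the usual rule of thumb for not running a PSU flat out.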
 

dabaers

But how would communication over PCIe work? Could you mix NICs, GPUs, and CUs? Not to split devices, but to dedicate them to different CUs?
 

Dunuin

It wouldn't at all. None of the NUC CUs would be connected via PCIe to the mainboard in the back in any way. The NUCs and the mainboard would run completely independently, except for the shared PSU and case.

As far as I understand, he taped the pins so the NUC CU won't initialize the PCIe interface at all, because the NUC CU thinks it is not in any PCIe slot (but it is, so it still gets the 3.3V power it needs from the slot). If the PCIe interface of the NUC CU isn't used at all except for 3.3V power, I would just put it into a PCIe riser that isn't connected to any real PCIe slot. The hard part could be finding a riser card that not only gets 12V power externally, but also 3.3V, e.g. from a SATA power port.
 

dabaers

So we are sort of back to burning PCIe ports for power, which works if you need multiple nodes. I wonder if a board could be created specifically for this idea, where you could isolate PCIe ports to compute nodes and have graphics cards or whatever linked to them.
 

Dunuin

You aren't burning a PCIe port for power. None of the mainboard's PCIe slots would be used, so you'd still have 7 free PCIe slots.
You just put those NUCs into PCIe risers that aren't connected to anything except something like a SATA power plug for power.
Just put some of those into the front of the case:
[image: x1-to-x16 USB 3.0 PCIe mining riser cards]


If all you need is a PCIe socket providing some 3.3V power, such a riser board would do the job too (as long as there is a 12V-to-3.3V converter on the riser, or an external 3.3V power input, like a SATA power connector could offer). You just skip the step where you connect the riser to a PCIe slot of the mainboard, as the NUC won't communicate over PCIe with the mainboard anyway (if it did communicate, the NUC CU wouldn't work; that's why you tape it).
 

dabaers

Oh, I see what you mean. From a functional point of view though, if I'm adding just one node and I have 5V and 3.3V on an empty slot, it might be worth doing that over building and racking an entire new server.
 

Dunuin

As far as I understand, you don't even need 5V. The NUC CU gets 12V via a power connector directly from the PSU, and that board in the video just needs to provide 3.3V over PCIe.
 

dabaers

Isn't 5V needed for x16, or is that a signal line?
 

Dunuin

You can look at the pinout. There is no 5V specified for PCIe: https://pinoutguide.com/Slots/pci_express_pinout.shtml
Just 3.3V and 12V. I'm not sure how that NUC CM is designed, but usually the 12V inputs should be connected internally, in which case 12V wouldn't be needed from the PCIe slot when powering the NUC CM directly from the PSU via cable.
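For reference, the power rails a PCIe x16 slot provides can be sketched like this. The figures are as I recall them from the PCIe Card Electromechanical spec, so treat them as approximate:

```python
# Power rails available from a PCIe x16 slot (per the PCIe CEM spec as commonly
# cited -- note there is no 5 V rail at all, only 3.3 V, 3.3 Vaux, and 12 V).
rails = {
    "3.3V":    (3.3, 3.0),    # (volts, max amps) -- main 3.3 V rail
    "3.3Vaux": (3.3, 0.375),  # standby rail
    "12V":     (12.0, 5.5),   # bulk power
}

for name, (volts, amps) in rails.items():
    print(f"{name}: up to {volts * amps:.1f} W from the slot")

# The spec additionally caps a full-size x16 card at 75 W total from the slot,
# even though the per-rail limits sum to slightly more than that.
```

The taped NUC CU would only be drawing from the 3.3V rail here, which is well within what any slot (or unconnected riser with a 3.3V source) can supply.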
 

dabaers

Great reference! Wake-on-PCI could potentially be used to enable the CU. Is there any reason NUCs couldn't be used as a "prod" server or a Proxmox Backup Server?
 

Dunuin

Depends on how serious you are about a production server. There's not much space for most good SSDs (like U.2), I think there's no ECC RAM support, they're not great for PCI passthrough because of the missing space and slots, many NUCs only have 1x M.2 + 1x 2.5" SATA slot so they're not great for redundant storage, there's no BMC, and you often read about driver problems. There are some nice professional small low-power boxes that offer ECC and should be more reliable; all the big server manufacturers have some. But a lot of people here run NUCs (not CMs) as low-power home servers. So it really depends on your budget and how much you are willing to spend on data integrity and reliability.
 

dabaers

By small low-power boxes, do you mean additions within the existing servers, or another rack unit?
 

PigLover

Well-Known Member
Apr 8, 2013
119
36
48
It's definitely not an enterprise-class device, but the NUC 12 PCIe card is no slouch either.

i9-12900, 3x M.2 slots (PCIe Gen4), 10GbE + 2.5GbE LAN, 2x Thunderbolt 4. If I was building a business, I'd probably use traditional servers. But you could build one heck of a cluster out of these little boards.

Caveat: they are also pretty expensive. Trying to put 8-10 of them in a box like the OP proposed would not cost out well against a more traditional server approach.
 

dabaers

Maybe it would help if I explained what I'm working with now, to give a sense of scale. I have two 4U Rosewill cases. System 1 is a Xeon server with a disk array; system 2 is a Ryzen server with a 3080. System 1 handles all website, application, and service needs, and system 2 is the RDP and compute-work machine running multiple users. My idea was to throw a NUC CU into system 2 and have it as an HA-available unit to support system 1's workload, and as failover if need be.
 

Dunuin

I'm not sure it would be useful as a third node for an HA cluster. If that PSU died, you would lose both the server and the NUC CU, and the last remaining node would lock up because there wouldn't be a quorum. So a dedicated PSU for the NUC would be recommended if that server has no redundant PSU, in which case a normal NUC box might be a better choice.
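The quorum rule behind this is simple majority voting. A minimal sketch (Proxmox actually uses corosync for this, with one vote per node by default):

```python
def has_quorum(votes_online: int, votes_total: int) -> bool:
    """A cluster is quorate when strictly more than half of all votes are online."""
    return votes_online > votes_total // 2

# 3-node cluster: system 1, system 2, and a NUC CU sharing system 2's PSU.
# If that PSU dies, it takes the NUC CU down with it:
print(has_quorum(1, 3))  # False -> the surviving node locks itself
print(has_quorum(2, 3))  # True  -> a dedicated PSU keeps two votes alive
```

This is also why the "strictly more than half" rule matters: in an even split (e.g. 2 of 4 votes), neither side is quorate, which prevents split-brain.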
 
