Hi,
I am new to virtualization and would like to use Proxmox on a server.
I have already read a lot, but I am not sure how to design the system well.
This is the basic hardware:
Supermicro server with 2x 16-core CPUs at 2.3 GHz, with Intel VROC enabled
Two M.2 NVMe disks for Proxmox and the VMs
A MegaRAID 9460-16i RAID controller with several NVMe disks attached to it, to be used by a VM as a SAN / NAS
Several Nvidia GPUs to be used by individual VMs
This is what I would like to achieve:
The two M.2 NVMe disks shall be mirrored and used for the Proxmox installation and the VMs. Here I am not sure whether Proxmox works with Intel VROC or whether I can simply create the RAID with Proxmox itself.
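From what I have read, the Proxmox installer can already create a ZFS RAID1 mirror across the two M.2 disks during installation, so VROC might not be needed at all. I imagine it roughly like this (rpool is the installer's default pool name; the device names are just placeholders for my disks):

    # verify the mirror the installer created
    zpool status rpool

    # alternatively, create a separate mirrored pool for the VMs by hand
    zpool create -o ashift=12 vmpool mirror /dev/nvme0n1 /dev/nvme1n1
    pvesm add zfspool vm-zfs -pool vmpool

Is that the recommended way, or would you still use VROC here?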
Furthermore, I would like to have one VM that acts as a SAN / NAS (with user management) and whose storage the other VMs can access. This VM shall use the other NVMe disks, which are connected to the RAID controller. Here I thought about doing a PCIe passthrough of the RAID controller to that VM, setting the controller to JBOD mode, and using FreeNAS with ZFS.
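For the passthrough itself, this is roughly what I pieced together from the documentation; the IOMMU has to be enabled first, and the VM ID 100 and the PCI address are just placeholders for my setup:

    # enable the IOMMU in the kernel command line, then reboot
    # /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
    update-grub

    # find the controller's PCI address
    lspci | grep -i megaraid

    # pass the whole controller through to the NAS VM
    # (pcie=1 requires the q35 machine type, as far as I understand)
    qm set 100 -hostpci0 0000:3b:00.0,pcie=1

Does passing through the whole controller like this make sense?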
In general, I would like a low-latency connection between the SAN / NAS VM and the other VMs. Here I thought about a virtual network between the VMs and using iSCSI for the data transfer.
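Concretely, I was thinking of an internal bridge without any physical port, so the storage traffic never leaves the host. Something like this in /etc/network/interfaces (the bridge name and addresses are just examples I made up):

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

Each VM would then get a second virtio NIC on that bridge (qm set 100 -net1 virtio,bridge=vmbr1) and the iSCSI traffic would run over it.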
What do you think? Would this be a good way?
Would you maybe use GlusterFS or Ceph instead of FreeNAS?
The other VMs will mostly be Linux VMs that should have access to the GPUs. As I understand it, I need to do a PCIe passthrough for each GPU to a single VM, and I cannot use one PCIe device with multiple VMs at the same time. Is there also a way to make the passthrough dynamic? So if some GPUs are unused for a while, could they be reassigned automatically to another VM that requests them?
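The closest thing I could come up with is scripting it with qm, since a hostpci entry can apparently only be changed while the VM is powered off; the VM IDs and the PCI address are again placeholders:

    # release the GPU from VM 101 and hand it to VM 102
    qm shutdown 101
    qm set 101 -delete hostpci0
    qm set 102 -hostpci0 0000:af:00.0,pcie=1
    qm start 102

But that involves a full stop/start cycle, so it is not really dynamic. Is there a better mechanism?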
Thank you very much!