I got impatient and installed Proxmox 7.
It went happily through the installation and I selected the first 2 Samsung SSDs in ZFS RAID1 as the installation target (just like last time). I did see the other 4 WD Black SSDs, but in the other thread it came to my attention that it is better to separate VM...
Ahh my, this brain surgery is not going well ;(
I ended up placing 2 NVMe's in the 2 leftover M.2 mobo slots.
I also reconfigured the PCIe bifurcation, with slot one set to 4x4x4x4 (as otherwise I will not see all 4 disks on the AIC adaptor).
Added a Gigabyte GPU in slot 2.
An EVGA GPU in slot 3 (and...
Thank you @Dunuin, once again an elaborate explanation to help me get going in the right direction.
I'll keep the boot NVMe's separate and will eventually move the VM disk locations
after I have installed 2 extra NVMe's in the mobo M.2 slots, and will also be including the PCIe adaptor to house (for...
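For when that time comes: moving a VM disk to another storage can be done from the GUI or with qm. A minimal sketch, assuming VMID 100, a disk named scsi0 and a target storage called nvme-mirror (all three are placeholders):

# move disk scsi0 of VM 100 to the storage "nvme-mirror" and remove the old copy
qm move_disk 100 scsi0 nvme-mirror --delete 1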
Hahaha, your quick-to-jump-to-conclusions ability is even more advanced than that ;)
Anyway @ph0x, thank you for contributing nonetheless. I really mean that. It helps one stay on course, even though in this instance it was kind of beside the point.
Understood, I have to make ends meet and deal with what I have. The missus is already on my rear end for having such expensive toys to begin with, so I have to make do with what I have. Sure, I can persuade the budget to squeeze in an extra 8 x 32GB of ECC 3200MHz memory, but I need to do it...
OK, I am now fully back in the ECC camp. Not that I ever left it, but I was considering going for extra speed. Non-ECC is now off the table for good. ECC for the win.
What would be more optimal in terms of random reads/writes:
simply adding the new NVMe's to mirror-0, or creating extra sets of mirrors, each containing 2 NVMe's?
Also, will it matter for either scenario if the NVMe's are not the same brand and model? They are the same size; in terms of what is...
Current zpool status:

zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer...
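For what it's worth, the two options above look roughly like this on the command line (a sketch only; the by-id device names are placeholders). Attaching to mirror-0 just deepens the existing mirror, while adding a second mirror vdev stripes across two mirrors, which is what usually helps random I/O:

# Option A: attach another disk to the existing mirror-0 (3-way mirror, more read redundancy, no extra write speed)
zpool attach rpool /dev/disk/by-id/nvme-EXISTING /dev/disk/by-id/nvme-NEW1

# Option B: add a second mirror vdev, turning the pool into a stripe of two mirrors (RAID10-like)
zpool add rpool mirror /dev/disk/by-id/nvme-NEW1 /dev/disk/by-id/nvme-NEW2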
@Dunuin thank you for your detailed explanation. Much appreciated.
Currently the Samsung SSDs have a wearout of 4 and 3% after about a year of operation. Are there statistics available on how many writes have been done in that time?
The workload up until this time has been (I'll edit the...
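In case it helps, the lifetime write counters can usually be read with smartctl (device names are placeholders and the exact attribute names depend on the model):

# SATA Samsung SSDs: look at Wear_Leveling_Count and Total_LBAs_Written
smartctl -a /dev/sda | grep -i -e wear -e written

# NVMe drives report "Data Units Written" (1 unit = 512,000 bytes)
smartctl -a /dev/nvme0 | grep -i "data units written"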
Please forgive my ignorance, but what made one think I was looking to do server stuff? (My thread-opening post has already been edited to reflect this.)
I am looking at a non-server workload. Just freakishly fast performance with the stuff that is available to me.
OK, so coming back to the disk aspect of it all:
would one say that having a single PCIe SSD card (consisting of 4 NVMe's in RAID 10, i.e. striped mirrors) is better than putting those same 4 NVMe's in the slots on the mobo and having Proxmox deal with the RAID aspect?
Ahh, before I make an edit to...
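If Proxmox handles the RAID side, those 4 NVMe's would typically end up as a ZFS stripe of two mirrors, something like this (a sketch; the pool name and by-id device names are placeholders):

# RAID10 equivalent in ZFS: two mirrored pairs striped together
zpool create -o ashift=12 nvmepool \
    mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2 \
    mirror /dev/disk/by-id/nvme-DISK3 /dev/disk/by-id/nvme-DISK4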
Hyper-V on Windows Server 2019 comes with its own set of problems. It is simply not yet able (please believe me, I tried for 2 months before I settled on Proxmox) to do proper GPU passthrough for AMD Ryzen / RTX 2060+ setups.
EDIT: and also the SATA controller passthrough of at least 5...
Also, when using PCIe SSD cards, one sacrifices a PCIe slot; slots that could be used for GPU passthrough.
What I failed to mention is that I would like to have a dedicated VM to pass GPUs to, so that I can donate to Folding@home or other distributed science projects.
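For that folding VM, GPU passthrough in Proxmox boils down to enabling IOMMU and adding a hostpci entry. A minimal sketch, assuming VMID 110 and a GPU at PCI address 0000:0b:00 (both placeholders; pcie=1 needs the q35 machine type):

# /etc/default/grub on an AMD board, followed by update-grub and a reboot:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# pass the whole GPU (all its functions) to VM 110 as a PCIe device
qm set 110 -hostpci0 0000:0b:00,pcie=1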
I am not trying to be a d*&k, but if the mentality remains that Proxmox is not for non-server workloads, then perhaps I should install a Windows (yeah, I know it's evil) machine and go from there with VirtualBox?
I'd rather avoid that route, as I have spent more than a year getting to know Proxmox and...
OK, understood now. So ZFS, although I said it was not a requirement, now it is. I never intended to suggest that ZFS was off limits. Sorry for the confusion.
That particular document I have indeed read. I also did not find much regarding NVMe's. I did see an Intel Optane at the top of the...
Or, with 256 GB of ECC memory, perhaps load an entire VM into ramfs?
That would mean that loading a VM is no longer fast, but for one or two specific VMs that would be no problem.
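A rough sketch of that idea, assuming a hypothetical 64G tmpfs mount registered as a directory storage called ram-disk. Everything on it is gone after a reboot, so it only suits VMs that can be restored from backup:

# create a RAM-backed directory and register it as VM image storage
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=64G tmpfs /mnt/ramdisk
pvesm add dir ram-disk --path /mnt/ramdisk --content images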
For example, do I use the AORUS Gen4 AIC adaptor (1 PCIe device for 4 x NVMe SSDs) with 4 x WD Black SSDs in RAID 0 mode?
Or put them in the 4 x NVMe slots of the mobo and then see if I can RAID 0 them?
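Either way, a ZFS RAID 0 of the four WD Blacks is just a pool of four single-disk vdevs; a sketch with placeholder names (no redundancy at all, so one dead drive loses the whole pool):

# plain stripe across all four NVMe's
zpool create -o ashift=12 scratch \
    /dev/disk/by-id/nvme-WD1 /dev/disk/by-id/nvme-WD2 \
    /dev/disk/by-id/nvme-WD3 /dev/disk/by-id/nvme-WD4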
Understood. Important data on a VM goes to a TrueNAS.
The VM itself gets backed up to TrueNAS.
So I can always roll back to a previous snapshot if need be.
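For the VM backups, something along these lines would do it, assuming the TrueNAS share is added as a storage called truenas-nfs (a placeholder name):

# snapshot-mode backup of VM 100 to the TrueNAS-backed storage, compressed with zstd
vzdump 100 --storage truenas-nfs --mode snapshot --compress zstd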
Wow, I am now considering your drives. Is there any benchmark I can see?
All I found and used was something along the lines of...
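Not necessarily the exact command that was elided above, but a typical random read/write test with fio looks something like this (the target path is a placeholder; remember to delete the test file afterwards):

# 70/30 random read/write mix, 4k blocks, 60 seconds
fio --name=randrw --filename=/rpool/data/fio-test.bin --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=32 \
    --numjobs=4 --runtime=60 --time_based --group_reporting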
I'd like to set up Proxmox 7 to be as fast as it can possibly be with the hardware I have and the hardware I am considering getting.
EDIT: (this will be for non-critical, non-server-related workloads)
EDIT 2:
I would like to have a dedicated VM to pass GPUs to, so that I can donate to Folding@home...