They are Intel DC S3700 100GB SATA (SSDSC2BA100G3) from 2012. Writes up to 200 MB/s, reads up to 500 MB/s.
Bought 13 of them in different sizes.
> They are Intel DC S3700 100GB SATA (SSDSC2BA100G3) from 2012. Writes up to 200MB/s reads up to 500MB/s. Bought 13 of them in different sizes.

Are any of these comparable? I have selected for durability.
zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.002538b9015063be-part3                  ONLINE       0     0     0
            nvme-eui.002538b9015063e1-part3                  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b492a6acc-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b49df8c91-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b49df8f93-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b49df19b5-part3  ONLINE       0     0     0

errors: No known data errors
root@pve-trx:~# pveperf
CPU BOGOMIPS: 742467.84
REGEX/SECOND: 3958451
HD SIZE: 899.00 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 523.78
pveperf
CPU BOGOMIPS: 742462.72
REGEX/SECOND: 4060522
HD SIZE: 899.00 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 532.78
zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.002538b9015063be-part3  ONLINE       0     0     0
            nvme-eui.002538b9015063e1-part3  ONLINE       0     0     0

errors: No known data errors

  pool: vmpool
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        vmpool                                    ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_204540802523  ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_204540802590  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_20465F800961  ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_204540802025  ONLINE       0     0     0
fio --ioengine=psync --filename=/Pool/test.file --size=10G --time_based --name=psync_random_write --runtime=600 --direct=1 --sync=1 --iodepth=1 --rw=randwrite --bs=4k --numjobs=1
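The fio line above exercises the synchronous-write path (--direct=1 --sync=1, 4k blocks, queue depth 1), which is the workload that the FSYNCS/SECOND numbers reflect. As a rough, hedged cross-check that needs only coreutils, dd with oflag=dsync forces a data sync after each 4k block; the target path here is a placeholder, so point it at the pool you actually want to measure:

```shell
# Rough sanity check of synchronous 4k writes, loosely mirroring the fio
# command's --direct=1 --sync=1 behaviour: oflag=dsync flushes data to disk
# after every block. The target path is a placeholder, not the fio path.
target="${TMPDIR:-/tmp}/sync_test.file"
dd if=/dev/zero of="$target" bs=4k count=1000 oflag=dsync
rm -f "$target"
```

This is only a ballpark indicator; fio remains the better tool since it reports latency percentiles and sustained IOPS over a fixed runtime.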
> I'm just wondering.... have you tested and got GPU passthrough running successfully on multiple VM's before?

@bobmc, can you please elaborate? i.e. did you mean SR-IOV? Multiple VMs running on the same host with the same card passed through, but never powered on at the same time? Or, more generally, have you had multiple successes across several Proxmox setups?
NOTE: A PCI device can only ever be attached to a single VM.
> One can have multiple VM's make use of the same PCIe device. Just don't try to run those vm's at the same time.

I also have a single GT710 that two Win10 VMs are using, of course only one VM at a time. I thought about buying a second GT710, but I have no empty PCIe slots and my RAM is always full anyway, so it would be hard to run both VMs at the same time: Win10 is resource hungry and a passthrough will pin the VM's complete RAM.
> I also got a single GT710 that two Win10 VMs are using. Of cause only one VM at a time. Also thought about buying a second GT710 but I got no empty PCIe slots and my RAM is also always full so it would be hard to run both VMs at the same time because Win10 is so ressource hungry and a passthrough will pin the complete RAM.

You can try taming your commit-memory-hungry Windows VMs by using
But one really annoying thing about sharing a GPU between VMs is that automated backups won't work. If one VM is running when the backup task starts, the VM that wasn't running can't be backed up, because the GPU is already in use and PVE needs to briefly start that VM to take the backup.
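One workaround for the collision described above is to serialize the backups yourself, so the passed-through GPU is never contended while a backup runs. This is a dry-run sketch that only prints the vzdump commands rather than running them; the VM IDs 101/102 and the storage name "pbs" are assumptions, not values from this thread:

```shell
# Dry-run sketch: serialize backups for VMs sharing one passed-through GPU.
# Prints one vzdump command per VM so the backups never overlap.
# VM IDs and the storage name "pbs" are placeholders.
backup_serial() {
    for vmid in "$@"; do
        # In a real script you would invoke vzdump here instead of echoing it.
        echo "vzdump $vmid --storage pbs --mode stop"
    done
}
backup_serial 101 102
```

Whether this actually avoids the briefly-started-VM problem depends on the backup mode and on both VMs being stopped when the script runs, so treat it as a starting point rather than a fix.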
> But one really annoing thing with sharing a VM is that automated backups won't work. If one VM is running and the backup task starts, then the VM that wasn't running can't be backuped because the GPU is already in use and PVE need to shortly start the VM to take that backup.

Perhaps automated snapshots can help, if one has automated zfs send to a backup location.
> Perhaps automated snapshots can help if one have automated zfs send to a backup location

I stopped using ZFS snapshots because they consume more space than the same amount of PBS backups. And snapshots can't replace a backup, so everything would consume at least double the space.