nvme

  1.

    NVMe performance inside the guest OS

    Hello. I assembled an mdadm RAID0 from two Samsung 970 Evo Plus NVMe SSDs, created an LVM VG on it, and gave a thick LV to a CentOS 8 virtual machine as its disk. On the hypervisor this RAID delivers about 7 GB/s read performance. When I test inside the guest OS with: fio --readonly --name=onessd...
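    For context, a read-only sequential-read fio run of the kind the poster describes might look like the sketch below; the device path, block size, and runtime are assumptions to be adjusted to the guest's actual disk:

    ```shell
    # Sequential read benchmark against the virtual disk. --readonly ensures no
    # data is written. /dev/sda and the tuning values are placeholders.
    fio --name=onessd \
        --filename=/dev/sda \
        --readonly \
        --rw=read \
        --bs=1M \
        --iodepth=32 \
        --ioengine=libaio \
        --direct=1 \
        --runtime=30 --time_based
    ```

    Comparing the reported bandwidth against the ~7 GB/s measured on the host is the useful signal here; a large gap usually points at the virtual disk bus (virtio vs. emulated SCSI), iothread, or cache settings rather than the SSDs themselves.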
  2.

    MDADM create with NVMe failed - Floating point exception

    Good day everybody, we would like to retrofit a software RAID 1 on our Proxmox system with mdadm. Proxmox itself is currently installed on an NVMe drive with ext4, which is now to be mirrored onto another NVMe drive. However, the following error occurs: "Floating point exception" root@pr-001:~#...
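    The mirror being attempted would normally be created along these lines (partition names are placeholders; a "Floating point exception" from mdadm typically indicates a bug or an odd-geometry/zero-size device rather than wrong syntax):

    ```shell
    # Create a two-device RAID1; the partition paths are placeholders.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/nvme0n1p3 /dev/nvme1n1p3

    # Persist the array definition so it assembles at boot.
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    ```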
  3.

    Creating KVM VM with nvme device in proxmox 7

    Hi, I'm using Proxmox 7.1-7 as a VM host for testing Linux installations. For the emulated TPM & Secure Boot it's working great and is definitely a step up from 6 (which I use at home). However, the target machines for these installs all have NVMe drives, so I was hoping to be able to create KVM...
  4.

    Cannot passthrough a nvme ssd to vm

    Hi, I'm running PVE 7.1-7 and have an openmediavault VM. I want to pass an NVMe SSD through to it via PCIe passthrough. The SSD loads normally when it is not passed through, but once passed through the VM cannot use it: it shows up in lspci but not in lsblk. Thank you for your...
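    A PCIe passthrough of an NVMe controller in Proxmox is usually wired up as sketched below; the PCI address and VM ID are placeholders. When the device shows up in lspci but not lsblk inside the guest, the guest's nvme driver failing to bind (check dmesg in the VM) is the usual next thing to examine:

    ```shell
    # Find the NVMe controller's PCI address on the host.
    lspci -nn | grep -i nvme

    # Attach it to VM 100 as a PCIe device (address is a placeholder).
    qm set 100 -hostpci0 0000:01:00.0,pcie=1
    ```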
  5.

    Proxmox install on NVMe SSD connected with PCIe adapter on Dell R720xd

    I want to use Proxmox VE on my Dell R720xd server. So I booted from the USB stick and installed Proxmox on the PCIe NVMe SSD. Unfortunately, when I boot, the BIOS doesn't detect my SSD, because it doesn't support PCIe-connected SSDs. So I installed Ubuntu with GRUB on a USB stick, hoping to first boot...
  6.

    5 M.2 NVMe SSDs, only 2 detected

    Hi, I use a 256 GB SSD for the OS and want to use 4x 2 TB SSDs to build Ceph, but PVE can only see my OS drive and one other SSD. All of them are visible in the BIOS and in Windows, but PVE can't see them.
  7.

    NVMe SSD. Slow disk speed inside VM

    Hi there! I'm facing slow disk performance inside a VM (the VM's disk lives on the host's NVMe SSD). For instance, when I copy a 10 GB file from one PVE node to another (LAN 10 Gbit/s): rsync -P -avz file10G 10.1.123.1: sending incremental file list file10G 10,737,418,240 100%...
  8.

    [SOLVED] intel DC P4510 passthrough failure

    Hello, I'm trying to pass a PCIe NVMe SSD through to a virtual machine. I'm not using an HBA; each SSD is directly connected to a PCIe connector. I followed the PCIe passthrough guide. IOMMU is enabled and working, and all required modules are enabled too. The NVMe drive is also running the latest...
  9.

    OMG I mistakenly pass-thru NVME chip on M/B

    Hi, yesterday I added a new PCIe board with an NVMe SSD to my Proxmox machine and tried to pass it through to a new VM. However, I mistakenly chose the vendor ID of the onboard M/B NVMe chipset, not the PCIe NVMe board. I know the procedure because I have successfully passed through a PCIe device before. After...
  10.

    ZFS best practice for enterprise NVMe drives

    Hi all. I'm currently putting together the hardware for a new PVE host: AMD Epyc with 512 GB RAM, 2x Samsung PM1643 (RAID1 for the OS) and 4x Samsung PM1735 3.6 TB. The host will only run 2x Windows Server VMs, which however need relatively high R/W performance (DMS and SQL)...
  11.

    [SOLVED] What options does ZFS offer to reduce SSD wear?

    I have 2x Crucial P2 2000 GB NVMe SSDs. The pool I created is called whirl-pool. Do I still need to change anything? I heard something about ashift=off... how do I do that? Does it make sense?
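    On the question itself: ashift is a pool-creation-time property and "ashift=off" does not exist. The usual wear-reducing knobs are dataset properties, sketched below with the pool name from the post:

    ```shell
    # Disable access-time updates, a common source of needless small writes.
    zfs set atime=off whirl-pool

    # Keep compression on; it typically reduces the bytes written to flash.
    zfs set compression=lz4 whirl-pool
    ```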
  12.

    command '/sbin/zpool create -o 'ashift=12' nvme mirror failed: exit code 1

    Hi, I installed Proxmox 7, and I am trying to create a new ZFS pool from two 1 TB NVMe drives via the GUI. However, I get the error below: command '/sbin/zpool create -o 'ashift=12' nvme mirror /dev/disk/by-id/nvme-Sabrent_1765071310FD00048263...
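    The GUI ultimately runs a command of this shape; running it by hand often reveals the real error behind "exit code 1", such as leftover partitions or an old ZFS label. The disk IDs below are placeholders:

    ```shell
    # Recreate the mirror manually to see the full error message.
    zpool create -o ashift=12 nvme mirror \
        /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2

    # If the disks held data before, -f forces creation over old labels.
    # Only add -f once you are sure the disks are disposable.
    ```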
  13.

    Excessive writes to NVMe on ZFS

    Hi guys, I'm running Proxmox 6.4.13 and recently installed a Corsair MP600 1 TB NVMe using a PCIe riser card. The NVMe is set up with ZFS (single disk, compression on, ashift=12). I am seeing a concerning amount of writes and I do not know why. I am not running any serious workloads, just...
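    Two quick checks help quantify writes like this, one at the flash level and one at the ZFS level; the device path is a placeholder:

    ```shell
    # Host-level total written to flash, from the NVMe SMART log.
    smartctl -a /dev/nvme0 | grep -i 'data units written'

    # Per-vdev write rate as ZFS sees it, sampled every 5 seconds.
    zpool iostat -v 5
    ```

    Comparing the two over time distinguishes genuine guest writes from write amplification added by the storage stack.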
  14.

    Ceph configuration recommendations?

    Hello all, setting up a new 5 node cluster with the following identical specs for each node. Been using proxmox for many years but am new to ceph. I spun up a test environment and it has been working perfectly for a couple months. Now looking to make sure we are moving the right direction with...
  15.

    Nvmes, SSDs, HDDs, Oh My!

    So, I'm trying to plan out a new Proxmox server (or two) using a bunch of spare parts that are lying around. Whether I go with one or two Proxmox servers comes down to deciding whether to have an internal server for media, backups, Git/Jenkins, and a separate external server for web, DBs...
  16.

    Poor random read/write performance with RAID10 ZFS, 4x WD Black SN750 500 GB

    Hello everyone, I'm completely new to Proxmox. Until now I ran a home server on Hyper-V and have now switched to Proxmox. Since I only have a small 2U mini server, I used 4 NVMe drives (WD Black SN750 500 GB, PCIe 3.0 x4) and on these...
  17.

    4 PM9A1 NVME SSD Passthrough to Windows VM

    Hi, this week I had some spare time and installed Windows Server 2019 on my Proxmox server (AMD EPYC 7232P, Supermicro H12SSL-CT, 128 GB DDR4 ECC RDIMM), kernel version Linux 5.4.106-1-pve #1 SMP PVE 5.4.106-1. I intend to use it as an NVMe storage server. I installed an Asus Hyper M.2 x16 Gen 4...
  18.

    Incorrect NVMe SSD wearout displayed by Proxmox 6

    I have recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that according to the web interface two of the drives exhibit huge wearout only after a few weeks of use: Since these are among the highest endurance consumer SSDs with 1665 TBW warranty for a...
  19.

    Ceph speed as a pool and from a KVM guest

    Hello! We have a Ceph cluster with 2 pools (SSDs and NVMes). In a rados benchmark test the NVMe pool is, as expected, much faster than the SSD pool. NVMe pool: write: BW 900 MB/s, IOPS 220; read: BW 1400 MB/s, IOPS 350. SSD pool: write: BW 190 MB/s, IOPS 50...
  20.

    Proxmox no longer UEFI booting, UEFI says "Drive not present"

    Topic title pretty much sums it up. I have two NVMe drives (WD/HGST SN200) in a ZFS mirror, and the server no longer boots correctly after a pve-efiboot-tool refresh. If I select either "UEFI OS" or "Linux Boot Manager", it just drops back into the UEFI setup screen without booting. However, if I go...
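    When the ESPs get out of sync like this, booting a rescue environment and re-running the boot tool usually repairs the entries. On current PVE releases the tool is named proxmox-boot-tool (pve-efiboot-tool is the older name):

    ```shell
    # Show which ESPs are registered and whether they are in sync.
    proxmox-boot-tool status

    # Rewrite kernels and boot entries onto every registered ESP.
    proxmox-boot-tool refresh
    ```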