nvme

  1. [SOLVED] Intel DC P4510 passthrough failure

    Hello, I'm trying to pass through a PCIe NVMe SSD to a virtual machine. I'm not using an HBA; each SSD is connected directly to a PCIe connector. I followed the PCIe passthrough guide. IOMMU is enabled and working, and all required modules are enabled. The NVMe drive is also using the latest...
  2. OMG I mistakenly pass-thru NVME chip on M/B

    Hi, yesterday I added a new PCIe board with an NVMe SSD to my Proxmox machine and tried to pass it through to a new VM. However, I mistakenly chose the vendor ID of the onboard motherboard NVMe controller, not the PCIe NVMe board. I remembered how because I had successfully passed through a PCIe device once before. After...
  3. ZFS best practices for enterprise NVMe drives

    Hi everyone. I'm currently putting together the hardware for a new PVE host: AMD Epyc with 512GB RAM, 2x Samsung PM1643 (RAID1 for the OS), and 4x Samsung PM1735 3.6TB. The PVE host will only run 2x Windows Server VMs, but they need relatively high R/W performance (DMS and SQL)...
  4. [SOLVED] What options does ZFS offer to reduce SSD wear?

    I have 2x Crucial P2 2000GB NVMe SSDs. The pool I created is called whirl-pool. Do I still need to change anything? I've heard something about ashift=off... how do I do that? Does it make sense?
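A note on the question above: there is no "ashift=off" — ashift is fixed when a pool is created and cannot be changed afterwards. A setting that does reduce SSD writes is disabling access-time updates. A minimal sketch, using the pool name from the post; the commands are guarded so they only run where the ZFS tools exist:

```shell
#!/bin/sh
# Sketch: write-reducing ZFS settings for an existing pool.
# Pool name taken from the post ("whirl-pool"); substitute your own.
POOL=whirl-pool

if command -v zfs >/dev/null 2>&1; then
    # atime=off stops ZFS from issuing a metadata write on every file read.
    zfs set atime=off "$POOL" || true
    zfs get atime,compression "$POOL" || true
else
    echo "zfs tools not installed; commands shown for reference only"
fi
```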
  5. command '/sbin/zpool create -o 'ashift=12' nvme mirror failed: exit code 1

    Hi, I installed Proxmox 7 and am trying to create a new ZFS pool using two 1TB NVMe drives via the GUI. However, I get the error below: command '/sbin/zpool create -o 'ashift=12' nvme mirror /dev/disk/by-id/nvme-Sabrent_1765071310FD00048263...
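For context on the error above: the GUI runs an ordinary `zpool create` under the hood, and exit code 1 commonly means the target disks still carry old partition tables or ZFS labels. A dry-run sketch of the equivalent command (the device serials below are hypothetical placeholders, not real drives):

```shell
#!/bin/sh
# build_zpool_cmd only echoes the zpool invocation instead of running it,
# so the syntax can be inspected safely before touching any disks.
build_zpool_cmd() {
    # $1 and $2 are /dev/disk/by-id paths.
    echo "zpool create -o ashift=12 nvme mirror $1 $2"
}

# Example with hypothetical placeholder serials:
build_zpool_cmd /dev/disk/by-id/nvme-EXAMPLE_SERIAL_A /dev/disk/by-id/nvme-EXAMPLE_SERIAL_B
# -> zpool create -o ashift=12 nvme mirror /dev/disk/by-id/nvme-EXAMPLE_SERIAL_A /dev/disk/by-id/nvme-EXAMPLE_SERIAL_B
```

If the command itself is well-formed, wiping leftover signatures from the disks (destructive: `wipefs -a <disk>`) before retrying often resolves the exit-code-1 failure.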
  6. Excessive writes to NVMe on ZFS

    Hi guys, I'm running Proxmox 6.4.13 and recently installed a Corsair MP600 1TB NVMe using a PCIe riser card. The NVMe is set up with ZFS (single disk, compression on, ashift 12). I am seeing a concerning amount of writes and I do not know why. I am not running any serious workloads, just...
  7. Ceph configuration recommendations?

    Hello all, I'm setting up a new 5-node cluster with the following identical specs for each node. I've been using Proxmox for many years but am new to Ceph. I spun up a test environment and it has been working perfectly for a couple of months. Now I'm looking to make sure we are moving in the right direction with...
  8. Nvmes, SSDs, HDDs, Oh My!

    So, I'm trying to plan out a new Proxmox server (or two) using a bunch of spare parts that are lying around. Whether I go with one or two Proxmox servers comes down to deciding whether to have an internal server for media, backups, and Git/Jenkins, and a separate external server for web, DBs...
  9. Poor random read/write performance with RAID10 ZFS, 4x WD Black SN750 500 GB

    Hi everyone, I've only just dived into the Proxmox world. Until now I was running a home server with Hyper-V and have now switched to Proxmox. Since I only have a small 2U mini server, I used 4 NVMe drives (WD Black SN750 500 GB, PCIe 3.0 x4) and on these...
  10. 4 PM9A1 NVME SSD Passthrough to Windows VM

    Hi, this week I had some spare time and installed Windows Server 2019 on my Proxmox server (AMD EPYC 7232P, Supermicro H12SSL-CT, 128GB DDR4 ECC RDIMM), kernel version Linux 5.4.106-1-pve #1 SMP PVE 5.4.106-1. I intend to use it as an NVMe storage server. I installed an Asus Hyper M.2 x16 Gen 4...
  11. Incorrect NVMe SSD wearout displayed by Proxmox 6

    I recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that, according to the web interface, two of the drives show huge wearout after only a few weeks of use. Since these are among the highest-endurance consumer SSDs, with a 1665 TBW warranty for a...
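For a wearout figure that looks implausible, it is worth cross-checking what the web UI shows against the drive's own SMART data, since the UI derives its percentage from SMART attributes. A small sketch (the device path is an example; substitute your own):

```shell
#!/bin/sh
# Cross-check the Proxmox "Wearout" column against the NVMe drive's
# native SMART report. /dev/nvme0 is an example path.
DEV=/dev/nvme0

if command -v smartctl >/dev/null 2>&1; then
    # "Percentage Used" is the drive's own wear estimate;
    # "Data Units Written" shows the raw write volume behind it.
    smartctl -a "$DEV" | grep -i -E 'percentage used|data units written' || true
else
    echo "smartctl (smartmontools) not installed"
fi
```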
  12. Ceph speed as a pool and in a KVM guest

    Hello! We have a Ceph cluster with 2 pools (SSDs and NVMes). In a RADOS benchmark test, the NVMe pool is, as expected, much faster than the SSD pool. NVMe pool: write: BW 900 MB/s, IOPS 220; read: BW 1400 MB/s, IOPS 350. SSD pool: write: BW 190 MB/s, IOPS 50...
  13. Proxmox no longer UEFI booting, UEFI says "Drive not present"

    The topic title pretty much sums it up: I have two NVMe drives (WD/HGST SN200) in a ZFS mirror, and the server no longer boots correctly after a pve-efiboot-tool refresh. If I select either UEFI OS or Linux Boot Manager, it just goes back into the UEFI setup screen without booting. However, if I go...
  14. zfs read performance bottleneck?

    I'm trying to find out why ZFS is pretty slow when it comes to read performance. I have been testing with different systems, disks, and settings. Testing directly on the disk, I am able to achieve reasonable numbers not far from the spec sheet => 400-650k IOPS (P4510 and some Samsung-based HPE)...
  15. How to do disk mirroring on already running system?

    Greetings, I made a mistake when installing my server: I forgot to enable mirroring on the two NVMe system drives. Somehow the Proxmox GUI shows 100GB of HD space (root). How do I check which partition the OS is installed on? I guess it's on /dev/nvme0n1p3. How do I extend this partition to the full remaining...
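Before resizing or mirroring anything, it helps to confirm where the root filesystem actually lives. A read-only sketch using standard util-linux tools (present on any PVE install):

```shell
#!/bin/sh
# Identify the device backing the root filesystem, then show the full
# partition layout for context. Nothing here modifies the disks.
root_src=$(findmnt -n -o SOURCE / 2>/dev/null || echo unknown)
echo "root filesystem is on: $root_src"

# Full device/partition tree, e.g. nvme0n1 -> nvme0n1p3 -> / mounts.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null || true
```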
  16. Ceph Performance Understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe pool and an SSD pool through CRUSH rules. The public_network uses a dedicated 10 Gbit network, while the cluster_network uses a dedicated 40...
  17. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    I was playing a game on a Windows VM when it suddenly paused. I checked the Proxmox logs and saw this: [268690.209099] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10 [268690.289109] nvme 0000:01:00.0: enabling device (0000 -> 0002) [268690.289234] nvme nvme0...
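On the log above: CSTS=0xffffffff typically means the controller stopped responding on the PCIe bus. A mitigation frequently tried for this symptom on some drives (a sketch, not guaranteed for this hardware) is disabling NVMe autonomous power state transitions (APST) via a kernel parameter:

```shell
# /etc/default/grub -- sketch, assuming a GRUB-booted host.
# nvme_core.default_ps_max_latency_us=0 disables NVMe APST, a known
# trigger of "controller is down" resets on some drives.
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme_core.default_ps_max_latency_us=0"
# Apply with: update-grub && reboot
```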
  18. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    I asked a similar question around a year ago but couldn't find it, so I'll ask again here. Our system: a 10-node Proxmox cluster based on 6.3-2; Ceph pool based on 24 SAS3 OSDs (4 or 8 TB), with more to be added soon (split across 3 nodes; 1 more node will be added this week). We plan to add more...
  19. Samsung SSD 970 EVO Plus 500GB PROXMOX RAID-5 (RAIDZ-1)

    Good day to all, I set up a RAID-5 configuration and ran some disk performance/efficiency tests. The main idea is to check RAID-5 efficiency with this server configuration: CPU: 48 cores @ 2.8 GHz, RAM: DDR4 256GB 2400, Disks: 4x NVMe 500GB (Samsung SSD 970 EVO Plus 500GB), RAID level: custom, NIC...
  20. [SOLVED] NVME/100GB Ceph network config advice needed

    Hello, I was looking into a Proxmox setup with Ceph on my all-NVMe servers. At first I was looking at 40Gb, but that wasn't enough even with SSDs. I used the following documents as guidelines, but wanted to get some feedback on my setup/settings (not implemented yet)...

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
