nvme

  1. [SOLVED] What options does ZFS have to reduce wear on SSDs?

    I have 2x the Crucial P2 2000GB as NVMe SSDs. The pool I created is called whirl-pool. Do I still need to change anything? I have heard something about ashift=off... how do I do that? Does it make sense?
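
    For what it's worth, there is no ashift=off: ashift is fixed per vdev when the pool is created and cannot be changed afterwards. A minimal sketch of the knobs usually suggested for SSD longevity, assuming the pool really is named whirl-pool:

        # ashift is read-only after creation; 12 (4K sectors) is typical for NVMe
        zpool get ashift whirl-pool
        # pass discards down to the SSDs as blocks are freed
        zpool set autotrim=on whirl-pool
        # reduce write volume: cheap compression, no access-time updates
        zfs set compression=lz4 whirl-pool
        zfs set atime=off whirl-pool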
  2. command '/sbin/zpool create -o 'ashift=12' nvme mirror failed: exit code 1

    Hi, I installed Proxmox 7, and I am trying to create a new ZFS pool using two 1TB NVMe drives via the GUI. However, I get the below error: command '/sbin/zpool create -o 'ashift=12' nvme mirror /dev/disk/by-id/nvme-Sabrent_1765071310FD00048263...
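
    The GUI swallows the underlying error text; a frequent cause is leftover partition signatures on previously used disks. A sketch of how one might debug this, with placeholder device paths (substitute your own by-id paths):

        # clear old signatures first - this destroys any data on the disks
        wipefs -a /dev/nvme0n1 /dev/nvme1n1
        # retry by hand to see the full error message
        zpool create -f -o ashift=12 nvme mirror <first-disk-by-id> <second-disk-by-id>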
  3. Excessive writes to NVMe on ZFS

    Hi guys, I'm running Proxmox 6.4.13 and recently installed a Corsair MP600 1TB NVMe using a PCIe riser card. The NVMe is set up using ZFS (single disk, compression on, ashift 12). I am seeing a concerning amount of writes and I do not know why. I am not running any serious workloads. Just...
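
    Before hunting for a cause, it helps to quantify the writes from both ends. A sketch (the device name is an assumption):

        # live write throughput as ZFS sees it, refreshed every 5 seconds
        zpool iostat -v 5
        # lifetime "Data Units Written" as the drive itself records it
        smartctl -a /dev/nvme0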
  4. Ceph configuration recommendations?

    Hello all, setting up a new 5-node cluster with the following identical specs for each node. I have been using Proxmox for many years but am new to Ceph. I spun up a test environment and it has been working perfectly for a couple of months. Now I am looking to make sure we are moving in the right direction with...
  5. NVMes, SSDs, HDDs, Oh My!

    So, I'm trying to plan out a new Proxmox server (or two) using a bunch of spare parts that are lying around. Whether I go with one or two Proxmox servers (the decision is whether or not to have an internal server for media, backups, Git/Jenkins, and a separate external server for web, DBs...
  6. Poor random read/write performance, RAID10 ZFS, 4x WD Black SN750 500 GB

    Hello everyone, I am completely new to Proxmox. Until now I was running a home server with Hyper-V and have now switched to Proxmox. Since I only have a small 2U mini server, I used 4 NVMes (WD Black SN750 with 500 GB, PCIe 3.0 x4) and on these...
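
    For comparable numbers it is worth measuring with a defined workload. A sketch of a 4K random read/write test with fio; the pool name tank and the file path are placeholders:

        # 70/30 random read/write mix on the pool; note that ZFS may serve
        # part of the reads from the ARC, inflating read results
        fio --name=randrw --filename=/tank/fio.test --size=4G \
            --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
            --ioengine=libaio --runtime=60 --time_based --group_reporting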
  7. 4x PM9A1 NVMe SSD passthrough to Windows VM

    Hi, this week I had some spare time and installed Windows Server 2019 on my Proxmox server (AMD EPYC 7232P, Supermicro H12SSL-CT, 128GB DDR4 ECC RDIMM), kernel version Linux 5.4.106-1-pve #1 SMP PVE 5.4.106-1. I intend to use it as an NVMe storage server. I installed an ASUS Hyper M.2 x16 Gen 4...
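
    Assuming IOMMU is enabled and each drive sits in its own IOMMU group, whole-controller PCI passthrough would look roughly like this; the VMID 100 and the PCI address are made-up examples:

        # find the PCI addresses of the four NVMe controllers
        lspci -nn | grep -i nvme
        # hand one controller to the VM (repeat with hostpci1..3 for the rest;
        # pcie=1 requires the q35 machine type)
        qm set 100 -hostpci0 0000:41:00.0,pcie=1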
  8. Incorrect NVMe SSD wearout displayed by Proxmox 6

    I have recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that, according to the web interface, two of the drives exhibit huge wearout after only a few weeks of use. Since these are among the highest-endurance consumer SSDs with 1665 TBW warranty for a...
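
    The wearout column in the web UI comes from the drive's self-reported "Percentage Used" SMART field, and some consumer firmware reports that field oddly, so it is worth checking what the drives themselves claim. The device name below is an assumption:

        # both should show the same "Percentage Used" value the GUI displays
        smartctl -a /dev/nvme0 | grep -i 'percentage used'
        nvme smart-log /dev/nvme0 | grep -i percentage_used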
  9. Ceph speed as a pool and as a KVM guest

    Hello! We have a Ceph cluster with 2 pools (SSDs and NVMes). In a rados benchmark test, the NVMe pool is, as expected, much faster than the SSD pool. NVMe pool: write: BW 900 MB/s, IOPS 220; read: BW 1400 MB/s, IOPS 350. SSD pool: write: BW 190 MB/s, IOPS 50...
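
    Worth noting when comparing these figures with in-VM results: rados bench defaults to 4 MiB objects and 16 concurrent ops, which is why the IOPS look tiny next to the bandwidth (220 x 4 MiB is roughly the 900 MB/s shown). A typical run, with a placeholder pool name:

        # 60-second write test, keeping the objects for the read pass
        rados bench -p nvme-pool 60 write --no-cleanup
        rados bench -p nvme-pool 60 seq
        rados -p nvme-pool cleanup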
  10. Proxmox no longer UEFI booting, UEFI says "Drive not present"

    The topic title pretty much sums it up. I have two NVMe (WD/HGST SN200) drives in a ZFS mirror and the server no longer boots correctly after a pve-efiboot-tool refresh. If I select either UEFI OS or Linux Boot Manager, it just goes back into the UEFI setup screen without booting. However, if I go...
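
    One recovery path, sketched under the assumption that the ESP is the second partition on each disk (the PVE default layout); pve-efiboot-tool was later renamed proxmox-boot-tool, same utility:

        # from a rescue shell: check which ESPs are registered
        proxmox-boot-tool status
        # re-initialize the ESP on each disk and rewrite the boot entries
        proxmox-boot-tool format /dev/nvme0n1p2 --force   # only if not FAT-formatted
        proxmox-boot-tool init /dev/nvme0n1p2
        proxmox-boot-tool refresh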
  11. ZFS read performance bottleneck?

    I'm trying to find out why ZFS is pretty slow when it comes to read performance. I have been testing with different systems, disks and settings. Testing directly on the disk I am able to achieve reasonable numbers not far from the spec sheet => 400-650k IOPS (P4510 and some Samsung-based HPE)...
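
    To make the comparison fair, the raw-device baseline and the ZFS test need identical fio parameters. A read-only baseline sketch (the device path is an example; --readonly guards against accidental writes):

        fio --name=raw --filename=/dev/nvme0n1 --rw=randread --bs=4k \
            --iodepth=128 --numjobs=4 --ioengine=libaio --direct=1 \
            --runtime=30 --time_based --readonly --group_reporting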
  12. How to do disk mirroring on an already running system?

    Greetings, I messed up when installing my server and forgot to turn on mirroring on the two NVMe system drives. Somehow the Proxmox GUI shows 100GB of HD space (root). How do I check which partition the OS is installed on? I guess it's on /dev/nvme0n1p3... How do I extend this partition to the full remaining...
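
    Assuming a default PVE ZFS install (ESP on p2, ZFS on p3, pool named rpool), converting the single disk to a mirror might look like this; double-check the device names against lsblk first:

        # replicate the partition table onto the second disk, with new GUIDs
        sgdisk /dev/nvme0n1 -R /dev/nvme1n1
        sgdisk -G /dev/nvme1n1
        # turn the single-disk vdev into a mirror
        zpool attach rpool /dev/nvme0n1p3 /dev/nvme1n1p3
        # make the second disk bootable as well
        proxmox-boot-tool format /dev/nvme1n1p2
        proxmox-boot-tool init /dev/nvme1n1p2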
  13. Ceph performance understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe and an SSD pool through CRUSH rules. The public_network uses a dedicated 10 GBit network while the cluster_network uses a dedicated 40...
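
    For reference, the device-class separation described here is typically built like this (the rule and pool names are examples):

        # replicated rules restricted to one device class each
        ceph osd crush rule create-replicated nvme-rule default host nvme
        ceph osd crush rule create-replicated ssd-rule default host ssd
        # bind each pool to its rule
        ceph osd pool set nvme-pool crush_rule nvme-rule
        ceph osd pool set ssd-pool crush_rule ssd-rule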
  14. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    I was playing a game on a Windows VM, and it suddenly paused. I checked the Proxmox logs and saw this: [268690.209099] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10 [268690.289109] nvme 0000:01:00.0: enabling device (0000 -> 0002) [268690.289234] nvme nvme0...
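
    CSTS=0xffffffff means the controller stopped answering entirely. A commonly suggested workaround (no guarantee, and not a fix for genuinely failing hardware) is to disable NVMe power-state transitions via a kernel parameter:

        # /etc/default/grub - then run update-grub and reboot
        # (on ZFS/systemd-boot installs, edit /etc/kernel/cmdline and run
        #  proxmox-boot-tool refresh instead)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme_core.default_ps_max_latency_us=0"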
  15. [SOLVED] Ceph mixed OSD types: SSD (SAS, NVMe (U.2), NVMe (PCIe))

    I asked a similar question around a year ago but did not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster based on 6.3-2, with a Ceph pool based on 24 SAS3 OSDs (4 or 8 TB); more will be added soon (split across 3 nodes, 1 more node will be added this week). We plan to add more...
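
    When mixing media like this, the usual lever is the OSD device class, so CRUSH rules can keep the pools apart. A sketch (osd.24 is a made-up ID):

        # see which class each OSD was auto-assigned (hdd/ssd/nvme)
        ceph osd tree
        # correct it if autodetection got it wrong
        ceph osd crush rm-device-class osd.24
        ceph osd crush set-device-class nvme osd.24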
  16. Samsung SSD 970 EVO Plus 500GB Proxmox RAID-5 (RAIDZ-1)

    Good day to all, I set up a RAID-5 configuration and ran some disk performance/efficiency tests. The main idea is to check RAID-5 efficiency with this server configuration: CPU: 48 cores @ 2.8 GHz, RAM: DDR4 256GB 2400, Disks: 4x NVMe 500GB (Samsung SSD 970 EVO Plus 500GB), RAID level: custom, NIC...
  17. [SOLVED] NVMe/100GbE Ceph network config advice needed

    Hello, I was looking into a Proxmox setup with Ceph on my all-NVMe servers. At first I was looking into 40GbE, but that wasn't enough with SSDs. I used the following documents as a guideline but wanted to get some feedback on my setup/settings (not implemented yet)...
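
    On fast NVMe OSDs the common advice is to split the public and cluster networks onto separate interfaces. In PVE that can be done at init time; the subnets here are placeholders:

        # public (client/mon) traffic vs. OSD replication traffic
        pveceph init --network 10.10.10.0/24 --cluster-network 10.10.11.0/24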
  18. Proxmox slow NVMe speed

    Hello. Hardware: Motherboard: Supermicro X10DRi, RAM: 128GB DDR4 2133MHz ECC, CPU: 2x Intel Xeon E5-2678 v3 @ 2.5GHz 12-core, PCIe card: ASUS Hyper M.2 x16, NVMe: 4x Crucial P5 500GB, SSD: Samsung 830, HDD: WD Red 4TB. My issue is that the NVMe drives are really slow and I don't know why...
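
    With four drives on a Hyper M.2 card the slot must be set to x4x4x4x4 bifurcation in the BIOS, and a degraded PCIe link is a frequent culprit for slow NVMe. Worth verifying the negotiated link per drive (the PCI address is an example):

        # LnkSta should match LnkCap, e.g. "Speed 8GT/s, Width x4" for PCIe 3.0 x4
        lspci -vv -s 81:00.0 | grep -E 'LnkCap:|LnkSta:'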
  19. New to Proxmox/Ceph - performance question

    I am new to Proxmox/Ceph and looking into some performance issues. 5 OSD nodes and 3 monitor nodes. Cluster VLAN - 10.111.40.0/24. OSD node: CPU - AMD EPYC 2144G (64 cores), memory - 256GB, storage - Dell 3.2TB NVMe x 10, network - 40GB for the Ceph cluster, 1GB for Proxmox mgmt. MON node: CPU -...
  20. ZFS disk device shows but unable to add to volume

    When I check 'Disks' under 'Storage View' it shows the 1TB NVMe I have installed; next to it, it says usage ZFS. When I click on 'ZFS' just below 'Disks' there is a single pool named rpool, which does not include the 1TB NVMe, and I see no way to add it to this pool. Please assist.
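
    Stretching rpool across an extra, differently sized disk is usually not what you want, since a vdev added with zpool add is hard to undo. A simpler sketch: give the NVMe its own pool and register it as PVE storage (the names and the by-id path are placeholders):

        zpool create -o ashift=12 nvmepool /dev/disk/by-id/<nvme-disk-id>
        pvesm add zfspool nvme-storage --pool nvmepool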