nvme

  1. Ceph configuration recommendations?

    Hello all, setting up a new 5-node cluster with the following identical specs for each node. I've been using Proxmox for many years but am new to Ceph. I spun up a test environment and it has been working perfectly for a couple of months. Now looking to make sure we are moving in the right direction with...
  2. NVMes, SSDs, HDDs, Oh My!

    So, I'm trying to plan out a new Proxmox server (or two) using a bunch of spare parts that are lying around. Whether I go with one or two Proxmox servers comes down to deciding whether or not to have an internal server for media, backups, and Git/Jenkins, and a separate external server for web, DBs...
  3. Poor random read/write performance, RAID10 ZFS, 4x WD Black SN750 500 GB

    Hello everyone, I'm completely new to Proxmox. Until now I ran a home server on Hyper-V and have now switched to Proxmox. Since I only have a small 2U mini server, I used 4 NVMe drives (WD Black SN750, 500 GB, PCIe 3.0 x4) and on these...
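    For numbers like these, a 4k random read/write fio run against the pool is the usual way to measure; a minimal sketch, assuming fio is installed and the pool is mounted at /tank (a placeholder path):

      # 4k random 70/30 read/write mix against a test file on the pool
      # (O_DIRECT is left out because older ZFS releases did not support it)
      fio --name=randrw --filename=/tank/fio-test --size=4G \
          --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio \
          --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting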
  4. 4x PM9A1 NVMe SSD passthrough to Windows VM

    Hi, this week I had some spare time and installed Windows Server 2019 on my Proxmox server (AMD EPYC 7232P, Supermicro H12SSL-CT, 128 GB DDR4 ECC RDIMM), kernel version Linux 5.4.106-1-pve #1 SMP PVE 5.4.106-1. I intend to use it as an NVMe storage server. I installed an ASUS Hyper M.2 x16 Gen 4...
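    For reference, passing an NVMe controller through to a VM is done per PCI device; a minimal sketch, assuming VMID 100 and a PCI address taken from lspci (both placeholders), with IOMMU already enabled:

      # find the PCI addresses of the NVMe controllers
      lspci -nn | grep -i nvme
      # hand one controller to VM 100 as a PCIe device
      qm set 100 -hostpci0 0000:01:00.0,pcie=1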
  5. Incorrect NVMe SSD wearout displayed by Proxmox 6

    I have recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that according to the web interface two of the drives exhibit huge wearout after only a few weeks of use. Since these are among the highest-endurance consumer SSDs, with a 1665 TBW warranty for a...
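    The wearout shown in the GUI can be cross-checked against the drive's own counters; a sketch, assuming smartmontools and nvme-cli are installed:

      # "Percentage Used" is the endurance estimate the wearout column reflects
      smartctl -a /dev/nvme0n1 | grep -i 'percentage used'
      # the raw NVMe SMART log shows the same field plus data-written counters
      nvme smart-log /dev/nvme0n1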
  6. Ceph speed as a pool and in KVM

    Hello! We have a Ceph cluster with 2 pools (SSDs and NVMes). In a rados bench test the NVMe pool is, as expected, much faster than the SSD pool. NVMe pool: write BW 900 MB/s, IOPS 220; read BW 1400 MB/s, IOPS 350. SSD pool: write BW 190 MB/s, IOPS 50...
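    Results in this form usually come from rados bench; for comparison, the typical invocation (pool name is a placeholder):

      # 60-second write benchmark, keeping the objects for the read pass
      rados bench -p nvme-pool 60 write --no-cleanup
      # sequential read benchmark against those objects, then clean up
      rados bench -p nvme-pool 60 seq
      rados -p nvme-pool cleanup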
  7. Proxmox no longer UEFI booting, UEFI says "Drive not present"

    The topic title pretty much sums it up: I have two NVMe (WD/HGST SN200) drives in a ZFS mirror, and the server no longer boots correctly after a pve-efiboot-tool refresh. If I select either UEFI OS or Linux Boot Manager, it just drops back into the UEFI setup screen without booting. However, if I go...
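    One common recovery path from a rescue shell is to re-create the ESPs that pve-efiboot-tool manages; a hedged sketch, assuming the ESP is the second partition on each mirror member (the partition numbers are an assumption):

      # wipe and re-create the ESP filesystem, then register it with the boot tool
      pve-efiboot-tool format /dev/nvme0n1p2 --force
      pve-efiboot-tool init /dev/nvme0n1p2
      # repeat for the second mirror member
      pve-efiboot-tool format /dev/nvme1n1p2 --force
      pve-efiboot-tool init /dev/nvme1n1p2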
  8. ZFS read performance bottleneck?

    I'm trying to find out why ZFS is pretty slow when it comes to read performance; I have been testing with different systems, disks and settings. Testing directly on the disk I am able to achieve reasonable numbers, not far from the spec sheet => 400-650k IOPS (P4510 and some Samsung-based HPE)...
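    When raw-device numbers are fine but pool reads are not, dataset settings and ARC behaviour are the usual first suspects; a quick sketch of what to inspect (the pool name tank is a placeholder):

      # how the dataset is tuned relative to the workload's block size
      zfs get recordsize,primarycache,compression,atime tank
      # ARC hit rate while the benchmark is running
      arcstat 1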
  9. How to do disk mirroring on an already running system?

    Greetings, I made a mistake when installing my server and forgot to enable mirroring on the two NVMe system drives. The Proxmox GUI shows 100 GB of disk space (root). How do I check which partition the OS is installed on? I guess it's on /dev/nvme0n1p3. How do I extend this partition to the full remaining...
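    To answer the "which partition" part, and assuming the installer put root on ZFS (the attach step only applies in that case; device names are placeholders):

      # which device and filesystem back /
      findmnt /
      lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
      # if root is a single-disk ZFS vdev, mirror it onto the second drive
      # (partition the new disk identically first)
      zpool attach rpool /dev/nvme0n1p3 /dev/nvme1n1p3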
  10. Ceph Performance Understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe pool and an SSD pool through CRUSH rules. The public_network uses a dedicated 10 Gbit network while the cluster_network uses a dedicated 40...
  11. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    I was playing a game on a Windows VM, and it suddenly paused. I checked the Proxmox logs, and saw this: [268690.209099] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10 [268690.289109] nvme 0000:01:00.0: enabling device (0000 -> 0002) [268690.289234] nvme nvme0...
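    CSTS=0xffffffff means the controller stopped answering on the bus; a commonly tried mitigation (a workaround, not a guaranteed fix) is disabling aggressive NVMe power saving via kernel parameters:

      # /etc/default/grub -- append to the kernel command line, then run update-grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"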
  12. [SOLVED] Ceph mixed OSD types: SSD (SAS), NVMe (U.2), NVMe (PCIe)

    I asked a similar question around a year ago but could not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster on 6.3-2; a Ceph pool based on 24 SAS3 OSDs (4 or 8 TB), with more to be added soon (split across 3 nodes; 1 more node will be added this week). We plan to add more...
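    Mixing device types in one cluster is normally handled with CRUSH device classes: one rule per class, and each pool pinned to a rule. A sketch with placeholder rule and pool names:

      # classes (hdd/ssd/nvme) are usually auto-detected; verify with:
      ceph osd tree
      # one replicated rule per device class
      ceph osd crush rule create-replicated fast-nvme default host nvme
      ceph osd crush rule create-replicated bulk-sas default host ssd
      # pin an existing pool to the NVMe rule
      ceph osd pool set vm-fast crush_rule fast-nvme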
  13. Samsung SSD 970 EVO Plus 500GB Proxmox RAID-5 (RAIDZ-1)

    Good day to all, I set up a RAID-5 configuration and ran some disk performance/efficiency tests. The main idea is to check RAID-5 efficiency with this server configuration: CPU: 48 cores @ 2.8 GHz; RAM: 256 GB DDR4-2400; Disk: 4x NVMe 500 GB (Samsung SSD 970 EVO Plus); RAID level: custom; NIC...
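    For reference, a RAIDZ-1 pool across the four drives would be created roughly like this (pool and device names are placeholders; ashift=12 matches 4k sectors):

      zpool create -o ashift=12 tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1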
  14. [SOLVED] NVMe/100GbE Ceph network config advice needed

    Hello, I was looking into a Proxmox setup with Ceph on my all-NVMe servers. At first I was considering 40GbE, but that wasn't enough even with SSDs. I used the following documents as a guideline but wanted to get some feedback on my setup/settings (not implemented yet)...
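    A common pattern for a dedicated Ceph network on Proxmox is a separate interface with jumbo frames, referenced from ceph.conf; a sketch with placeholder interface names and addresses:

      # /etc/network/interfaces (excerpt)
      auto ens1
      iface ens1 inet static
          address 10.10.10.1/24
          mtu 9000
      # /etc/pve/ceph.conf then points the cluster traffic at that subnet:
      #   cluster_network = 10.10.10.0/24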
  15. Proxmox slow NVMe speed

    Hello. Hardware: Motherboard: Supermicro X10DRi; RAM: 128 GB DDR4-2133 ECC; CPU: 2x Intel Xeon E5-2678 v3 @ 2.5 GHz, 12 cores; PCIe card: ASUS Hyper M.2 x16; NVMe: 4x Crucial P5 500GB; SSD: Samsung 830; HDD: WD Red 4TB. My issue is that the NVMe drives are really slow and I don't know why...
  16. New to Proxmox/Ceph - performance question

    I am new to Proxmox/Ceph and looking into some performance issues. 5 OSD nodes and 3 monitor nodes. Cluster VLAN: 10.111.40.0/24. OSD node: CPU: AMD EPYC 2144G (64 cores); memory: 256 GB; storage: 10x Dell 3.2TB NVMe; network: 40 Gb for the Ceph cluster, 1 Gb for Proxmox mgmt. MON node: CPU...
  17. ZFS disk device shows but unable to add to volume

    When I check 'Disks' under 'Storage View' it shows the 1TB NVMe I have installed; next to it, it says usage ZFS. When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB NVMe, and I see no way to add it to this pool. Please assist.
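    Adding a single disk to an existing rpool is not something the GUI offers, likely because it would either stripe or mirror the boot pool; from the CLI the two options look like this (the device name is a placeholder):

      # option 1: keep it as its own pool (usually the safer choice)
      zpool create tank /dev/nvme1n1
      # option 2: stripe it into rpool -- effectively irreversible on older ZFS,
      # and not recommended for a boot pool
      # zpool add rpool /dev/nvme1n1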
  18. ACPI error, PVE not found; on a new Dell R7525 server

    Hello everyone, first off: I am not only new here, I am also a complete newcomer to Proxmox and Linux. My experience so far is only with Windows and Hyper-V servers. Regarding the matter described in the subject, I have...
  19. [SOLVED] A bit lost with ZFS (closed topic)

    Recently added an extra NVMe drive to the system, using ZFS. After adding it I'm lost as to what is happening. ZFS appears to have 'absorbed' it, but I cannot partition it, and there appears to be no way to undo it. I've definitely done something wrong but cannot progress; any pointers?
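    To see what ZFS actually did with the drive, the pool layout and command history are the first things to check:

      # current vdev layout -- shows whether the drive was striped in or mirrored
      zpool status -v
      # the exact commands that modified the pool
      zpool history | tail -n 20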
  20. pbs-restore performance optimization (parallelization)

    Hey guys, thanks for the release of Proxmox Backup Server! PBS looks very promising with regard to what our company needs: incremental backups of our VMs, e.g. every 15 minutes; a flexible retention cycle, e.g. keep last 8, keep 22 hours, ...; one pushing PVE client, several backup servers pulling...
