nvme

  1. ZFS read performance bottleneck?

    I'm trying to find out why ZFS is pretty slow when it comes to read performance. I have been testing with different systems, disks and settings. Testing directly on the disk I'm able to achieve reasonable numbers, not far from the spec sheet => 400-650k IOPS (P4510 and some Samsung-based HPE)...
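
    A raw-device baseline is the usual first step for a comparison like this; a minimal fio sketch, assuming libaio is available and /dev/nvme0n1 holds no data you care about (device name, queue depth and job count are placeholders):

      # 4k random-read IOPS directly against the block device (read-only job)
      fio --name=randread --filename=/dev/nvme0n1 --rw=randread --bs=4k \
          --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
          --runtime=60 --time_based --group_reporting

    Running the same job against a file on the ZFS dataset then shows how much the ARC, checksumming and record-size handling cost on top of the raw numbers.
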
  2. How to do disk mirroring on an already running system?

    Greetings, I made a poo poo when installing my server, forgot to turn on mirror on two nvme system drives. Somehow Proxmox GUi shows 100gb of HD space(root). How do I check on which partition OS is installed? I guess its on /dev/nvme0n1p3.. How do I extend this partition to full remaining...
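
    Identifying the root device is a one-liner; a minimal sketch, where the LVM names assume a stock PVE ext4 install and are placeholders otherwise:

      # Which device backs / ?
      findmnt /
      lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/nvme0n1
      # On a default ext4/LVM install, growing root could look like:
      lvextend -l +100%FREE /dev/pve/root && resize2fs /dev/pve/root
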
  3. Ceph Performance Understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe pool and an SSD pool through CRUSH rules. The public_network uses a dedicated 10 Gbit network while the cluster_network uses a dedicated 40...
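
    Separating pools by media is typically done with class-aware CRUSH rules; a minimal sketch, assuming the OSDs already carry the nvme and ssd device classes (rule and pool names are placeholders):

      ceph osd crush rule create-replicated rule-nvme default host nvme
      ceph osd crush rule create-replicated rule-ssd  default host ssd
      ceph osd pool set pool-nvme crush_rule rule-nvme
      ceph osd pool set pool-ssd  crush_rule rule-ssd
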
  4. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    I was playing a game on a Windows VM, and it suddenly paused. I checked the Proxmox logs, and saw this: [268690.209099] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10 [268690.289109] nvme 0000:01:00.0: enabling device (0000 -> 0002) [268690.289234] nvme nvme0...
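
    CSTS=0xffffffff usually means the controller dropped off the bus entirely. A commonly tried (not guaranteed) workaround is disabling NVMe power-state transitions and PCIe ASPM via kernel parameters:

      # /etc/default/grub -- then run update-grub and reboot
      GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"
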
  5. [SOLVED] Ceph: mixing OSD types - SAS SSD, NVMe (U.2), NVMe (PCIe)

    I asked a similar question around a year ago but I could not find it, so I'll ask it here again. Our system: a Proxmox cluster based on 6.3-2, 10 nodes; a Ceph pool based on 24 SAS3 OSDs (4 or 8 TB), more to be added soon (split across 3 nodes, 1 more node will be added this week). We plan to add more...
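
    Mixing OSD media in one cluster usually relies on CRUSH device classes so each pool only targets one type; a minimal sketch, where osd.24 stands in for one of the newly added NVMe OSDs:

      # Tag the new OSD with its class and verify
      ceph osd crush rm-device-class osd.24
      ceph osd crush set-device-class nvme osd.24
      ceph osd crush class ls
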
  6. Samsung SSD 970 EVO Plus 500GB Proxmox RAID-5 (RAIDZ-1)

    Good day to all, I set up a RAID-5 configuration and ran some disk performance/efficiency tests. The main idea is to check RAID-5 efficiency with this server configuration: CPU: 48 cores @ 2.8 GHz RAM: DDR4 256GB 2400 Disk: 4x NVMe 500GB (Samsung SSD 970 EVO Plus 500GB) RAID level: Custom NIC...
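
    For reference, a 4-disk RAIDZ-1 pool and a simple throughput test; a minimal sketch, assuming the listed 970 EVO Plus drives show up as nvme0n1..nvme3n1 (names, pool name and sizes are placeholders):

      zpool create -o ashift=12 tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
      # Sequential write test against the pool (runs buffered; O_DIRECT
      # support on ZFS varies by version)
      fio --name=seqwrite --directory=/tank --size=8G --rw=write --bs=1M \
          --ioengine=libaio --iodepth=16 --runtime=60 --time_based
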
  7. [SOLVED] NVMe/100 GbE Ceph network config advice needed

    Hello, I was looking into a Proxmox setup with Ceph on my all-NVMe servers. At first I was looking into 40 GbE, but that wasn't enough with SSDs. I used the following documents as a guideline, but wanted to get some feedback on my setup/settings (not implemented yet)...
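
    The Ceph side of such a setup mostly comes down to keeping public and cluster traffic on separate subnets; a minimal ceph.conf sketch with assumed example subnets:

      [global]
          public_network  = 10.10.10.0/24   # client/monitor traffic
          cluster_network = 10.10.20.0/24   # OSD replication on the fast links
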
  8. Proxmox: slow NVMe speed

    Hello. Hardware: Motherboard: Supermicro X10DRi, RAM: 128GB DDR4 2133MHz ECC, CPU: 2x Intel Xeon E5-2678V3 @ 2.5GHz 12-core, PCIe card: ASUS Hyper M.2 X16 Card, NVMe: 4x Crucial P5 500GB, SSD: Samsung 830, HDD: WD Red 4TB. My issue is that the NVMe drives are really slow and I don't know why...
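
    With an ASUS Hyper M.2 card, slow NVMe speeds often come down to the slot not being bifurcated to x4x4x4x4, so drives train at a reduced link width. A quick check (the PCI address is a placeholder; take it from nvme list / lspci output):

      # Compare the negotiated link (LnkSta) against the drive's capability (LnkCap)
      lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
      nvme list
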
  9. New to Proxmox/Ceph - performance question

    I am new to Proxmox/Ceph and looking into some performance issues. 5 OSD nodes and 3 monitor nodes. Cluster VLAN - 10.111.40.0/24. OSD node: CPU - AMD EPYC 2144G (64 cores), Memory - 256GB, Storage - Dell 3.2TB NVMe x 10, Network - 40 Gb for the Ceph cluster, Network - 1 Gb for Proxmox mgmt. MON node: CPU -...
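
    Before digging into VM-level numbers, a pool-level baseline helps isolate where the slowdown is; a minimal rados bench sketch (pool name is a placeholder):

      rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
      rados bench -p testpool 60 rand -t 16
      rados -p testpool cleanup
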
  10. ZFS disk device shows but unable to add to volume

    When I check 'Disks' under 'Storage View' it shows the 1TB NVMe I have installed; next to it it says usage ZFS. When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB NVMe, and I see no way to add it to this pool. Please assist.
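
    rpool was created at install time and the GUI won't stripe a new disk into it; from the shell the choice is between attaching the disk as a mirror or making a separate pool. A minimal sketch, assuming the new disk is /dev/nvme1n1 (the existing device is a placeholder, read it off zpool status):

      zpool status rpool                                  # see which device backs rpool
      zpool attach rpool <existing-device> /dev/nvme1n1   # mirror onto the new disk
      # ...or keep it as an independent pool instead:
      zpool create tank /dev/nvme1n1
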
  11. ACPI error, PVE not found; on a new Dell R7525 server

    Hello everyone, a quick note up front: I am not only new here, I am also a complete newcomer when it comes to Proxmox and Linux. My experience so far is limited to Windows and Hyper-V servers. Regarding the matter described in the subject line, I have...
  12. [SOLVED] a bit lost with ZFS (closed topic)

    Recently I added an extra NVMe drive to the system, using ZFS. After adding it I'm lost as to what is happening. ZFS appears to have 'absorbed' it, but I cannot partition it, and there appears to be no way to undo it. I've definitely done something wrong but cannot progress. Any pointers?
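
    The first step is to see how ZFS actually took the disk in; a minimal sketch (pool and device names are placeholders, and whether removal applies depends on how the disk was added):

      zpool status                       # did the disk join a pool, and as what vdev?
      zpool remove rpool nvme1n1         # plain top-level vdev removal is possible on OpenZFS 0.8+ (not raidz)
      zpool labelclear -f /dev/nvme1n1   # once detached, clear the ZFS label so it can be repartitioned
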
  13. pbs-restore performance optimization (parallelization)

    Hey guys, thanks for the release of Proxmox Backup Server! PBS looks very promising with regard to what our company needs: incremental backups of our VMs, e.g. every 15 minutes; a flexible retention cycle, e.g. keep last 8, keep 22 hours, ...; one pushing PVE client, several backup servers pull...
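
    The retention cycle described above maps directly onto PBS prune options; a minimal sketch, where the backup group and the repository string are placeholders:

      proxmox-backup-client prune vm/100 --keep-last 8 --keep-hourly 22 \
          --repository backup@pbs@pbs.example.com:datastore1
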
  14. QEMU - can we add emulated NVMe devices to guests?

    My servers all run HGST260 enterprise PCIe NVMe drives, mirrored. The drives have great performance, but my guests seem to be limited by queue depth. Are we able to use emulated NVMe devices to increase the parallelization of disk IO, and would this help relative to the standard SCSI devices? I...
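
    QEMU does ship an emulated nvme device, and Proxmox can pass it in through the args hook even though the GUI doesn't expose it; a minimal sketch, where the VM ID and the zvol path are placeholders:

      qm set 100 --args '-drive file=/dev/zvol/rpool/vm-100-disk-1,if=none,id=nvm0,format=raw -device nvme,drive=nvm0,serial=nvme-disk0'

    Whether this beats virtio-scsi with iothreads is workload-dependent: the emulated controller offers deeper queues but adds its own emulation overhead.
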
  15. Need advice - ZFS RAID1 with 2 NVMe drives - disk performance confusion?

    Hello friends, please find the 2 attachments. [LEFT PuTTY = VM1st(ny) & RIGHT PuTTY = VM2nd(555)] My leased server details: 6 cores and 12 threads Intel CPU, 32 GB RAM, 2x 512GB NVMe drives. Proxmox node details: during setup I selected ZFS RAID1 for the 2x NVMe drives (mirroring purpose). ARC...
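
    Since the ARC is part of the confusion, checking what the cache is actually doing is a good starting point; a minimal sketch using the standard kstat interface:

      # Current ARC size vs. its configured ceiling
      grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats
      cat /sys/module/zfs/parameters/zfs_arc_max   # 0 means the default (about half of RAM)
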
  16. [TUTORIAL] Bootable NVMe install on old hardware made easy with a PCIe adapter and Clover

    My 1TB NVMe is installed in an NVMe PCIe adapter (these can be bought for around $10). First you simply install Proxmox to the NVMe; this is straightforward, just like installing to a hard drive. The problem comes into play when you try booting from the NVMe: older BIOSes do not support that...
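
    In outline, the Clover part is writing the boot image to a small USB stick and enabling its NVMe driver; a sketch with assumed file names (the exact Clover release, image name and driver path vary by version, and sdX is a placeholder):

      # Write the Clover boot image to the USB stick (double-check sdX!)
      dd if=Clover-5156-X64.iso.img of=/dev/sdX bs=4M status=progress
      # Then, on the stick's EFI partition, enable the NVMe driver, e.g.:
      #   copy EFI/CLOVER/drivers/off/NvmExpressDxe.efi -> EFI/CLOVER/drivers/UEFI/
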
  17. [SOLVED] Adding and mounting a second disk

    Hello everyone, I am not very familiar with Proxmox yet and therefore need some help. I would like to use my second NVMe, which is currently not in use, as backup storage. Unfortunately, I have no idea how exactly to do that. A really good guide that...
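
    A common way to do this is a filesystem on the spare NVMe mounted as a directory storage; a minimal sketch, assuming the unused disk is /dev/nvme1n1 (device name, storage ID and mount point are placeholders):

      mkfs.ext4 /dev/nvme1n1
      mkdir -p /mnt/backup
      echo '/dev/nvme1n1 /mnt/backup ext4 defaults 0 2' >> /etc/fstab   # better: reference the UUID
      mount /mnt/backup
      pvesm add dir nvme-backup --path /mnt/backup --content backup
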
  18. Proxmox 6.2: NVMe character device appears, block devices missing

    I have a long-running server which uses 2x ZFS-mirrored 7TB Ultrastar SN200 Series NVMe SSDs running Proxmox 6.2. I bought two more identical devices and set up a second server about a month ago. I could create VMs and migrate to/from local-vmdata (the local ZFS pool on these drives). At some...
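
    When the character device (e.g. /dev/nvme2) exists but the namespace block device (/dev/nvme2n1) is gone, a namespace rescan is worth trying before deeper debugging; a minimal nvme-cli sketch (controller number is a placeholder):

      nvme list                    # inventory of controllers and namespaces
      nvme ns-rescan /dev/nvme2    # ask the controller to re-enumerate its namespaces
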
  19. Benchmark: 3-node AMD EPYC 7742 64-core, 512 GB RAM, 3x3 6.4 TB Micron 9300 MAX NVMe

    Hi everybody, we are currently in the process of replacing our VMware ESXi NFS NetApp setup with a Proxmox Ceph configuration. We purchased 8 nodes with the following configuration: - ThomasKrenn 1U AMD single-CPU RA1112 - AMD EPYC 7742 (2.25 GHz, 64-core, 256 MB) - 512 GB RAM - 2x 240GB SATA...
  20. NVMe over Fabrics

    Hi, we are using Proxmox with storage that connects over fabric using the Mellanox driver. We also use NVMesh as our storage management software, so we can see the volume as local, and we are using it as a local ZFS file system. The problem is that Proxmox doesn't see it as shared...
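
    Since every node sees the same NVMesh volume, one workaround people use is a directory storage on top of it marked shared, so PVE skips copying on migration; a storage.cfg sketch with assumed names (note the flag only tells PVE the content is identical on every node, it adds no locking of its own):

      dir: nvmesh-store
          path /mnt/nvmesh
          content images,rootdir
          shared 1
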