nvme

  1. T

    New to Proxmox/Ceph - performance question

    I am new to Proxmox/Ceph and looking into some performance issues. 5 OSD nodes and 3 Monitor nodes. Cluster VLAN - 10.111.40.0/24. OSD node: CPU - AMD EPYC 2144G (64 cores); Memory - 256GB; Storage - Dell 3.2TB NVMe x 10; Network - 40Gb for the Ceph cluster network, 1Gb for Proxmox mgmt. MON node: CPU -...
  2. J

    ZFS disk device shows but unable to add to volume

    When I check 'Disks' under 'Storage View' it shows the 1TB NVMe I have installed; next to it it says usage: ZFS. When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB NVMe, and I see no way to add it to this pool. Please assist.
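    The question above comes down to two different ZFS operations, depending on whether the new disk should join rpool or stand alone. A hedged sketch, not taken from the thread: the device paths are placeholders, and these commands alter real disks, so confirm names with `ls -l /dev/disk/by-id` first.

    ```shell
    # Option A: attach the new NVMe to the existing rpool disk, turning
    # that vdev into a mirror (placeholder device paths):
    zpool attach rpool /dev/disk/by-id/EXISTING-RPOOL-DISK /dev/disk/by-id/NEW-NVME

    # Option B: leave rpool alone and create a separate pool on the new disk:
    zpool create tank /dev/disk/by-id/NEW-NVME

    # Verify the layout (and resilver progress, for option A):
    zpool status
    ```

    Note that `zpool add` (as opposed to `attach`) would stripe the disk into the pool permanently, which is usually not what "add it to this pool" intends for a boot pool.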
  3. M

    ACPI error, PVE not found; on a new DELL R7525 server

    Hello everyone, a quick note up front: I am not only new here, I am also a complete beginner when it comes to Proxmox and Linux. My experience so far is limited to Windows and Hyper-V servers. Regarding the matter described in the subject line, I have...
  4. J

    [SOLVED] A bit lost with ZFS (closed topic)

    Recently added an extra NVMe drive to the system, using ZFS. After adding it I'm lost as to what is happening. ZFS appears to have 'absorbed' it, but I cannot partition it, and there appears to be no way to undo it. I've definitely done something wrong but cannot progress; any pointers?
  5. L

    pbs-restore performance optimization (parallelization)

    Hey guys, thanks for the release of Proxmox Backup Server! PBS looks very promising in regards to what our company needs: Incremental backups of our VMs, e.g. every 15 minutes Flexible retention cycle, e.g. keep last 8, keep 22 hours, ... One pushing PVE client, several backup servers pull...
  6. G

    QEMU - Can we Add emulated NVME Devices to Guests?

    My servers all run HGST260 enterprise PCIe NVMe drives, mirrored. The drives have great performance, but my guests seem to be limited by queue depth. Are we able to use emulated NVMe devices to increase the parallelization of disk IO, and would this help relative to the standard SCSI devices? I...
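    To the question above: QEMU itself does ship an emulated NVMe controller (`-device nvme`), and although the PVE GUI only offers IDE/SATA/SCSI/VirtIO buses, extra devices can be passed via `qm set --args`. A hedged sketch, not from the thread: the VMID, zvol path, and serial string are placeholders.

    ```shell
    # Attach an existing disk image to VM 100 as an emulated NVMe device.
    # The NVMe controller requires a serial= string; paths are placeholders.
    qm set 100 --args '-drive file=/dev/zvol/rpool/data/vm-100-disk-1,if=none,id=nvm0 -device nvme,serial=nvm0001,drive=nvm0'
    ```

    Whether this beats virtio-scsi with iothreads on queue depth is workload-dependent, so benchmarking both is the safer path.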
  7. S

    Need advice & your kind help - ZFS RAID1 - 2 NVMe drives - disk performance confusion?

    Hello friends, please find the 2 attachments. [LEFT PUTTY = VM1st(ny) & RIGHT PUTTY = VM2nd(555)] My leased server details: 6 cores and 12 threads Intel CPU, 32GB RAM, 2x 512GB NVMe drives. Proxmox node details: during setup I selected ZFS RAID1 for the 2 NVMe drives (for mirroring). ARC...
  8. J

    [TUTORIAL] Bootable NVMe install on old hardware made easy with a PCIe adapter and Clover

    My 1TB NVMe is installed in an NVMe PCIe adapter (these can be bought for about $10). First you simply install Proxmox to the NVMe; this is straightforward, just like installing to a hard drive. The problem comes into play when you try booting from the NVMe: older BIOSes do not support that...
  9. S

    [SOLVED] Adding and mounting a second disk

    Hello everyone, I am not very familiar with Proxmox yet and therefore need some help. I would like to use my second NVMe, which is currently unused, as backup storage. Unfortunately I have no idea how exactly to do that. A really good guide that...
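    The usual shape of the answer to this kind of question: format the disk, mount it persistently, then register the mount point as directory storage. A hedged sketch, not from the thread; the device name /dev/nvme1n1 and storage ID are placeholders, and formatting wipes the disk, so confirm the device with `lsblk` first.

    ```shell
    mkfs.ext4 /dev/nvme1n1                                          # format the spare NVMe (destructive!)
    mkdir -p /mnt/backup
    echo '/dev/nvme1n1 /mnt/backup ext4 defaults 0 2' >> /etc/fstab  # mount persistently across reboots
    mount /mnt/backup
    # Register the directory with PVE as backup storage:
    pvesm add dir nvme-backup --path /mnt/backup --content backup
    ```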
  10. G

    Proxmox 6.2 NVMe character device appears, block devices missing

    I have a long-running server which uses 2x ZFS-mirrored 7TB Ultrastar SN200 Series NVMe SSDs running Proxmox 6.2. I bought two more identical devices and set up a second server about a month ago. I could create VMs and migrate to/from local-vmdata (the local ZFS pool on these drives). At some...
  11. R

    Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6.4TB Micron 9300 MAX NVMe

    Hi everybody, we are currently in the process of replacing our VMware ESXi NFS NetApp setup with a Proxmox Ceph configuration. We purchased 8 nodes with the following configuration: - ThomasKrenn 1HE AMD Single-CPU RA1112 - AMD EPYC 7742 (2.25 GHz, 64-Core, 256 MB) - 512 GB RAM - 2x 240GB SATA...
  12. R

    NVME over Fabric

    Hi, we are using Proxmox with storage that connects over fabric using the Mellanox driver. We also use NVMesh as our storage management software, so we can see the volume as local, and we are using it as a local ZFS file system. The problem is that Proxmox doesn't see it as shared...
  13. I

    Ceph question: SAS3 and NVMe in separate buckets

    Currently I have one Ceph pool consisting of a mix of SAS3 SSDs (different models and sizes, but all in the same performance category). I am thinking of creating another bucket (I don't know if 'bucket' is the right name for what I want to do). Currently we have the default (consisting of SAS3 SSDs)...
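    What this post calls a "bucket" is usually handled in Ceph with device classes and class-restricted CRUSH rules rather than separate CRUSH buckets. A hedged sketch, not from the thread; rule, pool, and class names are placeholders, and `default`/`host` assume the stock CRUSH root and failure domain.

    ```shell
    # Create CRUSH rules that only target OSDs of a given device class:
    ceph osd crush rule create-replicated nvme-only default host nvme
    ceph osd crush rule create-replicated sas3-only default host ssd

    # Point a pool at the class-restricted rule:
    ceph osd pool set nvme-pool crush_rule nvme-only
    ```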
  14. T

    Can't initialize NVMe SSD disk with GPT

    I'm getting this error in Proxmox when trying to initialize an empty NVMe SSD drive (Samsung 970 Evo Plus): Invalid partition data! TASK ERROR: command '/sbin/sgdisk /dev/nvme0n1 -U R' failed: exit code 2
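    That sgdisk invocation is what the PVE GUI runs to write a fresh GPT; "Invalid partition data!" typically points at stale partition or filesystem metadata on the disk. A hedged sketch of the usual remedy, not from the thread: it is destructive, and /dev/nvme0n1 is taken from the error message above, so double-check with `lsblk` before wiping.

    ```shell
    # Wipe leftover partition-table and filesystem signatures (destructive!):
    sgdisk --zap-all /dev/nvme0n1
    wipefs --all /dev/nvme0n1

    # Re-run what the GUI attempted: write a new GPT with a random disk GUID:
    sgdisk /dev/nvme0n1 -U R
    ```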
  15. J

    Threadripper Proxmox build, need suggestions

    [cross posted on STH] I work in a research lab where we recently purchased a Threadripper 3970x workstation from a system integrator. It is a better deal than Dell-Intel, which would have cost us twice as much. The role is to run Proxmox as the base hypervisor and run multiple Windows and...
  16. J

    Passthrough nvme SSD on a quad SSD PCIE card

    Hi, I have a workstation with an MSI TRX40 motherboard, which comes with a dual-SSD PCIe card that allows two SSDs on the card to split a PCIe x16 lane. A quad card can also be purchased. I was wondering how Proxmox will see them when passing through NVMe SSDs on the dual or quad card. Will they behave...
  17. D

    New proxmox deploy - is this way to go with disks?

    Hey guys, I am building my first production Proxmox - not my first virtualization setup, mind you, but costs are astronomical with 'other' virtualization options. Anyway, I have specific needs and a specific setup - regarding the setup, let's leave that aside. My question is this - from your experience - is this...
  18. N

    Intel P3600 NVME SSD Passthrough - Poor Performance

    I am passing through an Intel P3600 1.6TB NVMe SSD to a fresh install of an Ubuntu 19.10 VM. Using the Q35 machine type, 44 cores, 48GB of RAM, and the performance is terrible, maxing out around 1000 MB/s but averaging 450 MB/s. It had previously been connected, through passthrough, on a Dell R710...
  19. L

    I/O errors with NVMe drives

    Current kernel used: 5.3.13-3-pve. I seem to be having some occasional I/O error issues with my NVMe PCIe 4.0 drives (Corsair MP600 on an ASUS Pro WS X570-ACE motherboard), which I've come to determine could be related to kernel TRIM support for NVMe drives, as reflected in kernel bug...
  20. N

    Ceph NVMe SSD slower than spinning disks - 16 node 40GbE Ceph cluster

    I am running the latest version of Proxmox on a 16 node 40GbE cluster. Each node has 2 Samsung 960 EVO 250GB NVMe SSDs and 3 Hitachi 2TB 7200 RPM Ultrastar disks. I am using BlueStore for all disks with two CRUSH rules, one fast for NVMe and one slow for HDD. I have tested bandwidth between all...
