nvme

  1. Ceph - Which is faster/preferred?

    I am in the process of ordering new servers for our company to set up a 5-node cluster with all NVME. I have a choice of either going with (4) 15.3TB drives or (8) 7.68TB drives. The cost is about the same. Are there any advantages/disadvantages in relation to Proxmox/Ceph performance? I think I...
  2. Proxmox VE 8.3 and 8.4 freeze at “Waiting for /dev to be fully populated” on WRX90E + Threadripper PRO

    Hi everyone, I’m trying to install Proxmox VE on a new workstation/server with the following setup: Motherboard: ASUS Pro WS WRX90E-SAGE SE CPU: AMD Ryzen Threadripper PRO 7965WX (24 cores) RAM: 128 GB DDR5 Storage: 2x Samsung 990 Pro 2TB NVMe + 2x WD Black SN770 1TB NVMe IPMI/BMC enabled...
  3. Ceph on 10Gb NIC, which NVMe?

    Greetings, I have just created my account here, since I am assembling a homelab Proxmox cluster with 3 nodes, each having dual 10Gb NICs. I want to use Ceph as a backend for the VM storage for learning purposes. I also wish to migrate my own infrastructure onto Proxmox soon, as I hope it...
  4. NVMe in USB Enclosure (JMicron JMS583) Stalls on Write to LXC Samba Share - UNMAP Errors

    I'm experiencing persistent write stalling issues when copying large files from a Windows client to a Samba share running in an LXC container on my Proxmox VE server. The storage for the LXC is an NVMe SSD in an external USB 3.0 enclosure. Problem Description: Large file copies (e.g., movie...
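    A workaround often reported for JMicron JMS583-based enclosures (not confirmed for this particular thread) is to disable the UAS driver for that bridge via a usb-storage quirk. The `152d:0583` ID and the file name below are assumptions; verify the ID with `lsusb`:

    ```shell
    # Find the enclosure's vendor:product ID (a JMS583 typically shows as 152d:0583)
    lsusb

    # Tell the kernel to bind plain usb-storage instead of UAS for this device;
    # the 'u' flag ignores UAS, which often avoids UNMAP/TRIM-related stalls
    echo "options usb-storage quirks=152d:0583:u" > /etc/modprobe.d/disable-uas-jms583.conf

    # Rebuild the initramfs so the quirk applies at boot, then reboot
    update-initramfs -u
    ```

    After rebooting, `lsusb -t` should show the device driven by `usb-storage` rather than `uas`.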
  5. Hardware recommendation for PBS server

    Hi! I have had very good user experience with the PVE + PBS solution - thanks for the good software! Currently the PBS system runs on hardware like: - server: PRIMERGY RX2530 M7S - cpu: 2 x Intel GOLD 6548Y+ (32 cores / 64 threads per CPU) - memory: 512 GB DDR5 - storage: 10 x 15 TB Micron 7450 NVMe devices -...
  6. [SOLVED] No hard disks found

    Hi everyone, I am having issues installing proxmox on my server. Here is my config: - Ryzen 9 9950x - Gigabyte x870 Gaming Wifi 6 - Crucial SATA bx500 (for proxmox) - Sabrent Rocket 1tb nvme (for VMs) - 2x SK hynix platinum P41 nvme (to setup in raid 0 to install games for a windows gaming VM)...
  7. Slow (or unexpected) NVMe speed even with ext4

    Hi, it's me again. Dear forum friends, Today I'm here with a question that has me thinking and that I can't seem to make sense of. I have a server with very good hardware specifications. In fact, I posted about it a while ago to see if I should install ZFS in RAID 10 to host my VMs. These are...
  8. Ceph - feasible for Clustered MSSQL?

    Looking to host a clustered MSSQL DB for an enterprise environment, using Ceph to remove the NAS/SAN as a single point of failure. Curious about performance; requirements are likely modest, and there is no multi-writer outside the cluster itself. However... as I understand it, writes can be very bad with...
  9. Small server with 2 M.2 SSDs as ZFS RAID 1

    Hi! I have Proxmox running on a NUC. For disks, two Samsung M.2 SSD 990 PRO Heatsink 4TB are installed as a ZFS RAID-1. A few LXC containers run on it: - Traefik - piHole - Nextcloud - PaperlessNGX Beyond that, about 10 more LXC are installed, but are only...
  10. Best Storage Setup for Proxmox VE on a Dedicated Server? (Performance Question)

    Hello everyone, I’m about to rent a dedicated server to host Proxmox VE and deploy several critical company applications. Since performance is key, I want to make sure I choose the best possible storage setup. I have two possible configurations in mind, assuming NVMe SSDs are significantly...
  11. ZFS on RAID 10 with 4 x PCIe5 NVMe Drives: Performance Insights and Questions

    Hello everyone. I recently acquired 2 HPE DL380 GEN11 servers. Each node has 512GB of RAM and 4 PCIe5 NVME drives of 3.2TB (HPE, but manufactured by KIOXIA). Given my use case, where the goal was not to have shared storage but rather a cluster with VM replication, I created a logical volume for...
  12. ZFS pool with mixed speed & size vdevs

    I have a fast 2TB NVMe SSD and a slow 4TB SATA SSD inside a RAID0 ZFS pool. I like ZFS (over LVM, which I had before) for its versatility in managing storage (e.g. one pool to use, including the pve root), however I'm running into the issue that my mixed hardware slows things down. With LVM, I could...
  13. Problem with multipath via RoCE

    Hi, in my setup we decided to change the connection to remote storage from Fibre Channel to RoCE. It is still the same remote storage source (the same storage array), just over a different underlying transport technology. The array is made by Huawei. Since that change, during higher network...
  14. PVE 8.3.3 | NVMe drive fails regularly

    Dear members, I have a question about PVE on a NUC 14 Pro in combination with a SATA SSD and an NVMe drive. Hardware: ASUS NUC 14 Pro CPU: i7-155H, RAM: 96GB (SO-DIMM, DDR5, 5600 MHz), SSD: 1x 2.5“ SAMSUNG EVO 870 (500GB), 1x M.2 SAMSUNG 990 PRO (2TB) Software: Proxmox Virtual Environment 8.3.3...
  15. ext4_journal_check_start:84: comm pmxcfs: Detected aborted journal

    Hello everyone, for the past couple of days I have been receiving the following error message. It seems to be a problem with the Proxmox Cluster file system (“pmxcfs”) and the NVMe SSD. [16821.351985] EXT4-fs error (device nvme0n1p2): ext4_journal_check_start:84: comm pmxcfs: Detected aborted journal...
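    When ext4 aborts its journal, the usual first steps (a generic sketch, not specific to this report; device names are taken from the quoted log line) are to check the drive's health and then repair the filesystem offline:

    ```shell
    # Check the NVMe device's health and error counters (smartmontools)
    smartctl -a /dev/nvme0n1

    # From a rescue shell, with the filesystem unmounted,
    # force a full check and journal replay on the affected partition
    fsck.ext4 -f /dev/nvme0n1p2
    ```

    If SMART already reports media errors or a high Percentage Used, suspect the drive before the filesystem.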
  16. CEPH advice

    I'd like some advice regarding CEPH. I want to use it in the future once I have a 3-node cluster. The idea is to have 2 NVMe SSDs per node: one 1TB SSD for the OS and one 4TB SSD for the CEPH storage. Is this a good approach? BTW, I'm thinking of WD SN850X or Samsung 990 Pro SSDs.
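    For reference, once the dedicated data disk is visible on a node, turning it into a Ceph OSD on Proxmox is a single `pveceph` call; `/dev/nvme1n1` below is a hypothetical device name for the 4TB drive:

    ```shell
    # Create an OSD on the dedicated data device (run once on each node)
    pveceph osd create /dev/nvme1n1
    ```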
  17. NVMe SSD durability under Proxmox

    Probably asked before, but I'd like to know how NVMe SSD durability holds up under Proxmox in a homelab. Is it, for example, wise to separate the OS from VM storage because of the writes? So two separate NVMe SSDs? I do not run CEPH or ZFS.
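    To judge wear in a setup like this, the drive's own counters are the thing to watch; a quick check (device name assumed, requires `nvme-cli` or `smartmontools`):

    ```shell
    # Percentage Used and Data Units Written indicate wear over time
    nvme smart-log /dev/nvme0 | grep -iE 'percentage_used|data_units_written'

    # Equivalent view via smartmontools
    smartctl -a /dev/nvme0 | grep -iE 'percentage used|data units written'
    ```

    Sampling these values a few weeks apart gives a realistic write rate, which is more useful than any rule of thumb about separating OS and VM disks.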
  18. CEPH '1 PG inconsistent'

    Hi, I have a 3-node Proxmox cluster running Ceph with a mixture of NVMe and SAS hard drives, built from some used hardware. I logged into the dashboard this morning and was greeted with an error in the CEPH dashboard saying 'Possible data damage: 1 pg inconsistent'. I've tried a few things...
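    The standard triage for `1 pg inconsistent` is to identify the PG, inspect what the scrub found, and then repair; a general sketch, where `<pgid>` is a placeholder taken from the health output:

    ```shell
    # Identify the inconsistent PG and the OSDs involved
    ceph health detail

    # Inspect what deep-scrub actually found on that PG
    rados list-inconsistent-obj <pgid> --format=json-pretty

    # Trigger a repair from the authoritative replicas
    ceph pg repair <pgid>
    ```

    If the same PG (or the same OSD) keeps turning up inconsistent, check the underlying disk's SMART data before repairing again.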
  19. Updating Proxmox led to NVMe bug

    Hi all, I have a server running on Proxmox which uses four NVMe drives in a ZFS RAID-Z2. Since I recently updated Proxmox, the NVMe drives periodically go down and the VMs running on that datastore crash. Typically the issue occurs during a...
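    One frequently reported cause of NVMe drives dropping off the bus after a kernel update is APST (autonomous power state transitions); disabling it via a kernel parameter is a common test. This is a hedged workaround to try, not a confirmed fix for this thread:

    ```shell
    # Look for controller resets or 'Removing after probe failure' messages
    dmesg | grep -i nvme

    # To test-disable APST, add this to GRUB_CMDLINE_LINUX_DEFAULT
    # in /etc/default/grub:
    #   nvme_core.default_ps_max_latency_us=0
    update-grub   # then reboot and re-test under load
    ```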
  20. [SOLVED] NVMe Drive Disappeared from Host After Thinpool Creation

    I'm learning by doing, so apologies if I've missed some basic steps or information. I have a three node cluster made up of Lenovo ThinkCentres (M900 TFF) that I got from a friend after his work decommissioned them. They've been working great, but only came with 128GB SSD SATA drives, which...