nvme

  1.

    Slow (or unexpected) NVMe speed even with ext4

    Hi, it's me again. Dear forum friends, today I'm here with a question that has me thinking and that I can't seem to make sense of. I have a server with very good hardware specifications. In fact, I posted about it a while ago to ask whether I should install ZFS in RAID 10 to host my VMs. These are...
  2.

    Ceph - feasible for Clustered MSSQL?

    Looking to host a clustered MSSQL DB for an enterprise environment using Ceph to remove a NAS/SAN as a single point of failure. Curious about performance; requirements are likely modest, with no multi-writer outside the cluster itself. However... as I understand it, writes can be very bad with...
  3.

    Small server with 2 M.2 SSDs as ZFS RAID 1

    Hi! I have Proxmox running on a NUC. For storage it has two Samsung M.2 SSD 990 PRO Heatsink 4TB drives installed as a ZFS RAID-1. A few LXC containers run on it: - Traefik - piHole - Nextcloud - PaperlessNGX. Beyond that, about 10 more LXCs are installed but are only...
  4.

    Best Storage Setup for Proxmox VE on a Dedicated Server? (Performance Question)

    Hello everyone, I’m about to rent a dedicated server to host Proxmox VE and deploy several critical company applications. Since performance is key, I want to make sure I choose the best possible storage setup. I have two possible configurations in mind, assuming NVMe SSDs are significantly...
  5.

    ZFS on RAID 10 with 4 x PCIe5 NVMe Drives: Performance Insights and Questions

    Hello everyone. I recently acquired 2 HPE DL380 Gen11 servers. Each node has 512GB of RAM and four 3.2TB PCIe5 NVMe drives (HPE-branded, manufactured by KIOXIA). Given my use case, where the goal was not shared storage but rather a cluster with VM replication, I created a logical volume for...
  6.

    ZFS pool with mixed speed & size vdevs

    I have a fast 2TB NVMe SSD and a slow 4TB SATA SSD inside a RAID0 ZFS zpool. I like ZFS (over LVM, which I had before) for its versatility in managing storage (e.g. one pool to use, including the PVE root), however I'm running into the issue that my mixed hardware slows things down. With LVM, I could...
  7.

    Problem with multipath via RoCE

    Hi, in my setup we decided to change the connection to remote storage from Fibre Channel to RoCE. It is still the same remote storage source (the same storage array), just accessed over a different underlying transport. The array is made by Huawei. Since that change, during higher network...
  8.

    PVE 8.3.3 | NVMe drive fails regularly

    Dear members, I have a question about PVE on a NUC 14 Pro in combination with an SSD and NVMe. Hardware: ASUS NUC 14 Pro, CPU: i7-155H, RAM: 96GB (SO-DIMM, DDR5, 5600 MHz), SSD: 1x 2.5" SAMSUNG EVO 870 (500GB), 1x M.2 SAMSUNG 990 PRO (2TB). Software: Proxmox Virtual Environment 8.3.3...
  9.

    ext4_journal_check_start:84: comm pmxcfs: Detected aborted journal

    Hello everyone, for a couple of days now I have been receiving the following error message. It seems to be a problem with the Proxmox Cluster file system (“pmxcfs”) and the NVMe SSD. [16821.351985] EXT4-fs error (device nvme0n1p2): ext4_journal_check_start:84: comm pmxcfs: Detected aborted journal...
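    An aborted ext4 journal like this usually calls for checking both the drive and the filesystem. A rough first pass, assuming the affected partition is nvme0n1p2 and can be taken offline (e.g. from a rescue environment); device names are placeholders:

        # drive health first (controller errors often masquerade as filesystem errors)
        smartctl -a /dev/nvme0
        nvme smart-log /dev/nvme0

        # then check the ext4 filesystem itself -- only while it is unmounted
        fsck.ext4 -f /dev/nvme0n1p2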
  10.

    Ceph advice

    I'd like some advice regarding Ceph. I want to use it in the future once I have a 3-node cluster. The idea is to have 2 NVMe SSDs per node: one 1TB SSD for the OS and one 4TB SSD for the Ceph storage. Is this a good approach? Btw, I'm thinking of WD SN850X or Samsung 990 Pro SSDs.
  11.

    NVMe SSD durability under Proxmox

    Probably asked before, but I'd like to know how NVMe SSD durability holds up under Proxmox in a homelab. Is it, for example, wise to separate the OS from VM storage because of the writes? So two separate NVMe SSDs? I do not run Ceph or ZFS.
  12.

    CEPH '1 PG inconsistent'

    Hi, I have a 3-node Proxmox cluster running Ceph with a mixture of NVMe and SAS hard drives, built from some used hardware. I logged into the dashboard this morning and was greeted with an error in the Ceph dashboard saying 'Possible data damage: 1 pg inconsistent'. I've tried a few things...
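    For a scrub-detected inconsistency like this, the usual first steps look roughly as follows (the PG id 2.1a is a placeholder; take the real one from ceph health detail):

        # identify the inconsistent placement group and the affected object(s)
        ceph health detail
        rados list-inconsistent-obj 2.1a --format=json-pretty

        # let Ceph repair the PG from the healthy replicas, then watch the re-scrub
        ceph pg repair 2.1a
        ceph -w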
  13.

    Updating Proxmox led to an NVMe bug

    Hi all, I have a server running Proxmox which uses four NVMe drives in a ZFS RAID-Z2. Since I recently updated Proxmox, the NVMe drives periodically go down and the VMs running on that datastore crash. Typically the issue occurs during a...
  14.

    [SOLVED] NVME Drive Disappeared from Host After Thinpool Creation

    I'm learning by doing, so apologies if I've missed some basic steps or information. I have a three-node cluster made up of Lenovo ThinkCentres (M900 TFF) that I got from a friend after his work decommissioned them. They've been working great, but only came with 128GB SATA SSDs, which...
  15.

    EXT4 I/O Error suddenly

    Hi there, after moving my Proxmox server from my parents' place to my flat, I have a new error that stops the server from booting. The boot drive is an NVMe SSD. When booting in recovery mode, I can log into the system and do stuff. I tried a read/write test as well as nvme-cli and smartctl, with everything...
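    For reference, the kind of checks mentioned there can be run from recovery mode roughly like this (device names are assumptions):

        # kernel messages around the I/O errors
        dmesg | grep -iE 'nvme|ext4|i/o error'

        # drive error history and a non-destructive read scan
        nvme error-log /dev/nvme0
        badblocks -sv /dev/nvme0n1    # read-only by default; slow on large drives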
  16.

    vfio-pci early bindings for NVMe drive passthrough (TrueNAS VM)

    Hi again, I am still tinkering with the project of a TrueNAS VM on PVE 8.1.4 using an HP DL380 G10 with a PCIe/NVMe/U.2 riser card and cage. I already posted here without much success before getting some more info at the TrueNAS forums over here. It turned out that some people there would...
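    For context, early binding of an NVMe controller to vfio-pci usually comes down to a small modprobe config plus an initramfs rebuild; the vendor:device ID below is a placeholder (look yours up with lspci -nn):

        # /etc/modprobe.d/vfio.conf
        # claim the NVMe controller before the in-kernel nvme driver grabs it
        options vfio-pci ids=144d:a80a
        softdep nvme pre: vfio-pci

    followed by update-initramfs -u -k all and a reboot.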
  17.

    Problems with KINGSTON_SFYRD4000G disks in Ceph cluster

    Hello community. Does anyone have KINGSTON SFYRD 4000G drives in a Ceph cluster? We have built a cluster on them and are seeing very high latency at low load. There are no network or CPU issues. Ceph version is 17.2.7; the cluster is built on LACP inter 25G network cards, Dell R450 servers, 256GB RAM...
  18.

    PCIe Passthrough NVMe Drive Causes Proxmox to Crash - Using Bifurcation Card

    Hello, I am attempting to pass through 2 NVMe drives to two different VMs. They are on an Asus bifurcation card with a total of 4 m.2 slots, all of which are populated. My BIOS has the PCIe slot set to x4x4x4x4 and all of the drives are seen correctly in Proxmox. Hardware is listed at the...
  19.

    NVMe passthrough - Errors

    Hi, I have recently been having NVMe passthrough problems. I have two Samsung 990s in a bifurcation card (ASUS Hyper). These are passed through to one of my virtual machines, but recently I have been getting "Unable to change power state from D0 to D3hot, device inaccessible" when restarting...
  20.

    [TUTORIAL] Update Samsung consumer SSD/NVME firmware in Proxmox

    This is a short tutorial on how to update the firmware of Samsung's consumer SSD/NVMe drives in Proxmox. !! CAUTION !! Upgrading firmware is always a risk! Before proceeding, make sure - to back up your VMs which are located on the target drive(s) - to stop all your VMs/LXCs which are located on...
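    As a rough outline of the generic nvme-cli path (not necessarily the exact method this tutorial uses; device path and image filename are placeholders):

        # check the current firmware revision and slot layout
        nvme id-ctrl /dev/nvme0 | grep -i '^fr '
        nvme fw-log /dev/nvme0

        # stage the new image on the controller, then commit it (takes effect after a reset/reboot)
        nvme fw-download /dev/nvme0 --fw=samsung_990pro_fw.bin
        nvme fw-commit /dev/nvme0 --slot=0 --action=1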