ssd

  1. [SOLVED] Ceph pool shrinking fast after expansion with OSDs (cluster outage likely tomorrow)

    Hi everyone, after I added an SSD pool to my existing HDD pool, the HDD pool has been shrinking extremely fast, so a production outage is probably imminent tomorrow. Original environment: 3-node hyper-converged cluster (PVE version 6.3-6) with distributed Ceph (vers...
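
    When a pool's reported capacity drops right after adding OSDs, a common first step is to check per-OSD fill levels and which CRUSH rule the pool uses; a minimal sketch with standard Ceph CLI commands (the pool name is a placeholder):

      # Per-OSD usage and device class; MAX AVAIL tracks the fullest OSD
      ceph osd df tree
      # Pool-level usage, then the CRUSH rule the pool is bound to
      ceph df
      ceph osd pool get <poolname> crush_rule
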
  2. Optimal Ceph configuration for a 4-server, 2-location HA setup

    Hi All, We have 2 datacenters, one located in each building. We are setting up a new Proxmox VE HA cluster of 4 machines, the idea being that if one building goes down for an extended time, the other 2 servers will be able to keep everything up. In this setup each server has 8 SSDs. One SSD...
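
    One way to model such a layout, sketched below with placeholder names (dc1/dc2, node1), is to give each building a CRUSH datacenter bucket so replicas are spread across the sites; this leaves aside the separate question of keeping quorum with only two nodes per site:

      # Create one CRUSH bucket per building and hang them under the root
      ceph osd crush add-bucket dc1 datacenter
      ceph osd crush add-bucket dc2 datacenter
      ceph osd crush move dc1 root=default
      ceph osd crush move dc2 root=default
      # Move each host into its building (repeat per node)
      ceph osd crush move node1 datacenter=dc1
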
  3. LVM zeroing slows SSD

    Hello. Recently I replaced the SSD drives on my hosts and ran into a problem: very slow performance in VMs. For a long time I couldn't understand what was happening. I used Crucial CX500 drives, but bought Samsung 870 QVO (since in tests it is faster?). So, after some time I found (experimentally) that with "discard"...
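
    If per-delete discard turns out to be the slow path, one common alternative is to drop the guest's discard mount option and batch the TRIM on a timer instead; the VM disk still needs discard=on so the trim reaches host storage. A sketch, where VMID 100 and the volume name are placeholders:

      # PVE host: discard passthrough is a per-disk option
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
      # Guest: trim freed blocks periodically instead of on every delete
      systemctl enable --now fstrim.timer
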
  4. VMs are switching to a read-only file system once a day. Why is this happening?

    I have Home Assistant and Ubuntu 20.04 running in virtual machines and every day I come home to find the console flooded with these messages. I ran a SMART test on my SSD and it came back clean with no errors. Sometimes when I reboot the VM it starts back up fine, other times I have to run fsck...
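
    The guest's kernel log usually names what triggered the remount: ext4 aborts its journal on an I/O error and flips read-only. A diagnostic sketch, with /dev/sda1 as a placeholder device:

      # Inside the guest: find the I/O error that preceded the remount
      dmesg -T | grep -iE 'ext4|i/o error|journal'
      # What ext4 is set to do on errors (continue / remount-ro / panic)
      tune2fs -l /dev/sda1 | grep -i 'errors behavior'
      # Repair from a rescue boot only, never on a mounted filesystem
      fsck.ext4 -f /dev/sda1
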
  5. Proxmox on ZFS - Should I be worried?

    Hi, I self-host Proxmox on a dedicated server, running on 2 SSDs in a ZFS mirror + 2 hard drives in an independent pool. My SMART results on the SSDs are starting to worry me a bit, and I'm thinking about ditching ZFS. A quick recap: it seems to me the "Power_On_Hours" is incorrect. This server has...
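
    For cross-checking values like Power_On_Hours, reading the raw SMART table is usually more telling than a GUI summary; a quick sketch (the device path is a placeholder):

      # Full SMART attribute table, including raw values
      smartctl -a /dev/sda
      # Only the age/wear-related attributes found on many SATA SSDs
      smartctl -A /dev/sda | grep -iE 'power_on_hours|wear|total_lbas_written'
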
  6. Correct SSD and hard disk configuration for Proxmox (cache, swap, etc.)

    Hello dear Proxmox folks, starting in December I am doing a retraining program as an IT specialist in system integration, and I would like to build myself a test server for learning purposes. Whenever I install Proxmox, it always demands a swap partition, and that is where the questions start circling in my head. Namely...
  7. Poor ZFS SSD IO benchmark: RAID-Z1 4 x SSD similar to RAID 10 12 x HDD

    Hi there! I have two PVE 7.0 systems on ZFS, one with 12 x 4TB 7.2K SAS HDDs in ZFS RAID 10, the other with 4 x 4TB SATA SSDs in RAID-Z1, and they're coming out with near-identical IO performance, which is suspicious! From benchmarking with fio with caches and buffers disabled, on sequential reads / writes, the...
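
    For comparisons like this it helps to post the exact fio invocation; a typical direct-I/O sequential test would look roughly like the sketch below (file path, size and queue depth are placeholders):

      # Sequential write, bypassing the page cache, against a file on the pool
      fio --name=seqwrite --rw=write --bs=1M --size=10G --numjobs=1 \
          --ioengine=libaio --direct=1 --iodepth=32 --filename=/tank/fio.test
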
  8. SSD Wearout negative %

    Hello, what does it mean when I see a negative percentage in the Wearout field? (attached)
  9. Hardware to buy (SSD)

    Hello, I want to buy SSDs for the Proxmox root FS, for VMs/CTs, and as a ZFS special device for two 6TB WD Red HDDs. I found this model online for $115 (used): Seagate Nytro 1551 480GB 6G SATA Mainstream Endurance. Is it recommendable? I am running Nextcloud and some very small CTs (e.g. Unifi...
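
    For reference, a ZFS special device for an HDD pool is added as its own vdev and should match the pool's redundancy, since losing it loses the pool; a sketch with placeholder pool and device names:

      # Mirrored metadata/special vdev for an existing pool named tank
      zpool add tank special mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2
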
  10. Very poor performance with consumer SSDs

    I have a pair of HPE DL360 Gen8 servers: dual Xeon, 64GB RAM, 2x 10k SAS HDDs for the system (ZFS RAID1), and 4 consumer SATA SSDs. They're for internal use and show abysmal performance. At first I had Ceph on those SSDs (with a third node), then I had to move everything to a NAS temporarily. Now I...
  11. Proxmox Install - SSD not Detected

    Hi there, I'm new to Proxmox and I'm installing it on my Dell R720 server. I have only one Samsung SSD 870 disk in it, and Proxmox didn't recognize the disk. iDRAC shows the disk as Active and it looks good. I tried a couple of things (YouTube videos, etc.) with no luck. Does anyone know...
  12. Disk overview: Wearout percentage shows 0%, IPMI shows 17% ...

    Hi, we are running an older Proxmox Ceph cluster here and I am currently looking through the disks. The OS disks have a wearout of two percent, but the Ceph OSDs still show 0%?! So I looked into the Lenovo XClarity Controller: for the OS disks it looks the same, but the Ceph...
  13. [SOLVED] SSD performance issue on ext4

    Hi All, I installed Proxmox (using the ext4 filesystem) on a 512GB SATA SSD (a used consumer-grade SanDisk SD8SB8U512G1001) to evaluate it. I ran some benchmarks using fio to get an idea of how fast it is. Before installing Proxmox, I benchmarked the disk with PassMark's Performance Test on Win10...
  14. [SOLVED] What options does ZFS offer to reduce SSD wear?

    I have 2x the Crucial P2 2000GB as NVMe SSDs. The pool I created is called whirl-pool. Do I still need to change anything? I've heard something about ashift=off... how do I do that? Does it make sense?
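
    There is no ashift=off: ashift is fixed per vdev at pool creation (12 = 4K sectors) and can only be read afterwards. Knobs that do affect SSD wear exist as pool/dataset properties; a sketch against the whirl-pool name from the post:

      zpool get ashift whirl-pool          # fixed at creation, typically 12
      zfs set atime=off whirl-pool         # no metadata write on every read
      zpool set autotrim=on whirl-pool     # continuously TRIM freed blocks
      zfs set compression=lz4 whirl-pool   # fewer bytes physically written
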
  15. Why isn't "discard" enabled by default?

    I was wondering why the "discard" option on a Hard Disk isn't set by default. Can it cause negative effects, or why isn't it set by default?
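
    Whether TRIM actually reaches the virtual disk can be verified from inside the guest; a quick sketch (nonzero DISC-GRAN/DISC-MAX values mean discard is usable on that device):

      # Discard granularity and limits per block device
      lsblk --discard
      # One-off trim of all mounted filesystems; prints how much was released
      fstrim -av
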
  16. Best RAID configuration for my setup (HDD/SSD)

    Hello, I would need your recommendations on what type of RAID would be best for my PBS install; I planned to use ZFS. Server hardware: - 64GB RAM - 24x 8TB (HDD, 7.2k) - 2x 1TB (SATA SSD) for the OS. For the OS, I guess I will use RAID-1 on both SSDs (2x 1TB). Regarding the storage of...
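
    As one point of reference, not a recommendation, a common ZFS layout for 24 spinners is several striped RAIDZ2 vdevs; a sketch with placeholder device ids (mirrors instead would trade capacity for much better restore/verify IOPS):

      # 3 x 8-wide RAIDZ2: 2-disk fault tolerance per vdev, 18 disks of usable capacity
      zpool create -o ashift=12 backup \
        raidz2 /dev/disk/by-id/ata-HDD{1..8} \
        raidz2 /dev/disk/by-id/ata-HDD{9..16} \
        raidz2 /dev/disk/by-id/ata-HDD{17..24}
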
  17. Ceph: select specific OSDs to form a pool

    Hi there! I need a little help. I have a Proxmox + Ceph cluster with 4 nodes, with the following disks in each node: 2x 900GB SAS 15K, 2x 300GB SAS 10K, 2x 480GB SSD, 2x 240GB SSD. I need to make a pool for each class and size of disk; I know how to separate the disks by class, but I...
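
    Custom device classes are the usual way to split by size as well: tag each disk group with its own class, create one replicated rule per class, and bind a pool to it. A sketch with placeholder class, rule and pool names:

      # Re-tag an OSD with a custom class (clear the autodetected one first)
      ceph osd crush rm-device-class osd.0
      ceph osd crush set-device-class ssd480 osd.0
      # One CRUSH rule per class, then point a pool at it
      ceph osd crush rule create-replicated rule-ssd480 default host ssd480
      ceph osd pool set pool-ssd480 crush_rule rule-ssd480
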
  18. command '/sbin/zpool create -o 'ashift=12' nvme mirror failed: exit code 1

    Hi, I installed Proxmox 7, and I am trying to create a new ZFS pool using two 1TB NVMe drives via the GUI. However, I get the error below: command '/sbin/zpool create -o 'ashift=12' nvme mirror /dev/disk/by-id/nvme-Sabrent_1765071310FD00048263...
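
    Exit code 1 here often means leftover partition tables or old ZFS labels on the drives; running the command by hand shows the underlying message, and wiping the disks usually clears it. A sketch with placeholder device names - the wipe steps ERASE the disks:

      # Run manually to see the real error text behind 'exit code 1'
      zpool create -o ashift=12 nvme mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2
      # If it complains about existing labels/partitions (destructive!)
      wipefs -a /dev/nvme0n1
      zpool labelclear -f /dev/nvme0n1
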
  19. Guest filesystem: XFS vs. ext4 on a ZFS host?

    Hello, thanks to a reply to an earlier question of mine, I have now switched my server to ZFS. Now the question arises: XFS or ext4 as the guest FS? So far I have always been a fan of XFS: - online defrag possible (also relevant under Linux; perhaps less so, or not at all, on SSDs?) - stable...
  20. Setup data directory in a single HD/SSD environment

    Hi, I am currently switching to a new personal home server, using Proxmox VE v7, with several VMs/CTs planned for the future. Hardware: barebone, 16GB RAM, and one 2TB SSD (and no other drives). This is what the disk looks like: root@pve:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE...
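
    On a single-disk install, the PVE installer already splits the SSD into an LVM volume group with root, swap and a data thin pool for guests; a sketch of inspecting that and registering an extra directory storage (the path and storage id are placeholders):

      # What the installer created inside the pve volume group
      lvs pve
      # Optionally expose a directory on the root fs as additional storage
      mkdir -p /srv/pve-data
      pvesm add dir extra --path /srv/pve-data --content iso,backup
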
