Recent content by mcdowellster

  1. Western Digital Red NAS SATA SSDs - High latency

    I'm RMAing these. I've still got four 860 Pros in the box that I'm going to slap in for now instead. What a pain in the rear.
  2. Western Digital Red NAS SATA SSDs - High latency

    Comparing WD500 vs SEDC500

    WD500 /dev/sdl: (groupid=0, jobs=1): err= 0: pid=2725304: Wed Sep 24 12:37:33 2025
      write: IOPS=140, BW=561KiB/s (574kB/s)(164MiB/300348msec); 0 zone resets
        slat (usec): min=2, max=1456, avg=14.09, stdev=12.07
        clat (nsec): min=1226, max=720941k...
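    For reference, a minimal sketch of the kind of fio job that produces output like this; the device path, block size, and runtime here are assumptions, not the exact job:

        # Hypothetical fio invocation; adjust --filename, --bs and --runtime as needed
        fio --name=wd500-test --filename=/dev/sdl --rw=randwrite --bs=4k \
            --ioengine=libaio --iodepth=1 --direct=1 \
            --runtime=300 --time_based --group_reporting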
  3. Western Digital Red NAS SATA SSDs - High latency

    Thanks for this. Yeah, I know... It's just amazing to me that a WD BLUE is outperforming a "NAS" drive. It makes no sense to me.
  4. Western Digital Red NAS SATA SSDs - High latency

    Heeelllpppp :D Please don't ask me to throw away these drives...
  5. Western Digital Red NAS SATA SSDs - High latency

    Hello All, I've been getting the infamous warning about slow reads and ops on BlueStore, but only from these two (my NEWEST SSDs). OSD 7 and OSD 2 are 4TB WD Red NAS SATA SSDs. The rest are a mix of Kingston DC SATA SSDs and Samsung 860 Pros @ 2TB. This is causing IO delays. Yes, before I get...
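    For anyone following along, a minimal sketch of how I'm spotting the slow OSDs (standard Ceph CLI, run from a monitor node):

        # Per-OSD commit/apply latency, to see which OSDs stand out
        ceph osd perf
        # Full text of the slow-ops warning
        ceph health detail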
  6. Input Discards

    Hello, I'm seeing about 0.033 input discards per second on all nodes in the cluster, with a random jump to 0.1 per second. It's been constant over the last 400 days of monitoring in check_mk, and I see the same thing on the hypervisors. I've upped RX and TX to 4096 on all nodes. Unifi AGG switches...
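    For clarity, a minimal sketch of the ring-buffer bump I mean; the interface name is a placeholder:

        # Show current and maximum RX/TX ring sizes (interface name is hypothetical)
        ethtool -g enp1s0
        # Raise both rings to 4096
        ethtool -G enp1s0 rx 4096 tx 4096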
  7. Ceph Pacific - High Latency on Large Disks

    Last night I moved all my data off the spinning disks, blew away the spinning disk pools, and spent several hours troubleshooting random weird problems. After finally getting everything going, zapping and wiping the spinning disks, and re-adding them, I'm seeing a tremendous performance improvement...
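    Roughly what the zap-and-re-add looked like per disk (a sketch; the OSD id and device path are placeholders):

        # Take the OSD out and destroy it (id and device are hypothetical)
        ceph osd out 7
        ceph osd destroy 7 --yes-i-really-mean-it
        # Wipe the LVM/BlueStore labels so the disk can be re-created cleanly
        ceph-volume lvm zap /dev/sdX --destroy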
  8. Ceph Summary Page NO DATA

    Using the Ceph monitoring tool on port 8443 I found 3 ghost OSDs... purging those fixed the GUI in Proxmox!
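    For anyone else hitting this, the cleanup was something like the following (the OSD id is a placeholder; purge removes the OSD from the CRUSH map and auth):

        # Confirm the ghost entries exist with no backing daemon
        ceph osd tree
        # Purge each ghost OSD
        ceph osd purge 12 --yes-i-really-mean-it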
  9. KVM Windows VMs and losing network connectivity

    I switched to the latest virtio drivers and have been fine since. It also turns out a lot of the "drops" were actually IO-related issues with the spinning disks. The underlying storage would "hang", which would cause SMB services to drop. Look at the logs in Windows to validate this. Since moved...
  10. Ceph Summary Page NO DATA

    Restarting pveproxy fixed it for only a few moments. The data was gone with the next refresh...
  11. Ceph Summary Page NO DATA

    And for the record, there ARE PGs and pools created....
  12. Ceph Summary Page NO DATA

    Hello again... I decided to BLOW AWAY all spinning disks from my cluster -> it seemed to be working okay except for the summary page... it's completely blank. It shows 0 OSDs, blank PGs, and no data about cluster activity... I rebooted the ENTIRE cluster (full shutdown) and it's still...
  13. Ceph Pacific - High Latency on Large Disks

    Hello there, I've had a cluster running since 2016 and have been upgrading storage. The SSD pool has been excellent since day one, but my spinning disks are causing me some serious headaches. My largest drives are two 14TB WDC WD140EDFZ. These are almost ALWAYS at 250ms. They are on the same HBA as...
  14. Proxmox + Nginx Reverse Proxy

    Here is what I set up (removed others). Keep in mind the SPICE section is minimal as I've been troubleshooting this:

    ##### /etc/nginx/nginx.conf #####
    user www-data;
    worker_processes auto;
    pid /run/nginx.pid;
    include /etc/nginx/modules-enabled/*.conf;

    events {
        worker_connections 1024;
    }

    http {...