Search results

  1. SSD Wear

    I have 6 nodes in my Proxmox cluster which are exclusively Ceph storage nodes (no VMs). Each node has a pair of Samsung 860 Pro 256GB SATA SSDs with the OS installed on the drives as a ZFS mirror. These have been in operation for about 5 years. I have noticed the SSD wearout indicator for...
  2. Ceph Questions and Thoughts

    Recently I combined two separate Proxmox clusters into one. Both clusters previously had separate Ceph clusters of three nodes, each with 10 OSDs. Earlier this week I finally added the three nodes and OSDs to my converged cluster. All nodes are running Proxmox 8.1.11 (I see 8.2 is now available)...
  3. Ceph SSD recommendations

    I have been dragging my feet on this one, but I am looking for SSD recommendations for my Ceph servers. Currently each server has ten 5TB spinner drives with SSD cache drives. The performance has been decent, but there are many times when guests give I/O errors due to occasional high wait...
  4. Combining two separate clusters

    Hi all. In our lab, we maintain two separate but identical Proxmox clusters with Ceph. Each cluster has 5 compute nodes and 3 storage nodes, so 8 total members per cluster. The storage nodes are cluster members but do not host any VMs. Each storage node has ten 5TB drives (spinners...I'll...
  5. Merge two hyper-converged clusters into one

    Greetings. For many years we've been running two separate hyper-converged clusters with identical hardware. Each cluster has 5 compute nodes (running VMs only) and 3 Ceph storage nodes (not running any VMs). What I want to do is merge these two clusters into one. Does anyone have any best...
  6. [SOLVED] Block emails that pass through a specific upstream server

    I have been searching and trying different solutions, but I can't seem to find the magic incantation that makes this work. I have a user getting blasted with loosely related emails from a variety of email addresses and domains. However, they all pass through the same email relay...
  7. Guidance on Shared Storage

    I have been fighting I/O performance issues on our Ceph server for some time. Sometimes the VMs' I/O performance is so bad that I have to move the VM image to a local drive in order to get performance back. I'm now exploring other shared storage methods. Running Proxmox 7.1-11. When using the...
  8. Odd directory that cannot be 'stat'-ed by root

    I am trying to schedule a backup job for users' directories on a Linux desktop workstation. We utilize a network cloud storage solution called Seafile, and the Linux desktop utility that gives users access to their files creates and mounts a directory, typically named "SeaDrive". The...
  9. Nag screen suppression for quarantine logins

    As a long-time user of Proxmox (since the 3.x days) and now a user of PMG, I am very familiar with the subscription nag screen and have long since learned to click through it. However, I am wondering if there could be an exception for PMG so that it does not show for non-admin users logging into the...
  10. Proxmox as a base OS for a product

    Greetings, I have a question about AGPL3 and its application to my situation. I have been reading around and found a wide range of opinions on this matter. I have a product that currently uses Ubuntu 20.04 as the base OS, then a custom set of deployment scripts which set up and configure KVMs...
  11. Replacing system drive in Ceph node

    Lots of questions now that I've got some decent hardware and am upgrading to 6.0. Per a discussion in another thread, I would like to move the OS of my Ceph nodes from a default LVM-based install on a large SSD (like 2 TB), ideally to a RAID 1 ZFS boot disk on much smaller SSDs (256GB). I'm fully...
  12. [SOLVED] Ceph Luminous to Nautilus upgrade issues

    I have upgraded my 6-node cluster (3 Ceph-only plus 3 compute-only nodes) from 5.4 to 6. The Ceph config was created on the Luminous release, and I am following the upgrade instructions provided at https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus. During the upgrade the OSDs were...
  13. BlueFS spillover detected on 30 OSD(s)

    Hi all. After an upgrade on one cluster from 5.4 to 6.0, I performed the Ceph upgrade procedures listed here: https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus. Somewhere along the way, in the midst of all the messages, I got the following WARN: BlueFS spillover detected on 30 OSD(s). In...
  14. Limit node usage through storage availability

    I have three Ceph nodes in my Proxmox cluster that I do NOT want users creating VMs on or the system automatically moving VMs to. At first I thought I could do that through the permissions system, but after reading some posts it looks like removing storage should do it. I'm just posting this...
  15. Best Practices for new Ceph cluster

    Hi all. I have an existing PVE/Ceph cluster that I am currently upgrading. The PVE portion is rather straightforward: I'm adding the new nodes to the PVE cluster, moving VMs off the old ones, then removing the old ones from the cluster. Easy peasy. However, what I don't know is the best...
  16. Help understand relationship between ceph pools

    I'm trying to understand how the Ceph pools all work together. On this cluster, I have three nodes with four 2TB drives in each node (roughly 22 TB of total disk space after overhead). Before Proxmox 5.3, I had a single pool with 512 PGs in a 3/2 configuration that was used to create a...
  17. Ceph performance troubleshooting

    Hi all, I've been running Proxmox for a number of years and now have a 13-node cluster, where last year I added Ceph to the mix (after a 5.2 upgrade) using the empty drive bays in some of the Proxmox nodes. Last Friday I upgraded all nodes to version 5.3. The Ceph system has always felt slow...
  18. Ceph OSD Performance Issue

    While investigating OSD performance issues on a new Ceph cluster, I did the same analysis on my "good" cluster. I discovered something interesting, and fixing it may be the solution to my new cluster's issue. For the "good" cluster, I have three nearly identical servers. Each server has four...
  19. Ceph OSD Journal on USB3.0 SSD?

    I built a Ceph cluster earlier this year for one of my Proxmox clusters, and it has been working just fine. I had enough drive slots in each storage node to include a dedicated SSD for the OSD journals, and that cluster is working fine in terms of performance. On a second cluster, I only had...
  20. HA max_restart and max_relocate best practices

    I have a 13-node cluster using HA. What are the best practices for setting the max_restart and max_relocate values? As it stands right now, for VMs that can run on any node, I've simply picked a max_restart value of 4 and a max_relocate of 10. My thinking is that the HA service will try to...
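
On the topic of that last result, a minimal sketch of how a max_restart of 4 and a max_relocate of 10 can be applied to a Proxmox VE HA resource with the ha-manager CLI; the VM ID 100 is a placeholder and not a value taken from the thread:

    # Register VM 100 (placeholder ID) as an HA resource, allowing up to 4
    # restart attempts on the current node and up to 10 relocations to other
    # nodes when starting the service keeps failing.
    ha-manager add vm:100 --state started --max_restart 4 --max_relocate 10

    # Adjust the values later on an existing resource:
    ha-manager set vm:100 --max_restart 4 --max_relocate 10

    # The corresponding entry in /etc/pve/ha/resources.cfg looks like:
    # vm: 100
    #     max_restart 4
    #     max_relocate 10
    #     state started

The same two options are also exposed in the GUI under Datacenter -> HA when adding or editing a resource.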
