Search results

  1. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hi, my NVMe disks are Micron 9300 MAX. Thanks
  2. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hello, I have a cluster of 6 nodes with four 3.2 TB NVMe disks in each node. Now I want to add a node, but it has four 6.4 TB NVMe disks. I would like to keep the cluster balanced, so I would like to use only 3.2 TB on the disks of the new node. The question is: how should I partition 6.4... (see the command sketches after this list)
  3. Ceph 17.2 Quincy Available as Stable Release

    Hi all, I have a cluster of 5 nodes with Proxmox 7.1-12 and Ceph 16.2.7. This weekend I would like to upgrade Proxmox to 7.2 and Ceph to 17.2.1. My Ceph cluster is made of 3 pools: device_health_metrics with 1 placement group, Ceph-1-NVMe-Pool with 1024 placement groups, and Ceph-1-SSD-Pool with...
  4. PVE 7.1-12 Ceph n. pg not deep-scrubbed in time

    Hi, on the morning of April 17 I upgraded my 5-node Proxmox cluster (with Ceph 16.2.7) from 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and...
  5. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, I have used the command ceph pg, but that command is incomplete; the output is: no valid command found; 10 closest matches: pg stat, pg getmap, pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...], pg dump_json [all|summary|sum|pools|osds|pgs...], pg dump_pools_json, pg ls-by-pool <poolstr>... (see the command sketches after this list)
  6. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    The problem has not been resolved yet, and /var/log/ceph/ceph.log is still full of the messages mentioned in my previous post... Could someone help me, please? Thank you
  7. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, yesterday morning I updated my 5-node cluster from Proxmox 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and nodeep-scrub. I have 2 pools, one...
  8. Restore single Virtual Disk from PBS

    As Fabian said, you can't restore a single virtual disk from the GUI; you can only restore files or directories.
  9. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    Hi, I have the same problem, Proxmox 7.1.7 and Windows Server Datacenter 2016/2019.
  10. Proxmox Ceph Pool specify disks

    In the last step in the GUI, on each node under Ceph --> OSD, when you add OSDs to Ceph...
  11. Restore single Virtual Disk from PBS

    Hi, are you planning, in an upcoming release, to allow restoring a single virtual disk in addition to the entire VM and single files? Another really useful feature would be the ability to create VLANs in the GUI ... Thank you
  12. Proxmox Ceph Pool specify disks

    Hi, in my cluster I have 2 pools, one for NVMe disks and one for SSD disks. These are the steps that I followed to achieve my goal: Create 2 rules, one for NVMe and one for SSD: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> So for NVMe disks the above... (see the command sketches after this list)
  13. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, so if I understand correctly, you suggest setting the following Ceph flags: ceph osd set noscrub, ceph osd set nodeep-scrub, ceph osd set noout before starting the update of node 1, and removing them with: ceph osd unset noscrub, ceph osd unset nodeep-scrub, ceph osd unset noout only when... (see the command sketches after this list)
  14. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Now the question is: before updating node 2, do you advise unsetting the OSD flags with ceph osd unset noscrub, ceph osd unset nodeep-scrub, ceph osd unset noout, waiting until Ceph is OK, and then repeating the procedure as done for node 1 (set the OSD flags again, upgrade node 2 and then unset the OSD...
  15. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, is there an official procedure to update a PVE7 cluster with Ceph 16.2? I have a cluster of 5 nodes running PVE 7.0.10 with Ceph 16.2.5. Up to now this is the procedure I have used (for example to update node 1): 1. Migrate all VMs on node 1 to other nodes 2. apt update 3. apt dist-upgrade 4...
  16. PVE7 - Ceph 16.2.5 - Pools and number of PG

    So for my cluster you advise running the following commands: ceph config set global osd_pool_default_pg_autoscale_mode off But how can I set pg_num and pgp_num to 1024? Is it safe to do this in a production environment? Can I use this guide... (see the command sketches after this list)
  17. PVE7 - Ceph 16.2.5 - Pools and number of PG

    Hi, I'm using the driver shipped with PVE7; I only upgraded the firmware that I found on the Mellanox site. Then I downloaded the Mellanox tools from the following link: https://www.mellanox.com/products/adapter-software/firmware-tools You also have to download the firmware for your card... Follow this mini... (see the command sketches after this list)
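
Command sketches

For the disk-size question in result 2, this is a minimal sketch of one possible approach: create a roughly 3.2 TB partition on each 6.4 TB NVMe device and back the OSD with that partition. The device path and partition size below are illustrative assumptions, not values from the thread.

    # Assumed device path; adjust per disk. sgdisk sizes are binary,
    # so 2980 GiB is roughly 3.2 TB.
    sgdisk --new=1:0:+2980G /dev/nvme0n1
    # Create the OSD on the partition instead of the whole disk.
    ceph-volume lvm create --data /dev/nvme0n1p1

Another option sometimes used in this situation is to deploy the whole disk and lower its CRUSH weight with ceph osd crush reweight, so that it receives roughly the same amount of data as the smaller disks.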
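
For result 5, ceph pg needs a subcommand; the following sketch shows commands that would reveal which PG is in active+clean+scrubbing+deep (the PG id 2.1a is a made-up placeholder):

    # Summary of PG states across the cluster
    ceph pg stat
    # One line per PG with its current state; filter for the deep-scrubbing one
    ceph pg dump pgs_brief | grep scrubbing
    # Detailed state of a single PG (replace the id with the real one)
    ceph pg 2.1a query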
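
For results 10 and 12, a minimal sketch of the device-class separation described in the snippet, filled in with the pool names from result 3; the rule names nvme_rule and ssd_rule are placeholders:

    # One replicated rule per device class (root "default", failure domain "host")
    ceph osd crush rule create-replicated nvme_rule default host nvme
    ceph osd crush rule create-replicated ssd_rule default host ssd
    # Bind each pool to its rule so it only places data on OSDs of that class
    ceph osd pool set Ceph-1-NVMe-Pool crush_rule nvme_rule
    ceph osd pool set Ceph-1-SSD-Pool crush_rule ssd_rule

The class assigned to each OSD can be checked in the CLASS column of ceph osd tree.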
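
Results 4, 7 and 13-15 all revolve around the same flag-based update procedure; a minimal sketch, assuming a rolling one-node-at-a-time upgrade:

    # Before touching a node: stop rebalancing and scrubbing cluster-wide
    ceph osd set noout
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # Migrate the VMs off the node, then upgrade it
    apt update
    apt dist-upgrade
    # Reboot if a new kernel was installed, wait until all OSDs are back up,
    # then clear the flags
    ceph osd unset noout
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub

Whether to clear the flags after every node or only once the whole cluster is upgraded is exactly the open question raised in result 14.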
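
For result 16, a minimal sketch of turning the PG autoscaler off and pinning a pool to 1024 PGs; the pool name is taken from result 3 and the value 1024 from the snippet itself:

    # Default for newly created pools
    ceph config set global osd_pool_default_pg_autoscale_mode off
    # Existing pool: disable autoscaling, then set the PG counts
    ceph osd pool set Ceph-1-NVMe-Pool pg_autoscale_mode off
    ceph osd pool set Ceph-1-NVMe-Pool pg_num 1024
    ceph osd pool set Ceph-1-NVMe-Pool pgp_num 1024

Increasing pg_num on a pool that already holds data causes data movement, so in a production cluster it is usually done gradually or outside peak hours.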
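
For result 17, a minimal sketch of a firmware update with mstflint, one of the tools from the Mellanox firmware-tools page linked in the snippet; the PCI address and image file name are placeholders:

    # Show the current firmware version and PSID of the adapter
    mstflint -d 41:00.0 query
    # Burn the image downloaded for that exact PSID, then reboot
    mstflint -d 41:00.0 -i fw-ConnectX5.bin burn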
