Hello,
I have a cluster of 6 nodes with 4 x 3.2 TB NVMe disks in each node. Now I want to add a node, but it has 4 x 6.4 TB NVMe disks. I would like to keep the cluster balanced, and therefore I would like to use only 3.2 TB of each disk on the new node. The question is: how should I partition the 6.4...
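One way to do that (just a sketch; the device and volume group names below are placeholders, not taken from your setup) is to carve a 3.2 TB logical volume out of each 6.4 TB disk and hand that to ceph-volume instead of the whole device:
# example device name, adjust to your hardware
pvcreate /dev/nvme0n1
vgcreate ceph-nvme0 /dev/nvme0n1
# use only 3.2 TB of the 6.4 TB disk for the OSD
lvcreate -L 3.2T -n osd-data ceph-nvme0
ceph-volume lvm create --data ceph-nvme0/osd-data
Alternatively, you could deploy the OSDs on the full disks and lower their CRUSH weight with ceph osd crush reweight so they receive roughly the same amount of data as the 3.2 TB OSDs.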
Hi All,
I have a cluster of 5 nodes with Proxmox 7.1-12 and Ceph 16.2.7. This weekend I would like to upgrade Proxmox to 7.2 and Ceph to 17.2.1. My Ceph cluster is made up of 3 pools:
device_health_metrics with 1 Placement Group
Ceph-1-NVMe-Pool with 1024 Placement Groups
Ceph-1-SSD-Pool with...
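For a Pacific to Quincy move like that, the rough sequence I would expect (a sketch only, please check the official Proxmox/Ceph upgrade guide before running anything) is:
ceph osd set noout
# on each node, after switching the Ceph repository to Quincy:
apt update && apt full-upgrade
# then restart the daemons in order, one node at a time:
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target
# when ceph versions shows 17.2.x everywhere:
ceph osd require-osd-release quincy
ceph osd unset noout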
Hi,
on the morning of April 17 I upgraded my 5-node Proxmox cluster (with Ceph 16.2.7) from 7.1-7 to 7.1-12 following these steps:
1. Set noout, noscrub and nodeep-scrub before starting the update process;
2. Updated all 5 nodes without problems;
3. Unset the flags noout, noscrub and...
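For reference, these are the read-only commands I use to check that everything came back cleanly after unsetting the flags (nothing cluster-specific here):
ceph -s
ceph versions
ceph health detail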
Hi,
I have used the command ceph pg, but this command is incomplete; the output is:
no valid command found; 10 closest matches:
pg stat
pg getmap
pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]
pg dump_json [all|summary|sum|pools|osds|pgs...]
pg dump_pools_json
pg ls-by-pool <poolstr>...
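ceph pg on its own is not a complete command; it needs one of the subcommands listed above, for example (the pool name is just a placeholder):
ceph pg stat
ceph pg dump pgs_brief
ceph pg ls-by-pool <pool-name>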
The problem has not been resolved yet, and /var/log/ceph/ceph.log is still full of messages, as described in my previous post...
Could someone help me, please?
Thank you
Hi,
yesterday morning I updated my 5-node cluster from Proxmox 7.1-7 to 7.1-12 following these steps:
1. Set noout, noscrub and nodeep-scrub before starting the update process;
2. Updated all 5 nodes without problems;
3. Unset the flags noout, noscrub and nodeep-scrub
I have 2 pools, one...
Hi,
are you planning, in an upcoming release, to allow restoring a single virtual disk, in addition to restoring the entire VM or a single file?
Another really useful feature would be the ability to create VLANs from the GUI...
Thank you
Hi,
In my cluster I have 2 pools, one for NVMe disks and one for SSD disks. These are the steps I followed to achieve my goal:
Create 2 rules, one for NVMe and one for SSD:
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
So for NVMe disks the above...
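For example (the rule names are just illustrative, and I am assuming the default root and a host failure domain; the pool names are the ones from my cluster):
ceph osd crush rule create-replicated nvme-rule default host nvme
ceph osd crush rule create-replicated ssd-rule default host ssd
# then point each pool at its rule:
ceph osd pool set Ceph-1-NVMe-Pool crush_rule nvme-rule
ceph osd pool set Ceph-1-SSD-Pool crush_rule ssd-rule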
Hi,
so if I understand correctly, you suggest setting the following Ceph flags:
ceph osd set noscrub
ceph osd set nodeep-scrub
ceph osd set noout
before starting the update of node 1, and removing them with:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
ceph osd unset noout
only when...
Now the question is: before updating node 2, do you advise unsetting the OSD flags with
ceph osd unset noscrub
ceph osd unset nodeep-scrub
ceph osd unset noout
waiting until Ceph is OK, and then repeating the procedure as done for node 1 (set the OSD flags again, upgrade node 2 and then unset the OSD...
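In the meantime, the check I run between nodes is simply to wait until the cluster is healthy again before touching the next one, something like (just a sketch):
ceph -s
# or block until HEALTH_OK is reported again:
until ceph health | grep -q HEALTH_OK; do sleep 10; done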
Hi,
is there an official procedure to update a PVE 7 cluster with Ceph 16.2?
I have a 5-node cluster on PVE 7.0.10 with Ceph 16.2.5. Up to now this is the procedure I have used (for example, to update node 1):
1. Migrate all VMs on node 1 to the other nodes
2. apt update
3. apt dist-upgrade
4...
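After the dist-upgrade, before migrating the VMs back, a few read-only checks to confirm the node rejoined cleanly (nothing cluster-specific assumed):
pvecm status   # Proxmox cluster quorum
ceph -s        # Ceph health and daemon status
pveversion -v  # confirm the new package versions on the node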
So for my cluster you advise running the following:
ceph config set global osd_pool_default_pg_autoscale_mode off
But how can I set pg_num and pgp_num to 1024? Is it safe to do in a production environment?
Can I use this guide...
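For reference, the commands themselves would be (pool name is a placeholder):
ceph osd pool set <pool-name> pg_num 1024
ceph osd pool set <pool-name> pgp_num 1024
Increasing pg_num splits placement groups and causes data movement, so on a production cluster it is usually done outside peak hours while watching ceph -s.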
Hi,
I'm using the driver shipped with PVE 7; I only upgraded the firmware, which I found on the Mellanox site. I downloaded the Mellanox tools from the following link:
https://www.mellanox.com/products/adapter-software/firmware-tools
You also have to download the firmware for your card...
Follow this mini...
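For the flashing itself, the mstflint tool from that package is used roughly like this (the PCI address and image file are placeholders; double-check the exact procedure for your card in the Mellanox documentation before burning anything):
mstflint -d <pci-address> query
mstflint -d <pci-address> -i <firmware-image.bin> burn
A reboot (or a firmware reset) is needed afterwards for the new firmware to be loaded.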