Search results

  1. Proxmox Ceph Pool specify disks

    Hi, in my cluster I have 2 pools, one for NVMe disks and one for SSD disks. These are the steps I followed to achieve my goal: Create 2 rules, one for NVMe and one for SSD: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> So for NVMe disks the above...
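
    A minimal sketch of what those two rules could look like, assuming the default CRUSH root, a host failure domain, and illustrative rule names (nvme_replicated / ssd_replicated):

      # one replicated rule per device class; names and failure domain are examples
      ceph osd crush rule create-replicated nvme_replicated default host nvme
      ceph osd crush rule create-replicated ssd_replicated default host ssd

      # confirm the rules were created
      ceph osd crush rule ls
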
  2. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, so if I understand correctly, you suggest setting the following Ceph flags: ceph osd set noscrub ceph osd set nodeep-scrub ceph osd set noout before starting the update of node 1, and removing them with: ceph osd unset noscrub ceph osd unset nodeep-scrub ceph osd unset noout only when...
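
    For reference, the sequence described in that quote would run roughly like this; it is only a sketch of the flags mentioned above, not an official upgrade procedure:

      # before upgrading node 1: pause scrubbing and prevent OSDs from being marked out
      ceph osd set noscrub
      ceph osd set nodeep-scrub
      ceph osd set noout

      # after the node is back and the cluster is healthy again
      ceph osd unset noscrub
      ceph osd unset nodeep-scrub
      ceph osd unset noout
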
  3. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Now the question is: before updating node 2, do you advise unsetting the OSD flags with ceph osd unset noscrub ceph osd unset nodeep-scrub ceph osd unset noout, waiting until Ceph is OK, and then repeating the procedure as done for node 1 (set the OSD flags again, upgrade node 2 and then unset the OSD...
  4. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, is there an official procedure to update a PVE7 cluster with Ceph 16.2? I have a cluster of 5 PVE 7.0.10 nodes with Ceph 16.2.5. Up to now this is the procedure I have used (for example to update node 1): 1. Migrate all VMs on node 1 to other nodes 2. apt update 3. apt dist-upgrade 4...
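
    As a rough shell sketch of steps 1-4 (the VM ID 101 and target node name pve2 are placeholders, and online migration assumes shared storage such as the Ceph pools):

      # step 1: move guests off the node that will be upgraded
      qm migrate 101 pve2 --online

      # steps 2-3: upgrade the node's packages
      apt update
      apt dist-upgrade

      # before moving on to the next node, check that Ceph is healthy again
      ceph -s
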
  5. PVE7 - Ceph 16.2.5 - Pools and number of PG

    So for my cluster you advise running the following commands: ceph config set global osd_pool_default_pg_autoscale_mode off But how can I set pg_num and pgp_num to 1024? Is it safe to do this in a production environment? Can I use this guide...
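
    A sketch of what the manual PG change might look like; the pool name Pool-NVMe comes from the other posts in this thread and 1024 is the value asked about, so adjust both per pool:

      # stop the autoscaler from managing this pool, then raise the PG counts
      ceph osd pool set Pool-NVMe pg_autoscale_mode off
      ceph osd pool set Pool-NVMe pg_num 1024
      ceph osd pool set Pool-NVMe pgp_num 1024
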
  6. PVE7 - Ceph 16.2.5 - Pools and number of PG

    Hi, I'm using the driver shipped with PVE7; I only upgraded the firmware that I found on the Mellanox site. Then I downloaded the Mellanox tools from the following link: https://www.mellanox.com/products/adapter-software/firmware-tools You also have to download the firmware for your card... Follow this mini...
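
    From memory, a firmware update with the Mellanox firmware tools (MFT) usually goes roughly like this; the device path and image file name below are placeholders for whatever your card and downloaded firmware actually are:

      # start the MST service and list the detected devices
      mst start
      mst status

      # query the current firmware, then burn the downloaded image
      flint -d /dev/mst/mt4119_pciconf0 query
      flint -d /dev/mst/mt4119_pciconf0 -i fw-ConnectX5.bin burn
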
  7. PVE7 - Ceph 16.2.5 - Pools and number of PG

    Hi, I have a cluster of 5 PVE7 nodes with Ceph 16.2.5. The hardware configuration of 4 of the 5 nodes is: CPU: 2 x EPYC Rome 7402, RAM: 1 TB ECC, 2 x 960 GB SSD in ZFS RAID 1 for Proxmox, 4 x Micron 9300 MAX 3.2 TB NVMe for Pool 1 named Pool-NVMe, 2 x Micron 5300 PRO 3.8 TB SSD for Pool 2 named Pool-SSD...
  8. Restore single Virtual Disk from PBS

    Hi Fabian, so from the PVE7 GUI it is only possible to restore the full VM, and not a single virtual disk?
  9. Restore single Virtual Disk from PBS

    Hi, my configuration is: a cluster of 5 PVE nodes (PVE 7.0-10) with Ceph 16.2.5, plus a Proxmox Backup Server 1.0-5 (I will update it next month). I have some backups of a Windows Server 2019 guest with 2 virtual disks (scsi0 and scsi1) and I want to restore only one virtual disk (scsi0). How can I...
  10. PVE7.0.11 and InfluxDB v2

    Hi, I would like to send PVE7.0.11 metrics to an InfluxDB v2 server and I tried the native InfluxDB plug-in in PVE7. The problem is that my InfluxDB v2 server uses a self-signed certificate, and when I try to "create" an instance of InfluxDB with protocol HTTPS I receive the following...
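
    If it helps, here is a guess at what the corresponding entry in /etc/pve/status.cfg might look like for InfluxDB v2 over HTTPS; the organization/bucket/token values are placeholders, and the verify-certificate option (to accept a self-signed certificate) is an assumption worth checking against the PVE documentation:

      influxdb: influx2
              server influxdb.example.local
              port 8086
              protocol https
              organization my-org
              bucket proxmox
              token <api-token>
              verify-certificate 0
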
  11. PVE 7 and Mellanox EN driver

    Hi, I have a cluster of 5 PVE7 nodes with Mellanox NICs (MCX516A-CCAT) and I would like to install the Mellanox EN driver, but on their site there is no Debian 11 build, only Debian 10.8, Ubuntu 20.04, or Ubuntu 21.04. Now the question is: which one should I use? Thank you
  12. How to create multiple Ceph storage pools in Proxmox?

    Hi, I did some tests in PVE7 and Ceph 16.2 and I managed to reach my goal, which is to create 2 pools, one for NVMe disks and one for SSD disks. These are the steps: Install Ceph 16.2 on all nodes; Create 2 rules, one for NVMe and one for SSD (name rule for NVMe: nvme_replicated - name rule...
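
    A sketch of the pool-creation half of those steps, reusing the rule names from the post; the pool names and PG counts are placeholders:

      # one pool per device class, each bound to its CRUSH rule
      pveceph pool create Pool-NVMe --pg_num 1024 --crush_rule nvme_replicated
      pveceph pool create Pool-SSD --pg_num 128 --crush_rule ssd_replicated
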
  13. How to create multiple Ceph storage pools in Proxmox?

    Hi, in my PVE 7 cluster I have 5 nodes and each node has 4 NVMe disks and 2 SSD disks. Now I would like to create 2 different pools, one for NVMe and one for SSD. I have carefully read the doc "Ceph CRUSH & device classes" but some steps are not clear to me. What are the steps to achieve my...
  14. Proxmox 7 - RRD Graph empty

    Hi, I installed Proxmox 7, upgrading from Proxmox 6.4 due to the problem described in this post: https://forum.proxmox.com/threads/proxmox-7-stable-installation-aborted.92443/#post-403342 Everything went well, except that all RRD graphs are empty, totally white! I installed chrony...
  15. Proxmox 7 stable installation aborted

    So I'm doomed to install PVE 6.4 first and then upgrade to 7, which was just what I wanted to avoid...
  16. Proxmox 7 stable installation aborted

    Do you recommend installing Debian 11 first and then Proxmox 7, or installing PVE 6.4 first and then updating everything? Note that I would like the root partition on ZFS RAID 1... Thank you
  17. Proxmox 7 stable installation aborted

    Could someone help me please? I would like to avoid installing 6.4 and then upgrading to 7. Thank you
  18. Proxmox VE 6.4 Installation Aborted

    It's not so simple to do with a Big Twin. The graphics card must be low profile and I don't have one.
  19. Proxmox VE 6.4 Installation Aborted

    It's a Supermicro Big Twin and I can't attach a dedicated GPU; all the PCI Express slots are already in use by 3 Mellanox NICs.