Search results

  1. Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Hello, is there any update on this issue? Thank you
  2. Ceph 17.2 Quincy Available as Stable Release

    Hi All, I have a cluster of 5 nodes with Proxmox 7.1-12 and Ceph 16.2.7. This weekend I would like to upgrade Proxmox to 7.2 and Ceph to 17.2.1. My Ceph cluster is made of 3 pools: device_health_metrics with 1 placement group, Ceph-1-NVMe-Pool with 1024 placement groups, Ceph-1-SSD-Pool with...
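
    For context, a generic way to review the cluster state and the per-pool PG counts before an upgrade like this (a sketch run from any cluster node, not commands taken from the thread):

        # Overall health and daemon versions before starting
        ceph -s
        ceph versions
        # Pools with their pg_num / pgp_num and CRUSH rules
        ceph osd pool ls detail
        ceph df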
  3. PVE 7.1-12 Ceph n. pg not deep-scrubbed in time

    Hi, the morning of April 17 I upgraded my 5-node Proxmox cluster (with Ceph 16.2.7) from 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and...
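
    When PGs report "not deep-scrubbed in time" after scrubbing is re-enabled, one common approach is to list the affected PGs and trigger the deep scrubs manually; a minimal sketch (the PG ID is an example):

        # Show which PGs are behind on deep scrubbing
        ceph health detail | grep 'not deep-scrubbed'
        # Manually trigger a deep scrub on one of them (replace 2.1a with a real PG ID)
        ceph pg deep-scrub 2.1a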
  4. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, I have used the command ceph pg but this command is incomplete; the output is: no valid command found; 10 closest matches: pg stat, pg getmap, pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...], pg dump_json [all|summary|sum|pools|osds|pgs...], pg dump_pools_json, pg ls-by-pool <poolstr>...
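
    ceph pg always needs a subcommand; a few complete forms relevant to this thread (the pool name is a placeholder):

        # Summary of all placement groups
        ceph pg stat
        # One line per PG with its current state (look for scrubbing+deep)
        ceph pg dump pgs_brief
        # PGs belonging to a single pool
        ceph pg ls-by-pool Ceph-1-NVMe-Pool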
  5. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    The problem has not been resolved yet, and /var/log/ceph/ceph.log is still full of the messages mentioned in my previous post... Could someone help me, please? Thank you
  6. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, yesterday morning I updated my 5-node cluster from Proxmox 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and nodeep-scrub. I have 2 pools, one...
  7. Restore single Virtual Disk from PBS

    As Fabian said, you can't restore a single virtual disk from the GUI. You can only restore files or directories.
  8. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    Hi, I have the same problem, Proxmox 7.1.7 and Windows Server Datacenter 2016/2019.
  9. Proxmox Ceph Pool specify disks

    In the last step in the GUI, on each node under Ceph --> OSD, when you add OSDs to Ceph...
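
    If an OSD ends up with the wrong device class after creation, it can also be corrected from the CLI; a small sketch (the OSD ID and class are examples):

        # Check which class each OSD currently has
        ceph osd tree
        # Reassign a class, e.g. mark osd.12 as nvme
        ceph osd crush rm-device-class osd.12
        ceph osd crush set-device-class nvme osd.12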
  10. Restore single Virtual Disk from PBS

    Hi, are you planning, in an upcoming release, to allow the restore of a single virtual disk in addition to the entire VM and single files? Another really useful feature would be the ability to create VLANs in the GUI ... Thank you
  11. Proxmox Ceph Pool specify disks

    Hi, in my cluster I have 2 pools, one for NVMe disks and one for SSD disks. These are the steps I followed to achieve my goal: Create 2 rules, one for NVMe and one for SSD: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> So for NVMe disks the above...
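
    The snippet is truncated, but a sketch of what the complete sequence could look like for the two device classes (rule and pool names are placeholders, not necessarily the ones used in the thread):

        # One replicated CRUSH rule per device class
        ceph osd crush rule create-replicated replicated-nvme default host nvme
        ceph osd crush rule create-replicated replicated-ssd default host ssd
        # Point each pool at its rule
        ceph osd pool set Pool-NVMe crush_rule replicated-nvme
        ceph osd pool set Pool-SSD crush_rule replicated-ssd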
  12. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, so if I understand correctly, you suggest setting the following Ceph flags: ceph osd set noscrub ceph osd set nodeep-scrub ceph osd set noout before starting the update of node 1, and removing them with: ceph osd unset noscrub ceph osd unset nodeep-scrub ceph osd unset noout only when...
  13. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Now the question is: before updating node 2, do you advise unsetting the OSD flags with ceph osd unset noscrub ceph osd unset nodeep-scrub ceph osd unset noout, waiting until Ceph is OK and then repeating the procedure as done for node 1 (set the OSD flags again, upgrade node 2 and then unset the OSD...
  14. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, is there an official procedure to update a PVE7 cluster with Ceph 16.2? I have a cluster of 5 nodes with PVE 7.0.10 and Ceph 16.2.5. Up to now this is the procedure I have used (for example to update node 1): 1. Migrate all VMs on node 1 to other nodes 2. apt update 3. apt dist-upgrade 4...
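
    For reference, a sketch of how such a per-node sequence is commonly combined with the Ceph flags discussed in this thread (an illustration, not the official Proxmox procedure):

        # Before touching the first node
        ceph osd set noout
        ceph osd set noscrub
        ceph osd set nodeep-scrub
        # Migrate the guests away, then update the node
        apt update
        apt dist-upgrade
        # Reboot if a new kernel was installed and wait until "ceph -s" is healthy again
        # (apart from the flag warnings) before moving on to the next node.
        # Once every node is done:
        ceph osd unset noout
        ceph osd unset noscrub
        ceph osd unset nodeep-scrub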
  15. PVE7 - Ceph 16.2.5 - Pools and number of PG

    So for my cluster you advise running the following commands: ceph config set global osd_pool_default_pg_autoscale_mode off But how can I set pg_num and pgp_num to 1024? Is it safe to do it in a production environment? Can I use this guide...
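
    A sketch of what raising the PG count for one pool could look like with the autoscaler disabled (the pool name is a placeholder; on recent Ceph releases pgp_num is adjusted gradually to follow pg_num, so the change happens in the background):

        # Turn the autoscaler off globally (default for new pools) and for the existing pool
        ceph config set global osd_pool_default_pg_autoscale_mode off
        ceph osd pool set Ceph-1-NVMe-Pool pg_autoscale_mode off
        # Raise the PG count; Ceph splits PGs and rebalances gradually
        ceph osd pool set Ceph-1-NVMe-Pool pg_num 1024
        ceph osd pool set Ceph-1-NVMe-Pool pgp_num 1024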
  16. PVE7 - Ceph 16.2.5 - Pools and number of PG

    Hi, I'm using the driver that ships with PVE7; I only upgraded the firmware that I found on the Mellanox site. Then I downloaded the Mellanox tools from the following link: https://www.mellanox.com/products/adapter-software/firmware-tools You also have to download the firmware for your card... Follow this mini...
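
    A rough sketch of the usual flow with the Mellanox firmware tools (the device path and firmware file are examples; check the MFT documentation for your specific card):

        # Start the Mellanox software tools service and locate the device
        mst start
        mst status
        # Burn the downloaded firmware image onto the adapter
        flint -d /dev/mst/mt4119_pciconf0 -i fw-ConnectX5.bin burn
        # Verify the installed / running firmware versions
        flint -d /dev/mst/mt4119_pciconf0 query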
  17. PVE7 - Ceph 16.2.5 - Pools and number of PG

    Hi, I have a cluster of 5 PVE7 nodes with Ceph 16.2.5. The hardware configuration of 4 of the 5 nodes is: CPU: 2 x EPYC Rome 7402; RAM: 1 TB ECC; 2 x 960 GB SSD in ZFS RAID 1 for Proxmox; 4 x Micron 9300 MAX 3.2 TB NVMe for Pool 1, named Pool-NVMe; 2 x Micron 5300 PRO 3.8 TB SSD for Pool 2, named Pool-SSD...
  18. Restore single Virtual Disk from PBS

    Hi Fabian, so from the GUI of PVE7 it's possible to restore only the full VM and not a single virtual disk?
  19. Restore single Virtual Disk from PBS

    Hi, my configuration is: a cluster of 5 PVE nodes (PVE 7.0-10) with Ceph 16.2.5, plus a Proxmox Backup Server 1.0-5 (I will update it next month). I have some backups of a Windows Server 2019 guest with 2 virtual disks (scsi0 and scsi1) and I want to restore only one virtual disk (scsi0). How can I...
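
    One workaround often suggested for this case is to pull the single disk image out of the backup with proxmox-backup-client on the CLI; a hedged sketch (repository, VM ID, snapshot timestamp and archive name are all placeholders that must match the actual backup):

        # List the snapshots in the datastore to find the exact snapshot and archive names
        proxmox-backup-client snapshots --repository root@pam@pbs-host:datastore
        # Restore only the scsi0 image archive to a local raw file
        proxmox-backup-client restore "vm/101/2021-09-01T02:00:00Z" drive-scsi0.img /tmp/scsi0-restored.raw --repository root@pam@pbs-host:datastore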
