Search results

  1. HA Failover if underlying storage dies

    As of now there is no such provision in Proxmox.

  2. [SOLVED] Tried my first 3Node CEPH Cluster on 6.4-6

    Use the command ceph crash archive-all to acknowledge all crash logs; it will change the status from Warning to Healthy. (See the sketch after this list.)

  3. How to check Proxmox logs of yesterday's

    No solution is required. Memory reserved by Linux will be made available to processes as and when required. It is not a problem and you may ignore it.

  4. cache vm hard disk

    Please enable the write back and discard options and make sure the QEMU guest agent is enabled in Windows. (See the sketch after this list.)

  5. Clone of VM vs Clone of VM using Snapshot

    Yes, I got that. Now I have a follow-up question. When the disks are deployed using Ceph, then according to the Ceph documentation https://docs.ceph.com/en/latest/rbd/rbd-snapshot/#getting-started-with-layering the process is: create a block image > create a snapshot > protect the snapshot > clone the snapshot. So...

  6. Clone of VM vs Clone of VM using Snapshot

    I understand that, but suppose I created a snapshot at 8 am and I want to take a backup at 10 am. The 8 am snapshot will hold a state that is two hours older than the current state. Do you mean that "current" will take the current state at 10 am, or will it use the last snapshot...

  7. [SOLVED] Replace SSD Raid 1

    Can you share the output of zpool status?

  8. Clone of VM vs Clone of VM using Snapshot

    Option 1: no snapshot of the existing VM
    Option 2: snapshot of the existing VM

  9. Clone of VM vs Clone of VM using Snapshot

    When I want to make a clone of a VM and no snapshot is configured, I only see an option to define the target storage; but when a snapshot of the VM already exists, there is one more option in the wizard saying "choose snapshot", letting you pick which snapshot to clone from. That is what I was...

  10. Clone of VM vs Clone of VM using Snapshot

    Can anyone explain to me the difference between a clone of a VM directly vs a clone of a VM using a snapshot?

  11. [SOLVED] Replace SSD Raid 1

    If it is configured as a RAID1 disk at the BIOS level, it will show as a single disk in the Proxmox GUI. If it is showing as individual disks, check whether you configured them as ZFS RAID1 and not hardware-level RAID1.

  12. VM node Restrictions

    If you want Elasticsearch to be on 3 different machines under normal conditions in a 4-node setup, do the following using the <host>:<priority> syntax. Group1: Node1 priority 2, Node2 priority 3, Node3 priority 4 == assign Elasticsearch VM1. Group2: Node3 priority 2, Node2 priority 3... (See the sketch after this list.)

  13. How to check Proxmox logs of yesterday's

    According to this, memory is assigned to buff/cache, which is normal. You don't have any memory issue in the VM. The reported memory usage is nominal: per-process memory usage is low, and buff/cache is memory reserved by the kernel.

  14. 1st cluster creation

    Steps to be followed:
    apt-get update && apt-get full-upgrade
    apt-get install ifupdown2

  15. 1st cluster creation

    Go ahead with clustering. Networks can be changed on the fly, provided you have installed the ifupdown2 package.

  16. Slow CEPH Rebuild

    The following commands appear to be sufficient to speed up backfilling/recovery. On the admin node run:
    ceph tell 'osd.*' injectargs --osd-max-backfills=2 --osd-recovery-max-active=6
    or
    ceph tell 'osd.*' injectargs --osd-max-backfills=3 --osd-recovery-max-active=9
    To set back to default, run... (See the sketch after this list.)

  17. Slow CEPH Rebuild

    Ceph ensures that whenever a recovery operation is happening, it does not choke the cluster network with recovery data. This behaviour is controlled by these flags. osd max backfills: the maximum number of backfill operations allowed to/from an OSD. The higher the number, the quicker the...

  18. Proxmox + Ceph drive configuration

    In Ceph, weights are normally assigned based on the size of the disk. For example, if your disk size is 1.92 TB and, after right-sizing, the OSD size shown in the osd tree is 1.75 TB, you will see a weight of 1.75. Now, in your case both the SSDs and the NVMe drives have 500G capacity, so after right-sizing let us... (See the sketch after this list.)

  19. Proxmox + Ceph drive configuration

    It won't cause any problem; just ensure proper weights are assigned to the NVMe drives. The weight of an NVMe OSD must be greater than that of an SSD OSD.

  20. Proxmox + Ceph drive configuration

    Considering your setup, you have a 120GB boot disk per server (which I believe is used for deploying Proxmox + Ceph), a total of 3 NVMe disks of 500GB each, and a total of 3 SSD disks of 500GB each. Now if you combine them in Ceph, it will result in 6 OSDs (3 NVMe + 3 SSD). Ceph can allow mixed use of different... (See the sketch after this list.)
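
Example sketches

For result 2: a minimal sketch of acknowledging Ceph crash warnings, assuming the cluster is showing HEALTH_WARN with "daemons have recently crashed". The crash ID passed to ceph crash info is a placeholder.

    # list the crash reports that are raising the warning
    ceph crash ls
    # optionally inspect one report before dismissing it (placeholder ID)
    ceph crash info <crash-id>
    # archive all reports; the cluster should return to HEALTH_OK
    ceph crash archive-all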
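
For result 4: a minimal sketch of enabling writeback cache and discard on a VM disk and turning on the QEMU guest agent from the CLI. The VM ID (100) and the volume name are placeholders for the actual VM; the guest agent still has to be installed inside Windows separately.

    # re-attach the existing disk with writeback cache and discard enabled (placeholder IDs)
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback,discard=on
    # enable the QEMU guest agent option for the VM
    qm set 100 --agent enabled=1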
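
For result 12: a minimal sketch of the <host>:<priority> idea using ha-manager. Group names, node names, and VM IDs are placeholders; a higher priority means the resource prefers that node.

    # group 1 prefers node3 (priority 4), then node2, then node1 (placeholder names)
    ha-manager groupadd elastic1 --nodes "node1:2,node2:3,node3:4"
    # group 2 shifts the preference so the second VM lands on a different node
    ha-manager groupadd elastic2 --nodes "node3:2,node2:3,node4:4"
    # pin each Elasticsearch VM to its group (placeholder VM IDs)
    ha-manager add vm:101 --group elastic1
    ha-manager add vm:102 --group elastic2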
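
For result 16: a hedged sketch of putting the throttles back once recovery has finished, assuming the long-standing upstream defaults of 1 for osd_max_backfills and 3 for osd_recovery_max_active; verify the defaults for your Ceph release (for example with ceph config show osd.0) before relying on these values.

    # restore the assumed default backfill/recovery throttles (verify for your release)
    ceph tell 'osd.*' injectargs --osd-max-backfills=1 --osd-recovery-max-active=3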
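
For result 18: a minimal sketch of checking and correcting OSD CRUSH weights. The OSD ID (osd.3) and the weight (0.45, roughly the right-sized capacity of a 500G device in TiB) are placeholders for the affected OSD.

    # show the current CRUSH weight of each OSD
    ceph osd tree
    # set one OSD's CRUSH weight to match its right-sized capacity (placeholder values)
    ceph osd crush reweight osd.3 0.45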
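
For result 20: a minimal sketch of keeping NVMe and SSD OSDs in one cluster while steering pools to a single device class, assuming Ceph auto-detected the device classes. The rule and pool names are placeholders.

    # confirm which device class each OSD was assigned
    ceph osd tree
    # create replicated rules limited to one device class each (placeholder rule names)
    ceph osd crush rule create-replicated nvme-only default host nvme
    ceph osd crush rule create-replicated ssd-only default host ssd
    # point an existing pool at one of the rules (placeholder pool name)
    ceph osd pool set mypool crush_rule nvme-only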
