Search results

  1. [SOLVED] Ceph Node down after upgrade to 6.2

    It looks good; I don't see any issue in the cluster. Just verify that the Ceph nodes are able to communicate, and check the logs.
  2. [SOLVED] Ceph Node down after upgrade to 6.2

    Does pvecm status show activity blocked?
  3. [SOLVED] Ceph Node down after upgrade to 6.2

    OK, can you share the output of the following: ceph -s, ceph osd dump, corosync-cmapctl
  4. Clearing Ceph OSD partition

    Follow this: https://docs.ceph.com/docs/master/ceph-volume/lvm/zap/
  5. [SOLVED] Ceph Node down after upgrade to 6.2

    Were you on Ceph Luminous or Nautilus before the upgrade? Add a Ceph repository and update.
  6. Proxmox Roadmap

    Thanks, I have subscribed now.
  7. Proxmox Roadmap

    I have gone through the roadmap available on the website; it lists the following: backup improvements, maintenance mode, pveclient, container on ZFS over iSCSI, btrfs storage plugin (postponed), improved SDN support, cross-cluster authentication mechanism, VM/CT encryption. Just...
  8. [SOLVED] pvecm status showing only 1 node despite all node up

    The issue was resolved by changing secauth: off in the /etc/corosync/corosync.conf file and restarting corosync.
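For reference, a minimal sketch of what the totem section of /etc/corosync/corosync.conf might look like after that change (the cluster name and version here are placeholders, not from the thread). Note that secauth: off disables authentication and encryption of corosync cluster traffic, so it is a workaround rather than a recommended permanent setting:

```
totem {
  version: 2
  cluster_name: mycluster   # placeholder name
  secauth: off              # disables auth/encryption of totem traffic
}
```

After editing the file on each node, the thread's fix was completed by restarting the corosync service on all nodes.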
  9. [SOLVED] pvecm status showing only 1 node despite all node up

    I have a cluster of 8 servers, but pvecm status is showing only one node and showing activity blocked on all servers. 1. I have synchronized the time on all servers 2. I have restarted all servers 3. pvecm status on all servers shows a different ring ID; corosync-cfgtool -s shows all connected...
  10. Proxmox 6.2 cluster HA Vms don't migrate if node fails

    Paresh, as per that, you have only 3 OSDs in the Ceph pool, so if one server goes down, 33% of the placement groups are degraded, causing the issue. Once you take the server down, HA migration will try to migrate the VM since you still have quorum, but Ceph is degraded and may have stale and inactive PGs due to...
  11. Proxmox 6.2 cluster HA Vms don't migrate if node fails

    Not capacity, pool size. Post the output of the following commands: ceph osd dump, ceph df, pveceph lspools
  12. Proxmox 6.2 cluster HA Vms don't migrate if node fails

    Are your VMs running on Ceph? If yes, what is the pool size?
  13. No-Subscription license - No valid subscription message

    It will only be removed once you buy a subscription.
  14. Passthrough hardcoded Windows license

    Just try using host instead of kvm64 as the CPU type in the VM.
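The suggestion above corresponds to the cpu option in the VM's configuration (the VMID 100 below is a placeholder, not from the thread). With cpu: host, the guest sees the host's real CPU model instead of the generic kvm64 model, which matters for a Windows license tied to the hardware:

```
# /etc/pve/qemu-server/100.conf  (100 is a placeholder VMID)
cpu: host
```

The same change can be made from the CLI with `qm set 100 --cpu host`, or in the web UI under the VM's Hardware > Processors settings.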
  15. Ceph unstable Behaviour causing VM hanging

    OK, thanks @Alwin. I have one more issue, and raised a thread as well: https://forum.proxmox.com/threads/not-able-to-add-nfs-server-getting-error-nfs-is-not-online.72729/#post-325206 Can you have a look? I am not able to add NFS on PVE 6; it works fine with PVE 5.
  16. Ceph unstable Behaviour causing VM hanging

    The controller is VirtIO SCSI only; do you mean VirtIO SCSI single? And yes, the cache is writeback. I missed that while copying.
  17. Ceph unstable Behaviour causing VM hanging

    @inc1pve25:~# qm config 3600 agent: 1 bootdisk: scsi0 cores: 10 cpu: kvm64 ide2: none,media=cdrom memory: 20480 name: server1 net0: virtio=EA:D2:42:2B:F4:43,bridge=vmbr0,firewall=1 net1: virtio=F6:28:36:D7:04:DA,bridge=vmbr3010,firewall=1 numa: 1 onboot: 1 ostype: l26 scsi0...
  18. Not able to add nfs server, getting error nfs is not online

    Please note: I have installed a RHEL 7 VM on the Proxmox host and assigned it an IP in the same network as Proxmox. There I executed showmount -e 172.19.2.183 and got a result, whereas I did not get anything on Proxmox itself.