Search results

  1. [SOLVED] Ceph Node down after upgrade to 6.2

    OK, can you share the output of the following: ceph -s, ceph osd dump, corosync-cmapctl
  2. Clearing Ceph OSD partition

    Follow this https://docs.ceph.com/docs/master/ceph-volume/lvm/zap/
  3. [SOLVED] Ceph Node down after upgrade to 6.2

    Were you on Ceph Luminous or Nautilus before the upgrade? Add the Ceph repository and update.
  4. Proxmox Roadmap

    Thanks, I have subscribed now.
  5. Proxmox Roadmap

    I have gone through the roadmap available on the website; it lists the following: backup improvements, maintenance mode, pveclient, container on ZFS over iSCSI, btrfs storage plugin (postponed), improved SDN support, cross-cluster authentication mechanism, VM/CT encryption. Just...
  6. [SOLVED] pvecm status showing only 1 node despite all node up

    The issue is resolved by setting secauth: off in the /etc/corosync/corosync.conf file and restarting corosync.
  7. [SOLVED] pvecm status showing only 1 node despite all node up

    I have a cluster of 8 servers, but pvecm status is only showing one node and reporting activity blocked on all the servers. 1. I have synchronized time on all servers. 2. I have restarted all servers. 3. pvecm status on all servers shows a different ring id; corosync-cfgtool -s shows all connected...
  8. Proxmox 6.2 cluster HA Vms don't migrate if node fails

    Paresh, as per that you have only 3 OSDs in the Ceph pool, and if one server goes down, 33% of placement groups are degraded, causing the issue. Once you take the server down, HA migration will try to migrate the VM as you still have quorum, but Ceph is degraded and may have stale and inactive PGs due to...
  9. Proxmox 6.2 cluster HA Vms don't migrate if node fails

    Not capacity; pool size. Post the output of the following commands: ceph osd dump, ceph df, pveceph lspools
  10. Proxmox 6.2 cluster HA Vms don't migrate if node fails

    Are your VMs running on Ceph? If yes, what is the pool size?
  11. No-Subscription license - No valid subscription message

    It will only be removed once you buy a subscription.
  12. Passthrough hardcoded Windows license

    Just try using the host CPU type instead of kvm64 in the VM.
  13. Ceph unstable Behaviour causing VM hanging

    OK, thanks @Alwin. I have one more issue and raised a thread for it as well: https://forum.proxmox.com/threads/not-able-to-add-nfs-server-getting-error-nfs-is-not-online.72729/#post-325206 Can you have a look? Not able to add NFS to PVE 6; it works fine with PVE 5.
  14. Ceph unstable Behaviour causing VM hanging

    The controller is VirtIO SCSI; do you mean VirtIO SCSI single? And yes, the cache is writeback; I missed that while copying.
  15. Ceph unstable Behaviour causing VM hanging

    @inc1pve25:~# qm config 3600 agent: 1 bootdisk: scsi0 cores: 10 cpu: kvm64 ide2: none,media=cdrom memory: 20480 name: server1 net0: virtio=EA:D2:42:2B:F4:43,bridge=vmbr0,firewall=1 net1: virtio=F6:28:36:D7:04:DA,bridge=vmbr3010,firewall=1 numa: 1 onboot: 1 ostype: l26 scsi0...
  16. Not able to add nfs server, getting error nfs is not online

    Please note I have installed one RHEL 7 VM in Proxmox and assigned it an IP in the same network as Proxmox. There I executed showmount -e 172.19.2.183 and got a result, whereas I did not get anything on Proxmox.
  17. Not able to add nfs server, getting error nfs is not online

    pvesm scan nfs 172.19.2.183 errors with: rpc mount export: RPC: Unable to receive; errno = No route to host command '/sbin/showmount --no-headers --exports 172.19.2.183' failed: exit code 1
  18. Not able to add nfs server, getting error nfs is not online

    Yes, NFS did not work from the beginning, from the UI. I tried adding it through /etc/pve/storage.cfg; same error. root@inc1pve27:/mnt/pve/vm# ls -ltra total 20 drwxr-xr-x 18 root root 4096 May 20 07:29 .. drwxr-xr-x 2 root root 8192 Jul 11 06:10 .snapshot drwxrwx--- 3 root 10544 8192 Jul 12...
  19. Not able to add nfs server, getting error nfs is not online

    mount -av command output mount.nfs: timeout set for Sun Jul 12 10:02:43 2020 mount.nfs: trying text-based options 'hard,vers=4.2,addr=172.19.2.183,clientaddr=172.19.2.32' mount.nfs: mount(2): Protocol not supported mount.nfs: trying text-based options...
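For reference, the secauth workaround from entry 6 amounts to a one-line change in the totem section of corosync.conf. A sketch of the fragment (cluster name is an example); be aware that secauth: off disables authentication and encryption of cluster traffic, so this is a workaround rather than a recommended configuration:

```
# /etc/corosync/corosync.conf (fragment; other keys unchanged)
totem {
  version: 2
  cluster_name: examplecluster   # example name
  secauth: off                   # changed from: secauth: on
}
```

After editing, restart the cluster stack on each node, e.g. systemctl restart corosync followed by systemctl restart pve-cluster.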
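The ceph-volume zap procedure linked in entry 2 boils down to a couple of commands. A minimal sketch, assuming /dev/sdb as an example device backing the OSD to be cleared:

```shell
# Wipe filesystem and LVM metadata from a disk previously used as a Ceph OSD.
# /dev/sdb is an example device path; replace it with the actual OSD disk.
ceph-volume lvm zap /dev/sdb

# Also remove the LVM volume group/logical volume that ceph-volume created,
# so the disk can be reused for a fresh OSD:
ceph-volume lvm zap --destroy /dev/sdb
```

Note that --destroy is required when the device should be fully released from LVM; without it, only the data and metadata on the existing volumes are wiped.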
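The "No route to host" error in entry 17 usually points at the portmapper or mountd being unreachable (firewall or routing) rather than at the NFS export itself. A hedged diagnostic sequence, run from the PVE host against the server IP from the thread:

```shell
# 1. Basic reachability of the NFS server.
ping -c 3 172.19.2.183

# 2. List RPC services registered with the portmapper; showmount needs
#    the mountd service to be visible here.
rpcinfo -p 172.19.2.183

# 3. Retry the exact export listing that pvesm runs under the hood.
showmount --no-headers --exports 172.19.2.183
```

If step 2 or 3 fails while the same commands succeed from a VM in another network (as in entry 16), the difference is almost certainly a firewall rule between the PVE host network and the NFS server.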
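The "mount(2): Protocol not supported" in entry 19 appears while mount.nfs tries vers=4.2, which suggests the server does not offer that NFS version. A sketch of pinning an explicit version instead of letting the client negotiate (the export path /export and mount point are assumptions for illustration):

```shell
# Force NFSv3 instead of the negotiated 4.2; /export is an example path.
mkdir -p /mnt/test
mount -t nfs -o vers=3 172.19.2.183:/export /mnt/test
```

The equivalent for a Proxmox storage definition is adding "options vers=3" to the NFS entry in /etc/pve/storage.cfg.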
