Search results

  1.

    Not able to add nfs server, getting error nfs is not online

    Errors with the pvesm command [see the first sketch after these results]:
      rpc mount export: RPC: Unable to receive; errno = No route to host
      command '/sbin/showmount --no-headers --exports 172.19.2.183' failed: exit code 1
  2.

    Not able to add nfs server, getting error nfs is not online

    Yes, the mount is working:
      root@inc1pve25:~# df -k | grep vm
      172.19.2.183:/inc1fpg3/inc1vfs3/vm 4294967296 2343936 4292623360 1% /mnt/pve/vm
  3.

    Not able to add nfs server, getting error nfs is not online

      root@inc1pve25:~# nmap -p 111,2049 172.19.2.183
      Starting Nmap 7.70 ( https://nmap.org ) at 2020-07-11 07:18 UTC
      Nmap scan report for inc1vfs3 (172.19.2.183)
      Host is up (0.00010s latency).
      PORT     STATE SERVICE
      111/tcp  open  rpcbind
      2049/tcp open  nfs
      MAC Address...
  4.

    Not able to add nfs server, getting error nfs is not online

    Able to mount manually:
      root@inc1pve25:~# df -k | grep vm
      172.19.2.183:/inc1fpg3/inc1vfs3/vm 4294967296 2343936 4292623360 1% /mnt/pve/vm
  5.

    Not able to add nfs server, getting error nfs is not online

    /etc/pve/storage.cfg [see the second sketch after these results]:
      nfs: vmnfs
            content images,rootdir,backup
            server 172.19.2.183
            export /inc1fpg/inc1vfs3/vm
            path /mnt/pve/vm
            options vers=4
  6.

    Not able to add nfs server, getting error nfs is not online

      root@inc1pve25:~# showmount -e inc1vfs3
      rpc mount export: RPC: Unable to receive; errno = No route to host
  7.

    Not able to add nfs server, getting error nfs is not online

    Error in syslog:
      Jul 11 07:12:22 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not online
      Jul 11 07:12:31 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not online
      Jul 11 07:12:41 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not online
      Jul 11 07:12:51 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not...
  8.

    Not able to add nfs server, getting error nfs is not online

    pveversion:
      proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
      pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
      pve-kernel-5.4: 6.2-4
      pve-kernel-helper: 6.2-4
      pve-kernel-5.0: 6.0-11
      pve-kernel-5.4.44-2-pve: 5.4.44-2
      pve-kernel-5.4.41-1-pve: 5.4.41-1
      pve-kernel-5.0.21-5-pve: 5.0.21-10...
  9.

    Ceph unstable Behaviour causing VM hanging

      2020-07-10 14:38:06.979125 mon.inc1pve25 [WRN] Health check update: Degraded data redundancy: 149781/71793 objects degraded (208.629%), 854 pgs degraded, 450 pgs undersized (PG_DEGRADED)
      2020-07-10 14:38:11.983988 mon.inc1pve25 [WRN] Health check update: Degraded data redundancy: 129631/75819...
  10.

    Ceph unstable Behaviour causing VM hanging

    10 minutes later [see the third sketch after these results]:
      2020-07-10 14:37:41.961802 mon.inc1pve25 [INF] Marking osd.44 out (has been down for 606 seconds)
      2020-07-10 14:37:41.961822 mon.inc1pve25 [INF] Marking osd.45 out (has been down for 606 seconds)
      2020-07-10 14:37:41.961831 mon.inc1pve25 [INF] Marking osd.46 out (has been down for...
  11.

    Ceph unstable Behaviour causing VM hanging

    Yes, it is the same:
      1 node down (i.e. 4 OSDs down)  => around 10 seconds of no writes
      2 nodes down (i.e. 8 OSDs down) => 10 minutes of no writes; not able to log in to the VMs either
    ceph status:
      cluster:
        id:     b020e833-3252-416a-b904-40bb4c97af5e
        health: HEALTH_WARN
                8 osds down...
  12.

    Ceph unstable Behaviour causing VM hanging

      ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META    AVAIL   %USE  VAR   PGS  STATUS  TYPE NAME
      -1         83.83191         -   84 TiB   65 GiB   17 GiB  742 KiB  48 GiB  84 TiB  0.08  1.00    -          root default
      -3          6.98599         -  7.0 TiB  5.4 GiB  1.4 GiB   56 KiB...
  13.

    Ceph unstable Behaviour causing VM hanging

    The crush map after applying is like this; I have taken a new dump after applying:
      # begin crush map
      tunable choose_local_tries 0
      tunable choose_local_fallback_tries 0
      tunable choose_total_tries 50
      tunable chooseleaf_descend_once 1
      tunable chooseleaf_vary_r 1
      tunable chooseleaf_stable 1
      tunable...
  14.

    Ceph unstable Behaviour causing VM hanging

    I followed this to read and write the map [see the fourth sketch after these results]:
      # Read
      ceph osd getcrushmap -o map.bin
      # Convert
      crushtool -d map.bin -o map.txt
      # Edit (removed the choose_args section)
      vi map.txt
      # Convert again
      crushtool -c map.txt -o map.bin
      # Write
      ceph osd setcrushmap -i map.bin
  15.

    Ceph unstable Behaviour causing VM hanging

    Yes, that's understandable. Now, if I just remove the choose_args section from the crush map, will that be enough? I will follow the procedure to apply it again.
  16.

    Ceph unstable Behaviour causing VM hanging

    Or will simply removing choose_args from the map do the job?
  17.

    Ceph unstable Behaviour causing VM hanging

    Can I use ceph osd crush reweight {name} {weight} to reweight it? Could you suggest the ideal weight? [see the last sketch after these results]
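
Sketches referenced in the results above

First sketch (results 1, 3 and 6): the "is not online" state is reported by pvestatd (result 7), and the error in result 1 shows the probe that fails is a showmount call against the server. Below is a minimal way to rerun that probe by hand, assuming the same server address 172.19.2.183 as in the snippets; since the stanza in result 5 forces NFSv4 (options vers=4), an NFSv4-only server may not answer rpcbind/mountd even though a manual mount succeeds.

    # the exact command from the error in result 1
    /sbin/showmount --no-headers --exports 172.19.2.183

    # does rpcbind list mountd/nfs services at all?
    rpcinfo -p 172.19.2.183

    # port-level reachability, as already done in result 3
    nmap -p 111,2049 172.19.2.183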
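
Second sketch (result 5): the same storage can be defined from the CLI with pvesm add; result 1 shows the online check failing from a pvesm command, so this will report the same error until the probe above succeeds. The values below are copied from the result-5 stanza as a sketch only; note that its export path (/inc1fpg/inc1vfs3/vm) differs from the path mounted manually in results 2 and 4 (/inc1fpg3/inc1vfs3/vm), which is worth double-checking.

    # sketch: CLI equivalent of the result-5 stanza
    pvesm add nfs vmnfs \
        --server 172.19.2.183 \
        --export /inc1fpg/inc1vfs3/vm \
        --content images,rootdir,backup \
        --options vers=4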
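
Third sketch (result 10): the monitors mark OSDs out once they have been down longer than mon_osd_down_out_interval, whose default of 600 seconds matches the "down for 606 seconds" messages in that log. A sketch for inspecting, and if desired raising, that value via the cluster config database (Nautilus, as shipped with PVE 6.2); the 1800 is an arbitrary example, not a recommendation.

    # current value (default 600 seconds)
    ceph config get mon mon_osd_down_out_interval

    # example only: allow 30 minutes before down OSDs are marked out
    ceph config set mon mon_osd_down_out_interval 1800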
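
Fourth sketch (result 14): after writing the edited map back with setcrushmap, the removal of choose_args can be confirmed by decompiling a fresh dump and checking that the section is gone, using only the tools already shown in that result.

    # re-read and decompile the live map, then look for the removed section
    ceph osd getcrushmap -o map-check.bin
    crushtool -d map-check.bin -o map-check.txt
    grep -c choose_args map-check.txt   # expect 0 once the section is gone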
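
Last sketch (result 17): ceph osd crush reweight takes an OSD name and a new CRUSH weight, which by convention equals the device capacity in TiB. The number below is only an illustration derived from the snippets: the host bucket in result 12 weighs 6.98599 and result 11 says one node holds 4 OSDs, so roughly 1.7465 per OSD; osd.44 is simply an id that appears in result 10, not a recommendation.

    # illustration only: set one OSD's CRUSH weight back to its capacity in TiB
    ceph osd crush reweight osd.44 1.7465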