Search results

  1. Not able to add nfs server, getting error nfs is not online

    pvesm scan nfs 172.19.2.183 errors with:
    rpc mount export: RPC: Unable to receive; errno = No route to host
    command '/sbin/showmount --no-headers --exports 172.19.2.183' failed: exit code 1
  2. Not able to add nfs server, getting error nfs is not online

    Yes, nfs did not work from the beginning in the UI. I tried adding it through /etc/pve/storage.cfg, same error.
    root@inc1pve27:/mnt/pve/vm# ls -ltra
    total 20
    drwxr-xr-x 18 root root  4096 May 20 07:29 ..
    drwxr-xr-x  2 root root  8192 Jul 11 06:10 .snapshot
    drwxrwx---  3 root 10544 8192 Jul 12...
  3. Not able to add nfs server, getting error nfs is not online

    mount -av command output:
    mount.nfs: timeout set for Sun Jul 12 10:02:43 2020
    mount.nfs: trying text-based options 'hard,vers=4.2,addr=172.19.2.183,clientaddr=172.19.2.32'
    mount.nfs: mount(2): Protocol not supported
    mount.nfs: trying text-based options...
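
    The "Protocol not supported" reply to the vers=4.2 attempt usually means the server does not offer that NFS minor version, so the client falls back step by step. A sketch for finding a version the server does accept; the server address and export path are taken from the thread, but the pinned versions here are assumptions to probe, not confirmed fixes:

    ```
    # Try pinning one NFS version at a time until a mount succeeds.
    mount -t nfs -o vers=4.0 172.19.2.183:/inc1fpg3/inc1vfs3/vm /mnt/pve/vm
    mount -t nfs -o vers=3   172.19.2.183:/inc1fpg3/inc1vfs3/vm /mnt/pve/vm
    ```

    Whichever pinned version mounts cleanly is a candidate for the options line in storage.cfg.
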
  4. Not able to add nfs server, getting error nfs is not online

    errors with pvesm command:
    rpc mount export: RPC: Unable to receive; errno = No route to host
    command '/sbin/showmount --no-headers --exports 172.19.2.183' failed: exit code 1
  5. Not able to add nfs server, getting error nfs is not online

    yes, mount is working:
    root@inc1pve25:~# df -k | grep vm
    172.19.2.183:/inc1fpg3/inc1vfs3/vm 4294967296 2343936 4292623360 1% /mnt/pve/vm
  6. Not able to add nfs server, getting error nfs is not online

    root@inc1pve25:~# nmap -p 111,2049 172.19.2.183
    Starting Nmap 7.70 ( https://nmap.org ) at 2020-07-11 07:18 UTC
    Nmap scan report for inc1vfs3 (172.19.2.183)
    Host is up (0.00010s latency).
    PORT     STATE SERVICE
    111/tcp  open  rpcbind
    2049/tcp open  nfs
    MAC Address...
  7. Not able to add nfs server, getting error nfs is not online

    Able to mount manually:
    root@inc1pve25:~# df -k | grep vm
    172.19.2.183:/inc1fpg3/inc1vfs3/vm 4294967296 2343936 4292623360 1% /mnt/pve/vm
  8. Not able to add nfs server, getting error nfs is not online

    /etc/pve/storage.cfg:
    nfs: vmnfs
            content images,rootdir,backup
            server 172.19.2.183
            export /inc1fpg/inc1vfs3/vm
            path /mnt/pve/vm
            options vers=4
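
    If the server turns out to reject NFSv4 (the mount -av trace in result 3 shows vers=4.2 being refused), a common variant of this stanza pins an older version instead. This is a sketch, not a confirmed fix for this cluster; only the options line differs from the posted config:

    ```
    nfs: vmnfs
            content images,rootdir,backup
            server 172.19.2.183
            export /inc1fpg/inc1vfs3/vm
            path /mnt/pve/vm
            options vers=3
    ```
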
  9. Not able to add nfs server, getting error nfs is not online

    root@inc1pve25:~# showmount -e inc1vfs3
    rpc mount export: RPC: Unable to receive; errno = No route to host
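
    showmount talks to rpc.mountd, which registers on a dynamic port via rpcbind; a firewall that only opens 111 and 2049 (the two ports nmap probed in result 6) can still block mountd and produce exactly this "No route to host". A diagnostic sketch, assuming the rpcinfo tool from the rpcbind package is installed on the node:

    ```
    # List every RPC service the server has registered, with its port;
    # look for the mountd entries and check that port is reachable too.
    rpcinfo -p 172.19.2.183
    ```
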
  10. Not able to add nfs server, getting error nfs is not online

    Error in syslog:
    Jul 11 07:12:22 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not online
    Jul 11 07:12:31 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not online
    Jul 11 07:12:41 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not online
    Jul 11 07:12:51 inc1pve25 pvestatd[2319]: storage 'vmnfs' is not...
  11. Not able to add nfs server, getting error nfs is not online

    pveversion:
    proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
    pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
    pve-kernel-5.4: 6.2-4
    pve-kernel-helper: 6.2-4
    pve-kernel-5.0: 6.0-11
    pve-kernel-5.4.44-2-pve: 5.4.44-2
    pve-kernel-5.4.41-1-pve: 5.4.41-1
    pve-kernel-5.0.21-5-pve: 5.0.21-10...
  12. Ceph unstable Behaviour causing VM hanging

    2020-07-10 14:38:06.979125 mon.inc1pve25 [WRN] Health check update: Degraded data redundancy: 149781/71793 objects degraded (208.629%), 854 pgs degraded, 450 pgs undersized (PG_DEGRADED)
    2020-07-10 14:38:11.983988 mon.inc1pve25 [WRN] Health check update: Degraded data redundancy: 129631/75819...
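
    A degradation figure above 100% looks impossible, but Ceph counts degraded object *copies* against the total object count, so with replication the ratio can exceed 100%. Recomputing from the log line's own numbers confirms the reading:

    ```shell
    # 149781 degraded copies over 71793 objects, as in the health warning above.
    awk 'BEGIN { printf "%.3f%%\n", 149781 / 71793 * 100 }'
    # prints 208.629%
    ```

    
    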
  13. Ceph unstable Behaviour causing VM hanging

    10 minutes later:
    2020-07-10 14:37:41.961802 mon.inc1pve25 [INF] Marking osd.44 out (has been down for 606 seconds)
    2020-07-10 14:37:41.961822 mon.inc1pve25 [INF] Marking osd.45 out (has been down for 606 seconds)
    2020-07-10 14:37:41.961831 mon.inc1pve25 [INF] Marking osd.46 out (has been down for...
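
    The "has been down for 606 seconds" lines line up with Ceph's mon_osd_down_out_interval, which defaults to 600 seconds: the monitors wait that long before marking a down OSD out and starting re-replication, which is roughly the 10-minute window the poster describes. A sketch for checking the value on this cluster (assumes a release where the ceph config subcommand exists, Mimic or newer):

    ```
    ceph config get mon mon_osd_down_out_interval
    ```
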
  14. Ceph unstable Behaviour causing VM hanging

    Yes, it is the same:
    1 node down (i.e. 4 OSDs down) ==> around 10 seconds of no writes
    2 nodes down (i.e. 8 OSDs down) ==> 10 minutes of no writes; not able to log in to the VMs either
    ceph status:
    cluster:
      id: b020e833-3252-416a-b904-40bb4c97af5e
      health: HEALTH_WARN
              8 osds down...
  15. Ceph unstable Behaviour causing VM hanging

    ID CLASS WEIGHT   REWEIGHT SIZE    RAW USE DATA   OMAP    META   AVAIL  %USE VAR  PGS STATUS TYPE NAME
    -1       83.83191 -        84 TiB  65 GiB  17 GiB 742 KiB 48 GiB 84 TiB 0.08 1.00 -          root default
    -3        6.98599 -        7.0 TiB 5.4 GiB 1.4 GiB 56 KiB...
  16. Ceph unstable Behaviour causing VM hanging

    CrushMap after applying is like this; I have taken a new dump after applying:
    # begin crush map
    tunable choose_local_tries 0
    tunable choose_local_fallback_tries 0
    tunable choose_total_tries 50
    tunable chooseleaf_descend_once 1
    tunable chooseleaf_vary_r 1
    tunable chooseleaf_stable 1
    tunable...
  17. Ceph unstable Behaviour causing VM hanging

    I followed this to read and write the map:
    # Read
    ceph osd getcrushmap -o map.bin
    # Convert
    crushtool -d map.bin -o map.txt
    # Edit (removed the choose_args section)
    vi map.txt
    # Convert again
    crushtool -c map.txt -o map.bin
    # Write
    ceph osd setcrushmap -i map.bin