mount.nfs: requested NFS version or transport protocol is not supported

Discussion in 'Proxmox VE: Installation and configuration' started by acidrop, Nov 6, 2014.

  1. acidrop

    acidrop Member

    Joined:
    Jul 17, 2012
    Messages:
    194
    Likes Received:
    4
    Hello

    I have 2 proxmox nodes in a cluster.
    I am using debian wheezy with proxmox repos.
    One of nodes acts as nfs server for iso files.
    It was working normally but now it does not.
    On syslog I get the message: mount.nfs: requested NFS version or transport protocol is not supported

    On NFS server side:

    Code:
pveversion -v
proxmox-ve-2.6.32: 3.3-138 (running kernel: 3.10.0-4-pve)
    pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
    pve-kernel-3.10.0-4-pve: 3.10.0-17
    pve-kernel-2.6.32-32-pve: 2.6.32-136
    pve-kernel-2.6.32-33-pve: 2.6.32-138
    lvm2: 2.02.98-pve4
    clvm: 2.02.98-pve4
    corosync-pve: 1.4.7-1
    openais-pve: 1.1.4-3
    libqb0: 0.11.1-2
    redhat-cluster-pve: 3.2.0-2
    resource-agents-pve: 3.9.2-4
    fence-agents-pve: 4.0.10-1
    pve-cluster: 3.0-15
    qemu-server: 3.3-1
    pve-firmware: 1.1-3
    libpve-common-perl: 3.0-19
    libpve-access-control: 3.0-15
    libpve-storage-perl: 3.0-25
    pve-libspice-server1: 0.12.4-3
    vncterm: 1.1-8
    vzctl: 4.0-1pve6
    vzprocps: 2.0.11-2
    vzquota: 3.1-2
    pve-qemu-kvm: 2.1-10
    ksm-control-daemon: 1.1-1
    glusterfs-client: 3.5.2-1
    Code:
root@proxmox2:/# rpcinfo -p
   program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100021    1   udp  40475  nlockmgr
        100021    3   udp  40475  nlockmgr
        100021    4   udp  40475  nlockmgr
        100021    1   tcp  42364  nlockmgr
        100021    3   tcp  42364  nlockmgr
        100021    4   tcp  42364  nlockmgr
        100005    1   udp  16516  mountd
        100005    1   tcp  45270  mountd
        100005    2   udp  39428  mountd
        100005    2   tcp  47319  mountd
        100024    1   udp  17158  status
        100024    1   tcp  10537  status
    Code:
     service nfs-kernel-server status
    nfsd running
    On Client Side:

    Code:
pveversion -v
proxmox-ve-2.6.32: 3.3-138 (running kernel: 3.10.0-4-pve)
    pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
    pve-kernel-3.10.0-4-pve: 3.10.0-17
    pve-kernel-2.6.32-32-pve: 2.6.32-136
    pve-kernel-2.6.32-33-pve: 2.6.32-138
    lvm2: 2.02.98-pve4
    clvm: 2.02.98-pve4
    corosync-pve: 1.4.7-1
    openais-pve: 1.1.4-3
    libqb0: 0.11.1-2
    redhat-cluster-pve: 3.2.0-2
    resource-agents-pve: 3.9.2-4
    fence-agents-pve: 4.0.10-1
    pve-cluster: 3.0-15
    qemu-server: 3.3-1
    pve-firmware: 1.1-3
    libpve-common-perl: 3.0-19
    libpve-access-control: 3.0-15
    libpve-storage-perl: 3.0-25
    pve-libspice-server1: 0.12.4-3
    vncterm: 1.1-8
    vzctl: 4.0-1pve6
    vzprocps: 2.0.11-2
    vzquota: 3.1-2
    pve-qemu-kvm: 2.1-10
    ksm-control-daemon: 1.1-1
    glusterfs-client: 3.5.2-1
    Code:
    root@proxmox:/# nmap -T4 172.21.3.252
    
    Starting Nmap 6.00 ( http://nmap.org ) at 2014-11-06 11:38 EET
    Nmap scan report for 172.21.3.252
    Host is up (0.00021s latency).
    Not shown: 981 closed ports
    PORT      STATE SERVICE
    21/tcp    open  ftp
    22/tcp    open  ssh
    25/tcp    open  smtp
    53/tcp    open  domain
    80/tcp    open  http
    111/tcp   open  rpcbind
    139/tcp   open  netbios-ssn
    443/tcp   open  https
    445/tcp   open  microsoft-ds
    873/tcp   open  rsync
    902/tcp   open  iss-realsecure
    2049/tcp  open  nfs
    3128/tcp  open  squid-http
    3260/tcp  open  iscsi
    3389/tcp  open  ms-wbt-server
    8181/tcp  open  unknown
    8888/tcp  open  sun-answerbook
    10000/tcp open  snet-sensor-mgmt
    49152/tcp open  unknown
    Code:
root@proxmox:/# mount -v -t nfs -o vers=3,nfsvers=3 172.21.3.252:/storage/nfs-zfs/nfs /mnt/pve/NFS-Share/
mount.nfs: timeout set for Thu Nov  6 11:42:17 2014
mount.nfs: trying text-based options 'vers=3,nfsvers=3,addr=172.21.3.252'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=3, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: requested NFS version or transport protocol is not supported
    Firewall is disabled on both nodes.

    I cannot mount even locally on nfs server itself:

    Code:
root@proxmox2:~# mount -v -t nfs -o vers=3,nfsvers=3 172.21.3.252:/storage/nfs-zfs/nfs /mnt/pve/NFS-Share/
mount.nfs: timeout set for Thu Nov  6 11:44:25 2014
    mount.nfs: trying text-based options 'vers=3,nfsvers=3,addr=172.21.3.252'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=3, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: requested NFS version or transport protocol is not supported
    Code:
root@proxmox2:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
    #               to NFS clients.  See exports(5).
    #
    # Example for NFSv2 and NFSv3:
    # /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
    #
    # Example for NFSv4:
    # /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
    # /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
    #
    /storage/nfs   192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)
    /storage/nfs   172.21.3.0/24(rw,nohide,insecure,no_subtree_check,async)
    /storage/nfs2   192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)
    /storage/nfs2   172.21.3.0/24(rw,nohide,insecure,no_subtree_check,async)
    /storage/nfs 10.3.3.0/24(rw,nohide,insecure,no_subtree_check,async)
    /storage/nfs2 10.3.3.0/24(rw,nohide,insecure,no_subtree_check,async)
    Code:
root@proxmox2:~# cat /etc/default/nfs-kernel-server
# Number of servers to start up
    RPCNFSDCOUNT=8
    
    
    # Runtime priority of server (see nice(1))
    RPCNFSDPRIORITY=0
    
    
    # Options for rpc.mountd.
    # If you have a port-based firewall, you might want to set up
    # a fixed port here using the --port option. For more information,
    # see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
    # To disable NFSv4 on the server, specify '--no-nfs-version 4' here
    RPCMOUNTDOPTS=--manage-gids
    
    
    # Do you want to start the svcgssd daemon? It is only required for Kerberos
    # exports. Valid alternatives are "yes" and "no"; the default is "no".
    NEED_SVCGSSD=
    
    
    # Options for rpc.svcgssd.
    RPCSVCGSSDOPTS=
/etc/hosts.allow and /etc/hosts.deny are empty.
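The tell-tale detail in the rpcinfo output above is that program 100003 (nfs) is missing entirely; that is exactly what "RPC: Program not registered" refers to. A quick way to check for it, assuming the stock rpcinfo from Debian's rpcbind package:

```shell
# A registered NFSv3 server would show a line like:
#     100003    3   tcp   2049  nfs
# The exit status tells us whether program 100003 is registered at all:
rpcinfo -p | awk '$1 == 100003 { found = 1; print } END { exit !found }' \
    && echo "nfs is registered" \
    || echo "nfs is NOT registered -- hence 'RPC: Program not registered'"
```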
     
  2. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
Try this instead: mount -v -t nfs -o vers=4,nfsvers=4 172.21.3.252:/storage/nfs-zfs/nfs /mnt/pve/NFS-Share/
     
  3. acidrop

    acidrop Member

    Joined:
    Jul 17, 2012
    Messages:
    194
    Likes Received:
    4
I tried that, but I get this:

     
  4. sdutremble

    sdutremble Member

    Joined:
    Sep 29, 2011
    Messages:
    85
    Likes Received:
    0
    /storage/nfs-zfs does not appear in your exports file.

    Serge
     
  5. acidrop

    acidrop Member

    Joined:
    Jul 17, 2012
    Messages:
    194
    Likes Received:
    4
Thank you for pointing that out, but /storage/nfs is actually a symbolic link to /storage/nfs-zfs.
I also tried modifying /etc/exports to use the direct path, but I still get the same error.

    If I try to restart nfs-kernel-server on NFS server I get this:

    Code:
service nfs-kernel-server restart
Stopping NFS kernel daemon: mountd nfsd.
    Unexporting directories for NFS kernel daemon....
    Exporting directories for NFS kernel daemon....
    Starting NFS kernel daemon: nfsdrpc.nfsd: unable to bind inet TCP socket: errno 98 (Address already in use)
     mountd.
     
  6. sdutremble

    sdutremble Member

    Joined:
    Sep 29, 2011
    Messages:
    85
    Likes Received:
    0
    Looks like you have a port conflict.

    Use
    Code:
    rpcinfo -p | grep nfs
    to find out which port is used for nfs and then use the port numbers to see if any duplicate exists.
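For example, something along these lines should show who currently owns the port (netstat and fuser ship with Wheezy; the exact output format may differ on your box):

```shell
# See which process, if any, is already bound to the standard NFS port 2049
netstat -tlnp | grep ':2049'

# fuser names the owning process directly
fuser -v 2049/tcp
```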

    Serge
     
  7. acidrop

    acidrop Member

    Joined:
    Jul 17, 2012
    Messages:
    194
    Likes Received:
    4
Running rpcinfo -p | grep nfs on the NFS server does not return any results.

    But rpcinfo -p shows:

    Code:
root@proxmox2:~# rpcinfo -p
   program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  51934  status
        100024    1   tcp  30217  status
        100021    1   udp  40013  nlockmgr
        100021    3   udp  40013  nlockmgr
        100021    4   udp  40013  nlockmgr
        100021    1   tcp  63316  nlockmgr
        100021    3   tcp  63316  nlockmgr
        100021    4   tcp  63316  nlockmgr
        100005    1   udp  62756  mountd
        100005    1   tcp  52782  mountd
        100005    2   udp  31191  mountd
        100005    2   tcp  32397  mountd
     
  8. sdutremble

    sdutremble Member

    Joined:
    Sep 29, 2011
    Messages:
    85
    Likes Received:
    0
Port 2049 is usually used by the NFS server.

    I do not see it in your list.

    I'll check at home when I get there after work and give you some more hints.

Can anyone else provide help in the meantime?

    Serge
     
  9. acidrop

    acidrop Member

    Joined:
    Jul 17, 2012
    Messages:
    194
    Likes Received:
    4
Thank you for the tip!
I finally found what was wrong.
I also have glusterfs installed on these boxes.
By default NFS was disabled in gluster, but it seems that after an apt-get upgrade it decided to enable it.
This was conflicting with the local NFS server.
    I gave:

    gluster volume set <VOLNAME> nfs.disable off

    and after:

    service nfs-kernel-server restart

    This time it completed successfully and I am able to mount nfs through Datastore -> Storage.

    Thank you again!
     
  10. rootus

    rootus New Member

    Joined:
    Dec 15, 2014
    Messages:
    6
    Likes Received:
    0
Actually, the correct way to DISABLE NFS on a gluster volume is:
    Code:
     gluster volume set <VOLNAME> nfs.disable on
Setting nfs.disable to off actually enables gluster NFS (it sounds a bit counter-intuitive, I have to admit).
     