can't mount nfs inside Ubuntu 8.04 guest (appliance)

jeebustrain

I am currently running VE 1.5 (yes, I know I need to upgrade) with the 2.6.18-4-pve kernel. I have an Ubuntu 8.04 guest (appliance) in which I'm trying to mount an NFS share. I've installed nfs-common inside the guest, but whenever I try to mount, I get this:
mount.nfs: No such device
Now, I did some reading and came across this, which suggests it could be a kernel issue. My first instinct is to run a version upgrade (can I go to 1.8 straight from 1.5?), but I honestly need to get this working before I can devote the time to a Proxmox upgrade and any issues that may come from it.

Is there any other way (short of mounting it using FUSE or something) to get this thing mounted inside the guest? I've run all guest and host updates available and rebooted both to make sure the kernel module is there, but it still doesn't seem to want to work.

Any ideas?
 
hmm.. thanks for that. I didn't see that in the wiki. It did not help, however: portmap was already running, and I was actually able to install nfs-common without any errors. It just errors when I try to mount something. For example, mount -a returns "mount.nfs: No such device" when /etc/fstab contains an NFS mount line (the same line lifted from another box). I also tried the nolock option that was referenced on the OpenVZ wiki.

Also, the symptoms listed seem to be a bit different - on the links you mentioned, it says it freezes whenever you try to mount - mine doesn't freeze at all.

I'll keep digging in with this. This is a VM that has been recycled from something else (small web server), but other than some extra apache modules, it's pretty stock. I'll try to spin up a new VM to see if I have the same problem.
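
For reference, a minimal sketch of the kind of /etc/fstab entry described above (the server name and paths are hypothetical, not from this thread):

# NFS client entry with the nolock option mentioned on the OpenVZ wiki
nas.example.com:/export/data  /mnt/data  nfs  defaults,nolock  0  0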
 
you cannot load kernel modules inside a container - so this is the standard behavior.
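
Containers share the host's kernel, so modules have to be loaded on the hardware node, and the container additionally needs the OpenVZ nfs feature switched on before NFS mounts can work inside it. A minimal sketch, assuming container ID 101 and a vzctl/kernel combination recent enough to support the nfs feature:

# on the Proxmox host (hardware node):
modprobe nfs                                # load the NFS client module host-side
vzctl set 101 --features "nfs:on" --save    # allow NFS mounts inside CT 101
vzctl restart 101                           # the feature takes effect on restart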
 
ah - that makes sense. Thanks for the clarification. I'll keep digging, but if I can't make this work soon, I might end up just saying screw it and making a KVM machine to do what I need.
 
hmm - as a last-ditch effort, I tried to mount it as a Samba share (it's on an Openfiler NAS and shared out as both). Trying both a manual mount and via fstab, I get the same sort of error:

mount error: cifs filesystem not supported by the system
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

I think it's about time I give up on this appliance and use a KVM VM.
 
yes, a KVM guest is also a good alternative - using virtio for disks and net is super-fast on Linux.

but note, once you've figured out how to work with containers, you'll never go back.
 
hmm.. I did some digging on my host machine, ran cat /proc/filesystems, and got this back:

starless:~# cat /proc/filesystems
nodev sysfs
nodev rootfs
nodev bdev
nodev proc
nodev cpuset
nodev binfmt_misc
nodev debugfs
nodev sockfs
nodev usbfs
nodev pipefs
nodev anon_inodefs
nodev futexfs
nodev tmpfs
nodev inotifyfs
nodev eventpollfs
nodev devpts
ext3
ext2
nodev ramfs
nodev hugetlbfs
iso9660
nodev mqueue
nodev simfs

so just for grins, I ran modprobe nfs on the proxmox host and this happened:
starless:~# modprobe nfs
WARNING: Error inserting sunrpc (/lib/modules/2.6.18-4-pve/kernel/net/sunrpc/sunrpc.ko): Invalid module format
WARNING: Error inserting nfs_acl (/lib/modules/2.6.18-4-pve/kernel/fs/nfs_common/nfs_acl.ko): Invalid module format
WARNING: Error inserting lockd (/lib/modules/2.6.18-4-pve/kernel/fs/lockd/lockd.ko): Invalid module format
FATAL: Error inserting nfs (/lib/modules/2.6.18-4-pve/kernel/fs/nfs/nfs.ko): Invalid module format
is that expected behavior too?
 
but note, once you've figured out how to work with containers, you'll never go back.

I certainly would love to - I have a whole bunch of them for various small servers, development and test machines, and other stuff. But this mounting thing is killing me. I even tried using sshfs (first time ever) and realized that FUSE is required as a kernel module as well.

Is there anyone out there using the standard Ubuntu 8.04 appliance template who might be able to try mounting an NFS share for me?
 
no, 'modprobe nfs' should not give any errors (on the host). Did you already upgrade to 1.8? If not, do it.
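
"Invalid module format" usually means the modules on disk were built for a different kernel than the one currently running (for example, after a package upgrade without a reboot). A quick sanity check, as a sketch:

uname -r           # the kernel actually running
ls /lib/modules/   # module trees installed on disk; the running version should match one of them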
 
OK, the upgrade to 1.8 went well - those modprobe errors are gone. Still can't mount NFS inside of the OpenVZ containers, but I kind of figured that. I also went ahead and tried the 2.6.32 kernel as well. So far, so good.
 
hi all, I have the same trouble. I've installed a fresh Proxmox VE from CD, set up a Debian 6 OVZ container, and installed the nfs-kernel-server package on the Proxmox host. When I do the NFS mount I get this error:
root@syslog:~# mount -v xx.xxx.xx.xx:/DATA/syslog /mnt
mount: no type was given - I'll assume nfs because of the colon
mount.nfs: timeout set for Mon Apr 18 09:55:26 2011
mount.nfs: trying text-based options 'vers=4,addr=xx.xxx.xx.xx,clientaddr=xx.xxx.xx.xx'
mount.nfs: mount(2): No such device
mount.nfs: No such device

when I do the same mount on the Proxmox host, it works.

I have no idea...
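
One variant worth trying inside the container - assuming the host kernel has the nfs module loaded and the container has the OpenVZ nfs feature enabled, as sketched earlier in the thread - is forcing NFSv3 with nolock, since NFSv4 and the lock daemon are often unavailable in OpenVZ guests:

mount -t nfs -o vers=3,nolock xx.xxx.xx.xx:/DATA/syslog /mnt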
 
Can somebody help us with this virtual appliance CIFS/NFS mounting trouble? I'm having a headache with this as well - mounting via CIFS on an Ubuntu 10.04 OpenVZ appliance:

me@myva:~$ sudo mount -t cifs //myshare mymount
mount error: cifs filesystem not supported by the system
mount error(19): No such device
So I went ahead and downloaded and built the cifs kernel module and tried to insmod and got:

me@myva:~$ sudo insmod /lib/modules/2.6.32-30-generic/kernel/fs/cifs/cifs.ko
insmod: error inserting '/lib/modules/2.6.32-30-generic/kernel/fs/cifs/cifs.ko': -1 Operation not permitted
Then I found out you couldn't do that inside a container, so I looked on the hardware node only to find cifs happily loaded:

me@myvmh:~$ lsmod | grep cifs
cifs 218272 0
 
why don't you use bind mounts? (mount whatever you need on the host and then use a bind mount into the container)
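
A minimal sketch of that approach (container ID 101, the NFS server, and the /mnt/nas path are assumptions for illustration): mount the share on the hardware node, then bind it into the container's root via an OpenVZ mount script so it is re-attached every time the container starts.

# on the Proxmox host: mount the share somewhere on the node
mount -t nfs nas.example.com:/export/data /mnt/nas

#!/bin/bash
# /etc/vz/conf/101.mount - executed on the host each time CT 101 starts
source /etc/vz/vz.conf       # global OpenVZ settings
source ${VE_CONFFILE}        # per-container config; defines VE_ROOT
mkdir -p ${VE_ROOT}/mnt/nas  # target directory inside the container's root
mount -n --bind /mnt/nas ${VE_ROOT}/mnt/nas

Remember to make the mount script executable (chmod +x /etc/vz/conf/101.mount); the same recipe works for a CIFS mount on the node.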
 
Woo-hoo! You rock, Tom. I've never heard of bind mounts. And thanks for the link, jeebustrain - easy-peasy.

But can anybody explain why these bind mounts are necessary and why containers don't seem to have access to kernel modules on the hardware node, like CIFS?

Thanks.
 
