NFS mount error

I’m trying to export a file system to an ESXi box, using openSUSE 11.1 as the NFS server, but when I try to create the datastore from the ESXi server I get an error saying the NFS server does not support mount version 3 over TCP. I have the NFS service started. Anyone know what could be causing this?

entry in fstab:
IpOfEsxi:/images /home/nfs_local nfs nfsvers=3,proto=tcp,mountvers=3,mountproto=tcp defaults 0 0

exports entry:
/images IpOfEsxi
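For reference, a fuller export line often helps with ESXi: it mounts NFS as root, so `no_root_squash` is commonly needed. The options below are illustrative, not taken from this thread:

```
# /etc/exports on the NFS server (illustrative options)
# rw              allow writes from the client
# sync            reply only after data is committed to disk
# no_root_squash  ESXi mounts as root, so do not remap root to nobody
/images   IpOfEsxi(rw,sync,no_root_squash)
```

After editing, `exportfs -ra` (or restarting the nfsserver service) makes the change active.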

And what happens when you do not use mountvers=3?

I have an NFS mount without any parameters. I do not know why you decided to use all those options, but when you do not really need them, just leave them out:

IpOfEsxi:/images /home/nfs_local   nfs       defaults 0 0

As an afterthought, I see that server and client have the same names (see the names above). Is that correct?

wow, that’s bad. Which entry is supposed to be the client? Also, I appear to be using mount version 2.x, so that error makes sense. Do you know which package I need to install to update the mount version? Thanks for the replies.

The one exporting is the server and the one mounting is the client (seems rather obvious to me). It also seems obvious to me that on each system you mention the other system; there is no need to mention a name when it would be the system’s own name.

And by the way, those names must be resolvable to IP addresses.

Of course the error makes sense. That is what it is for. But when one does not read it … :expressionless:
Did you try whether it works by removing all those options (and of course with the proper system names in the proper places)? When you have the strong urge to go for a newer version, we can try that later.
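One way to check whether the server actually offers mount version 3 over TCP is `rpcinfo -p` on the server. As a minimal sketch: program 100005 is mountd, so filter for version 3 over tcp. The sample table below is illustrative, not output from this thread — on the real server you would pipe `rpcinfo -p` in instead:

```shell
# Does mountd advertise version 3 over TCP? The sample table is
# illustrative; run `rpcinfo -p` on the real NFS server instead.
sample_rpcinfo='   100005    1   udp    631  mountd
   100005    3   tcp    631  mountd
   100003    3   tcp   2049  nfs'

# program 100005 is mountd; look for version 3 on tcp
if printf '%s\n' "$sample_rpcinfo" | awk '$1 == 100005 && $2 == 3 && $3 == "tcp"' | grep -q mountd; then
    echo "mountd v3 over tcp is registered"
else
    echo "mountd v3 over tcp is MISSING"
fi
```

If the v3/tcp line is missing from the real output, the ESXi error message is accurate and the server-side NFS setup is the thing to fix.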

I guess my first question should have been, which version of nfs do I need to use to get mount version 3? I tried restarting nfs without the options and then tried the mount from the esxi and got the same error. At the risk of looking even more inept, the mount point corresponds to a directory on the client side, correct?

updated exports entry:
/images ipOfNFSserver

updated fstab entry:
ipOfEsxi:/images /home/images_mount nfs defaults 0 0

On the server side, the export should mention an existing directory.

On the client side, the mount point should be an existing directory.
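The two statements above can be sketched as plain `mkdir -p` calls. The `/tmp/demo` prefix is only so the sketch is harmless to run; the real paths come from the thread:

```shell
# Sketch: both sides need their directory to exist beforehand.
# /tmp/demo is a stand-in prefix so this is safe to run anywhere.

# on the server: the exported directory must exist
mkdir -p /tmp/demo/images            # stands in for /images

# on the client: the mount point must exist
mkdir -p /tmp/demo/images_mount      # stands in for /home/images_mount

ls -d /tmp/demo/images /tmp/demo/images_mount
```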

And the error is on what you call an ESXi box. No openSUSE there? Does it require that version 3? Then why is it an option?

The server running openSUSE is intended to be the NFS server. The ESXi server, which is intended to be the client, appears to require version 3 of mount. For your last question, I assume you’re referring to the options that I was specifying in the fstab file. I think you’re right, those options were unnecessary. Also, the fstab and the exports file should both be located on the openSUSE server, correct? I should also mention that from the ESXi side, the mount is done through a GUI and I can’t see what specific commands it is running to communicate with the openSUSE server.

I understand less and less of this. Please use the word server only for the NFS server and the word client for the NFS client. For our problem it is of no relevance whether these systems serve or are clients of other services.

And the NFS server exports directories to the network (it serves them), and this is configured in its /etc/exports.

The NFS client mounts them, and that is configured in its /etc/fstab.

When that is what you mean above, you are correct.
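Put differently, the split looks roughly like this. The IPs are the placeholders used in the thread, and the fstab line is shown only to illustrate the roles (ESXi actually manages its mounts with its own tooling):

```
# on the NFS server (openSUSE) -- /etc/exports
/images          clientIPAddress        # options like (rw,sync) can be appended

# on the NFS client -- /etc/fstab (illustrative; ESXi uses its own tools)
ServerIP:/images   /images_mount   nfs   defaults   0 0
```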

If you want me to look into this further, then give me the output:
from the server:

cat /etc/exports

and from the client:

cat /etc/fstab


mount -a

And please copy/paste those into the post, surrounded by CODE tags.

Output from ‘cat exports’ (from server):

/images clientIPAddress

Output from ‘cat fstab’ (from client):

none      /proc    procfs    defaults        0 0
none      /vmfs/volumes      vcfs      defaults        0 0
none      /tmp     visorfs   2,128,tmp       0 0
ServerIP:/images /images_mount nfs defaults  0 0

As for ‘mount -a’, the mount command is not supported on the client, the equivalent is ‘esxcfg-nas -a’, which does nothing except print out the different options that are available for the command.

‘esxcfg-nas -a’ output

Missing label for operation
esxcfg-nas <options> [<label>]
-a|--add                Add a new NAS filesystem to /vmfs volumes.  
                        Requires --host and --share options.
                        Use --readonly option only for readonly access.
-o|--host <host>        Set the host name or ip address for a NAS mount.
-s|--share <share>      Set the name of the NAS share on the remote system.
-y|--readonly           Add the new NAS filesystem with readonly access.
-d|--delete             Unmount and delete a filesystem.
-l|--list               List the currently mounted NAS file systems.
-r|--restore            Restore all NAS mounts from the configuration file. 
                        (FOR INTERNAL USE ONLY).
-h|--help               Show this message.
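Going by the help text above, an add operation needs `--host`, `--share`, and a label. A minimal sketch that only constructs the command string (esxcfg-nas itself exists on ESXi only, and the label name here is an assumption):

```shell
# Build the esxcfg-nas invocation from the thread's values. We only
# construct and print the command string here, because esxcfg-nas
# is available on ESXi, not in an ordinary desktop shell.
nfs_host="ServerIP"      # the openSUSE NFS server
nfs_share="/images"      # the exported directory
label="images_mount"     # datastore label ESXi will display (assumed name)

cmd="esxcfg-nas -a -o $nfs_host -s $nfs_share $label"
echo "$cmd"
```

Run on the ESXi host itself, that command would add the datastore; `esxcfg-nas -l` then lists the currently mounted NAS file systems, per the help text above.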

Hmm… just noticed this, the mount daemon only starts running when I run:

/etc/init.d/nfsserver start

rather than:


as specified in the openSUSE instructions on NFS… and as I was typing this post, the file system successfully mounted on the client. Thanks for your tireless efforts, hcw. I’d still be trying to mount an NFS export on my intended NFS server if not for you. That error made a lot of sense, didn’t it?

And for starting the nfs-server, one goes to YaST > System > System services (runlevel), searches for nfs-server, switches it on and GO.

YaST will then look after everything. Not only is it started, but it will also be started on next boot (I suppose that is what you want). What you did is NOT sufficient to achieve this.
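For those who prefer the command line, the rough equivalent on openSUSE 11.x would be the following; treat it as a sketch, since YaST remains the supported route:

```
# enable at boot, then start now (command-line sketch of what YaST does)
chkconfig nfsserver on
rcnfsserver start        # same as /etc/init.d/nfsserver start
```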

Those instructions must be very old and out of date I am afraid.