NFS (Network File System) is like a magic trick that lets your computer borrow a filing cabinet from another computer over the network, so the files look like they're actually on your own machine.

Let’s see it in action. Imagine you have a server, nfs-server.example.com, with some data you want to share. On your client machine, nfs-client.example.com, you want to access this data.

First, on the server, you need to install the NFS server package. On a Debian/Ubuntu system, this is sudo apt update && sudo apt install nfs-kernel-server. On RHEL/CentOS/Fedora, it’s sudo dnf install nfs-utils.

Next, you need to tell NFS what to share and who can access it. This is done in the /etc/exports file. Let’s say we want to share the directory /srv/shared_data with our client. We’ll also restrict access to only nfs-client.example.com and allow read/write access.

/srv/shared_data  nfs-client.example.com(rw,sync,no_subtree_check)
  • /srv/shared_data: This is the directory on the server that will be exported.
  • nfs-client.example.com: This is the hostname or IP address of the client that is allowed to mount this share. You can use wildcards like *.example.com or IP ranges.
  • rw: Grants read and write permissions. Use ro for read-only.
  • sync: This is crucial for data integrity. It means the server must write changes to disk before replying to the client. async is faster but risks data loss if the server crashes.
  • no_subtree_check: Disables subtree checking. Subtree checking only matters when you export a subdirectory of a filesystem rather than the whole thing: the server has to verify that every requested file really lives under the exported subtree, which costs a little performance and can misbehave when open files are renamed. Disabling it is the common choice, and recent versions of nfs-utils make it the default.
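Putting the pieces together, a fuller /etc/exports might look like this. The first entry is the one from our example; the second entry, with its /srv/public_docs path and 192.168.1.0/24 subnet, is purely illustrative:

```
# Read/write for a single named client
/srv/shared_data    nfs-client.example.com(rw,sync,no_subtree_check)

# Hypothetical read-only export to an entire subnet
/srv/public_docs    192.168.1.0/24(ro,sync,no_subtree_check)
```

One line per exported directory, with one or more client(options) specifications per line. Note there is no space between the client and its parenthesized options; adding one changes the meaning of the line.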

After editing /etc/exports, you need to apply these changes. Run sudo exportfs -ra, which re-reads the file and re-exports everything. A full service restart (sudo systemctl restart nfs-kernel-server on Debian/Ubuntu, or nfs-server on RHEL/Fedora) is only needed if the NFS server isn’t already running.
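Concretely, the apply-and-verify step on the server looks like this (these commands need root, and exportfs is part of the NFS server package installed above):

```shell
# Re-read /etc/exports and re-export all shares
sudo exportfs -ra

# List what is currently exported, with the active options,
# so you can confirm the new entry took effect
sudo exportfs -v
```

If exportfs -v doesn’t show your directory, check /etc/exports for syntax errors; a stray space between the client and its options is the usual culprit.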

Now, on the client machine, you need to install the NFS client package. For Debian/Ubuntu: sudo apt update && sudo apt install nfs-common. For RHEL/CentOS/Fedora: sudo dnf install nfs-utils.

Before you can mount, you need a directory on the client where the NFS share will be attached. Let’s create one: sudo mkdir /mnt/nfs_share.

Finally, you can mount the NFS share from the server onto your client. The command looks like this:

sudo mount -t nfs nfs-server.example.com:/srv/shared_data /mnt/nfs_share
  • -t nfs: Specifies the filesystem type as NFS.
  • nfs-server.example.com:/srv/shared_data: This is the remote NFS share. It’s in the format server:/path/to/exported/directory.
  • /mnt/nfs_share: This is the local directory on the client where the remote share will be mounted.

Now, if you cd /mnt/nfs_share and ls, you should see the contents of /srv/shared_data from the server. Any files you create or modify in /mnt/nfs_share on the client will actually be written to /srv/shared_data on the server.
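You can confirm the mount from the client with findmnt or df, using the same paths as above:

```shell
# Show the mount source, filesystem type, and negotiated options
findmnt /mnt/nfs_share

# Or check size and usage as seen through the share
df -hT /mnt/nfs_share
```

One caveat when testing writes: by default NFS exports use root_squash, which maps root on the client to an unprivileged user on the server, so a write you attempt with sudo may be denied even though the share is mounted rw. Test writes as a regular user whose permissions exist on the server side.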

To make this mount persistent across reboots, you’ll add an entry to /etc/fstab on the client.

nfs-server.example.com:/srv/shared_data  /mnt/nfs_share  nfs  defaults,_netdev  0  0
  • defaults: A standard set of mount options.
  • _netdev: This is important! It tells the system that this is a network filesystem and should not be mounted until the network is available. Without this, your system might hang during boot if the network isn’t ready.
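Before relying on the entry across a reboot, it’s worth testing it in place:

```shell
# Unmount the share if it's currently mounted
sudo umount /mnt/nfs_share

# Mount everything listed in /etc/fstab that isn't mounted yet;
# if the new entry has a typo, this is where it will fail loudly
sudo mount -a

# Confirm the share came back
findmnt /mnt/nfs_share
```

Catching a bad fstab line this way is much nicer than discovering it during boot.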

The core problem NFS solves is shared access to files across a network without needing to copy them or use less efficient protocols. It allows for centralized storage and management of data, making it easier for multiple clients to work with the same datasets, like shared project directories, home directories, or application data.

Internally, NFS operates on a client-server model. The NFS server exports directories, making them available for mounting. The NFS client then requests to mount these exported directories. When a client accesses a file on the mounted share, it makes RPC (Remote Procedure Call) requests to the NFS server. These requests are translated into file operations on the server’s local filesystem. The NFS protocol has evolved over versions (NFSv2, NFSv3, NFSv4), with each version bringing improvements in performance, security, and resilience to network interruptions. NFSv4, in particular, introduced Kerberos integration for stronger security and a stateful protocol, which is more robust than the stateless design of NFSv2/v3 and also allows NFSv4 to run over a single well-known port (2049), simplifying firewall configuration.
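The client and server normally negotiate the highest NFS version both sides support, but you can pin a version explicitly with the vers (also spelled nfsvers) mount option. A sketch, reusing the example hostnames:

```shell
# Request NFSv4.2 explicitly rather than relying on negotiation
sudo mount -t nfs -o vers=4.2 \
  nfs-server.example.com:/srv/shared_data /mnt/nfs_share

# Inspect all NFS mounts, including the version actually in use
nfsstat -m
```

Pinning the version is handy when you need to rule protocol differences in or out while debugging, or when an old server advertises versions it handles poorly.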

When you use NFS, the client doesn’t actually store the files locally. It maintains a pointer to the files on the server. Operations like read and write are intercepted by the client’s kernel and translated into network requests to the server. The server then performs the operation on its local disk and sends the result back. This means that the performance of your NFS share is heavily dependent on the network speed and latency between the client and server, as well as the performance of the server’s disk subsystem.

A common pitfall is encountering "Stale file handle" errors. This often happens when the exported directory on the server is moved, renamed, or deleted and then recreated with the same name, or if the server’s filesystem is remounted or rebooted in a way that invalidates the file handle the client is using. The fix is usually to unmount the share on the client (sudo umount /mnt/nfs_share) and then remount it. If that doesn’t work, you might need to restart the NFS services on both the client and server.
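A typical recovery sequence looks like this; the -l (lazy) flag helps when a plain umount reports "target is busy":

```shell
# Try a normal unmount first
sudo umount /mnt/nfs_share

# If processes still hold files open, a lazy unmount detaches the
# filesystem now and finishes cleanup once the last user lets go
sudo umount -l /mnt/nfs_share

# Then remount the share
sudo mount -t nfs nfs-server.example.com:/srv/shared_data /mnt/nfs_share
```

You can find the processes holding the mount busy with fuser -vm /mnt/nfs_share before resorting to the lazy unmount.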

The next thing you’ll likely grapple with is optimizing NFS performance, especially for high-throughput workloads or when dealing with high latency networks.
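As a preview, that tuning usually starts with mount options. The values below are common starting points rather than universal recommendations; measure with your own workload:

```shell
# rsize/wsize: max bytes per read/write RPC (1 MiB is typical for NFSv4)
# hard:        retry indefinitely if the server goes away, rather than
#              returning I/O errors to applications (safer than 'soft')
# timeo:       wait time before retransmitting, in tenths of a second
# retrans:     retransmissions before reporting "server not responding"
sudo mount -t nfs \
  -o rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  nfs-server.example.com:/srv/shared_data /mnt/nfs_share
```

The same options can go in the fourth field of the /etc/fstab entry, replacing defaults.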
